From nobody Sun Nov 24 16:00:04 2024
Date: Mon, 4 Nov 2024 13:31:47 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
References: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-2-qperret@google.com>
Subject: [PATCH 01/18] KVM: arm64: Change the layout of enum pkvm_page_state
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
    Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org

The 'concrete' (a.k.a. non-meta) page states are currently encoded
using software bits in PTEs. For performance reasons, the abstract
pkvm_page_state enum uses the same bits to encode these states, as that
makes conversions from and to PTEs easy.

In order to prepare the ground for moving the 'concrete' state storage
to the hyp vmemmap, re-arrange the enum to use bits 0 and 1 for this
purpose.

No functional changes intended.
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 0972faccc2af..ca3177481b78 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -24,25 +24,28 @@
  */
 enum pkvm_page_state {
 	PKVM_PAGE_OWNED			= 0ULL,
-	PKVM_PAGE_SHARED_OWNED		= KVM_PGTABLE_PROT_SW0,
-	PKVM_PAGE_SHARED_BORROWED	= KVM_PGTABLE_PROT_SW1,
-	__PKVM_PAGE_RESERVED		= KVM_PGTABLE_PROT_SW0 |
-					  KVM_PGTABLE_PROT_SW1,
+	PKVM_PAGE_SHARED_OWNED		= BIT(0),
+	PKVM_PAGE_SHARED_BORROWED	= BIT(1),
+	__PKVM_PAGE_RESERVED		= BIT(0) | BIT(1),

 	/* Meta-states which aren't encoded directly in the PTE's SW bits */
-	PKVM_NOPAGE,
+	PKVM_NOPAGE			= BIT(2),
 };
+#define PKVM_PAGE_META_STATES_MASK	(~(BIT(0) | BIT(1)))

 #define PKVM_PAGE_STATE_PROT_MASK	(KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)
 static inline enum kvm_pgtable_prot pkvm_mkstate(enum kvm_pgtable_prot prot,
 						 enum pkvm_page_state state)
 {
-	return (prot & ~PKVM_PAGE_STATE_PROT_MASK) | state;
+	BUG_ON(state & PKVM_PAGE_META_STATES_MASK);
+	prot &= ~PKVM_PAGE_STATE_PROT_MASK;
+	prot |= FIELD_PREP(PKVM_PAGE_STATE_PROT_MASK, state);
+	return prot;
 }

 static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
 {
-	return prot & PKVM_PAGE_STATE_PROT_MASK;
+	return FIELD_GET(PKVM_PAGE_STATE_PROT_MASK, prot);
 }

 struct host_mmu {
--
2.47.0.163.g1226f6d8fa-goog

From nobody Sun Nov 24 16:00:04 2024
Date: Mon, 4 Nov 2024 13:31:48 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
References: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-3-qperret@google.com>
Subject: [PATCH 02/18] KVM: arm64: Move enum pkvm_page_state to memory.h
From: Quentin Perret
To:
    Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
    Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org

In order to prepare the way for storing page-tracking information in
pKVM's vmemmap, move the enum pkvm_page_state definition to
nvhe/memory.h.

No functional changes intended.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 35 +------------------
 arch/arm64/kvm/hyp/include/nvhe/memory.h      | 34 ++++++++++++++++++
 2 files changed, 35 insertions(+), 34 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index ca3177481b78..25038ac705d8 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -11,43 +11,10 @@
 #include
 #include
 #include
+#include
 #include
 #include

-/*
- * SW bits 0-1 are reserved to track the memory ownership state of each page:
- *   00: The page is owned exclusively by the page-table owner.
- *   01: The page is owned by the page-table owner, but is shared
- *       with another entity.
- *   10: The page is shared with, but not owned by the page-table owner.
- *   11: Reserved for future use (lending).
- */
-enum pkvm_page_state {
-	PKVM_PAGE_OWNED			= 0ULL,
-	PKVM_PAGE_SHARED_OWNED		= BIT(0),
-	PKVM_PAGE_SHARED_BORROWED	= BIT(1),
-	__PKVM_PAGE_RESERVED		= BIT(0) | BIT(1),
-
-	/* Meta-states which aren't encoded directly in the PTE's SW bits */
-	PKVM_NOPAGE			= BIT(2),
-};
-#define PKVM_PAGE_META_STATES_MASK	(~(BIT(0) | BIT(1)))
-
-#define PKVM_PAGE_STATE_PROT_MASK	(KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)
-static inline enum kvm_pgtable_prot pkvm_mkstate(enum kvm_pgtable_prot prot,
-						 enum pkvm_page_state state)
-{
-	BUG_ON(state & PKVM_PAGE_META_STATES_MASK);
-	prot &= ~PKVM_PAGE_STATE_PROT_MASK;
-	prot |= FIELD_PREP(PKVM_PAGE_STATE_PROT_MASK, state);
-	return prot;
-}
-
-static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
-{
-	return FIELD_GET(PKVM_PAGE_STATE_PROT_MASK, prot);
-}
-
 struct host_mmu {
 	struct kvm_arch arch;
 	struct kvm_pgtable pgt;
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index ab205c4d6774..6dfeb000371c 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -7,6 +7,40 @@

 #include

+/*
+ * SW bits 0-1 are reserved to track the memory ownership state of each page:
+ *   00: The page is owned exclusively by the page-table owner.
+ *   01: The page is owned by the page-table owner, but is shared
+ *       with another entity.
+ *   10: The page is shared with, but not owned by the page-table owner.
+ *   11: Reserved for future use (lending).
+ */
+enum pkvm_page_state {
+	PKVM_PAGE_OWNED			= 0ULL,
+	PKVM_PAGE_SHARED_OWNED		= BIT(0),
+	PKVM_PAGE_SHARED_BORROWED	= BIT(1),
+	__PKVM_PAGE_RESERVED		= BIT(0) | BIT(1),
+
+	/* Meta-states which aren't encoded directly in the PTE's SW bits */
+	PKVM_NOPAGE			= BIT(2),
+};
+#define PKVM_PAGE_META_STATES_MASK	(~(BIT(0) | BIT(1)))
+
+#define PKVM_PAGE_STATE_PROT_MASK	(KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)
+static inline enum kvm_pgtable_prot pkvm_mkstate(enum kvm_pgtable_prot prot,
+						 enum pkvm_page_state state)
+{
+	BUG_ON(state & PKVM_PAGE_META_STATES_MASK);
+	prot &= ~PKVM_PAGE_STATE_PROT_MASK;
+	prot |= FIELD_PREP(PKVM_PAGE_STATE_PROT_MASK, state);
+	return prot;
+}
+
+static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
+{
+	return FIELD_GET(PKVM_PAGE_STATE_PROT_MASK, prot);
+}
+
 struct hyp_page {
 	unsigned short refcount;
 	unsigned short order;
--
2.47.0.163.g1226f6d8fa-goog

From nobody Sun Nov 24 16:00:04 2024
Date: Mon, 4 Nov 2024 13:31:49 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
References: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-4-qperret@google.com>
Subject: [PATCH 03/18] KVM: arm64: Make hyp_page::order a u8
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
    Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org

We don't need 16 bits to store the hyp page order, and we'll need some
bits to store page ownership data soon, so let's reduce the size of the
order member.
Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h    |  6 +++---
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  5 +++--
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     | 14 +++++++-------
 3 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index 97c527ef53c2..f1725bad6331 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -7,7 +7,7 @@
 #include
 #include

-#define HYP_NO_ORDER	USHRT_MAX
+#define HYP_NO_ORDER	0xff

 struct hyp_pool {
 	/*
@@ -19,11 +19,11 @@ struct hyp_pool {
 	struct list_head free_area[NR_PAGE_ORDERS];
 	phys_addr_t range_start;
 	phys_addr_t range_end;
-	unsigned short max_order;
+	u8 max_order;
 };

 /* Allocation */
-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order);
+void *hyp_alloc_pages(struct hyp_pool *pool, u8 order);
 void hyp_split_page(struct hyp_page *page);
 void hyp_get_page(struct hyp_pool *pool, void *addr);
 void hyp_put_page(struct hyp_pool *pool, void *addr);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 6dfeb000371c..88cb8ff9e769 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -42,8 +42,9 @@ static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
 }

 struct hyp_page {
-	unsigned short refcount;
-	unsigned short order;
+	u16 refcount;
+	u8 order;
+	u8 reserved;
 };

 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index e691290d3765..a1eb27a1a747 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -32,7 +32,7 @@ u64 __hyp_vmemmap;
  */
 static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
 					     struct hyp_page *p,
-					     unsigned short order)
+					     u8 order)
 {
 	phys_addr_t addr = hyp_page_to_phys(p);

@@ -51,7 +51,7 @@ static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
 /* Find a buddy page currently available for allocation */
 static struct hyp_page *__find_buddy_avail(struct hyp_pool *pool,
 					   struct hyp_page *p,
-					   unsigned short order)
+					   u8 order)
 {
 	struct hyp_page *buddy = __find_buddy_nocheck(pool, p, order);

@@ -94,7 +94,7 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 			      struct hyp_page *p)
 {
 	phys_addr_t phys = hyp_page_to_phys(p);
-	unsigned short order = p->order;
+	u8 order = p->order;
 	struct hyp_page *buddy;

 	memset(hyp_page_to_virt(p), 0, PAGE_SIZE << p->order);
@@ -129,7 +129,7 @@ static void __hyp_attach_page(struct hyp_pool *pool,

 static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
 					   struct hyp_page *p,
-					   unsigned short order)
+					   u8 order)
 {
 	struct hyp_page *buddy;

@@ -183,7 +183,7 @@ void hyp_get_page(struct hyp_pool *pool, void *addr)

 void hyp_split_page(struct hyp_page *p)
 {
-	unsigned short order = p->order;
+	u8 order = p->order;
 	unsigned int i;

 	p->order = 0;
@@ -195,10 +195,10 @@ void hyp_split_page(struct hyp_page *p)
 	}
 }

-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order)
+void *hyp_alloc_pages(struct hyp_pool *pool, u8 order)
 {
-	unsigned short i = order;
 	struct hyp_page *p;
+	u8 i = order;

 	hyp_spin_lock(&pool->lock);

--
2.47.0.163.g1226f6d8fa-goog

From nobody Sun Nov 24 16:00:04 2024
Date: Mon, 4 Nov 2024 13:31:50 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
References: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-5-qperret@google.com>
Subject: [PATCH 04/18] KVM: arm64: Move host page ownership tracking to the hyp vmemmap
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
    Catalin Marinas, Will Deacon
Cc: Fuad Tabba,
    Vincent Donnefort, Sebastian Ene,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org

We currently store part of the page-tracking state in PTE software bits
for the host, guests and the hypervisor. This is sub-optimal when e.g.
sharing pages, as it forces us to break block mappings purely to support
this software tracking. This causes an unnecessarily fragmented stage-2
page-table for the host, in particular when it shares pages with Secure,
which can lead to measurable regressions. Moreover, having this state
stored in the page-table forces us to do multiple costly walks on the
page transition path, hence causing overhead.

In order to work around these problems, move the host-side page-tracking
logic from SW bits in its stage-2 PTEs to the hypervisor's vmemmap.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  6 +-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c    | 94 ++++++++++++++++--------
 arch/arm64/kvm/hyp/nvhe/setup.c          |  7 +-
 3 files changed, 71 insertions(+), 36 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 88cb8ff9e769..08f3a0416d4c 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -8,7 +8,7 @@
 #include

 /*
- * SW bits 0-1 are reserved to track the memory ownership state of each page:
+ * Bits 0-1 are reserved to track the memory ownership state of each page:
  *   00: The page is owned exclusively by the page-table owner.
  *   01: The page is owned by the page-table owner, but is shared
  *       with another entity.
@@ -44,7 +44,9 @@ static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
 struct hyp_page {
 	u16 refcount;
 	u8 order;
-	u8 reserved;
+
+	/* Host (non-meta) state. Guarded by the host stage-2 lock. */
+	enum pkvm_page_state host_state : 8;
 };

 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index caba3e4bd09e..1595081c4f6b 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -201,8 +201,8 @@ static void *guest_s2_zalloc_page(void *mc)

 	memset(addr, 0, PAGE_SIZE);
 	p = hyp_virt_to_page(addr);
-	memset(p, 0, sizeof(*p));
 	p->refcount = 1;
+	p->order = 0;

 	return addr;
 }
@@ -268,6 +268,7 @@ int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)

 void reclaim_guest_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc)
 {
+	struct hyp_page *page;
 	void *addr;

 	/* Dump all pgtable pages in the hyp_pool */
@@ -279,7 +280,9 @@ void reclaim_guest_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc)
 	/* Drain the hyp_pool into the memcache */
 	addr = hyp_alloc_pages(&vm->pool, 0);
 	while (addr) {
-		memset(hyp_virt_to_page(addr), 0, sizeof(struct hyp_page));
+		page = hyp_virt_to_page(addr);
+		page->refcount = 0;
+		page->order = 0;
 		push_hyp_memcache(mc, addr, hyp_virt_to_phys);
 		WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(addr), 1));
 		addr = hyp_alloc_pages(&vm->pool, 0);
@@ -382,19 +385,25 @@ bool addr_is_memory(phys_addr_t phys)
 	return !!find_mem_range(phys, &range);
 }

-static bool addr_is_allowed_memory(phys_addr_t phys)
+static bool is_in_mem_range(u64 addr, struct kvm_mem_range *range)
+{
+	return range->start <= addr && addr < range->end;
+}
+
+static int range_is_allowed_memory(u64 start, u64 end)
 {
 	struct memblock_region *reg;
 	struct kvm_mem_range range;

-	reg = find_mem_range(phys, &range);
+	/* Can't check the state of both MMIO and memory regions at once */
+	reg = find_mem_range(start, &range);
+	if (!is_in_mem_range(end - 1, &range))
+		return -EINVAL;

-	return reg && !(reg->flags & MEMBLOCK_NOMAP);
-}
+	if (!reg || reg->flags & MEMBLOCK_NOMAP)
+		return -EPERM;

-static bool is_in_mem_range(u64 addr, struct kvm_mem_range *range)
-{
-	return range->start <= addr && addr < range->end;
+	return 0;
 }

 static bool range_is_memory(u64 start, u64 end)
@@ -454,8 +463,11 @@ static int host_stage2_adjust_range(u64 addr, struct kvm_mem_range *range)
 	if (kvm_pte_valid(pte))
 		return -EAGAIN;

-	if (pte)
+	if (pte) {
+		WARN_ON(addr_is_memory(addr) &&
+			!(hyp_phys_to_page(addr)->host_state & PKVM_NOPAGE));
 		return -EPERM;
+	}

 	do {
 		u64 granule = kvm_granule_size(level);
@@ -477,10 +489,29 @@ int host_stage2_idmap_locked(phys_addr_t addr, u64 size,
 	return host_stage2_try(__host_stage2_idmap, addr, addr + size, prot);
 }

+static void __host_update_page_state(phys_addr_t addr, u64 size, enum pkvm_page_state state)
+{
+	phys_addr_t end = addr + size;
+	for (; addr < end; addr += PAGE_SIZE)
+		hyp_phys_to_page(addr)->host_state = state;
+}
+
 int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id)
 {
-	return host_stage2_try(kvm_pgtable_stage2_set_owner, &host_mmu.pgt,
-			       addr, size, &host_s2_pool, owner_id);
+	int ret;
+
+	ret = host_stage2_try(kvm_pgtable_stage2_set_owner, &host_mmu.pgt,
+			      addr, size, &host_s2_pool, owner_id);
+	if (ret || !addr_is_memory(addr))
+		return ret;
+
+	/* Don't forget to update the vmemmap tracking for the host */
+	if (owner_id == PKVM_ID_HOST)
+		__host_update_page_state(addr, size, PKVM_PAGE_OWNED);
+	else
+		__host_update_page_state(addr, size, PKVM_NOPAGE);
+
+	return 0;
 }

 static bool host_stage2_force_pte_cb(u64 addr, u64 end, enum kvm_pgtable_prot prot)
@@ -604,35 +635,38 @@ static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	return kvm_pgtable_walk(pgt, addr, size, &walker);
 }

-static enum pkvm_page_state host_get_page_state(kvm_pte_t pte, u64 addr)
-{
-	if (!addr_is_allowed_memory(addr))
-		return PKVM_NOPAGE;
-
-	if (!kvm_pte_valid(pte) && pte)
-		return PKVM_NOPAGE;
-
-	return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte));
-}
-
static int __host_check_page_state_range(u64 addr, u64 size, enum pkvm_page_state state) { - struct check_walk_data d =3D { - .desired =3D state, - .get_page_state =3D host_get_page_state, - }; + u64 end =3D addr + size; + int ret; + + ret =3D range_is_allowed_memory(addr, end); + if (ret) + return ret; =20 hyp_assert_lock_held(&host_mmu.lock); - return check_page_state_range(&host_mmu.pgt, addr, size, &d); + for (; addr < end; addr +=3D PAGE_SIZE) { + if (hyp_phys_to_page(addr)->host_state !=3D state) + return -EPERM; + } + + return 0; } =20 static int __host_set_page_state_range(u64 addr, u64 size, enum pkvm_page_state state) { - enum kvm_pgtable_prot prot =3D pkvm_mkstate(PKVM_HOST_MEM_PROT, state); + if (hyp_phys_to_page(addr)->host_state & PKVM_NOPAGE) { + int ret =3D host_stage2_idmap_locked(addr, size, PKVM_HOST_MEM_PROT); =20 - return host_stage2_idmap_locked(addr, size, prot); + if (ret) + return ret; + } + + __host_update_page_state(addr, size, state); + + return 0; } =20 static int host_request_owned_transition(u64 *completer_addr, diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setu= p.c index 174007f3fadd..c315710f57ad 100644 --- a/arch/arm64/kvm/hyp/nvhe/setup.c +++ b/arch/arm64/kvm/hyp/nvhe/setup.c @@ -198,7 +198,6 @@ static void hpool_put_page(void *addr) static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *c= tx, enum kvm_pgtable_walk_flags visit) { - enum kvm_pgtable_prot prot; enum pkvm_page_state state; phys_addr_t phys; =20 @@ -221,16 +220,16 @@ static int fix_host_ownership_walker(const struct kvm= _pgtable_visit_ctx *ctx, case PKVM_PAGE_OWNED: return host_stage2_set_owner_locked(phys, PAGE_SIZE, PKVM_ID_HYP); case PKVM_PAGE_SHARED_OWNED: - prot =3D pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_BORROWED); + hyp_phys_to_page(phys)->host_state =3D PKVM_PAGE_SHARED_BORROWED; break; case PKVM_PAGE_SHARED_BORROWED: - prot =3D pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_OWNED); + 
hyp_phys_to_page(phys)->host_state =3D PKVM_PAGE_SHARED_OWNED; break; default: return -EINVAL; } =20 - return host_stage2_idmap_locked(phys, PAGE_SIZE, prot); + return 0; } =20 static int fix_hyp_pgtable_refcnt_walker(const struct kvm_pgtable_visit_ct= x *ctx, --=20 2.47.0.163.g1226f6d8fa-goog From nobody Sun Nov 24 16:00:04 2024 Received: from mail-ej1-f73.google.com (mail-ej1-f73.google.com [209.85.218.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D89C312DD8A for ; Mon, 4 Nov 2024 13:32:21 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.218.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730727143; cv=none; b=NyzXsoUGcBH6GXMZMib87F/j70qZB/mSSIWv+gmfIKD2TQmKmvSO46qx5iWPNjZiYi2hm2gzSaub+QLkFJdPJj+H8Xrq/+YRWKPw6qtm/pBIncyeh2Rb7H3pzUX52BiQGFZ6RZyYAmvc42SJVZW4NR/Q7DeeGehX54/N4fXCwFg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730727143; c=relaxed/simple; bh=8Ka3uHmCTynLgF2f84rI4B+GMS/jz9B90udr8d7Wndo=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=VoPwzctTu2f40+ftT85eSpsXjWozM97E+nziHzuDjda3FA85PK81eYMULUmrZ+lJA50f6AObNTEnnCEPNLy/3j8XaEwmHrhXi6SvQTh1UDF+wPIyY6OgA32kWD2WQ1jAbq2CqfyVXSIY4vxfW9r65hJ5fN8T1uGoOY7b5znsy+A= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--qperret.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=OpDLhbYr; arc=none smtp.client-ip=209.85.218.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--qperret.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) 
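[Editor's note, not part of the patch: the core idea above — tracking the host's per-page ownership state in a vmemmap-like array of per-page metadata rather than in software bits of the stage-2 PTEs — can be sketched as a standalone model. This is illustrative C only, not kernel code; every name here (`update_page_state`, `check_page_state_range`, `vmemmap`, the state values) is a hypothetical stand-in for the kernel's `__host_update_page_state()`/`__host_check_page_state_range()`.]

```c
#include <stdint.h>

/* Toy geometry for the model. */
#define PAGE_SIZE	4096u
#define NR_PAGES	16u

enum page_state {
	PAGE_OWNED		= 0,
	PAGE_SHARED_OWNED	= 1,
	PAGE_SHARED_BORROWED	= 2,
	PAGE_NOPAGE		= 3,
};

/* Per-page metadata; 'state' mirrors the 'host_state : 8' field added
 * to struct hyp_page by the patch. */
struct page_meta {
	uint32_t refcount;
	uint8_t state;
};

static struct page_meta vmemmap[NR_PAGES];

/* Analogue of __host_update_page_state(): stamp a physical range.
 * Returns the number of pages touched, for easy checking. */
static unsigned int update_page_state(uint64_t addr, uint64_t size,
				      enum page_state state)
{
	uint64_t end = addr + size;
	unsigned int n = 0;

	for (; addr < end; addr += PAGE_SIZE, n++)
		vmemmap[addr / PAGE_SIZE].state = state;
	return n;
}

/* Analogue of __host_check_page_state_range(): every page in the range
 * must already be in 'state'; -1 plays the role of -EPERM. */
static int check_page_state_range(uint64_t addr, uint64_t size,
				  enum page_state state)
{
	uint64_t end = addr + size;

	for (; addr < end; addr += PAGE_SIZE) {
		if (vmemmap[addr / PAGE_SIZE].state != state)
			return -1;
	}
	return 0;
}
```

The point of the structure: a state check becomes a plain array walk over per-page metadata instead of a stage-2 page-table walk decoding PTE software bits.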
From nobody Sun Nov 24 16:00:04 2024
Date: Mon, 4 Nov 2024 13:31:51 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
Mime-Version: 1.0
References: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-6-qperret@google.com>
Subject: [PATCH 05/18] KVM: arm64: Pass walk flags to kvm_pgtable_stage2_mkyoung
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
    Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

kvm_pgtable_stage2_mkyoung currently assumes that it is being called
from a 'shared' walker, which will not be true once called from pKVM.
To allow for the re-use of that function, make the walk flags one of
its parameters.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_pgtable.h | 4 +++-
 arch/arm64/kvm/hyp/pgtable.c         | 7 +++----
 arch/arm64/kvm/mmu.c                 | 3 ++-
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 03f4c3d7839c..442a45d38e23 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -669,6 +669,7 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size);
  * kvm_pgtable_stage2_mkyoung() - Set the access flag in a page-table entry.
  * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:	Intermediate physical address to identify the page-table entry.
+ * @flags:	Flags to control the page-table walk (ex. a shared walk)
  *
  * The offset of @addr within a page is ignored.
  *
@@ -677,7 +678,8 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size);
  *
  * Return: The old page-table entry prior to setting the flag, 0 on failure.
  */
-kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr);
+kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr,
+				     enum kvm_pgtable_walk_flags flags);
 
 /**
  * kvm_pgtable_stage2_test_clear_young() - Test and optionally clear the access
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index b11bcebac908..fa25062f0590 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1245,15 +1245,14 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
 				NULL, NULL, 0);
 }
 
-kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr)
+kvm_pte_t kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr,
+				     enum kvm_pgtable_walk_flags flags)
 {
 	kvm_pte_t pte = 0;
 	int ret;
 
 	ret = stage2_update_leaf_attrs(pgt, addr, 1, KVM_PTE_LEAF_ATTR_LO_S2_AF, 0,
-				       &pte, NULL,
-				       KVM_PGTABLE_WALK_HANDLE_FAULT |
-				       KVM_PGTABLE_WALK_SHARED);
+				       &pte, NULL, flags);
 	if (!ret)
 		dsb(ishst);
 
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 0f7658aefa1a..27e1b281f402 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1708,6 +1708,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 /* Resolve the access fault by making the page young again. */
 static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 {
+	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
 	kvm_pte_t pte;
 	struct kvm_s2_mmu *mmu;
 
@@ -1715,7 +1716,7 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 
 	read_lock(&vcpu->kvm->mmu_lock);
 	mmu = vcpu->arch.hw_mmu;
-	pte = kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa);
+	pte = kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa, flags);
 	read_unlock(&vcpu->kvm->mmu_lock);
 
 	if (kvm_pte_valid(pte))
-- 
2.47.0.163.g1226f6d8fa-goog
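[Editor's note, not part of the patch: the refactoring pattern used here — hoisting hardcoded walk flags into a caller-supplied parameter so the same helper serves both shared and exclusive walks — can be shown in a tiny standalone C sketch. All names below (`stage2_mkyoung`, `update_leaf_attrs`, the flag values) are hypothetical stand-ins, not the kernel's API.]

```c
/* Hypothetical walk flags, modelled on KVM_PGTABLE_WALK_*. */
enum walk_flags {
	WALK_HANDLE_FAULT	= 1u << 0,
	WALK_SHARED		= 1u << 1,
};

/* Records what the inner walker was last invoked with, so the
 * behaviour of the wrapper is observable. */
static unsigned int last_flags;

static int update_leaf_attrs(unsigned int flags)
{
	last_flags = flags;
	return 0;
}

/* Before the patch, the flags were baked into the helper; after it,
 * the caller chooses them, so a pKVM caller that already holds the
 * lock exclusively can pass 0 instead of WALK_SHARED. */
static int stage2_mkyoung(unsigned int flags)
{
	return update_leaf_attrs(flags);
}
```

A host-side caller would pass `WALK_HANDLE_FAULT | WALK_SHARED` (the previously hardcoded combination); a caller that does not need a shared walk simply passes different flags.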
smtp.client-ip=209.85.219.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--qperret.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="VK9ovmFg" Received: by mail-yb1-f201.google.com with SMTP id 3f1490d57ef6-e28fc8902e6so7668938276.0 for ; Mon, 04 Nov 2024 05:32:23 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1730727143; x=1731331943; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=CVlLw2VsW1NEYgyhjgZsv6w0k6EmJDYLHCvZ5J3b7lU=; b=VK9ovmFgiR3Z1rRqss0UMcpJlZRu7PWmupbYcaPk08VkZt2CWs5mAL5Enb5CpI2olZ QYkGvGKdp0msC4QDn8DerjPLds/a6pZq1sP7BkoEkFKiIlTlK2+Fi3sboQxn9sZtQrrV miV/o6dD8hjvKI5el+Yql9VWYzps4VDBeAQ42erplN2dXaIjzV5mMl7l8UG0h/lYrB3M uu0ozfP/lcQbB2LF2VochgmamDIE1cxigsb5J6IP8FkfwB0f4AQ8l7/SKkQ3KBEp93kT LTtfGic2rQSOofPSVWTyfDlIpsNp9YEhm8kkpniVUT3WV0ackpjfgK9VG/OYH9jS58Na +n7Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1730727143; x=1731331943; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=CVlLw2VsW1NEYgyhjgZsv6w0k6EmJDYLHCvZ5J3b7lU=; b=qJ+PyWMTvlL2hXQL4DPCIQWmaolf4/xrk3xvhEJKnmmiOMtHb2zL58HgwHXIvCyBOr 8T4GUmMl6+lK5ygryvMtBvppYu/kuhduDMYDS43YiTaOnmWOux2qB+KlN1QybUs2NUhE VooI/XfRGqv5BGB4qZ0g24taQv3c77SMK9v3VSh5SAR9AMZZezTk3HPt/mnL5zCf0wLV Mv+wBL6QkydcIO8IW5yGms9VbvW3S+sVSFknRddBf9iBsZHJd2Q3q5PSncMHrrqoG8oY lYRfCQj6x0tOhYVbihPqiJ9qR4c4POLHpXAKBvlpGI86k+F+B6i9BNDD/cA1pk5/7+Qa ydgw== X-Forwarded-Encrypted: i=1; AJvYcCXI6qWFoTkEx0y1RBAFSC2GzkoQu7pLRIn80itT+PDltSeqrwRAuBIfxurGEXszHnD0SXmsMR1EfWnZz/k=@vger.kernel.org X-Gm-Message-State: 
From nobody Sun Nov 24 16:00:04 2024
Date: Mon, 4 Nov 2024 13:31:52 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
Mime-Version: 1.0
References: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-7-qperret@google.com>
Subject: [PATCH 06/18] KVM: arm64: Pass walk flags to kvm_pgtable_stage2_relax_perms
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
    Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

kvm_pgtable_stage2_relax_perms currently assumes that it is being
called from a 'shared' walker, which will not be true once called from
pKVM. To allow for the re-use of that function, make the walk flags
one of its parameters.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_pgtable.h | 4 +++-
 arch/arm64/kvm/hyp/pgtable.c         | 6 ++----
 arch/arm64/kvm/mmu.c                 | 7 +++----
 3 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 442a45d38e23..f52fa8158ce6 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -709,6 +709,7 @@ bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
  * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:	Intermediate physical address to identify the page-table entry.
  * @prot:	Additional permissions to grant for the mapping.
+ * @flags:	Flags to control the page-table walk (ex. a shared walk)
  *
  * The offset of @addr within a page is ignored.
  *
@@ -721,7 +722,8 @@ bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
  * Return: 0 on success, negative error code on failure.
  */
 int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
-				   enum kvm_pgtable_prot prot);
+				   enum kvm_pgtable_prot prot,
+				   enum kvm_pgtable_walk_flags flags);
 
 /**
  * kvm_pgtable_stage2_flush_range() - Clean and invalidate data cache to Point
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index fa25062f0590..ee060438dc77 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1310,7 +1310,7 @@ bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
 }
 
 int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
-				   enum kvm_pgtable_prot prot)
+				   enum kvm_pgtable_prot prot, enum kvm_pgtable_walk_flags flags)
 {
 	int ret;
 	s8 level;
@@ -1328,9 +1328,7 @@ int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
 	if (prot & KVM_PGTABLE_PROT_X)
 		clr |= KVM_PTE_LEAF_ATTR_HI_S2_XN;
 
-	ret = stage2_update_leaf_attrs(pgt, addr, 1, set, clr, NULL, &level,
-				       KVM_PGTABLE_WALK_HANDLE_FAULT |
-				       KVM_PGTABLE_WALK_SHARED);
+	ret = stage2_update_leaf_attrs(pgt, addr, 1, set, clr, NULL, &level, flags);
 	if (!ret || ret == -EAGAIN)
 		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa_nsh, pgt->mmu, addr, level);
 	return ret;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 27e1b281f402..80dd61038cc7 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1440,6 +1440,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	long vma_pagesize, fault_granule;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
+	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
 
 	if (fault_is_perm)
 		fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu);
@@ -1683,13 +1684,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		 * PTE, which will be preserved.
 		 */
 		prot &= ~KVM_NV_GUEST_MAP_SZ;
-		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot);
+		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot, flags);
 	} else {
 		ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
 					     __pfn_to_phys(pfn), prot,
-					     memcache,
-					     KVM_PGTABLE_WALK_HANDLE_FAULT |
-					     KVM_PGTABLE_WALK_SHARED);
+					     memcache, flags);
 	}
 
 out_unlock:
-- 
2.47.0.163.g1226f6d8fa-goog
From nobody Sun Nov 24 16:00:04 2024
Date: Mon, 4 Nov 2024 13:31:53 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
Mime-Version: 1.0
References: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-8-qperret@google.com>
Subject: [PATCH 07/18] KVM: arm64: Make kvm_pgtable_stage2_init() a static inline function
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
    Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

Turn kvm_pgtable_stage2_init() into a static inline function instead
of a macro. This will allow the usage of typeof() on it later on.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_pgtable.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index f52fa8158ce6..047e1c06ae4c 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -526,8 +526,11 @@ int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
 			      enum kvm_pgtable_stage2_flags flags,
 			      kvm_pgtable_force_pte_cb_t force_pte_cb);
 
-#define kvm_pgtable_stage2_init(pgt, mmu, mm_ops) \
-	__kvm_pgtable_stage2_init(pgt, mmu, mm_ops, 0, NULL)
+static inline int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
+					  struct kvm_pgtable_mm_ops *mm_ops)
+{
+	return __kvm_pgtable_stage2_init(pgt, mmu, mm_ops, 0, NULL);
+}
 
 /**
  * kvm_pgtable_stage2_destroy() - Destroy an unused guest stage-2 page-table.
-- 
2.47.0.163.g1226f6d8fa-goog
From nobody Sun Nov 24 16:00:04 2024
Date: Mon, 4 Nov 2024 13:31:54 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
Mime-Version: 1.0
References: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-9-qperret@google.com>
Subject: [PATCH 08/18] KVM: arm64: Introduce pkvm_vcpu_{load,put}()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
    Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

From: Marc Zyngier

Rather than looking up the hyp vCPU on every run hypercall at EL2,
introduce a per-CPU 'loaded_hyp_vcpu' tracking variable which is
updated by a pair of load/put hypercalls called directly from
kvm_arch_vcpu_{load,put}() when pKVM is enabled.

Signed-off-by: Marc Zyngier
---
 arch/arm64/include/asm/kvm_asm.h       |  2 ++
 arch/arm64/kvm/arm.c                   | 14 ++++++++
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h |  7 ++++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c     | 47 ++++++++++++++++++++------
 arch/arm64/kvm/hyp/nvhe/pkvm.c         | 28 +++++++++++++++
 arch/arm64/kvm/vgic/vgic-v3.c          |  6 ++--
 6 files changed, 92 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 67afac659231..a1c6dbec1871 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -80,6 +80,8 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_init_vm,
 	__KVM_HOST_SMCCC_FUNC___pkvm_init_vcpu,
 	__KVM_HOST_SMCCC_FUNC___pkvm_teardown_vm,
+	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_load,
+	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_put,
 };
 
 #define DECLARE_KVM_VHE_SYM(sym)	extern char sym[]
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 48cafb65d6ac..2bf168b17a77 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -623,12 +623,26 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 
 	kvm_arch_vcpu_load_debug_state_flags(vcpu);
 
+	if (is_protected_kvm_enabled()) {
+		kvm_call_hyp_nvhe(__pkvm_vcpu_load,
+				  vcpu->kvm->arch.pkvm.handle,
+				  vcpu->vcpu_idx, vcpu->arch.hcr_el2);
+		kvm_call_hyp(__vgic_v3_restore_vmcr_aprs,
+			     &vcpu->arch.vgic_cpu.vgic_v3);
+	}
+
 	if (!cpumask_test_cpu(cpu, vcpu->kvm->arch.supported_cpus))
 		vcpu_set_on_unsupported_cpu(vcpu);
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
+	if (is_protected_kvm_enabled()) {
+		kvm_call_hyp(__vgic_v3_save_vmcr_aprs,
+			     &vcpu->arch.vgic_cpu.vgic_v3);
+		kvm_call_hyp_nvhe(__pkvm_vcpu_put);
+	}
+
 	kvm_arch_vcpu_put_debug_state_flags(vcpu);
 	kvm_arch_vcpu_put_fp(vcpu);
 	if (has_vhe())
diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index 24a9a8330d19..6940eb171a52 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -20,6 +20,12 @@ struct pkvm_hyp_vcpu {
 
 	/* Backpointer to the host's (untrusted) vCPU instance. */
 	struct kvm_vcpu *host_vcpu;
+
+	/*
+	 * If this hyp vCPU is loaded, then this is a backpointer to the
+	 * per-cpu pointer tracking us. Otherwise, NULL if not loaded.
+	 */
+	struct pkvm_hyp_vcpu **loaded_hyp_vcpu;
 };
 
 /*
@@ -69,5 +75,6 @@ int __pkvm_teardown_vm(pkvm_handle_t handle);
 struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
 					 unsigned int vcpu_idx);
 void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu);
+struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void);
 
 #endif /* __ARM64_KVM_NVHE_PKVM_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index fefc89209f9e..6bcdba4fdc76 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -139,16 +139,46 @@ static void sync_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 		host_cpu_if->vgic_lr[i] = hyp_cpu_if->vgic_lr[i];
 }
 
+static void handle___pkvm_vcpu_load(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+	DECLARE_REG(unsigned int, vcpu_idx, host_ctxt, 2);
+	DECLARE_REG(u64, hcr_el2, host_ctxt, 3);
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+
+	if (!is_protected_kvm_enabled())
+		return;
+
+	hyp_vcpu = pkvm_load_hyp_vcpu(handle, vcpu_idx);
+	if (!hyp_vcpu)
+		return;
+
+	if (pkvm_hyp_vcpu_is_protected(hyp_vcpu)) {
+		/* Propagate WFx trapping flags */
+		hyp_vcpu->vcpu.arch.hcr_el2 &= ~(HCR_TWE | HCR_TWI);
+		hyp_vcpu->vcpu.arch.hcr_el2 |= hcr_el2 & (HCR_TWE | HCR_TWI);
+	}
+}
+
+static void handle___pkvm_vcpu_put(struct kvm_cpu_context *host_ctxt)
+{
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+
+	if (!is_protected_kvm_enabled())
+		return;
+
+	hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
+	if (hyp_vcpu)
+		pkvm_put_hyp_vcpu(hyp_vcpu);
+}
+
 static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, host_vcpu, host_ctxt, 1);
 	int ret;
 
-	host_vcpu = kern_hyp_va(host_vcpu);
-
 	if (unlikely(is_protected_kvm_enabled())) {
-		struct pkvm_hyp_vcpu *hyp_vcpu;
-		struct kvm *host_kvm;
+		struct pkvm_hyp_vcpu *hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
 
 		/*
 		 * KVM (and pKVM) doesn't support SME guests for now, and
@@ -161,9 +191,6 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 			goto out;
 		}
 
-		host_kvm = kern_hyp_va(host_vcpu->kvm);
-		hyp_vcpu = pkvm_load_hyp_vcpu(host_kvm->arch.pkvm.handle,
-					      host_vcpu->vcpu_idx);
 		if (!hyp_vcpu) {
 			ret = -EINVAL;
 			goto out;
@@ -174,12 +201,10 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 		ret = __kvm_vcpu_run(&hyp_vcpu->vcpu);
 
 		sync_hyp_vcpu(hyp_vcpu);
-		pkvm_put_hyp_vcpu(hyp_vcpu);
 	} else {
 		/* The host is fully trusted, run its vCPU directly. */
-		ret = __kvm_vcpu_run(host_vcpu);
+		ret = __kvm_vcpu_run(kern_hyp_va(host_vcpu));
 	}
-
 out:
 	cpu_reg(host_ctxt, 1) = ret;
 }
@@ -415,6 +440,8 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_init_vm),
 	HANDLE_FUNC(__pkvm_init_vcpu),
 	HANDLE_FUNC(__pkvm_teardown_vm),
+	HANDLE_FUNC(__pkvm_vcpu_load),
+	HANDLE_FUNC(__pkvm_vcpu_put),
 };
 
 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 077d4098548d..9ed2b8a63371 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -20,6 +20,12 @@ unsigned int kvm_arm_vmid_bits;
 
 unsigned int kvm_host_sve_max_vl;
 
+/*
+ * The currently loaded hyp vCPU for each physical CPU. Used only when
+ * protected KVM is enabled, but for both protected and non-protected VMs.
+ */
+static DEFINE_PER_CPU(struct pkvm_hyp_vcpu *, loaded_hyp_vcpu);
+
 /*
  * Set trap register values based on features in ID_AA64PFR0.
  */
@@ -268,15 +274,30 @@ struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
 	struct pkvm_hyp_vcpu *hyp_vcpu = NULL;
 	struct pkvm_hyp_vm *hyp_vm;
 
+	/* Cannot load a new vcpu without putting the old one first. */
+	if (__this_cpu_read(loaded_hyp_vcpu))
+		return NULL;
+
 	hyp_spin_lock(&vm_table_lock);
 	hyp_vm = get_vm_by_handle(handle);
 	if (!hyp_vm || hyp_vm->nr_vcpus <= vcpu_idx)
 		goto unlock;
 
 	hyp_vcpu = hyp_vm->vcpus[vcpu_idx];
+
+	/* Ensure vcpu isn't loaded on more than one cpu simultaneously. */
+	if (unlikely(hyp_vcpu->loaded_hyp_vcpu)) {
+		hyp_vcpu = NULL;
+		goto unlock;
+	}
+
+	hyp_vcpu->loaded_hyp_vcpu = this_cpu_ptr(&loaded_hyp_vcpu);
 	hyp_page_ref_inc(hyp_virt_to_page(hyp_vm));
 unlock:
 	hyp_spin_unlock(&vm_table_lock);
+
+	if (hyp_vcpu)
+		__this_cpu_write(loaded_hyp_vcpu, hyp_vcpu);
 	return hyp_vcpu;
 }
 
@@ -285,10 +306,17 @@ void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 	struct pkvm_hyp_vm *hyp_vm = pkvm_hyp_vcpu_to_hyp_vm(hyp_vcpu);
 
 	hyp_spin_lock(&vm_table_lock);
+	hyp_vcpu->loaded_hyp_vcpu = NULL;
+	__this_cpu_write(loaded_hyp_vcpu, NULL);
 	hyp_page_ref_dec(hyp_virt_to_page(hyp_vm));
 	hyp_spin_unlock(&vm_table_lock);
 }
 
+struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void)
+{
+	return __this_cpu_read(loaded_hyp_vcpu);
+}
+
 static void unpin_host_vcpu(struct kvm_vcpu *host_vcpu)
 {
 	if (host_vcpu)
diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
index b217b256853c..e43a8bb3e6b0 100644
--- a/arch/arm64/kvm/vgic/vgic-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-v3.c
@@ -734,7 +734,8 @@ void vgic_v3_load(struct kvm_vcpu *vcpu)
 {
 	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
 
-	kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, cpu_if);
+	if (likely(!is_protected_kvm_enabled()))
+		kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, cpu_if);
 
 	if (has_vhe())
 		__vgic_v3_activate_traps(cpu_if);
@@ -746,7 +747,8 @@ void vgic_v3_put(struct kvm_vcpu *vcpu)
 {
 	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
 
-	kvm_call_hyp(__vgic_v3_save_vmcr_aprs, cpu_if);
+	if (likely(!is_protected_kvm_enabled()))
+		kvm_call_hyp(__vgic_v3_save_vmcr_aprs, cpu_if);
 	WARN_ON(vgic_v4_put(vcpu));
 
 	if (has_vhe())
-- 
2.47.0.163.g1226f6d8fa-goog

From nobody Sun Nov 24 16:00:04 2024
Date: Mon, 4 Nov 2024 13:31:55 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20241104133204.85208-1-qperret@google.com>
X-Mailer: git-send-email 2.47.0.163.g1226f6d8fa-goog
Message-ID: <20241104133204.85208-10-qperret@google.com>
Subject: [PATCH 09/18] KVM: arm64: Introduce {get,put}_pkvm_hyp_vm() helpers
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
    Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

In preparation for accessing pkvm_hyp_vm structures at EL2 in a context
where we can't always expect a vCPU to be loaded (e.g. MMU notifiers),
introduce get/put helpers to get temporary references to hyp VMs from
any context.

Signed-off-by: Quentin Perret
---
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h |  3 +++
 arch/arm64/kvm/hyp/nvhe/pkvm.c         | 20 ++++++++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index 6940eb171a52..be52c5b15e21 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -77,4 +77,7 @@ struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
 void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu);
 struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void);
 
+struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle);
+void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm);
+
 #endif /* __ARM64_KVM_NVHE_PKVM_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 9ed2b8a63371..d242da1ec56a 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -317,6 +317,26 @@ struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void)
 	return __this_cpu_read(loaded_hyp_vcpu);
 }
 
+struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle)
+{
+	struct pkvm_hyp_vm *hyp_vm;
+
+	hyp_spin_lock(&vm_table_lock);
+	hyp_vm = get_vm_by_handle(handle);
+	if (hyp_vm)
+		hyp_page_ref_inc(hyp_virt_to_page(hyp_vm));
+	hyp_spin_unlock(&vm_table_lock);
+
+	return hyp_vm;
+}
+
+void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm)
+{
+	hyp_spin_lock(&vm_table_lock);
+	hyp_page_ref_dec(hyp_virt_to_page(hyp_vm));
+	hyp_spin_unlock(&vm_table_lock);
+}
+
 static void unpin_host_vcpu(struct kvm_vcpu *host_vcpu)
 {
 	if (host_vcpu)
-- 
2.47.0.163.g1226f6d8fa-goog

From nobody Sun Nov 24 16:00:04 2024
Date: Mon, 4 Nov 2024 13:31:56 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20241104133204.85208-1-qperret@google.com>
X-Mailer: git-send-email 2.47.0.163.g1226f6d8fa-goog
Message-ID: <20241104133204.85208-11-qperret@google.com>
Subject: [PATCH 10/18] KVM: arm64: Introduce __pkvm_host_share_guest()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
    Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

In preparation for handling guest stage-2 mappings at EL2, introduce a
new pKVM hypercall allowing the host to share pages with non-protected
guests.
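The sharing rules this hypercall enforces can be read as a small per-page state machine: an owned page becomes shared-owned on the first share, and further shares are only tolerated while a non-zero share count proves the page is already shared with non-protected guests. The following standalone C sketch models those rules; the struct layout, function name and `-1` error return are simplified stand-ins for the hypervisor's `hyp_page` accounting, not the patch code itself:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, simplified model of the page states checked by the
 * host-share-guest transition; illustration only. */
enum pkvm_page_state {
	PKVM_PAGE_OWNED,
	PKVM_PAGE_SHARED_OWNED,
	PKVM_NOPAGE,
};

struct hyp_page {
	enum pkvm_page_state host_state;
	uint32_t host_share_guest_count;
};

/* Returns 0 on success, -1 (standing in for -EPERM) otherwise. */
static int host_share_guest(struct hyp_page *page)
{
	switch (page->host_state) {
	case PKVM_PAGE_OWNED:
		/* First share: host keeps ownership, page becomes shared. */
		page->host_state = PKVM_PAGE_SHARED_OWNED;
		break;
	case PKVM_PAGE_SHARED_OWNED:
		/* Only host -> np-guest multi-sharing is tolerated. */
		if (!page->host_share_guest_count)
			return -1;
		break;
	default:
		return -1;
	}
	page->host_share_guest_count++;
	return 0;
}
```

The count is what later allows the host-unshare path to know when the last guest mapping is gone and the page can revert to exclusively owned.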
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/include/asm/kvm_host.h             |  3 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/include/nvhe/memory.h      |  2 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 34 +++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 70 +++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/pkvm.c                |  7 ++
 7 files changed, 118 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index a1c6dbec1871..b69390108c5a 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -65,6 +65,7 @@ enum __kvm_host_smccc_func {
 	/* Hypercalls available after pKVM finalisation */
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index bf64fed9820e..4b02904ec7c0 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -762,6 +762,9 @@ struct kvm_vcpu_arch {
 	/* Cache some mmu pages needed inside spinlock regions */
 	struct kvm_mmu_memory_cache mmu_page_cache;
 
+	/* Pages to be donated to pkvm/EL2 if it runs out */
+	struct kvm_hyp_memcache pkvm_memcache;
+
 	/* Virtual SError ESR to restore when HCR_EL2.VSE is set */
 	u64 vsesr_el2;
 
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 25038ac705d8..a7976e50f556 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -39,6 +39,7 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages);
 int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
+int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
 
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 08f3a0416d4c..457318215155 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -47,6 +47,8 @@ struct hyp_page {
 
 	/* Host (non-meta) state. Guarded by the host stage-2 lock. */
 	enum pkvm_page_state host_state : 8;
+
+	u32 host_share_guest_count;
 };
 
 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 6bcdba4fdc76..32bdf6b27958 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -209,6 +209,39 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static int pkvm_refill_memcache(struct pkvm_hyp_vcpu *hyp_vcpu)
+{
+	struct kvm_vcpu *host_vcpu = hyp_vcpu->host_vcpu;
+
+	return refill_memcache(&hyp_vcpu->vcpu.arch.pkvm_memcache,
+			       host_vcpu->arch.pkvm_memcache.nr_pages,
+			       &host_vcpu->arch.pkvm_memcache);
+}
+
+static void handle___pkvm_host_share_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(u64, pfn, host_ctxt, 1);
+	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 3);
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
+	if (!hyp_vcpu || pkvm_hyp_vcpu_is_protected(hyp_vcpu))
+		goto out;
+
+	ret = pkvm_refill_memcache(hyp_vcpu);
+	if (ret)
+		goto out;
+
+	ret = __pkvm_host_share_guest(pfn, gfn, hyp_vcpu, prot);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -425,6 +458,7 @@ static const hcall_t host_hcall[] = {
 
 	HANDLE_FUNC(__pkvm_host_share_hyp),
 	HANDLE_FUNC(__pkvm_host_unshare_hyp),
+	HANDLE_FUNC(__pkvm_host_share_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 1595081c4f6b..a69d7212b64c 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -861,6 +861,27 @@ static int hyp_complete_donation(u64 addr,
 	return pkvm_create_mappings_locked(start, end, prot);
 }
 
+static enum pkvm_page_state guest_get_page_state(kvm_pte_t pte, u64 addr)
+{
+	if (!kvm_pte_valid(pte))
+		return PKVM_NOPAGE;
+
+	return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte));
+}
+
+static int __guest_check_page_state_range(struct pkvm_hyp_vcpu *vcpu, u64 addr,
+					  u64 size, enum pkvm_page_state state)
+{
+	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
+	struct check_walk_data d = {
+		.desired	= state,
+		.get_page_state	= guest_get_page_state,
+	};
+
+	hyp_assert_lock_held(&vm->lock);
+	return check_page_state_range(&vm->pgt, addr, size, &d);
+}
+
 static int check_share(struct pkvm_mem_share *share)
 {
 	const struct pkvm_mem_transition *tx = &share->tx;
@@ -1343,3 +1364,52 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages)
 
 	return ret;
 }
+
+int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu,
+			    enum kvm_pgtable_prot prot)
+{
+	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
+	u64 phys = hyp_pfn_to_phys(pfn);
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	struct hyp_page *page;
+	int ret;
+
+	if (prot & ~KVM_PGTABLE_PROT_RWX)
+		return -EINVAL;
+
+	ret = range_is_allowed_memory(phys, phys + PAGE_SIZE);
+	if (ret)
+		return ret;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = __guest_check_page_state_range(vcpu, ipa, PAGE_SIZE, PKVM_NOPAGE);
+	if (ret)
+		goto unlock;
+
+	page = hyp_phys_to_page(phys);
+	switch (page->host_state) {
+	case PKVM_PAGE_OWNED:
+		WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_SHARED_OWNED));
+		break;
+	case PKVM_PAGE_SHARED_OWNED:
+		/* Only host to np-guest multi-sharing is tolerated */
+		WARN_ON(!page->host_share_guest_count);
+		break;
+	default:
+		ret = -EPERM;
+		goto unlock;
+	}
+
+	WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, PAGE_SIZE, phys,
+				       pkvm_mkstate(prot, PKVM_PAGE_SHARED_BORROWED),
+				       &vcpu->vcpu.arch.pkvm_memcache, 0));
+	page->host_share_guest_count++;
+
+unlock:
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index d242da1ec56a..bdcfcc20cf66 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -680,6 +680,13 @@ int __pkvm_teardown_vm(pkvm_handle_t handle)
 	/* Push the metadata pages to the teardown memcache */
 	for (idx = 0; idx < hyp_vm->nr_vcpus; ++idx) {
 		struct pkvm_hyp_vcpu *hyp_vcpu = hyp_vm->vcpus[idx];
+		struct kvm_hyp_memcache *vcpu_mc = &hyp_vcpu->vcpu.arch.pkvm_memcache;
+
+		while (vcpu_mc->nr_pages) {
+			void *addr = pop_hyp_memcache(vcpu_mc, hyp_phys_to_virt);
+			push_hyp_memcache(mc, addr, hyp_virt_to_phys);
+			unmap_donated_memory_noclear(addr, PAGE_SIZE);
+		}
 
 		teardown_donated_memory(mc, hyp_vcpu, sizeof(*hyp_vcpu));
 	}
-- 
2.47.0.163.g1226f6d8fa-goog

From nobody Sun Nov 24 16:00:04 2024
Date: Mon, 4 Nov 2024 13:31:57 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20241104133204.85208-1-qperret@google.com>
X-Mailer: git-send-email 2.47.0.163.g1226f6d8fa-goog
Message-ID: <20241104133204.85208-12-qperret@google.com>
Subject: [PATCH 11/18] KVM: arm64: Introduce __pkvm_host_unshare_guest()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
    Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

In preparation for letting the host unmap pages from non-protected
guests, introduce a new hypercall implementing the host-unshare-guest
transition.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h        |  5 ++
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 24 ++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 78 +++++++++++++++++++
 5 files changed, 109 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index b69390108c5a..e67efee936b6 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -66,6 +66,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index a7976e50f556..e528a42ed60e 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -40,6 +40,7 @@ int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
+int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index be52c5b15e21..5dfc9ece9aa5 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -64,6 +64,11 @@ static inline bool pkvm_hyp_vcpu_is_protected(struct pkvm_hyp_vcpu *hyp_vcpu)
 	return vcpu_is_protected(&hyp_vcpu->vcpu);
 }
 
+static inline bool pkvm_hyp_vm_is_protected(struct pkvm_hyp_vm *hyp_vm)
+{
+	return kvm_vm_is_protected(&hyp_vm->kvm);
+}
+
 void pkvm_hyp_vm_table_init(void *tbl);
 
 int __pkvm_init_vm(struct kvm *host_kvm, unsigned long vm_hva,
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 32bdf6b27958..68bbef69d99a 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -242,6 +242,29 @@ static void handle___pkvm_host_share_guest(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	struct pkvm_hyp_vm *hyp_vm;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vm = get_pkvm_hyp_vm(handle);
+	if (!hyp_vm)
+		goto out;
+	if (pkvm_hyp_vm_is_protected(hyp_vm))
+		goto put_hyp_vm;
+
+	ret = __pkvm_host_unshare_guest(gfn, hyp_vm);
+put_hyp_vm:
+	put_pkvm_hyp_vm(hyp_vm);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -459,6 +482,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_share_hyp),
 	HANDLE_FUNC(__pkvm_host_unshare_hyp),
 	HANDLE_FUNC(__pkvm_host_share_guest),
+	HANDLE_FUNC(__pkvm_host_unshare_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index a69d7212b64c..f7476a29e1a9 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1413,3 +1413,81 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu,
 
 	return ret;
 }
+
+static int guest_get_valid_pte(struct pkvm_hyp_vm *vm, u64 *phys, u64 ipa, u8 order, kvm_pte_t *pte)
+{
+	size_t size = PAGE_SIZE << order;
+	s8 level;
+
+	if (order && size != PMD_SIZE)
+		return -EINVAL;
+
+	WARN_ON(kvm_pgtable_get_leaf(&vm->pgt, ipa, pte, &level));
+
+	if (kvm_granule_size(level) != size)
+		return -E2BIG;
+
+	if (!kvm_pte_valid(*pte))
+		return -ENOENT;
+
+	*phys = kvm_pte_to_phys(*pte);
+
+	return 0;
+}
+
+static int __check_host_unshare_guest(struct pkvm_hyp_vm *vm, u64 *phys, u64 ipa)
+{
+	enum pkvm_page_state state;
+	struct hyp_page *page;
+	kvm_pte_t pte;
+	int ret;
+
+	ret = guest_get_valid_pte(vm, phys, ipa, 0, &pte);
+	if (ret)
+		return ret;
+
+	state = guest_get_page_state(pte, ipa);
+	if (state != PKVM_PAGE_SHARED_BORROWED)
+		return -EPERM;
+
+	ret = range_is_allowed_memory(*phys, *phys + PAGE_SIZE);
+	if (ret)
+		return ret;
+
+	page = hyp_phys_to_page(*phys);
+	if (page->host_state != PKVM_PAGE_SHARED_OWNED)
+		return -EPERM;
+	WARN_ON(!page->host_share_guest_count);
+
+	return 0;
+}
+
+int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm)
+{
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	struct hyp_page *page;
+	u64 phys;
+	int ret;
+
+	host_lock_component();
+	guest_lock_component(hyp_vm);
+
+	ret = __check_host_unshare_guest(hyp_vm, &phys, ipa);
+	if (ret)
+		goto unlock;
+
+	ret = kvm_pgtable_stage2_unmap(&hyp_vm->pgt, ipa, PAGE_SIZE);
+	if (ret)
+		goto unlock;
+
+	page = hyp_phys_to_page(phys);
+	page->host_share_guest_count--;
+	if (!page->host_share_guest_count)
+		WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_OWNED));
+
+unlock:
+	guest_unlock_component(hyp_vm);
+	host_unlock_component();
+
+	return ret;
+}
-- 
2.47.0.163.g1226f6d8fa-goog

From nobody Sun Nov 24 16:00:04 2024
Date: Mon, 4 Nov 2024 13:31:58 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-13-qperret@google.com>
Subject: [PATCH 12/18] KVM: arm64: Introduce __pkvm_host_relax_guest_perms()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org

Introduce a new hypercall allowing the host to relax the stage-2
permissions of mappings in a non-protected guest page-table. It will be
used later once we start allowing RO memslots and dirty logging.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 20 +++++++++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 25 +++++++++++++++++++
 4 files changed, 47 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index e67efee936b6..f528656e8359 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -67,6 +67,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_relax_guest_perms,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index e528a42ed60e..db0dd83c2457 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -41,6 +41,7 @@ int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
+int __pkvm_host_relax_guest_perms(u64 gfn, enum kvm_pgtable_prot prot, struct pkvm_hyp_vcpu *vcpu);
 
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 68bbef69d99a..d3210719e247 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -265,6 +265,25 @@ static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt)
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static void handle___pkvm_host_relax_guest_perms(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(u64, gfn, host_ctxt, 1);
+	DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 2);
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
+	if (!hyp_vcpu || pkvm_hyp_vcpu_is_protected(hyp_vcpu))
+		goto out;
+
+	ret = __pkvm_host_relax_guest_perms(gfn, prot, hyp_vcpu);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -483,6 +502,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_unshare_hyp),
 	HANDLE_FUNC(__pkvm_host_share_guest),
 	HANDLE_FUNC(__pkvm_host_unshare_guest),
+	HANDLE_FUNC(__pkvm_host_relax_guest_perms),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index f7476a29e1a9..fc6050dcf904 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1491,3 +1491,28 @@ int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm)
 
 	return ret;
 }
+
+int __pkvm_host_relax_guest_perms(u64 gfn, enum kvm_pgtable_prot prot, struct pkvm_hyp_vcpu *vcpu)
+{
+	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 phys;
+	int ret;
+
+	if ((prot & KVM_PGTABLE_PROT_RWX) != prot)
+		return -EPERM;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = __check_host_unshare_guest(vm, &phys, ipa);
+	if (ret)
+		goto unlock;
+
+	ret = kvm_pgtable_stage2_relax_perms(&vm->pgt, ipa, prot, 0);
+unlock:
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}
-- 
2.47.0.163.g1226f6d8fa-goog

From nobody Sun Nov 24 16:00:04 2024
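The -EPERM guard at the top of __pkvm_host_relax_guest_perms() is the whole policy in this path: any permission bit outside the R/W/X set is refused before the page-table is even walked. A stand-alone sketch of that mask test, using made-up bit values rather than the kernel's actual `enum kvm_pgtable_prot` encoding:

```c
#include <assert.h>

/* Illustrative bit values; the real enum kvm_pgtable_prot differs. */
#define FAKE_PROT_R	(1u << 0)
#define FAKE_PROT_W	(1u << 1)
#define FAKE_PROT_X	(1u << 2)
#define FAKE_PROT_RWX	(FAKE_PROT_R | FAKE_PROT_W | FAKE_PROT_X)
#define FAKE_PROT_DEV	(1u << 3)	/* anything outside RWX must be refused */

/* Same shape as the guard in __pkvm_host_relax_guest_perms(). */
static int check_relax_prot(unsigned int prot)
{
	/* Reject if any bit of prot falls outside the RWX mask. */
	if ((prot & FAKE_PROT_RWX) != prot)
		return -1;	/* the kernel returns -EPERM here */
	return 0;
}
```

Note the test is "prot is a subset of RWX", not "prot is non-zero": relaxing to a pure-read or even empty permission set passes the guard, and it is the page-table code that decides what that means.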
Date: Mon, 4 Nov 2024 13:31:59 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-14-qperret@google.com>
Subject: [PATCH 13/18] KVM: arm64: Introduce __pkvm_host_wrprotect_guest()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org

Introduce a new hypercall to remove the write permission from a
non-protected guest stage-2 mapping. This will be used for e.g.
enabling dirty logging.
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 24 +++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 21 ++++++++++++++++
 4 files changed, 47 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index f528656e8359..3f1f0760c375 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -68,6 +68,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_relax_guest_perms,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_wrprotect_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index db0dd83c2457..8658b5932473 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -42,6 +42,7 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_guest_perms(u64 gfn, enum kvm_pgtable_prot prot, struct pkvm_hyp_vcpu *vcpu);
+int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index d3210719e247..ce33079072c0 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -284,6 +284,29 @@ static void handle___pkvm_host_relax_guest_perms(struct kvm_cpu_context *host_ct
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	struct pkvm_hyp_vm *hyp_vm;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vm = get_pkvm_hyp_vm(handle);
+	if (!hyp_vm)
+		goto out;
+	if (pkvm_hyp_vm_is_protected(hyp_vm))
+		goto put_hyp_vm;
+
+	ret = __pkvm_host_wrprotect_guest(gfn, hyp_vm);
+put_hyp_vm:
+	put_pkvm_hyp_vm(hyp_vm);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -503,6 +526,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_share_guest),
 	HANDLE_FUNC(__pkvm_host_unshare_guest),
 	HANDLE_FUNC(__pkvm_host_relax_guest_perms),
+	HANDLE_FUNC(__pkvm_host_wrprotect_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index fc6050dcf904..3a8751175fd5 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1516,3 +1516,24 @@ int __pkvm_host_relax_guest_perms(u64 gfn, enum kvm_pgtable_prot prot, struct pk
 
 	return ret;
 }
+
+int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm)
+{
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 phys;
+	int ret;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = __check_host_unshare_guest(vm, &phys, ipa);
+	if (ret)
+		goto unlock;
+
+	ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, PAGE_SIZE);
+unlock:
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}
-- 
2.47.0.163.g1226f6d8fa-goog

From nobody Sun Nov 24 16:00:04 2024
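All of these handlers follow one calling convention: arguments arrive in the host context's general-purpose registers (read via DECLARE_REG) and the result is written back into register 1 before returning to the host. A toy stand-in for that table-driven dispatch, with a made-up context struct instead of the real `kvm_cpu_context`:

```c
#include <assert.h>

/* Toy stand-in for kvm_cpu_context: a flat array of GP registers. */
struct toy_ctxt {
	unsigned long regs[8];
};

#define toy_cpu_reg(ctxt, r)	((ctxt)->regs[r])

typedef void (*toy_hcall_t)(struct toy_ctxt *);

/* A handler in the style of handle___pkvm_host_wrprotect_guest(). */
static void handle_add(struct toy_ctxt *ctxt)
{
	unsigned long a = toy_cpu_reg(ctxt, 1);	/* like DECLARE_REG(..., 1) */
	unsigned long b = toy_cpu_reg(ctxt, 2);	/* like DECLARE_REG(..., 2) */

	toy_cpu_reg(ctxt, 1) = a + b;	/* return value goes back in reg 1 */
}

static const toy_hcall_t toy_host_hcall[] = { handle_add };

/* Dispatch by function id, then read the result out of register 1. */
static unsigned long toy_do_hcall(struct toy_ctxt *ctxt, unsigned int id)
{
	toy_host_hcall[id](ctxt);
	return toy_cpu_reg(ctxt, 1);
}
```

Because register 1 is overwritten in place, the first argument and the return value share a slot, which is why every real handler ends with the same `cpu_reg(host_ctxt, 1) = ret;` line.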
Date: Mon, 4 Nov 2024 13:32:00 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-15-qperret@google.com>
Subject: [PATCH 14/18] KVM: arm64: Introduce __pkvm_host_test_clear_young_guest()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org

Plumb the kvm_stage2_test_clear_young() callback into pKVM for
non-protected guests. It will later be called from MMU notifiers.

Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 25 +++++++++++++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 21 ++++++++++++++++
 4 files changed, 48 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 3f1f0760c375..acb36762e15f 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -69,6 +69,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_relax_guest_perms,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_wrprotect_guest,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_test_clear_young_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 8658b5932473..554ce31882e6 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -43,6 +43,7 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum k
 int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_guest_perms(u64 gfn, enum kvm_pgtable_prot prot, struct pkvm_hyp_vcpu *vcpu);
 int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
+int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
 
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index ce33079072c0..21c8a5e74d14 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -307,6 +307,30 @@ static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	DECLARE_REG(bool, mkold, host_ctxt, 3);
+	struct pkvm_hyp_vm *hyp_vm;
+	int ret = -EINVAL;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vm = get_pkvm_hyp_vm(handle);
+	if (!hyp_vm)
+		goto out;
+	if (pkvm_hyp_vm_is_protected(hyp_vm))
+		goto put_hyp_vm;
+
+	ret = __pkvm_host_test_clear_young_guest(gfn, mkold, hyp_vm);
+put_hyp_vm:
+	put_pkvm_hyp_vm(hyp_vm);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -527,6 +551,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_unshare_guest),
 	HANDLE_FUNC(__pkvm_host_relax_guest_perms),
 	HANDLE_FUNC(__pkvm_host_wrprotect_guest),
+	HANDLE_FUNC(__pkvm_host_test_clear_young_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 3a8751175fd5..7c2aca459deb 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1537,3 +1537,24 @@ int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm)
 
 	return ret;
 }
+
+int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm)
+{
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 phys;
+	int ret;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = __check_host_unshare_guest(vm, &phys, ipa);
+	if (ret)
+		goto unlock;
+
+	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold);
+unlock:
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return ret;
+}
-- 
2.47.0.163.g1226f6d8fa-goog

From nobody Sun Nov 24 16:00:04 2024
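The semantics behind __pkvm_host_test_clear_young_guest() are "report whether the page was accessed, and age it only when the caller asks". A stand-alone sketch of that test-and-optionally-clear walk on a single fake PTE, with an illustrative accessed-flag bit position rather than the architectural AF bit:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative accessed-flag bit; not the architectural AF position. */
#define FAKE_PTE_AF	(1ull << 10)

/*
 * Shape of a test-and-optionally-clear "young" operation on one fake
 * PTE: report whether the page was accessed since the last check, and
 * clear the accessed flag (age the page) only when mkold is set.
 */
static bool fake_test_clear_young(uint64_t *pte, bool mkold)
{
	bool young = *pte & FAKE_PTE_AF;

	if (young && mkold)
		*pte &= ~FAKE_PTE_AF;
	return young;
}
```

With mkold false the call is a pure query; with mkold true it also resets the flag, so the next query reports the page as old until a new access sets the flag again.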
Date: Mon, 4 Nov 2024 13:32:01 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-16-qperret@google.com>
Subject: [PATCH 15/18] KVM: arm64: Introduce __pkvm_host_mkyoung_guest()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org

Plumb the kvm_pgtable_stage2_mkyoung() callback into pKVM for
non-protected guests. It will be called later from the fault handling
path.
Signed-off-by: Quentin Perret
---
 arch/arm64/include/asm/kvm_asm.h              |  1 +
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c            | 19 +++++++++++++++
 arch/arm64/kvm/hyp/nvhe/mem_protect.c         | 24 +++++++++++++++++++
 4 files changed, 45 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index acb36762e15f..4b93fb3a9a96 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -70,6 +70,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_relax_guest_perms,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_wrprotect_guest,
 	__KVM_HOST_SMCCC_FUNC___pkvm_host_test_clear_young_guest,
+	__KVM_HOST_SMCCC_FUNC___pkvm_host_mkyoung_guest,
 	__KVM_HOST_SMCCC_FUNC___kvm_adjust_pc,
 	__KVM_HOST_SMCCC_FUNC___kvm_vcpu_run,
 	__KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context,
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 554ce31882e6..6ec64f1fee3e 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -44,6 +44,7 @@ int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_guest_perms(u64 gfn, enum kvm_pgtable_prot prot, struct pkvm_hyp_vcpu *vcpu);
 int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
+kvm_pte_t __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu);
 
 bool addr_is_memory(phys_addr_t phys);
 int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 21c8a5e74d14..904f6b1edced 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -331,6 +331,24 @@ static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *ho
 	cpu_reg(host_ctxt, 1) = ret;
 }
 
+static void handle___pkvm_host_mkyoung_guest(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(u64, gfn, host_ctxt, 1);
+	struct pkvm_hyp_vcpu *hyp_vcpu;
+	kvm_pte_t ret = 0;
+
+	if (!is_protected_kvm_enabled())
+		goto out;
+
+	hyp_vcpu = pkvm_get_loaded_hyp_vcpu();
+	if (!hyp_vcpu || pkvm_hyp_vcpu_is_protected(hyp_vcpu))
+		goto out;
+
+	ret = __pkvm_host_mkyoung_guest(gfn, hyp_vcpu);
+out:
+	cpu_reg(host_ctxt, 1) = ret;
+}
+
 static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1);
@@ -552,6 +570,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_host_relax_guest_perms),
 	HANDLE_FUNC(__pkvm_host_wrprotect_guest),
 	HANDLE_FUNC(__pkvm_host_test_clear_young_guest),
+	HANDLE_FUNC(__pkvm_host_mkyoung_guest),
 	HANDLE_FUNC(__kvm_adjust_pc),
 	HANDLE_FUNC(__kvm_vcpu_run),
 	HANDLE_FUNC(__kvm_flush_vm_context),
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 7c2aca459deb..a6a47383135b 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1558,3 +1558,27 @@ int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *
 
 	return ret;
 }
+
+kvm_pte_t __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu)
+{
+	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
+	u64 ipa = hyp_pfn_to_phys(gfn);
+	kvm_pte_t pte = 0;
+	u64 phys;
+	int ret;
+
+	host_lock_component();
+	guest_lock_component(vm);
+
+	ret = __check_host_unshare_guest(vm, &phys, ipa);
+	if (ret)
+		goto unlock;
+
+	pte = kvm_pgtable_stage2_mkyoung(&vm->pgt, ipa, 0);
+unlock:
+	guest_unlock_component(vm);
+	host_unlock_component();
+
+	return pte;
+
+}
-- 
2.47.0.163.g1226f6d8fa-goog
certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 20F8F1C1ABC for ; Mon, 4 Nov 2024 13:32:47 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.218.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730727169; cv=none; b=sHvYuODnFm0zBG6oP6NJCuc+M8SSKHGhbLXCZHBkZjDRvwLE4bhCOUjnbtglZSbTxoBFH5JT2iqnNAsfn3A46YQ+F6bWb7iLcHy65SzVLVcfZBvjIo66PjeyqI3sgy3Yu9t8M/iwxfLcsvrh7+TsfQvF0/l0Ce3GsEyFLee5yZ0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1730727169; c=relaxed/simple; bh=ku4yYhSRmdVla3VIfq3sTP3ec1Je+nU+DhLa+IVQGS0=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=LiRhEgZ/dBQvFdTW5dthzg6gXabzKrVXlP9H3A0kGtHZnF0ob7zV1mQE71SAujtVBU2k+Yj6wolRCNPENBqKxY3xwcmAtsMz61+S0d/nzDW9I5SpSgNUgeIpiUku260Yqanyc5Ylyzqb4A1Jih5dqqpDUjetjGdCp2TvR6cEdqw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--qperret.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=4LwWZFcW; arc=none smtp.client-ip=209.85.218.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--qperret.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="4LwWZFcW" Received: by mail-ej1-f74.google.com with SMTP id a640c23a62f3a-a9a2ae49a32so344705466b.3 for ; Mon, 04 Nov 2024 05:32:47 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1730727166; x=1731331966; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=YA9nOaF/CPva03eUOIXNUH1DNhxgj/qlCLjHWmvoHj8=; 
b=4LwWZFcWGDklkcXPwDrwsjyUFVnLn7spg/O6QJCfYhcvyb5BfPV0O9J9MF20G4+FQj XhZ7G+b7uMyJDPhb7CN4/QWLDjwHg+wgFmdXlrlPLgdn/C5JUROp2DFJcIXa96EixYMv ufi7E8UkdlFVimkTeMX9h7tqiCS8W2T4hWTj5Mlg/xbeaXONzw/IBFbqY2lhvXl1pmh6 510jJNQUddjzciJXElmO1obQvdm+NCn1U/Pe8flJ64YJLoorOni8I0PeuRN0heeRVJG/ WEwRk8Td1m36dbFIfeUgWkvcbWVlDVxsQza5ojngkT9SvNR+sWeX9N7jsLCs8rp9LONJ kCUw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1730727166; x=1731331966; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=YA9nOaF/CPva03eUOIXNUH1DNhxgj/qlCLjHWmvoHj8=; b=j6imrcM6ZzC+SVNmKRn97ATqhLUPn2iB+JYPvJSKZyad8fkgmQZjk8nsvBDSljvh3R Tq/lX5r6EAvwfWbXKga48h4m6AM8wxVMA4tVRafmwpxJwhJvUv8ZKdMd69I4Yrn+HAYA 73dMEbYS2XnBObhmE3wO51wnM01G+RjmDoB0WQBwBR0i6sDO4XH8mO/zC5Siln0nLAEZ WUzykKumc4UgIF1fdVPm/DtmRVg72WeKreiSVbziov2Gu70AWgLrZExfV9VGShUzZOTq lCt2ZFGYnVG8WpqJa1pjxlzx8Ub+P4CeECUpKFQgwJRk3gZcqvlqf2xQvsMYuZaVmkU+ 7o1Q== X-Forwarded-Encrypted: i=1; AJvYcCVhb/smlZskzjmdRYFFHk70pwVOtpkBoAKYT5ym4dpz+u9PW0DzRlojUSXYymZktZOzwh8bHgTrhHVsQwI=@vger.kernel.org X-Gm-Message-State: AOJu0YxYtjhBEKL9EnKvRaT+Fo9gnmYYP3BblRiDep9eKxoyRsjeA8zQ qw5xOIiJaXepMbZWYM83VClzr6kgN+F7b3r6QWMzFVsftYJ79BvJ7t1EQFPIMK4SnBtYME3tOeL x2d1q7g== X-Google-Smtp-Source: AGHT+IHKDJEbkEfKj/R2W9nSJpi/mLwlZYh9Ih88dIXHNCkRftaPK88SGfUQc4OzczUSSCQEg5KywaFPE6uv X-Received: from big-boi.c.googlers.com ([fda3:e722:ac3:cc00:31:98fb:c0a8:129]) (user=qperret job=sendgmr) by 2002:a17:906:c20a:b0:a99:fa8a:9772 with SMTP id a640c23a62f3a-a9e50868dd1mr396366b.2.1730727166509; Mon, 04 Nov 2024 05:32:46 -0800 (PST) Date: Mon, 4 Nov 2024 13:32:02 +0000 In-Reply-To: <20241104133204.85208-1-qperret@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20241104133204.85208-1-qperret@google.com> X-Mailer: git-send-email 2.47.0.163.g1226f6d8fa-goog Message-ID: 
Message-ID: <20241104133204.85208-17-qperret@google.com>
Subject: [PATCH 16/18] KVM: arm64: Introduce __pkvm_tlb_flush_vmid()
From: Quentin Perret <qperret@google.com>
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
    Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org

Introduce a new hypercall to flush the TLBs of non-protected guests.
The host kernel will be responsible for issuing this hypercall after
changing stage-2 permissions using the __pkvm_host_relax_guest_perms()
or __pkvm_host_wrprotect_guest() paths. This is left under the host's
responsibility for performance reasons.

Note however that the TLB maintenance for all *unmap* operations still
remains entirely under the hypervisor's responsibility for security
reasons -- an unmapped page may be donated to another entity, so a
stale TLB entry could be used to leak private data.
Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/include/asm/kvm_asm.h   |  1 +
 arch/arm64/kvm/hyp/nvhe/hyp-main.c | 17 +++++++++++++++++
 2 files changed, 18 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 4b93fb3a9a96..1bf7bc51f50f 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -88,6 +88,7 @@ enum __kvm_host_smccc_func {
 	__KVM_HOST_SMCCC_FUNC___pkvm_teardown_vm,
 	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_load,
 	__KVM_HOST_SMCCC_FUNC___pkvm_vcpu_put,
+	__KVM_HOST_SMCCC_FUNC___pkvm_tlb_flush_vmid,
 };
 
 #define DECLARE_KVM_VHE_SYM(sym)	extern char sym[]
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 904f6b1edced..1d8baa14ff1c 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -396,6 +396,22 @@ static void handle___kvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
 	__kvm_tlb_flush_vmid(kern_hyp_va(mmu));
 }
 
+static void handle___pkvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt)
+{
+	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
+	struct pkvm_hyp_vm *hyp_vm;
+
+	if (!is_protected_kvm_enabled())
+		return;
+
+	hyp_vm = get_pkvm_hyp_vm(handle);
+	if (!hyp_vm)
+		return;
+
+	__kvm_tlb_flush_vmid(&hyp_vm->kvm.arch.mmu);
+	put_pkvm_hyp_vm(hyp_vm);
+}
+
 static void handle___kvm_flush_cpu_context(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1);
@@ -588,6 +604,7 @@ static const hcall_t host_hcall[] = {
 	HANDLE_FUNC(__pkvm_teardown_vm),
 	HANDLE_FUNC(__pkvm_vcpu_load),
 	HANDLE_FUNC(__pkvm_vcpu_put),
+	HANDLE_FUNC(__pkvm_tlb_flush_vmid),
 };
 
 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt)
-- 
2.47.0.163.g1226f6d8fa-goog

From nobody Sun Nov 24 16:00:04 2024
Date: Mon, 4 Nov 2024 13:32:03 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-18-qperret@google.com>
Subject: [PATCH 17/18] KVM: arm64: Introduce the EL1 pKVM MMU
From: Quentin Perret <qperret@google.com>
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
    Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org

Introduce a set of helper functions for manipulating the pKVM guest
stage-2 page-tables from EL1 using pKVM's HVC interface.

Each helper has an exact one-to-one correspondence with the traditional
kvm_pgtable_stage2_*() functions from pgtable.c, with a strictly
matching prototype. This will ease plumbing later on in mmu.c.

These callbacks track the gfn->pfn mappings in a simple rb_tree indexed
by IPA in lieu of a page-table. This rb-tree is kept in sync with
pKVM's state and is protected by a new rwlock -- the existing mmu_lock
protection does not suffice in the map() path where the tree must be
modified while user_mem_abort() only acquires a read_lock.

Signed-off-by: Quentin Perret <qperret@google.com>
---
The embedded union inside struct kvm_pgtable is arguably a bit horrible
currently... I considered making the pgt argument to all kvm_pgtable_*()
functions an opaque void * ptr, and moving the definition of struct
kvm_pgtable to pgtable.c and the pkvm version into pkvm.c. Given that
the allocation of that data-structure is done by the caller, that means
we'd need to expose kvm_pgtable_get_pgd_size() or something that each
MMU (pgtable.c and pkvm.c) would have to implement and things like
that. But that felt like a bigger surgery, so I went with the simpler
option.
Thoughts welcome :-)

Similarly, happy to drop the mappings_lock if we want to teach
user_mem_abort() about taking a write lock on the mmu_lock in the pKVM
case, but again this implementation is the least invasive into normal
KVM so that felt like a reasonable starting point.
---
 arch/arm64/include/asm/kvm_host.h    |   1 +
 arch/arm64/include/asm/kvm_pgtable.h |  27 ++--
 arch/arm64/include/asm/kvm_pkvm.h    |  28 ++++
 arch/arm64/kvm/pkvm.c                | 194 +++++++++++++++++++++++++++
 4 files changed, 241 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 4b02904ec7c0..2bfb5983f6f1 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -87,6 +87,7 @@ void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu);
 struct kvm_hyp_memcache {
 	phys_addr_t head;
 	unsigned long nr_pages;
+	struct pkvm_mapping *mapping; /* only used from EL1 */
 };
 
 static inline void push_hyp_memcache(struct kvm_hyp_memcache *mc,
diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 047e1c06ae4c..9447193ee630 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -412,15 +412,24 @@ static inline bool kvm_pgtable_walk_lock_held(void)
 * be used instead of block mappings.
 */
struct kvm_pgtable {
-	u32					ia_bits;
-	s8					start_level;
-	kvm_pteref_t				pgd;
-	struct kvm_pgtable_mm_ops		*mm_ops;
-
-	/* Stage-2 only */
-	struct kvm_s2_mmu			*mmu;
-	enum kvm_pgtable_stage2_flags		flags;
-	kvm_pgtable_force_pte_cb_t		force_pte_cb;
+	union {
+		struct {
+			u32				ia_bits;
+			s8				start_level;
+			kvm_pteref_t			pgd;
+			struct kvm_pgtable_mm_ops	*mm_ops;
+
+			/* Stage-2 only */
+			struct kvm_s2_mmu		*mmu;
+			enum kvm_pgtable_stage2_flags	flags;
+			kvm_pgtable_force_pte_cb_t	force_pte_cb;
+		};
+		struct {
+			struct kvm			*kvm;
+			struct rb_root			mappings;
+			rwlock_t			mappings_lock;
+		} pkvm;
+	};
 };
 
 /**
diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index cd56acd9a842..f3eed6a5fa57 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -11,6 +11,12 @@
 #include
 #include
 
+struct pkvm_mapping {
+	u64 gfn;
+	u64 pfn;
+	struct rb_node node;
+};
+
 /* Maximum number of VMs that can co-exist under pKVM. */
 #define KVM_MAX_PVMS 255
 
@@ -137,4 +143,26 @@ static inline size_t pkvm_host_sve_state_size(void)
			    SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_sve_max_vl)));
 }
 
+static inline pkvm_handle_t pkvm_pgt_to_handle(struct kvm_pgtable *pgt)
+{
+	return pgt->pkvm.kvm->arch.pkvm.handle;
+}
+
+int pkvm_pgtable_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu, struct kvm_pgtable_mm_ops *mm_ops);
+void pkvm_pgtable_destroy(struct kvm_pgtable *pgt);
+int pkvm_pgtable_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
+		     u64 phys, enum kvm_pgtable_prot prot,
+		     void *mc, enum kvm_pgtable_walk_flags flags);
+int pkvm_pgtable_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size);
+int pkvm_pgtable_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size);
+int pkvm_pgtable_flush(struct kvm_pgtable *pgt, u64 addr, u64 size);
+bool pkvm_pgtable_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64 size, bool mkold);
+int pkvm_pgtable_relax_perms(struct kvm_pgtable *pgt, u64 addr, enum kvm_pgtable_prot prot,
+			     enum kvm_pgtable_walk_flags flags);
+kvm_pte_t pkvm_pgtable_mkyoung(struct kvm_pgtable *pgt, u64 addr, enum kvm_pgtable_walk_flags flags);
+int pkvm_pgtable_split(struct kvm_pgtable *pgt, u64 addr, u64 size, struct kvm_mmu_memory_cache *mc);
+void pkvm_pgtable_free_unlinked(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, s8 level);
+kvm_pte_t *pkvm_pgtable_create_unlinked(struct kvm_pgtable *pgt, u64 phys, s8 level,
+					enum kvm_pgtable_prot prot, void *mc, bool force_pte);
+
 #endif	/* __ARM64_KVM_PKVM_H__ */
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 85117ea8f351..6d04a1a0fc6b 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -268,3 +269,196 @@ static int __init finalize_pkvm(void)
 	return ret;
 }
 device_initcall_sync(finalize_pkvm);
+
+static int cmp_mappings(struct rb_node *node, const struct rb_node *parent)
+{
+	struct pkvm_mapping *a = rb_entry(node, struct pkvm_mapping, node);
+	struct pkvm_mapping *b = rb_entry(parent, struct pkvm_mapping, node);
+
+	if (a->gfn < b->gfn)
+		return -1;
+	if (a->gfn > b->gfn)
+		return 1;
+	return 0;
+}
+
+static struct rb_node *find_first_mapping_node(struct rb_root *root, u64 gfn)
+{
+	struct rb_node *node = root->rb_node, *prev = NULL;
+	struct pkvm_mapping *mapping;
+
+	while (node) {
+		mapping = rb_entry(node, struct pkvm_mapping, node);
+		if (mapping->gfn == gfn)
+			return node;
+		prev = node;
+		node = (gfn < mapping->gfn) ? node->rb_left : node->rb_right;
+	}
+
+	return prev;
+}
+
+#define for_each_mapping_in_range(pgt, start_ipa, end_ipa, mapping, tmp)		\
+	for (tmp = find_first_mapping_node(&pgt->pkvm.mappings, ((start_ipa) >> PAGE_SHIFT)); \
+	     tmp && ({ mapping = rb_entry(tmp, struct pkvm_mapping, node); tmp = rb_next(tmp); 1; });) \
+		if (mapping->gfn < ((start_ipa) >> PAGE_SHIFT))				\
+			continue;							\
+		else if (mapping->gfn >= ((end_ipa) >> PAGE_SHIFT))			\
+			break;								\
+		else
+
+int pkvm_pgtable_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu, struct kvm_pgtable_mm_ops *mm_ops)
+{
+	pgt->pkvm.kvm = kvm_s2_mmu_to_kvm(mmu);
+	pgt->pkvm.mappings = RB_ROOT;
+	rwlock_init(&pgt->pkvm.mappings_lock);
+
+	return 0;
+}
+
+void pkvm_pgtable_destroy(struct kvm_pgtable *pgt)
+{
+	pkvm_handle_t handle = pkvm_pgt_to_handle(pgt);
+	struct pkvm_mapping *mapping;
+	struct rb_node *node;
+
+	if (!handle)
+		return;
+
+	node = rb_first(&pgt->pkvm.mappings);
+	while (node) {
+		mapping = rb_entry(node, struct pkvm_mapping, node);
+		kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn);
+		node = rb_next(node);
+		rb_erase(&mapping->node, &pgt->pkvm.mappings);
+		kfree(mapping);
+	}
+}
+
+int pkvm_pgtable_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
+		     u64 phys, enum kvm_pgtable_prot prot,
+		     void *mc, enum kvm_pgtable_walk_flags flags)
+{
+	struct pkvm_mapping *mapping = NULL;
+	struct kvm_hyp_memcache *cache = mc;
+	u64 gfn = addr >> PAGE_SHIFT;
+	u64 pfn = phys >> PAGE_SHIFT;
+	int ret;
+
+	if (size != PAGE_SIZE)
+		return -EINVAL;
+
+	write_lock(&pgt->pkvm.mappings_lock);
+	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, prot);
+	if (ret) {
+		/* Is the gfn already mapped due to a racing vCPU? */
+		if (ret == -EPERM)
+			ret = -EAGAIN;
+		goto unlock;
+	}
+
+	swap(mapping, cache->mapping);
+	mapping->gfn = gfn;
+	mapping->pfn = pfn;
+	WARN_ON(rb_find_add(&mapping->node, &pgt->pkvm.mappings, cmp_mappings));
+unlock:
+	write_unlock(&pgt->pkvm.mappings_lock);
+
+	return ret;
+}
+
+int pkvm_pgtable_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	pkvm_handle_t handle = pkvm_pgt_to_handle(pgt);
+	struct pkvm_mapping *mapping;
+	struct rb_node *tmp;
+	int ret = 0;
+
+	write_lock(&pgt->pkvm.mappings_lock);
+	for_each_mapping_in_range(pgt, addr, addr + size, mapping, tmp) {
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn);
+		if (WARN_ON(ret))
+			break;
+
+		rb_erase(&mapping->node, &pgt->pkvm.mappings);
+		kfree(mapping);
+	}
+	write_unlock(&pgt->pkvm.mappings_lock);
+
+	return ret;
+}
+
+int pkvm_pgtable_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	pkvm_handle_t handle = pkvm_pgt_to_handle(pgt);
+	struct pkvm_mapping *mapping;
+	struct rb_node *tmp;
+	int ret = 0;
+
+	read_lock(&pgt->pkvm.mappings_lock);
+	for_each_mapping_in_range(pgt, addr, addr + size, mapping, tmp) {
+		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn);
+		if (WARN_ON(ret))
+			break;
+	}
+	read_unlock(&pgt->pkvm.mappings_lock);
+
+	return ret;
+}
+
+int pkvm_pgtable_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	struct pkvm_mapping *mapping;
+	struct rb_node *tmp;
+
+	read_lock(&pgt->pkvm.mappings_lock);
+	for_each_mapping_in_range(pgt, addr, addr + size, mapping, tmp)
+		__clean_dcache_guest_page(pfn_to_kaddr(mapping->pfn), PAGE_SIZE);
+	read_unlock(&pgt->pkvm.mappings_lock);
+
+	return 0;
+}
+
+bool pkvm_pgtable_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64 size, bool mkold)
+{
+	pkvm_handle_t handle = pkvm_pgt_to_handle(pgt);
+	struct pkvm_mapping *mapping;
+	struct rb_node *tmp;
+	bool young = false;
+
+	read_lock(&pgt->pkvm.mappings_lock);
+	for_each_mapping_in_range(pgt, addr, addr + size, mapping, tmp)
+		young |= kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest, handle,
+					   mapping->gfn, mkold);
+	read_unlock(&pgt->pkvm.mappings_lock);
+
+	return young;
+}
+
+int pkvm_pgtable_relax_perms(struct kvm_pgtable *pgt, u64 addr, enum kvm_pgtable_prot prot,
+			     enum kvm_pgtable_walk_flags flags)
+{
+	return kvm_call_hyp_nvhe(__pkvm_host_relax_guest_perms, addr >> PAGE_SHIFT, prot);
+}
+
+kvm_pte_t pkvm_pgtable_mkyoung(struct kvm_pgtable *pgt, u64 addr, enum kvm_pgtable_walk_flags flags)
+{
+	return kvm_call_hyp_nvhe(__pkvm_host_mkyoung_guest, addr >> PAGE_SHIFT);
+}
+
+void pkvm_pgtable_free_unlinked(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, s8 level)
+{
+	WARN_ON(1);
+}
+
+kvm_pte_t *pkvm_pgtable_create_unlinked(struct kvm_pgtable *pgt, u64 phys, s8 level,
+					enum kvm_pgtable_prot prot, void *mc, bool force_pte)
+{
+	WARN_ON(1);
+	return NULL;
+}
+
+int pkvm_pgtable_split(struct kvm_pgtable *pgt, u64 addr, u64 size, struct kvm_mmu_memory_cache *mc)
+{
+	WARN_ON(1);
+	return -EINVAL;
+}
-- 
2.47.0.163.g1226f6d8fa-goog

From nobody Sun Nov 24 16:00:04 2024
Date: Mon, 4 Nov 2024 13:32:04 +0000
In-Reply-To: <20241104133204.85208-1-qperret@google.com>
Message-ID: <20241104133204.85208-19-qperret@google.com>
Subject: [PATCH 18/18] KVM: arm64: Plumb the pKVM MMU in KVM
From: Quentin Perret <qperret@google.com>
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
    Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
    linux-kernel@vger.kernel.org

Introduce the KVM_PGT_S2() helper macro to allow switching from the
traditional pgtable code to the pKVM version easily in mmu.c. The cost
of this 'indirection' is expected to be very minimal due to
is_protected_kvm_enabled() being backed by a static key.
With this, everything is in place to allow the delegation of
non-protected guest stage-2 page-tables to pKVM, so let's stop using
the host's kvm_s2_mmu from EL2 and enjoy the ride.

Signed-off-by: Quentin Perret <qperret@google.com>
---
 arch/arm64/kvm/arm.c               |   9 ++-
 arch/arm64/kvm/hyp/nvhe/hyp-main.c |   2 -
 arch/arm64/kvm/mmu.c               | 104 +++++++++++++++++++++--------
 3 files changed, 84 insertions(+), 31 deletions(-)

diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 2bf168b17a77..890c89874c6b 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -506,7 +506,10 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
 	if (vcpu_has_run_once(vcpu) && unlikely(!irqchip_in_kernel(vcpu->kvm)))
 		static_branch_dec(&userspace_irqchip_in_use);
 
-	kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	if (!is_protected_kvm_enabled())
+		kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+	else
+		free_hyp_memcache(&vcpu->arch.pkvm_memcache);
 	kvm_timer_vcpu_terminate(vcpu);
 	kvm_pmu_vcpu_destroy(vcpu);
 	kvm_vgic_vcpu_destroy(vcpu);
@@ -578,6 +581,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	struct kvm_s2_mmu *mmu;
 	int *last_ran;
 
+	if (is_protected_kvm_enabled())
+		goto nommu;
+
 	if (vcpu_has_nv(vcpu))
 		kvm_vcpu_load_hw_mmu(vcpu);
 
@@ -598,6 +604,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		*last_ran = vcpu->vcpu_idx;
 	}
 
+nommu:
 	vcpu->cpu = cpu;
 
 	kvm_vgic_load(vcpu);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 1d8baa14ff1c..cf0fd83552c9 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -103,8 +103,6 @@ static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 	/* Limit guest vector length to the maximum supported by the host. */
 	hyp_vcpu->vcpu.arch.sve_max_vl	= min(host_vcpu->arch.sve_max_vl, kvm_host_sve_max_vl);
 
-	hyp_vcpu->vcpu.arch.hw_mmu	= host_vcpu->arch.hw_mmu;
-
 	hyp_vcpu->vcpu.arch.hcr_el2	= host_vcpu->arch.hcr_el2;
 	hyp_vcpu->vcpu.arch.mdcr_el2	= host_vcpu->arch.mdcr_el2;
 
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 80dd61038cc7..fcf8fdcccd22 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -31,6 +32,14 @@ static phys_addr_t __ro_after_init hyp_idmap_vector;
 
 static unsigned long __ro_after_init io_map_base;
 
+#define KVM_PGT_S2(fn, ...)						\
+	({								\
+		typeof(kvm_pgtable_stage2_ ## fn) *__fn = kvm_pgtable_stage2_ ## fn; \
+		if (is_protected_kvm_enabled())				\
+			__fn = pkvm_pgtable_ ## fn;			\
+		__fn(__VA_ARGS__);					\
+	})
+
 static phys_addr_t __stage2_range_addr_end(phys_addr_t addr, phys_addr_t end,
					   phys_addr_t size)
 {
@@ -147,7 +156,7 @@ static int kvm_mmu_split_huge_pages(struct kvm *kvm, phys_addr_t addr,
			return -EINVAL;
 
		next = __stage2_range_addr_end(addr, end, chunk_size);
-		ret = kvm_pgtable_stage2_split(pgt, addr, next - addr, cache);
+		ret = KVM_PGT_S2(split, pgt, addr, next - addr, cache);
		if (ret)
			break;
	} while (addr = next, addr != end);
@@ -168,15 +177,23 @@ static bool memslot_is_logging(struct kvm_memory_slot *memslot)
 */
 int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
 {
-	kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
+	if (is_protected_kvm_enabled())
+		kvm_call_hyp_nvhe(__pkvm_tlb_flush_vmid, kvm->arch.pkvm.handle);
+	else
+		kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu);
	return 0;
 }
 
 int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm,
				      gfn_t gfn, u64 nr_pages)
 {
-	kvm_tlb_flush_vmid_range(&kvm->arch.mmu,
-				 gfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT);
+	u64 size = nr_pages << PAGE_SHIFT;
+	u64 addr = gfn << PAGE_SHIFT;
+
+	if (is_protected_kvm_enabled())
+		kvm_call_hyp_nvhe(__pkvm_tlb_flush_vmid, kvm->arch.pkvm.handle);
+	else
+		kvm_tlb_flush_vmid_range(&kvm->arch.mmu, addr, size);
	return 0;
 }
 
@@ -225,7 +242,7 @@ static void stage2_free_unlinked_table_rcu_cb(struct rcu_head *head)
	void *pgtable = page_to_virt(page);
	s8 level = page_private(page);
 
-	kvm_pgtable_stage2_free_unlinked(&kvm_s2_mm_ops, pgtable, level);
+	KVM_PGT_S2(free_unlinked, &kvm_s2_mm_ops, pgtable, level);
 }
 
 static void stage2_free_unlinked_table(void *addr, s8 level)
@@ -316,6 +333,12 @@ static void invalidate_icache_guest_page(void *va, size_t size)
 * destroying the VM), otherwise another faulting VCPU may come in and mess
 * with things behind our backs.
 */
+
+static int kvm_s2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	return KVM_PGT_S2(unmap, pgt, addr, size);
+}
+
 static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64 size,
				 bool may_block)
 {
@@ -324,8 +347,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu, phys_addr_t start, u64
 
	lockdep_assert_held_write(&kvm->mmu_lock);
	WARN_ON(size & ~PAGE_MASK);
-	WARN_ON(stage2_apply_range(mmu, start, end, kvm_pgtable_stage2_unmap,
-				   may_block));
+	WARN_ON(stage2_apply_range(mmu, start, end, kvm_s2_unmap, may_block));
 }
 
 void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
@@ -334,9 +356,14 @@ void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start,
	__unmap_stage2_range(mmu, start, size, may_block);
 }
 
+static int kvm_s2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
+{
+	return KVM_PGT_S2(flush, pgt, addr, size);
+}
+
 void kvm_stage2_flush_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_addr_t end)
 {
-	stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_flush);
+	stage2_apply_range_resched(mmu, addr, end, kvm_s2_flush);
 }
 
 static void stage2_flush_memslot(struct kvm *kvm,
@@ -942,10 +969,14 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_mmu *mmu, unsigned long t
		return -ENOMEM;
 
	mmu->arch =
&kvm->arch; - err =3D kvm_pgtable_stage2_init(pgt, mmu, &kvm_s2_mm_ops); + err =3D KVM_PGT_S2(init, pgt, mmu, &kvm_s2_mm_ops); if (err) goto out_free_pgtable; =20 + mmu->pgt =3D pgt; + if (is_protected_kvm_enabled()) + return 0; + mmu->last_vcpu_ran =3D alloc_percpu(typeof(*mmu->last_vcpu_ran)); if (!mmu->last_vcpu_ran) { err =3D -ENOMEM; @@ -959,7 +990,6 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_= mmu *mmu, unsigned long t mmu->split_page_chunk_size =3D KVM_ARM_EAGER_SPLIT_CHUNK_SIZE_DEFAULT; mmu->split_page_cache.gfp_zero =3D __GFP_ZERO; =20 - mmu->pgt =3D pgt; mmu->pgd_phys =3D __pa(pgt->pgd); =20 if (kvm_is_nested_s2_mmu(kvm, mmu)) @@ -968,7 +998,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_= mmu *mmu, unsigned long t return 0; =20 out_destroy_pgtable: - kvm_pgtable_stage2_destroy(pgt); + KVM_PGT_S2(destroy, pgt); out_free_pgtable: kfree(pgt); return err; @@ -1065,7 +1095,7 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu) write_unlock(&kvm->mmu_lock); =20 if (pgt) { - kvm_pgtable_stage2_destroy(pgt); + KVM_PGT_S2(destroy, pgt); kfree(pgt); } } @@ -1082,9 +1112,11 @@ static void *hyp_mc_alloc_fn(void *unused) =20 void free_hyp_memcache(struct kvm_hyp_memcache *mc) { - if (is_protected_kvm_enabled()) - __free_hyp_memcache(mc, hyp_mc_free_fn, - kvm_host_va, NULL); + if (!is_protected_kvm_enabled()) + return; + + kfree(mc->mapping); + __free_hyp_memcache(mc, hyp_mc_free_fn, kvm_host_va, NULL); } =20 int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_page= s) @@ -1092,6 +1124,12 @@ int topup_hyp_memcache(struct kvm_hyp_memcache *mc, = unsigned long min_pages) if (!is_protected_kvm_enabled()) return 0; =20 + if (!mc->mapping) { + mc->mapping =3D kzalloc(sizeof(struct pkvm_mapping), GFP_KERNEL_ACCOUNT); + if (!mc->mapping) + return -ENOMEM; + } + return __topup_hyp_memcache(mc, min_pages, hyp_mc_alloc_fn, kvm_host_pa, NULL); } @@ -1130,8 +1168,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_= t guest_ipa, 
break; =20 write_lock(&kvm->mmu_lock); - ret =3D kvm_pgtable_stage2_map(pgt, addr, PAGE_SIZE, pa, prot, - &cache, 0); + ret =3D KVM_PGT_S2(map, pgt, addr, PAGE_SIZE, pa, prot, &cache, 0); write_unlock(&kvm->mmu_lock); if (ret) break; @@ -1143,6 +1180,10 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr= _t guest_ipa, return ret; } =20 +static int kvm_s2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size) +{ + return KVM_PGT_S2(wrprotect, pgt, addr, size); +} /** * kvm_stage2_wp_range() - write protect stage2 memory region range * @mmu: The KVM stage-2 MMU pointer @@ -1151,7 +1192,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_= t guest_ipa, */ void kvm_stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_ad= dr_t end) { - stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_wrprotect); + stage2_apply_range_resched(mmu, addr, end, kvm_s2_wrprotect); } =20 /** @@ -1431,9 +1472,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys= _addr_t fault_ipa, unsigned long mmu_seq; phys_addr_t ipa =3D fault_ipa; struct kvm *kvm =3D vcpu->kvm; - struct kvm_mmu_memory_cache *memcache =3D &vcpu->arch.mmu_page_cache; struct vm_area_struct *vma; short vma_shift; + void *memcache; gfn_t gfn; kvm_pfn_t pfn; bool logging_active =3D memslot_is_logging(memslot); @@ -1460,8 +1501,15 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phy= s_addr_t fault_ipa, * and a write fault needs to collapse a block entry into a table. 
*/ if (!fault_is_perm || (logging_active && write_fault)) { - ret =3D kvm_mmu_topup_memory_cache(memcache, - kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu)); + int min_pages =3D kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu); + + if (!is_protected_kvm_enabled()) { + memcache =3D &vcpu->arch.mmu_page_cache; + ret =3D kvm_mmu_topup_memory_cache(memcache, min_pages); + } else { + memcache =3D &vcpu->arch.pkvm_memcache; + ret =3D topup_hyp_memcache(memcache, min_pages); + } if (ret) return ret; } @@ -1482,7 +1530,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys= _addr_t fault_ipa, * logging_active is guaranteed to never be true for VM_PFNMAP * memslots. */ - if (logging_active) { + if (logging_active || is_protected_kvm_enabled()) { force_pte =3D true; vma_shift =3D PAGE_SHIFT; } else { @@ -1684,9 +1732,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys= _addr_t fault_ipa, * PTE, which will be preserved. */ prot &=3D ~KVM_NV_GUEST_MAP_SZ; - ret =3D kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot, flags); + ret =3D KVM_PGT_S2(relax_perms, pgt, fault_ipa, prot, flags); } else { - ret =3D kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize, + ret =3D KVM_PGT_S2(map, pgt, fault_ipa, vma_pagesize, __pfn_to_phys(pfn), prot, memcache, flags); } @@ -1715,7 +1763,7 @@ static void handle_access_fault(struct kvm_vcpu *vcpu= , phys_addr_t fault_ipa) =20 read_lock(&vcpu->kvm->mmu_lock); mmu =3D vcpu->arch.hw_mmu; - pte =3D kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa, flags); + pte =3D KVM_PGT_S2(mkyoung, mmu->pgt, fault_ipa, flags); read_unlock(&vcpu->kvm->mmu_lock); =20 if (kvm_pte_valid(pte)) @@ -1758,7 +1806,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu) } =20 /* Falls between the IPA range and the PARange? 
*/ - if (fault_ipa >=3D BIT_ULL(vcpu->arch.hw_mmu->pgt->ia_bits)) { + if (fault_ipa >=3D BIT_ULL(VTCR_EL2_IPA(vcpu->arch.hw_mmu->vtcr))) { fault_ipa |=3D kvm_vcpu_get_hfar(vcpu) & GENMASK(11, 0); =20 if (is_iabt) @@ -1924,7 +1972,7 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_rang= e *range) if (!kvm->arch.mmu.pgt) return false; =20 - return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt, + return KVM_PGT_S2(test_clear_young, kvm->arch.mmu.pgt, range->start << PAGE_SHIFT, size, true); /* @@ -1940,7 +1988,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn= _range *range) if (!kvm->arch.mmu.pgt) return false; =20 - return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt, + return KVM_PGT_S2(test_clear_young, kvm->arch.mmu.pgt, range->start << PAGE_SHIFT, size, false); } --=20 2.47.0.163.g1226f6d8fa-goog