From nobody Fri Oct 10 15:59:47 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH v2 01/12] RISC-V: KVM: Check kvm_riscv_vcpu_alloc_vector_context() return value
Date: Fri, 13 Jun 2025 12:27:32 +0530
Message-ID: <20250613065743.737102-2-apatel@ventanamicro.com>

The kvm_riscv_vcpu_alloc_vector_context() does return an error code upon
failure so don't ignore this in kvm_arch_vcpu_create().
Signed-off-by: Anup Patel
---
 arch/riscv/kvm/vcpu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 303aa0a8a5a1..b467dc1f4c7f 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -148,8 +148,9 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 
 	spin_lock_init(&vcpu->arch.reset_state.lock);
 
-	if (kvm_riscv_vcpu_alloc_vector_context(vcpu))
-		return -ENOMEM;
+	rc = kvm_riscv_vcpu_alloc_vector_context(vcpu);
+	if (rc)
+		return rc;
 
 	/* Setup VCPU timer */
 	kvm_riscv_vcpu_timer_init(vcpu);
-- 
2.43.0
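A minimal user-space sketch of the pattern this patch applies: return the
callee's error code instead of collapsing every failure to -ENOMEM. The
function names below are hypothetical stand-ins, not kernel APIs.

#include <errno.h>
#include <stdio.h>

/* Hypothetical allocator with more than one failure mode. */
static int alloc_vector_context(int bad_arg)
{
	return bad_arg ? -EINVAL : 0;
}

static int create_vcpu(int bad_arg)
{
	int rc = alloc_vector_context(bad_arg);

	if (rc)
		return rc;	/* propagate, don't force -ENOMEM */
	return 0;
}

int main(void)
{
	printf("%d %d\n", create_vcpu(0), create_vcpu(1));	/* 0 -22 */
	return 0;
}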
From nobody Fri Oct 10 15:59:47 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel, Nutty Liu
Subject: [PATCH v2 02/12] RISC-V: KVM: Drop the return value of kvm_riscv_vcpu_aia_init()
Date: Fri, 13 Jun 2025 12:27:33 +0530
Message-ID: <20250613065743.737102-3-apatel@ventanamicro.com>

The kvm_riscv_vcpu_aia_init() does not return any failure so drop the
return value which is always zero.

Reviewed-by: Nutty Liu
Signed-off-by: Anup Patel
---
 arch/riscv/include/asm/kvm_aia.h | 2 +-
 arch/riscv/kvm/aia_device.c      | 6 ++----
 arch/riscv/kvm/vcpu.c            | 4 +---
 3 files changed, 4 insertions(+), 8 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h
index 3b643b9efc07..0a0f12496f00 100644
--- a/arch/riscv/include/asm/kvm_aia.h
+++ b/arch/riscv/include/asm/kvm_aia.h
@@ -147,7 +147,7 @@ int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num,
 
 int kvm_riscv_vcpu_aia_update(struct kvm_vcpu *vcpu);
 void kvm_riscv_vcpu_aia_reset(struct kvm_vcpu *vcpu);
-int kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu);
+void kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu);
 void kvm_riscv_vcpu_aia_deinit(struct kvm_vcpu *vcpu);
 
 int kvm_riscv_aia_inject_msi_by_id(struct kvm *kvm, u32 hart_index,
diff --git a/arch/riscv/kvm/aia_device.c b/arch/riscv/kvm/aia_device.c
index 806c41931cde..b195a93add1c 100644
--- a/arch/riscv/kvm/aia_device.c
+++ b/arch/riscv/kvm/aia_device.c
@@ -509,12 +509,12 @@ void kvm_riscv_vcpu_aia_reset(struct kvm_vcpu *vcpu)
 	kvm_riscv_vcpu_aia_imsic_reset(vcpu);
 }
 
-int kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu)
+void kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu)
 {
 	struct kvm_vcpu_aia *vaia = &vcpu->arch.aia_context;
 
 	if (!kvm_riscv_aia_available())
-		return 0;
+		return;
 
 	/*
 	 * We don't do any memory allocations over here because these
@@ -526,8 +526,6 @@ int kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu)
 	/* Initialize default values in AIA vcpu context */
 	vaia->imsic_addr = KVM_RISCV_AIA_UNDEF_ADDR;
 	vaia->hart_index = vcpu->vcpu_idx;
-
-	return 0;
 }
 
 void kvm_riscv_vcpu_aia_deinit(struct kvm_vcpu *vcpu)
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index b467dc1f4c7f..f9fb3dbbe0c3 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -159,9 +159,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	kvm_riscv_vcpu_pmu_init(vcpu);
 
 	/* Setup VCPU AIA */
-	rc = kvm_riscv_vcpu_aia_init(vcpu);
-	if (rc)
-		return rc;
+	kvm_riscv_vcpu_aia_init(vcpu);
 
 	/*
 	 * Setup SBI extensions
-- 
2.43.0
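The complementary pattern to the previous patch, shown as a small
self-contained C sketch with hypothetical names: when an init path has
no failure mode, a void signature lets callers drop dead error plumbing.

#include <stdio.h>

struct vcpu_aia { int imsic_addr; int hart_index; };

/*
 * A void signature documents that this init path cannot fail, so
 * callers no longer carry dead "if (rc) return rc;" checks.
 */
static void vcpu_aia_init(struct vcpu_aia *vaia, int idx)
{
	vaia->imsic_addr = -1;		/* "undefined" placeholder */
	vaia->hart_index = idx;
}

int main(void)
{
	struct vcpu_aia vaia;

	vcpu_aia_init(&vaia, 3);
	printf("%d %d\n", vaia.imsic_addr, vaia.hart_index);
	return 0;
}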
From nobody Fri Oct 10 15:59:47 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel, Atish Patra, Nutty Liu
Subject: [PATCH v2 03/12] RISC-V: KVM: Rename and move kvm_riscv_local_tlb_sanitize()
Date: Fri, 13 Jun 2025 12:27:34 +0530
Message-ID: <20250613065743.737102-4-apatel@ventanamicro.com>

The kvm_riscv_local_tlb_sanitize() deals with sanitizing current VMID
related TLB mappings when a VCPU is moved from one host CPU to another.

Let's move kvm_riscv_local_tlb_sanitize() to VMID management sources
and rename it to kvm_riscv_gstage_vmid_sanitize().

Reviewed-by: Atish Patra
Reviewed-by: Nutty Liu
Signed-off-by: Anup Patel
---
 arch/riscv/include/asm/kvm_host.h |  3 +--
 arch/riscv/kvm/tlb.c              | 23 -----------------------
 arch/riscv/kvm/vcpu.c             |  4 ++--
 arch/riscv/kvm/vmid.c             | 23 +++++++++++++++++++++++
 4 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 9a617bf5363d..8aa705ac75a5 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -331,8 +331,6 @@ void kvm_riscv_local_hfence_vvma_gva(unsigned long vmid,
 				     unsigned long order);
 void kvm_riscv_local_hfence_vvma_all(unsigned long vmid);
 
-void kvm_riscv_local_tlb_sanitize(struct kvm_vcpu *vcpu);
-
 void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu);
 void kvm_riscv_hfence_gvma_vmid_all_process(struct kvm_vcpu *vcpu);
 void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu);
@@ -380,6 +378,7 @@ unsigned long kvm_riscv_gstage_vmid_bits(void);
 int kvm_riscv_gstage_vmid_init(struct kvm *kvm);
 bool kvm_riscv_gstage_vmid_ver_changed(struct kvm_vmid *vmid);
 void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu);
+void kvm_riscv_gstage_vmid_sanitize(struct kvm_vcpu *vcpu);
 
 int kvm_riscv_setup_default_irq_routing(struct kvm *kvm, u32 lines);
 
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index 2f91ea5f8493..b3461bfd9756 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -156,29 +156,6 @@ void kvm_riscv_local_hfence_vvma_all(unsigned long vmid)
 	csr_write(CSR_HGATP, hgatp);
 }
 
-void kvm_riscv_local_tlb_sanitize(struct kvm_vcpu *vcpu)
-{
-	unsigned long vmid;
-
-	if (!kvm_riscv_gstage_vmid_bits() ||
-	    vcpu->arch.last_exit_cpu == vcpu->cpu)
-		return;
-
-	/*
-	 * On RISC-V platforms with hardware VMID support, we share same
-	 * VMID for all VCPUs of a particular Guest/VM. This means we might
-	 * have stale G-stage TLB entries on the current Host CPU due to
-	 * some other VCPU of the same Guest which ran previously on the
-	 * current Host CPU.
-	 *
-	 * To cleanup stale TLB entries, we simply flush all G-stage TLB
-	 * entries by VMID whenever underlying Host CPU changes for a VCPU.
-	 */
-
-	vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
-	kvm_riscv_local_hfence_gvma_vmid_all(vmid);
-}
-
 void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu)
 {
 	kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_FENCE_I_RCVD);
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index f9fb3dbbe0c3..a2dd4161e5a4 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -962,12 +962,12 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 		}
 
 		/*
-		 * Cleanup stale TLB enteries
+		 * Sanitize VMID mappings cached (TLB) on current CPU
 		 *
 		 * Note: This should be done after G-stage VMID has been
 		 * updated using kvm_riscv_gstage_vmid_ver_changed()
 		 */
-		kvm_riscv_local_tlb_sanitize(vcpu);
+		kvm_riscv_gstage_vmid_sanitize(vcpu);
 
 		trace_kvm_entry(vcpu);
 
diff --git a/arch/riscv/kvm/vmid.c b/arch/riscv/kvm/vmid.c
index ddc98714ce8e..92c01255f86f 100644
--- a/arch/riscv/kvm/vmid.c
+++ b/arch/riscv/kvm/vmid.c
@@ -122,3 +122,26 @@ void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu)
 	kvm_for_each_vcpu(i, v, vcpu->kvm)
 		kvm_make_request(KVM_REQ_UPDATE_HGATP, v);
 }
+
+void kvm_riscv_gstage_vmid_sanitize(struct kvm_vcpu *vcpu)
+{
+	unsigned long vmid;
+
+	if (!kvm_riscv_gstage_vmid_bits() ||
+	    vcpu->arch.last_exit_cpu == vcpu->cpu)
+		return;
+
+	/*
+	 * On RISC-V platforms with hardware VMID support, we share same
+	 * VMID for all VCPUs of a particular Guest/VM. This means we might
+	 * have stale G-stage TLB entries on the current Host CPU due to
+	 * some other VCPU of the same Guest which ran previously on the
+	 * current Host CPU.
+	 *
+	 * To cleanup stale TLB entries, we simply flush all G-stage TLB
+	 * entries by VMID whenever underlying Host CPU changes for a VCPU.
+	 */
+
+	vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
+	kvm_riscv_local_hfence_gvma_vmid_all(vmid);
+}
-- 
2.43.0
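The migration check at the heart of this function, reduced to a
runnable user-space sketch. The flush primitive and field names are
hypothetical stand-ins for the kernel code above.

#include <stdio.h>

struct vcpu { int cpu; int last_exit_cpu; unsigned long vmid; };

/* Hypothetical stand-in for the per-CPU G-stage flush primitive. */
static void local_flush_gstage_by_vmid(unsigned long vmid)
{
	printf("flush G-stage TLB entries for VMID %lu\n", vmid);
}

/*
 * Flush only when the VCPU resumes on a different host CPU, where
 * another VCPU of the same VM may have left stale entries behind.
 */
static void vmid_sanitize(struct vcpu *v)
{
	if (v->last_exit_cpu == v->cpu)
		return;
	local_flush_gstage_by_vmid(v->vmid);
}

int main(void)
{
	struct vcpu v = { .cpu = 1, .last_exit_cpu = 0, .vmid = 7 };

	vmid_sanitize(&v);		/* migrated: flushes */
	v.last_exit_cpu = v.cpu;
	vmid_sanitize(&v);		/* same CPU: no-op   */
	return 0;
}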
From nobody Fri Oct 10 15:59:47 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel, Atish Patra
Subject: [PATCH v2 04/12] RISC-V: KVM: Replace KVM_REQ_HFENCE_GVMA_VMID_ALL with KVM_REQ_TLB_FLUSH
Date: Fri, 13 Jun 2025 12:27:35 +0530
Message-ID: <20250613065743.737102-5-apatel@ventanamicro.com>

The KVM_REQ_HFENCE_GVMA_VMID_ALL is the same as KVM_REQ_TLB_FLUSH, so
to avoid confusion let's replace KVM_REQ_HFENCE_GVMA_VMID_ALL with
KVM_REQ_TLB_FLUSH. Also, rename kvm_riscv_hfence_gvma_vmid_all_process()
to kvm_riscv_tlb_flush_process().

Reviewed-by: Atish Patra
Signed-off-by: Anup Patel
---
 arch/riscv/include/asm/kvm_host.h | 4 ++--
 arch/riscv/kvm/tlb.c              | 8 ++++----
 arch/riscv/kvm/vcpu.c             | 8 ++------
 3 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 8aa705ac75a5..ff1f76d6f177 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -37,7 +37,6 @@
 #define KVM_REQ_UPDATE_HGATP		KVM_ARCH_REQ(2)
 #define KVM_REQ_FENCE_I			\
 	KVM_ARCH_REQ_FLAGS(3, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
-#define KVM_REQ_HFENCE_GVMA_VMID_ALL	KVM_REQ_TLB_FLUSH
 #define KVM_REQ_HFENCE_VVMA_ALL		\
 	KVM_ARCH_REQ_FLAGS(4, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_HFENCE			\
@@ -330,8 +329,9 @@ void kvm_riscv_local_hfence_vvma_gva(unsigned long vmid,
 				     unsigned long order);
 void kvm_riscv_local_hfence_vvma_all(unsigned long vmid);
 
+void kvm_riscv_tlb_flush_process(struct kvm_vcpu *vcpu);
+
 void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu);
-void kvm_riscv_hfence_gvma_vmid_all_process(struct kvm_vcpu *vcpu);
 void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu);
 void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu);
 
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index b3461bfd9756..da98ca801d31 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -162,7 +162,7 @@ void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu)
 	local_flush_icache_all();
 }
 
-void kvm_riscv_hfence_gvma_vmid_all_process(struct kvm_vcpu *vcpu)
+void kvm_riscv_tlb_flush_process(struct kvm_vcpu *vcpu)
 {
 	struct kvm_vmid *v = &vcpu->kvm->arch.vmid;
 	unsigned long vmid = READ_ONCE(v->vmid);
@@ -342,14 +342,14 @@ void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
 	data.size = gpsz;
 	data.order = order;
 	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
-			    KVM_REQ_HFENCE_GVMA_VMID_ALL, &data);
+			    KVM_REQ_TLB_FLUSH, &data);
 }
 
 void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask)
 {
-	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE_GVMA_VMID_ALL,
-			    KVM_REQ_HFENCE_GVMA_VMID_ALL, NULL);
+	make_xfence_request(kvm, hbase, hmask, KVM_REQ_TLB_FLUSH,
+			    KVM_REQ_TLB_FLUSH, NULL);
 }
 
 void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index a2dd4161e5a4..6eb11c913b13 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -721,12 +721,8 @@ static void kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu)
 		if (kvm_check_request(KVM_REQ_FENCE_I, vcpu))
 			kvm_riscv_fence_i_process(vcpu);
 
-		/*
-		 * The generic KVM_REQ_TLB_FLUSH is same as
-		 * KVM_REQ_HFENCE_GVMA_VMID_ALL
-		 */
-		if (kvm_check_request(KVM_REQ_HFENCE_GVMA_VMID_ALL, vcpu))
-			kvm_riscv_hfence_gvma_vmid_all_process(vcpu);
+		if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
+			kvm_riscv_tlb_flush_process(vcpu);
 
 		if (kvm_check_request(KVM_REQ_HFENCE_VVMA_ALL, vcpu))
 			kvm_riscv_hfence_vvma_all_process(vcpu);
-- 
2.43.0
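The request bits touched here follow KVM's post-then-consume pattern.
A simplified single-threaded sketch of that pattern (the real
kvm_make_request()/kvm_check_request() also handle cross-CPU ordering
and wakeups, which this deliberately omits):

#include <stdio.h>

#define REQ_TLB_FLUSH	0UL

static unsigned long requests;

static void make_request(unsigned long req)
{
	requests |= 1UL << req;
}

/* Returns 1 exactly once per posted request, then clears the bit. */
static int check_request(unsigned long req)
{
	if (!(requests & (1UL << req)))
		return 0;
	requests &= ~(1UL << req);
	return 1;
}

int main(void)
{
	make_request(REQ_TLB_FLUSH);
	printf("%d %d\n", check_request(REQ_TLB_FLUSH),
	       check_request(REQ_TLB_FLUSH));	/* 1 0 */
	return 0;
}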
From nobody Fri Oct 10 15:59:47 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH v2 05/12] RISC-V: KVM: Don't flush TLB when PTE is unchanged
Date: Fri, 13 Jun 2025 12:27:36 +0530
Message-ID: <20250613065743.737102-6-apatel@ventanamicro.com>

The gstage_set_pte() and gstage_op_pte() should flush TLB only when
a leaf PTE changes so that unnecessary TLB flushes can be avoided.

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/kvm/mmu.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 1087ea74567b..29f1bd853a66 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -167,9 +167,11 @@ static int gstage_set_pte(struct kvm *kvm, u32 level,
 		ptep = &next_ptep[gstage_pte_index(addr, current_level)];
 	}
 
-	set_pte(ptep, *new_pte);
-	if (gstage_pte_leaf(ptep))
-		gstage_remote_tlb_flush(kvm, current_level, addr);
+	if (pte_val(*ptep) != pte_val(*new_pte)) {
+		set_pte(ptep, *new_pte);
+		if (gstage_pte_leaf(ptep))
+			gstage_remote_tlb_flush(kvm, current_level, addr);
+	}
 
 	return 0;
 }
@@ -229,7 +231,7 @@ static void gstage_op_pte(struct kvm *kvm, gpa_t addr,
 			  pte_t *ptep, u32 ptep_level, enum gstage_op op)
 {
 	int i, ret;
-	pte_t *next_ptep;
+	pte_t old_pte, *next_ptep;
 	u32 next_ptep_level;
 	unsigned long next_page_size, page_size;
 
@@ -258,11 +260,13 @@ static void gstage_op_pte(struct kvm *kvm, gpa_t addr,
 		if (op == GSTAGE_OP_CLEAR)
 			put_page(virt_to_page(next_ptep));
 	} else {
+		old_pte = *ptep;
 		if (op == GSTAGE_OP_CLEAR)
 			set_pte(ptep, __pte(0));
 		else if (op == GSTAGE_OP_WP)
 			set_pte(ptep, __pte(pte_val(ptep_get(ptep)) & ~_PAGE_WRITE));
-		gstage_remote_tlb_flush(kvm, ptep_level, addr);
+		if (pte_val(*ptep) != pte_val(old_pte))
+			gstage_remote_tlb_flush(kvm, ptep_level, addr);
 	}
 }
 
-- 
2.43.0
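The compare-before-flush idea in isolation, as a runnable sketch with
hypothetical names; remote TLB flushes are expensive (they interrupt
other CPUs), so they are worth paying for only on a real change.

#include <stdio.h>

typedef unsigned long pte_t;

static int flushes;

static void remote_tlb_flush(void)
{
	flushes++;
}

/* Write the PTE, but pay for a remote flush only on a real change. */
static void set_pte_and_flush(pte_t *ptep, pte_t new_pte)
{
	if (*ptep == new_pte)
		return;			/* unchanged: skip the flush */
	*ptep = new_pte;
	remote_tlb_flush();
}

int main(void)
{
	pte_t pte = 0x1000;

	set_pte_and_flush(&pte, 0x1000);	/* no flush  */
	set_pte_and_flush(&pte, 0x2000);	/* one flush */
	printf("flushes: %d\n", flushes);	/* 1 */
	return 0;
}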
From nobody Fri Oct 10 15:59:47 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH v2 06/12] RISC-V: KVM: Implement kvm_arch_flush_remote_tlbs_range()
Date: Fri, 13 Jun 2025 12:27:37 +0530
Message-ID: <20250613065743.737102-7-apatel@ventanamicro.com>

The kvm_arch_flush_remote_tlbs_range() expected by KVM core can be
easily implemented for RISC-V using kvm_riscv_hfence_gvma_vmid_gpa()
hence provide it.

Also with kvm_arch_flush_remote_tlbs_range() available for RISC-V, the
mmu_wp_memory_region() can happily use kvm_flush_remote_tlbs_memslot()
instead of kvm_flush_remote_tlbs().

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/include/asm/kvm_host.h | 2 ++
 arch/riscv/kvm/mmu.c              | 2 +-
 arch/riscv/kvm/tlb.c              | 8 ++++++++
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index ff1f76d6f177..6162575e2177 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -43,6 +43,8 @@
 	KVM_ARCH_REQ_FLAGS(5, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_STEAL_UPDATE		KVM_ARCH_REQ(6)
 
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+
 #define KVM_HEDELEG_DEFAULT		(BIT(EXC_INST_MISALIGNED) | \
 					 BIT(EXC_BREAKPOINT)      | \
 					 BIT(EXC_SYSCALL)         | \
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 29f1bd853a66..a5387927a1c1 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -344,7 +344,7 @@ static void gstage_wp_memory_region(struct kvm *kvm, int slot)
 	spin_lock(&kvm->mmu_lock);
 	gstage_wp_range(kvm, start, end);
 	spin_unlock(&kvm->mmu_lock);
-	kvm_flush_remote_tlbs(kvm);
+	kvm_flush_remote_tlbs_memslot(kvm, memslot);
 }
 
 int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index da98ca801d31..f46a27658c2e 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -403,3 +403,11 @@ void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
 	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE_VVMA_ALL,
 			    KVM_REQ_HFENCE_VVMA_ALL, NULL);
 }
+
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
+{
+	kvm_riscv_hfence_gvma_vmid_gpa(kvm, -1UL, 0,
+				       gfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT,
+				       PAGE_SHIFT);
+	return 0;
+}
-- 
2.43.0
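The gfn-to-gpa conversion done by the new hook is plain shift
arithmetic: a guest frame number times the page size gives the byte
address. A tiny sketch with made-up values:

#include <stdio.h>

#define PAGE_SHIFT 12	/* 4 KiB pages */

int main(void)
{
	unsigned long long gfn = 0x80;		/* guest frame number */
	unsigned long long nr_pages = 4;

	unsigned long long gpa = gfn << PAGE_SHIFT;		/* 0x80000 */
	unsigned long long size = nr_pages << PAGE_SHIFT;	/* 0x4000  */

	printf("flush GPA range [0x%llx, 0x%llx)\n", gpa, gpa + size);
	return 0;
}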
From nobody Fri Oct 10 15:59:47 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH v2 07/12] RISC-V: KVM: Use ncsr_xyz() in kvm_riscv_vcpu_trap_redirect()
Date: Fri, 13 Jun 2025 12:27:38 +0530
Message-ID: <20250613065743.737102-8-apatel@ventanamicro.com>

The H-extension CSRs accessed by kvm_riscv_vcpu_trap_redirect() will
trap when KVM RISC-V is running as Guest/VM hence remove these traps
by using ncsr_xyz() instead of csr_xyz().

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/kvm/vcpu_exit.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c
index 6e0c18412795..85c43c83e3b9 100644
--- a/arch/riscv/kvm/vcpu_exit.c
+++ b/arch/riscv/kvm/vcpu_exit.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 
 static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
 			     struct kvm_cpu_trap *trap)
@@ -135,7 +136,7 @@ unsigned long kvm_riscv_vcpu_unpriv_read(struct kvm_vcpu *vcpu,
 void kvm_riscv_vcpu_trap_redirect(struct kvm_vcpu *vcpu,
 				  struct kvm_cpu_trap *trap)
 {
-	unsigned long vsstatus = csr_read(CSR_VSSTATUS);
+	unsigned long vsstatus = ncsr_read(CSR_VSSTATUS);
 
 	/* Change Guest SSTATUS.SPP bit */
 	vsstatus &= ~SR_SPP;
@@ -151,15 +152,15 @@ void kvm_riscv_vcpu_trap_redirect(struct kvm_vcpu *vcpu,
 	vsstatus &= ~SR_SIE;
 
 	/* Update Guest SSTATUS */
-	csr_write(CSR_VSSTATUS, vsstatus);
+	ncsr_write(CSR_VSSTATUS, vsstatus);
 
 	/* Update Guest SCAUSE, STVAL, and SEPC */
-	csr_write(CSR_VSCAUSE, trap->scause);
-	csr_write(CSR_VSTVAL, trap->stval);
-	csr_write(CSR_VSEPC, trap->sepc);
+	ncsr_write(CSR_VSCAUSE, trap->scause);
+	ncsr_write(CSR_VSTVAL, trap->stval);
+	ncsr_write(CSR_VSEPC, trap->sepc);
 
 	/* Set Guest PC to Guest exception vector */
-	vcpu->arch.guest_context.sepc = csr_read(CSR_VSTVEC);
+	vcpu->arch.guest_context.sepc = ncsr_read(CSR_VSTVEC);
 
 	/* Set Guest privilege mode to supervisor */
 	vcpu->arch.guest_context.sstatus |= SR_SPP;
-- 
2.43.0
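Illustrative only: the real ncsr_*() helpers are kernel macros whose
exact mechanism is not shown in this patch. The sketch below captures
just the idea named in the commit message, under that assumption:
prefer a memory-backed access path over a CSR instruction that would
trap to the host when running virtualized.

#include <stdio.h>

static int nested_virt = 1;		/* assumed runtime condition */
static unsigned long shadow_vsstatus = 0x120;

static unsigned long csr_read_vsstatus_hw(void)
{
	return 0x100;	/* stands in for a CSR read that would trap */
}

/* Hypothetical dispatcher: avoid the trapping path when nested. */
static unsigned long ncsr_read_vsstatus(void)
{
	return nested_virt ? shadow_vsstatus : csr_read_vsstatus_hw();
}

int main(void)
{
	printf("vsstatus = 0x%lx\n", ncsr_read_vsstatus());
	return 0;
}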
From nobody Fri Oct 10 15:59:47 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH v2 08/12] RISC-V: KVM: Factor-out MMU related declarations into separate headers
Date: Fri, 13 Jun 2025 12:27:39 +0530
Message-ID: <20250613065743.737102-9-apatel@ventanamicro.com>

The MMU, TLB, and VMID management for KVM RISC-V already exists as
separate sources so create separate headers along these lines. This
further simplifies asm/kvm_host.h header.

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/include/asm/kvm_host.h | 100 +-----------------------------
 arch/riscv/include/asm/kvm_mmu.h  |  26 ++++++++
 arch/riscv/include/asm/kvm_tlb.h  |  78 +++++++++++++++++++++++
 arch/riscv/include/asm/kvm_vmid.h |  27 ++++++++
 arch/riscv/kvm/aia_imsic.c        |   1 +
 arch/riscv/kvm/main.c             |   1 +
 arch/riscv/kvm/mmu.c              |   1 +
 arch/riscv/kvm/tlb.c              |   2 +
 arch/riscv/kvm/vcpu.c             |   1 +
 arch/riscv/kvm/vcpu_exit.c        |   1 +
 arch/riscv/kvm/vm.c               |   1 +
 arch/riscv/kvm/vmid.c             |   2 +
 12 files changed, 143 insertions(+), 98 deletions(-)
 create mode 100644 arch/riscv/include/asm/kvm_mmu.h
 create mode 100644 arch/riscv/include/asm/kvm_tlb.h
 create mode 100644 arch/riscv/include/asm/kvm_vmid.h

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 6162575e2177..bd5341efa127 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -16,6 +16,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
 #include
@@ -56,24 +58,6 @@
 			 BIT(IRQ_VS_TIMER) | \
 			 BIT(IRQ_VS_EXT))
 
-enum kvm_riscv_hfence_type {
-	KVM_RISCV_HFENCE_UNKNOWN = 0,
-	KVM_RISCV_HFENCE_GVMA_VMID_GPA,
-	KVM_RISCV_HFENCE_VVMA_ASID_GVA,
-	KVM_RISCV_HFENCE_VVMA_ASID_ALL,
-	KVM_RISCV_HFENCE_VVMA_GVA,
-};
-
-struct kvm_riscv_hfence {
-	enum kvm_riscv_hfence_type type;
-	unsigned long asid;
-	unsigned long order;
-	gpa_t addr;
-	gpa_t size;
-};
-
-#define KVM_RISCV_VCPU_MAX_HFENCE	64
-
 struct kvm_vm_stat {
 	struct kvm_vm_stat_generic generic;
 };
@@ -99,15 +83,6 @@ struct kvm_vcpu_stat {
 struct kvm_arch_memory_slot {
 };
 
-struct kvm_vmid {
-	/*
-	 * Writes to vmid_version and vmid happen with vmid_lock held
-	 * whereas reads happen without any lock held.
-	 */
-	unsigned long vmid_version;
-	unsigned long vmid;
-};
-
 struct kvm_arch {
 	/* G-stage vmid */
 	struct kvm_vmid vmid;
@@ -311,77 +286,6 @@ static inline bool kvm_arch_pmi_in_guest(struct kvm_vcpu *vcpu)
 	return IS_ENABLED(CONFIG_GUEST_PERF_EVENTS) && !!vcpu;
 }
 
-#define KVM_RISCV_GSTAGE_TLB_MIN_ORDER		12
-
-void kvm_riscv_local_hfence_gvma_vmid_gpa(unsigned long vmid,
-					  gpa_t gpa, gpa_t gpsz,
-					  unsigned long order);
-void kvm_riscv_local_hfence_gvma_vmid_all(unsigned long vmid);
-void kvm_riscv_local_hfence_gvma_gpa(gpa_t gpa, gpa_t gpsz,
-				     unsigned long order);
-void kvm_riscv_local_hfence_gvma_all(void);
-void kvm_riscv_local_hfence_vvma_asid_gva(unsigned long vmid,
-					  unsigned long asid,
-					  unsigned long gva,
-					  unsigned long gvsz,
-					  unsigned long order);
-void kvm_riscv_local_hfence_vvma_asid_all(unsigned long vmid,
-					  unsigned long asid);
-void kvm_riscv_local_hfence_vvma_gva(unsigned long vmid,
-				     unsigned long gva, unsigned long gvsz,
-				     unsigned long order);
-void kvm_riscv_local_hfence_vvma_all(unsigned long vmid);
-
-void kvm_riscv_tlb_flush_process(struct kvm_vcpu *vcpu);
-
-void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu);
-void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu);
-void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu);
-
-void kvm_riscv_fence_i(struct kvm *kvm,
-		       unsigned long hbase, unsigned long hmask);
-void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask,
-				    gpa_t gpa, gpa_t gpsz,
-				    unsigned long order);
-void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask);
-void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask,
-				    unsigned long gva, unsigned long gvsz,
-				    unsigned long order, unsigned long asid);
-void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask,
-				    unsigned long asid);
-void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
-			       unsigned long hbase, unsigned long hmask,
-			       unsigned long gva, unsigned long gvsz,
-			       unsigned long order);
-void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
-			       unsigned long hbase, unsigned long hmask);
-
-int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
-			     phys_addr_t hpa, unsigned long size,
-			     bool writable, bool in_atomic);
-void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa,
-			      unsigned long size);
-int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
-			 struct kvm_memory_slot *memslot,
-			 gpa_t gpa, unsigned long hva, bool is_write);
-int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm);
-void kvm_riscv_gstage_free_pgd(struct kvm *kvm);
-void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu);
-void __init kvm_riscv_gstage_mode_detect(void);
-unsigned long __init kvm_riscv_gstage_mode(void);
-int kvm_riscv_gstage_gpa_bits(void);
-
-void __init kvm_riscv_gstage_vmid_detect(void);
-unsigned long kvm_riscv_gstage_vmid_bits(void);
-int kvm_riscv_gstage_vmid_init(struct kvm *kvm);
-bool kvm_riscv_gstage_vmid_ver_changed(struct kvm_vmid *vmid);
-void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu);
-void kvm_riscv_gstage_vmid_sanitize(struct kvm_vcpu *vcpu);
-
 int kvm_riscv_setup_default_irq_routing(struct kvm *kvm, u32 lines);
 
 void __kvm_riscv_unpriv_trap(void);
diff --git a/arch/riscv/include/asm/kvm_mmu.h b/arch/riscv/include/asm/kvm_mmu.h
new file mode 100644
index 000000000000..4e1654282ee4
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_mmu.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2025 Ventana Micro Systems Inc.
+ */
+
+#ifndef __RISCV_KVM_MMU_H_
+#define __RISCV_KVM_MMU_H_
+
+#include
+
+int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
+			     phys_addr_t hpa, unsigned long size,
+			     bool writable, bool in_atomic);
+void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa,
+			      unsigned long size);
+int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
+			 struct kvm_memory_slot *memslot,
+			 gpa_t gpa, unsigned long hva, bool is_write);
+int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm);
+void kvm_riscv_gstage_free_pgd(struct kvm *kvm);
+void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu);
+void kvm_riscv_gstage_mode_detect(void);
+unsigned long kvm_riscv_gstage_mode(void);
+int kvm_riscv_gstage_gpa_bits(void);
+
+#endif
diff --git a/arch/riscv/include/asm/kvm_tlb.h b/arch/riscv/include/asm/kvm_tlb.h
new file mode 100644
index 000000000000..cd00c9a46cb1
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_tlb.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2025 Ventana Micro Systems Inc.
+ */
+
+#ifndef __RISCV_KVM_TLB_H_
+#define __RISCV_KVM_TLB_H_
+
+#include
+
+enum kvm_riscv_hfence_type {
+	KVM_RISCV_HFENCE_UNKNOWN = 0,
+	KVM_RISCV_HFENCE_GVMA_VMID_GPA,
+	KVM_RISCV_HFENCE_VVMA_ASID_GVA,
+	KVM_RISCV_HFENCE_VVMA_ASID_ALL,
+	KVM_RISCV_HFENCE_VVMA_GVA,
+};
+
+struct kvm_riscv_hfence {
+	enum kvm_riscv_hfence_type type;
+	unsigned long asid;
+	unsigned long order;
+	gpa_t addr;
+	gpa_t size;
+};
+
+#define KVM_RISCV_VCPU_MAX_HFENCE	64
+
+#define KVM_RISCV_GSTAGE_TLB_MIN_ORDER	12
+
+void kvm_riscv_local_hfence_gvma_vmid_gpa(unsigned long vmid,
+					  gpa_t gpa, gpa_t gpsz,
+					  unsigned long order);
+void kvm_riscv_local_hfence_gvma_vmid_all(unsigned long vmid);
+void kvm_riscv_local_hfence_gvma_gpa(gpa_t gpa, gpa_t gpsz,
+				     unsigned long order);
+void kvm_riscv_local_hfence_gvma_all(void);
+void kvm_riscv_local_hfence_vvma_asid_gva(unsigned long vmid,
+					  unsigned long asid,
+					  unsigned long gva,
+					  unsigned long gvsz,
+					  unsigned long order);
+void kvm_riscv_local_hfence_vvma_asid_all(unsigned long vmid,
+					  unsigned long asid);
+void kvm_riscv_local_hfence_vvma_gva(unsigned long vmid,
+				     unsigned long gva, unsigned long gvsz,
+				     unsigned long order);
+void kvm_riscv_local_hfence_vvma_all(unsigned long vmid);
+
+void kvm_riscv_tlb_flush_process(struct kvm_vcpu *vcpu);
+
+void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu);
+void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu);
+void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu);
+
+void kvm_riscv_fence_i(struct kvm *kvm,
+		       unsigned long hbase, unsigned long hmask);
+void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
+				    unsigned long hbase, unsigned long hmask,
+				    gpa_t gpa, gpa_t gpsz,
+				    unsigned long order);
+void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
+				    unsigned long hbase, unsigned long hmask);
+void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
+				    unsigned long hbase, unsigned long hmask,
+				    unsigned long gva, unsigned long gvsz,
+				    unsigned long order, unsigned long asid);
+void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
+				    unsigned long hbase, unsigned long hmask,
+				    unsigned long asid);
+void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
+			       unsigned long hbase, unsigned long hmask,
+			       unsigned long gva, unsigned long gvsz,
+			       unsigned long order);
+void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
+			       unsigned long hbase, unsigned long hmask);
+
+#endif
diff --git a/arch/riscv/include/asm/kvm_vmid.h b/arch/riscv/include/asm/kvm_vmid.h
new file mode 100644
index 000000000000..ab98e1434fb7
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_vmid.h @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (c) 2025 Ventana Micro Systems Inc. + */ + +#ifndef __RISCV_KVM_VMID_H_ +#define __RISCV_KVM_VMID_H_ + +#include + +struct kvm_vmid { + /* + * Writes to vmid_version and vmid happen with vmid_lock held + * whereas reads happen without any lock held. + */ + unsigned long vmid_version; + unsigned long vmid; +}; + +void __init kvm_riscv_gstage_vmid_detect(void); +unsigned long kvm_riscv_gstage_vmid_bits(void); +int kvm_riscv_gstage_vmid_init(struct kvm *kvm); +bool kvm_riscv_gstage_vmid_ver_changed(struct kvm_vmid *vmid); +void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu); +void kvm_riscv_gstage_vmid_sanitize(struct kvm_vcpu *vcpu); + +#endif diff --git a/arch/riscv/kvm/aia_imsic.c b/arch/riscv/kvm/aia_imsic.c index 29ef9c2133a9..40b469c0a01f 100644 --- a/arch/riscv/kvm/aia_imsic.c +++ b/arch/riscv/kvm/aia_imsic.c @@ -16,6 +16,7 @@ #include #include #include +#include =20 #define IMSIC_MAX_EIX (IMSIC_MAX_ID / BITS_PER_TYPE(u64)) =20 diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c index 4b24705dc63a..b861a5dd7bd9 100644 --- a/arch/riscv/kvm/main.c +++ b/arch/riscv/kvm/main.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include =20 diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c index a5387927a1c1..c1a3eb076df3 100644 --- a/arch/riscv/kvm/mmu.c +++ b/arch/riscv/kvm/mmu.c @@ -15,6 +15,7 @@ #include #include #include +#include #include #include #include diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c index f46a27658c2e..6fc4361c3d75 100644 --- a/arch/riscv/kvm/tlb.c +++ b/arch/riscv/kvm/tlb.c @@ -15,6 +15,8 @@ #include #include #include +#include +#include =20 #define has_svinval() riscv_has_extension_unlikely(RISCV_ISA_EXT_SVINVAL) =20 diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 6eb11c913b13..8ad7b31f5939 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include =20 diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c index 85c43c83e3b9..965df528de90 100644 --- a/arch/riscv/kvm/vcpu_exit.c +++ b/arch/riscv/kvm/vcpu_exit.c @@ -9,6 +9,7 @@ #include #include #include +#include #include =20 static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run, diff --git a/arch/riscv/kvm/vm.c b/arch/riscv/kvm/vm.c index b27ec8f96697..8601cf29e5f8 100644 --- a/arch/riscv/kvm/vm.c +++ b/arch/riscv/kvm/vm.c @@ -11,6 +11,7 @@ #include #include #include +#include =20 const struct _kvm_stats_desc kvm_vm_stats_desc[] =3D { KVM_GENERIC_VM_STATS() diff --git a/arch/riscv/kvm/vmid.c b/arch/riscv/kvm/vmid.c index 92c01255f86f..3b426c800480 100644 --- a/arch/riscv/kvm/vmid.c +++ b/arch/riscv/kvm/vmid.c @@ -14,6 +14,8 @@ #include #include #include +#include +#include =20 static unsigned long vmid_version =3D 1; static unsigned long vmid_next; --=20 2.43.0 From nobody Fri Oct 10 15:59:47 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH v2 09/12] RISC-V: KVM: Introduce struct kvm_gstage_mapping
Date: Fri, 13 Jun 2025 12:27:40 +0530
Message-ID: <20250613065743.737102-10-apatel@ventanamicro.com>
In-Reply-To: <20250613065743.737102-1-apatel@ventanamicro.com>
References: <20250613065743.737102-1-apatel@ventanamicro.com>

Introduce struct kvm_gstage_mapping, which represents a g-stage mapping
at a particular g-stage page table level. Also, update
kvm_riscv_gstage_map() to return the g-stage mapping upon success.

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/include/asm/kvm_mmu.h | 9 ++++- arch/riscv/kvm/mmu.c | 58 ++++++++++++++++++-------------- arch/riscv/kvm/vcpu_exit.c | 3 +- 3 files changed, 43 insertions(+), 27 deletions(-) diff --git a/arch/riscv/include/asm/kvm_mmu.h b/arch/riscv/include/asm/kvm_mmu.h index 4e1654282ee4..91c11e692dc7 100644 --- a/arch/riscv/include/asm/kvm_mmu.h +++ b/arch/riscv/include/asm/kvm_mmu.h @@ -8,6 +8,12 @@ =20 #include =20 +struct kvm_gstage_mapping { + gpa_t addr; + pte_t pte; + u32 level; +}; + int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa, unsigned long size, bool writable, bool in_atomic); @@ -15,7 +21,8 @@ void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa, unsigned long size); int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, struct kvm_memory_slot *memslot, - gpa_t gpa, unsigned long hva, bool is_write); + gpa_t gpa, unsigned long hva, bool is_write, + struct kvm_gstage_mapping *out_map); int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm); void kvm_riscv_gstage_free_pgd(struct kvm *kvm); void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu); diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c index c1a3eb076df3..806614b3e46d 100644 --- a/arch/riscv/kvm/mmu.c +++ b/arch/riscv/kvm/mmu.c @@ -135,18 +135,18 @@ static void gstage_remote_tlb_flush(struct kvm *kvm, u32 level, gpa_t addr) kvm_riscv_hfence_gvma_vmid_gpa(kvm, -1UL, 0, addr, BIT(order), order); } =20 -static int gstage_set_pte(struct kvm *kvm, u32 level, - struct kvm_mmu_memory_cache *pcache, - gpa_t addr, const pte_t *new_pte) +static int gstage_set_pte(struct kvm *kvm, + struct kvm_mmu_memory_cache *pcache, + const struct kvm_gstage_mapping *map) { u32 current_level =3D gstage_pgd_levels - 1; pte_t *next_ptep =3D (pte_t *)kvm->arch.pgd; - pte_t *ptep =3D &next_ptep[gstage_pte_index(addr, current_level)]; + pte_t *ptep =3D &next_ptep[gstage_pte_index(map->addr, current_level)]; =20 - if (current_level < level) + if (current_level < map->level) return -EINVAL; =20 - while (current_level !=3D level) { + while (current_level !=3D map->level) { if (gstage_pte_leaf(ptep)) return -EEXIST; =20 @@
-165,13 +165,13 @@ static int gstage_set_pte(struct kvm *kvm, u32 level, } =20 current_level--; - ptep =3D &next_ptep[gstage_pte_index(addr, current_level)]; + ptep =3D &next_ptep[gstage_pte_index(map->addr, current_level)]; } =20 - if (pte_val(*ptep) !=3D pte_val(*new_pte)) { - set_pte(ptep, *new_pte); + if (pte_val(*ptep) !=3D pte_val(map->pte)) { + set_pte(ptep, map->pte); if (gstage_pte_leaf(ptep)) - gstage_remote_tlb_flush(kvm, current_level, addr); + gstage_remote_tlb_flush(kvm, current_level, map->addr); } =20 return 0; @@ -181,14 +181,16 @@ static int gstage_map_page(struct kvm *kvm, struct kvm_mmu_memory_cache *pcache, gpa_t gpa, phys_addr_t hpa, unsigned long page_size, - bool page_rdonly, bool page_exec) + bool page_rdonly, bool page_exec, + struct kvm_gstage_mapping *out_map) { - int ret; - u32 level =3D 0; - pte_t new_pte; pgprot_t prot; + int ret; =20 - ret =3D gstage_page_size_to_level(page_size, &level); + out_map->addr =3D gpa; + out_map->level =3D 0; + + ret =3D gstage_page_size_to_level(page_size, &out_map->level); if (ret) return ret; =20 @@ -216,10 +218,10 @@ static int gstage_map_page(struct kvm *kvm, else prot =3D PAGE_WRITE; } - new_pte =3D pfn_pte(PFN_DOWN(hpa), prot); - new_pte =3D pte_mkdirty(new_pte); + out_map->pte =3D pfn_pte(PFN_DOWN(hpa), prot); + out_map->pte =3D pte_mkdirty(out_map->pte); =20 - return gstage_set_pte(kvm, level, pcache, gpa, &new_pte); + return gstage_set_pte(kvm, pcache, out_map); } =20 enum gstage_op { @@ -352,7 +354,6 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa, unsigned long size, bool writable, bool in_atomic) { - pte_t pte; int ret =3D 0; unsigned long pfn; phys_addr_t addr, end; @@ -360,22 +361,25 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t g= pa, .gfp_custom =3D (in_atomic) ? 
GFP_ATOMIC | __GFP_ACCOUNT : 0, .gfp_zero =3D __GFP_ZERO, }; + struct kvm_gstage_mapping map; =20 end =3D (gpa + size + PAGE_SIZE - 1) & PAGE_MASK; pfn =3D __phys_to_pfn(hpa); =20 for (addr =3D gpa; addr < end; addr +=3D PAGE_SIZE) { - pte =3D pfn_pte(pfn, PAGE_KERNEL_IO); + map.addr =3D addr; + map.pte =3D pfn_pte(pfn, PAGE_KERNEL_IO); + map.level =3D 0; =20 if (!writable) - pte =3D pte_wrprotect(pte); + map.pte =3D pte_wrprotect(map.pte); =20 ret =3D kvm_mmu_topup_memory_cache(&pcache, gstage_pgd_levels); if (ret) goto out; =20 spin_lock(&kvm->mmu_lock); - ret =3D gstage_set_pte(kvm, 0, &pcache, addr, &pte); + ret =3D gstage_set_pte(kvm, &pcache, &map); spin_unlock(&kvm->mmu_lock); if (ret) goto out; @@ -593,7 +597,8 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_r= ange *range) =20 int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, struct kvm_memory_slot *memslot, - gpa_t gpa, unsigned long hva, bool is_write) + gpa_t gpa, unsigned long hva, bool is_write, + struct kvm_gstage_mapping *out_map) { int ret; kvm_pfn_t hfn; @@ -608,6 +613,9 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, unsigned long vma_pagesize, mmu_seq; struct page *page; =20 + /* Setup initial state of output mapping */ + memset(out_map, 0, sizeof(*out_map)); + /* We need minimum second+third level pages */ ret =3D kvm_mmu_topup_memory_cache(pcache, gstage_pgd_levels); if (ret) { @@ -677,10 +685,10 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, if (writable) { mark_page_dirty(kvm, gfn); ret =3D gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT, - vma_pagesize, false, true); + vma_pagesize, false, true, out_map); } else { ret =3D gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT, - vma_pagesize, true, true); + vma_pagesize, true, true, out_map); } =20 if (ret) diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c index 965df528de90..6b4694bc07ea 100644 --- a/arch/riscv/kvm/vcpu_exit.c +++ b/arch/riscv/kvm/vcpu_exit.c @@ -15,6 +15,7 @@ static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run, struct kvm_cpu_trap *trap) { + struct kvm_gstage_mapping host_map; struct kvm_memory_slot *memslot; unsigned long hva, fault_addr; bool writable; @@ -43,7 +44,7 @@ static int gstage_page_fault(struct kvm_vcpu *vcpu, struc= t kvm_run *run, } =20 ret =3D kvm_riscv_gstage_map(vcpu, memslot, fault_addr, hva, - (trap->scause =3D=3D EXC_STORE_GUEST_PAGE_FAULT) ? true : false); + (trap->scause =3D=3D EXC_STORE_GUEST_PAGE_FAULT) ? 
true : false, &host_map); if (ret < 0) return ret; =20 --=20 2.43.0 From nobody Fri Oct 10 15:59:47 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH v2 10/12] RISC-V: KVM: Add vmid field to struct kvm_riscv_hfence
Date: Fri, 13 Jun 2025 12:27:41 +0530
Message-ID: <20250613065743.737102-11-apatel@ventanamicro.com>
In-Reply-To: <20250613065743.737102-1-apatel@ventanamicro.com>
References: <20250613065743.737102-1-apatel@ventanamicro.com>

Currently, struct kvm_riscv_hfence has no vmid field, so the various
hfence processing functions always use the vmid assigned to the
guest/VM. This prevents hfence operations on an arbitrary vmid, so add
a vmid field to struct kvm_riscv_hfence and use it wherever applicable.
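
For illustration only (not part of the patch), the new field lets a
caller enqueue a fence for a VMID other than the one currently assigned
to the VM, e.g. a VMID used by nested virtualization. A minimal sketch
mirroring the fields and the make_xfence_request() helper touched by
this patch; "nested_vmid", "gpa", "gpsz", and "order" are assumed
placeholder values, not something this series defines:

	struct kvm_riscv_hfence data;

	data.type = KVM_RISCV_HFENCE_GVMA_VMID_GPA;
	data.asid = 0;
	data.vmid = nested_vmid;	/* any VMID, not just kvm->arch.vmid */
	data.addr = gpa;
	data.size = gpsz;
	data.order = order;
	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
			    KVM_REQ_HFENCE_GVMA_VMID_ALL, &data);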
Signed-off-by: Anup Patel Reviewed-by: Atish Patra --- arch/riscv/include/asm/kvm_tlb.h | 1 + arch/riscv/kvm/tlb.c | 30 ++++++++++++++++-------------- 2 files changed, 17 insertions(+), 14 deletions(-) diff --git a/arch/riscv/include/asm/kvm_tlb.h b/arch/riscv/include/asm/kvm_= tlb.h index cd00c9a46cb1..f67e03edeaec 100644 --- a/arch/riscv/include/asm/kvm_tlb.h +++ b/arch/riscv/include/asm/kvm_tlb.h @@ -19,6 +19,7 @@ enum kvm_riscv_hfence_type { struct kvm_riscv_hfence { enum kvm_riscv_hfence_type type; unsigned long asid; + unsigned long vmid; unsigned long order; gpa_t addr; gpa_t size; diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c index 6fc4361c3d75..349fcfc93f54 100644 --- a/arch/riscv/kvm/tlb.c +++ b/arch/riscv/kvm/tlb.c @@ -237,49 +237,43 @@ static bool vcpu_hfence_enqueue(struct kvm_vcpu *vcpu, =20 void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu) { - unsigned long vmid; struct kvm_riscv_hfence d =3D { 0 }; - struct kvm_vmid *v =3D &vcpu->kvm->arch.vmid; =20 while (vcpu_hfence_dequeue(vcpu, &d)) { switch (d.type) { case KVM_RISCV_HFENCE_UNKNOWN: break; case KVM_RISCV_HFENCE_GVMA_VMID_GPA: - vmid =3D READ_ONCE(v->vmid); if (kvm_riscv_nacl_available()) - nacl_hfence_gvma_vmid(nacl_shmem(), vmid, + nacl_hfence_gvma_vmid(nacl_shmem(), d.vmid, d.addr, d.size, d.order); else - kvm_riscv_local_hfence_gvma_vmid_gpa(vmid, d.addr, + kvm_riscv_local_hfence_gvma_vmid_gpa(d.vmid, d.addr, d.size, d.order); break; case KVM_RISCV_HFENCE_VVMA_ASID_GVA: kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD); - vmid =3D READ_ONCE(v->vmid); if (kvm_riscv_nacl_available()) - nacl_hfence_vvma_asid(nacl_shmem(), vmid, d.asid, + nacl_hfence_vvma_asid(nacl_shmem(), d.vmid, d.asid, d.addr, d.size, d.order); else - kvm_riscv_local_hfence_vvma_asid_gva(vmid, d.asid, d.addr, + kvm_riscv_local_hfence_vvma_asid_gva(d.vmid, d.asid, d.addr, d.size, d.order); break; case KVM_RISCV_HFENCE_VVMA_ASID_ALL: kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD); - vmid =3D READ_ONCE(v->vmid); if (kvm_riscv_nacl_available()) - nacl_hfence_vvma_asid_all(nacl_shmem(), vmid, d.asid); + nacl_hfence_vvma_asid_all(nacl_shmem(), d.vmid, d.asid); else - kvm_riscv_local_hfence_vvma_asid_all(vmid, d.asid); + kvm_riscv_local_hfence_vvma_asid_all(d.vmid, d.asid); break; case KVM_RISCV_HFENCE_VVMA_GVA: kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_RCVD); - vmid =3D READ_ONCE(v->vmid); if (kvm_riscv_nacl_available()) - nacl_hfence_vvma(nacl_shmem(), vmid, + nacl_hfence_vvma(nacl_shmem(), d.vmid, d.addr, d.size, d.order); else - kvm_riscv_local_hfence_vvma_gva(vmid, d.addr, + kvm_riscv_local_hfence_vvma_gva(d.vmid, d.addr, d.size, d.order); break; default: @@ -336,10 +330,12 @@ void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm, gpa_t gpa, gpa_t gpsz, unsigned long order) { + struct kvm_vmid *v =3D &kvm->arch.vmid; struct kvm_riscv_hfence data; =20 data.type =3D KVM_RISCV_HFENCE_GVMA_VMID_GPA; data.asid =3D 0; + data.vmid =3D READ_ONCE(v->vmid); data.addr =3D gpa; data.size =3D gpsz; data.order =3D order; @@ -359,10 +355,12 @@ void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm, unsigned long gva, unsigned long gvsz, unsigned long order, unsigned long asid) { + struct kvm_vmid *v =3D &kvm->arch.vmid; struct kvm_riscv_hfence data; =20 data.type =3D KVM_RISCV_HFENCE_VVMA_ASID_GVA; data.asid =3D asid; + data.vmid =3D READ_ONCE(v->vmid); data.addr =3D gva; data.size =3D gvsz; data.order =3D order; @@ -374,10 +372,12 @@ void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm, unsigned long hbase, 
unsigned long hmask, unsigned long asid) { + struct kvm_vmid *v =3D &kvm->arch.vmid; struct kvm_riscv_hfence data; =20 data.type =3D KVM_RISCV_HFENCE_VVMA_ASID_ALL; data.asid =3D asid; + data.vmid =3D READ_ONCE(v->vmid); data.addr =3D data.size =3D data.order =3D 0; make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE, KVM_REQ_HFENCE_VVMA_ALL, &data); } @@ -388,10 +388,12 @@ void kvm_riscv_hfence_vvma_gva(struct kvm *kvm, unsigned long gva, unsigned long gvsz, unsigned long order) { + struct kvm_vmid *v =3D &kvm->arch.vmid; struct kvm_riscv_hfence data; =20 data.type =3D KVM_RISCV_HFENCE_VVMA_GVA; data.asid =3D 0; + data.vmid =3D READ_ONCE(v->vmid); data.addr =3D gva; data.size =3D gvsz; data.order =3D order; --=20 2.43.0 From nobody Fri Oct 10 15:59:47 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH v2 11/12] RISC-V: KVM: Factor-out g-stage page table management
Date: Fri, 13 Jun 2025 12:27:42 +0530
Message-ID: <20250613065743.737102-12-apatel@ventanamicro.com>
In-Reply-To: <20250613065743.737102-1-apatel@ventanamicro.com>
References: <20250613065743.737102-1-apatel@ventanamicro.com>

The upcoming nested virtualization support can share g-stage page table
management with the current host g-stage implementation, so factor out
g-stage page table management into separate sources and use the
"kvm_riscv_mmu_" prefix for host g-stage functions.
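
For reference (an illustrative sketch, not part of the commit message):
after this change, a user of the factored-out g-stage helpers first
describes the page table to operate on in a struct kvm_gstage, exactly
as the reworked host MMU code in the diff below does; the "start"/"end"
write-protect range here is assumed:

	struct kvm_gstage gstage;

	gstage.kvm = kvm;
	gstage.flags = 0;	/* KVM_GSTAGE_FLAGS_LOCAL selects local hfences */
	gstage.vmid = READ_ONCE(kvm->arch.vmid.vmid);
	gstage.pgd = kvm->arch.pgd;

	spin_lock(&kvm->mmu_lock);
	kvm_riscv_gstage_wp_range(&gstage, start, end);
	spin_unlock(&kvm->mmu_lock);

A nested implementation could substitute its own pgd and vmid, which is
the point of this refactor.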
Signed-off-by: Anup Patel --- arch/riscv/include/asm/kvm_gstage.h | 72 ++++ arch/riscv/include/asm/kvm_mmu.h | 32 +- arch/riscv/kvm/Makefile | 1 + arch/riscv/kvm/aia_imsic.c | 11 +- arch/riscv/kvm/gstage.c | 337 +++++++++++++++++++ arch/riscv/kvm/main.c | 2 +- arch/riscv/kvm/mmu.c | 492 ++++++---------------------- arch/riscv/kvm/vcpu.c | 4 +- arch/riscv/kvm/vcpu_exit.c | 5 +- arch/riscv/kvm/vm.c | 6 +- 10 files changed, 530 insertions(+), 432 deletions(-) create mode 100644 arch/riscv/include/asm/kvm_gstage.h create mode 100644 arch/riscv/kvm/gstage.c diff --git a/arch/riscv/include/asm/kvm_gstage.h b/arch/riscv/include/asm/k= vm_gstage.h new file mode 100644 index 000000000000..595e2183173e --- /dev/null +++ b/arch/riscv/include/asm/kvm_gstage.h @@ -0,0 +1,72 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. + * Copyright (c) 2025 Ventana Micro Systems Inc. + */ + +#ifndef __RISCV_KVM_GSTAGE_H_ +#define __RISCV_KVM_GSTAGE_H_ + +#include + +struct kvm_gstage { + struct kvm *kvm; + unsigned long flags; +#define KVM_GSTAGE_FLAGS_LOCAL BIT(0) + unsigned long vmid; + pgd_t *pgd; +}; + +struct kvm_gstage_mapping { + gpa_t addr; + pte_t pte; + u32 level; +}; + +#ifdef CONFIG_64BIT +#define kvm_riscv_gstage_index_bits 9 +#else +#define kvm_riscv_gstage_index_bits 10 +#endif + +extern unsigned long kvm_riscv_gstage_mode; +extern unsigned long kvm_riscv_gstage_pgd_levels; + +#define kvm_riscv_gstage_pgd_xbits 2 +#define kvm_riscv_gstage_pgd_size (1UL << (HGATP_PAGE_SHIFT + kvm_riscv_gs= tage_pgd_xbits)) +#define kvm_riscv_gstage_gpa_bits (HGATP_PAGE_SHIFT + \ + (kvm_riscv_gstage_pgd_levels * \ + kvm_riscv_gstage_index_bits) + \ + kvm_riscv_gstage_pgd_xbits) +#define kvm_riscv_gstage_gpa_size ((gpa_t)(1ULL << kvm_riscv_gstage_gpa_bi= ts)) + +bool kvm_riscv_gstage_get_leaf(struct kvm_gstage *gstage, gpa_t addr, + pte_t **ptepp, u32 *ptep_level); + +int kvm_riscv_gstage_set_pte(struct kvm_gstage *gstage, + struct kvm_mmu_memory_cache *pcache, + const struct kvm_gstage_mapping *map); + +int kvm_riscv_gstage_map_page(struct kvm_gstage *gstage, + struct kvm_mmu_memory_cache *pcache, + gpa_t gpa, phys_addr_t hpa, unsigned long page_size, + bool page_rdonly, bool page_exec, + struct kvm_gstage_mapping *out_map); + +enum kvm_riscv_gstage_op { + GSTAGE_OP_NOP =3D 0, /* Nothing */ + GSTAGE_OP_CLEAR, /* Clear/Unmap */ + GSTAGE_OP_WP, /* Write-protect */ +}; + +void kvm_riscv_gstage_op_pte(struct kvm_gstage *gstage, gpa_t addr, + pte_t *ptep, u32 ptep_level, enum kvm_riscv_gstage_op op); + +void kvm_riscv_gstage_unmap_range(struct kvm_gstage *gstage, + gpa_t start, gpa_t size, bool may_block); + +void kvm_riscv_gstage_wp_range(struct kvm_gstage *gstage, gpa_t start, gpa= _t end); + +void kvm_riscv_gstage_mode_detect(void); + +#endif diff --git a/arch/riscv/include/asm/kvm_mmu.h b/arch/riscv/include/asm/kvm_= mmu.h index 91c11e692dc7..5439e76f0a96 100644 --- a/arch/riscv/include/asm/kvm_mmu.h +++ b/arch/riscv/include/asm/kvm_mmu.h @@ -6,28 +6,16 @@ #ifndef __RISCV_KVM_MMU_H_ #define __RISCV_KVM_MMU_H_ =20 -#include +#include =20 -struct kvm_gstage_mapping { - gpa_t addr; - pte_t pte; - u32 level; -}; - -int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa, - phys_addr_t hpa, unsigned long size, - bool writable, bool in_atomic); -void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa, - unsigned long size); -int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, - struct kvm_memory_slot *memslot, - gpa_t gpa, unsigned long hva, bool is_write, 
- struct kvm_gstage_mapping *out_map); -int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm); -void kvm_riscv_gstage_free_pgd(struct kvm *kvm); -void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu); -void kvm_riscv_gstage_mode_detect(void); -unsigned long kvm_riscv_gstage_mode(void); -int kvm_riscv_gstage_gpa_bits(void); +int kvm_riscv_mmu_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa, + unsigned long size, bool writable, bool in_atomic); +void kvm_riscv_mmu_iounmap(struct kvm *kvm, gpa_t gpa, unsigned long size); +int kvm_riscv_mmu_map(struct kvm_vcpu *vcpu, struct kvm_memory_slot *memsl= ot, + gpa_t gpa, unsigned long hva, bool is_write, + struct kvm_gstage_mapping *out_map); +int kvm_riscv_mmu_alloc_pgd(struct kvm *kvm); +void kvm_riscv_mmu_free_pgd(struct kvm *kvm); +void kvm_riscv_mmu_update_hgatp(struct kvm_vcpu *vcpu); =20 #endif diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile index 06e2d52a9b88..07197395750e 100644 --- a/arch/riscv/kvm/Makefile +++ b/arch/riscv/kvm/Makefile @@ -14,6 +14,7 @@ kvm-y +=3D aia.o kvm-y +=3D aia_aplic.o kvm-y +=3D aia_device.o kvm-y +=3D aia_imsic.o +kvm-y +=3D gstage.o kvm-y +=3D main.o kvm-y +=3D mmu.o kvm-y +=3D nacl.o diff --git a/arch/riscv/kvm/aia_imsic.c b/arch/riscv/kvm/aia_imsic.c index 40b469c0a01f..ea1a36836d9c 100644 --- a/arch/riscv/kvm/aia_imsic.c +++ b/arch/riscv/kvm/aia_imsic.c @@ -704,9 +704,8 @@ void kvm_riscv_vcpu_aia_imsic_release(struct kvm_vcpu *= vcpu) */ =20 /* Purge the G-stage mapping */ - kvm_riscv_gstage_iounmap(vcpu->kvm, - vcpu->arch.aia_context.imsic_addr, - IMSIC_MMIO_PAGE_SZ); + kvm_riscv_mmu_iounmap(vcpu->kvm, vcpu->arch.aia_context.imsic_addr, + IMSIC_MMIO_PAGE_SZ); =20 /* TODO: Purge the IOMMU mapping ??? */ =20 @@ -786,9 +785,9 @@ int kvm_riscv_vcpu_aia_imsic_update(struct kvm_vcpu *vc= pu) imsic_vsfile_local_clear(new_vsfile_hgei, imsic->nr_hw_eix); =20 /* Update G-stage mapping for the new IMSIC VS-file */ - ret =3D kvm_riscv_gstage_ioremap(kvm, vcpu->arch.aia_context.imsic_addr, - new_vsfile_pa, IMSIC_MMIO_PAGE_SZ, - true, true); + ret =3D kvm_riscv_mmu_ioremap(kvm, vcpu->arch.aia_context.imsic_addr, + new_vsfile_pa, IMSIC_MMIO_PAGE_SZ, + true, true); if (ret) goto fail_free_vsfile_hgei; =20 diff --git a/arch/riscv/kvm/gstage.c b/arch/riscv/kvm/gstage.c new file mode 100644 index 000000000000..9c7c44f09b05 --- /dev/null +++ b/arch/riscv/kvm/gstage.c @@ -0,0 +1,337 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. + * Copyright (c) 2025 Ventana Micro Systems Inc. 
+ */ + +#include +#include +#include +#include +#include +#include + +#ifdef CONFIG_64BIT +unsigned long kvm_riscv_gstage_mode __ro_after_init =3D HGATP_MODE_SV39X4; +unsigned long kvm_riscv_gstage_pgd_levels __ro_after_init =3D 3; +#else +unsigned long kvm_riscv_gstage_mode __ro_after_init =3D HGATP_MODE_SV32X4; +unsigned long kvm_riscv_gstage_pgd_levels __ro_after_init =3D 2; +#endif + +#define gstage_pte_leaf(__ptep) \ + (pte_val(*(__ptep)) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)) + +static inline unsigned long gstage_pte_index(gpa_t addr, u32 level) +{ + unsigned long mask; + unsigned long shift =3D HGATP_PAGE_SHIFT + (kvm_riscv_gstage_index_bits *= level); + + if (level =3D=3D (kvm_riscv_gstage_pgd_levels - 1)) + mask =3D (PTRS_PER_PTE * (1UL << kvm_riscv_gstage_pgd_xbits)) - 1; + else + mask =3D PTRS_PER_PTE - 1; + + return (addr >> shift) & mask; +} + +static inline unsigned long gstage_pte_page_vaddr(pte_t pte) +{ + return (unsigned long)pfn_to_virt(__page_val_to_pfn(pte_val(pte))); +} + +static int gstage_page_size_to_level(unsigned long page_size, u32 *out_lev= el) +{ + u32 i; + unsigned long psz =3D 1UL << 12; + + for (i =3D 0; i < kvm_riscv_gstage_pgd_levels; i++) { + if (page_size =3D=3D (psz << (i * kvm_riscv_gstage_index_bits))) { + *out_level =3D i; + return 0; + } + } + + return -EINVAL; +} + +static int gstage_level_to_page_order(u32 level, unsigned long *out_pgorde= r) +{ + if (kvm_riscv_gstage_pgd_levels < level) + return -EINVAL; + + *out_pgorder =3D 12 + (level * kvm_riscv_gstage_index_bits); + return 0; +} + +static int gstage_level_to_page_size(u32 level, unsigned long *out_pgsize) +{ + int rc; + unsigned long page_order =3D PAGE_SHIFT; + + rc =3D gstage_level_to_page_order(level, &page_order); + if (rc) + return rc; + + *out_pgsize =3D BIT(page_order); + return 0; +} + +bool kvm_riscv_gstage_get_leaf(struct kvm_gstage *gstage, gpa_t addr, + pte_t **ptepp, u32 *ptep_level) +{ + pte_t *ptep; + u32 current_level =3D kvm_riscv_gstage_pgd_levels - 1; + + *ptep_level =3D current_level; + ptep =3D (pte_t *)gstage->pgd; + ptep =3D &ptep[gstage_pte_index(addr, current_level)]; + while (ptep && pte_val(ptep_get(ptep))) { + if (gstage_pte_leaf(ptep)) { + *ptep_level =3D current_level; + *ptepp =3D ptep; + return true; + } + + if (current_level) { + current_level--; + *ptep_level =3D current_level; + ptep =3D (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep)); + ptep =3D &ptep[gstage_pte_index(addr, current_level)]; + } else { + ptep =3D NULL; + } + } + + return false; +} + +static void gstage_tlb_flush(struct kvm_gstage *gstage, u32 level, gpa_t a= ddr) +{ + unsigned long order =3D PAGE_SHIFT; + + if (gstage_level_to_page_order(level, &order)) + return; + addr &=3D ~(BIT(order) - 1); + + if (gstage->flags & KVM_GSTAGE_FLAGS_LOCAL) + kvm_riscv_local_hfence_gvma_vmid_gpa(gstage->vmid, addr, BIT(order), ord= er); + else + kvm_riscv_hfence_gvma_vmid_gpa(gstage->kvm, -1UL, 0, addr, BIT(order), o= rder); +} + +int kvm_riscv_gstage_set_pte(struct kvm_gstage *gstage, + struct kvm_mmu_memory_cache *pcache, + const struct kvm_gstage_mapping *map) +{ + u32 current_level =3D kvm_riscv_gstage_pgd_levels - 1; + pte_t *next_ptep =3D (pte_t *)gstage->pgd; + pte_t *ptep =3D &next_ptep[gstage_pte_index(map->addr, current_level)]; + + if (current_level < map->level) + return -EINVAL; + + while (current_level !=3D map->level) { + if (gstage_pte_leaf(ptep)) + return -EEXIST; + + if (!pte_val(ptep_get(ptep))) { + if (!pcache) + return -ENOMEM; + next_ptep =3D kvm_mmu_memory_cache_alloc(pcache); + if 
(!next_ptep) + return -ENOMEM; + set_pte(ptep, pfn_pte(PFN_DOWN(__pa(next_ptep)), + __pgprot(_PAGE_TABLE))); + } else { + if (gstage_pte_leaf(ptep)) + return -EEXIST; + next_ptep =3D (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep)); + } + + current_level--; + ptep =3D &next_ptep[gstage_pte_index(map->addr, current_level)]; + } + + if (pte_val(*ptep) !=3D pte_val(map->pte)) { + set_pte(ptep, map->pte); + if (gstage_pte_leaf(ptep)) + gstage_tlb_flush(gstage, current_level, map->addr); + } + + return 0; +} + +int kvm_riscv_gstage_map_page(struct kvm_gstage *gstage, + struct kvm_mmu_memory_cache *pcache, + gpa_t gpa, phys_addr_t hpa, unsigned long page_size, + bool page_rdonly, bool page_exec, + struct kvm_gstage_mapping *out_map) +{ + pgprot_t prot; + int ret; + + out_map->addr =3D gpa; + out_map->level =3D 0; + + ret =3D gstage_page_size_to_level(page_size, &out_map->level); + if (ret) + return ret; + + /* + * A RISC-V implementation can choose to either: + * 1) Update 'A' and 'D' PTE bits in hardware + * 2) Generate page fault when 'A' and/or 'D' bits are not set + * PTE so that software can update these bits. + * + * We support both options mentioned above. To achieve this, we + * always set 'A' and 'D' PTE bits at time of creating G-stage + * mapping. To support KVM dirty page logging with both options + * mentioned above, we will write-protect G-stage PTEs to track + * dirty pages. + */ + + if (page_exec) { + if (page_rdonly) + prot =3D PAGE_READ_EXEC; + else + prot =3D PAGE_WRITE_EXEC; + } else { + if (page_rdonly) + prot =3D PAGE_READ; + else + prot =3D PAGE_WRITE; + } + out_map->pte =3D pfn_pte(PFN_DOWN(hpa), prot); + out_map->pte =3D pte_mkdirty(out_map->pte); + + return kvm_riscv_gstage_set_pte(gstage, pcache, out_map); +} + +void kvm_riscv_gstage_op_pte(struct kvm_gstage *gstage, gpa_t addr, + pte_t *ptep, u32 ptep_level, enum kvm_riscv_gstage_op op) +{ + int i, ret; + pte_t old_pte, *next_ptep; + u32 next_ptep_level; + unsigned long next_page_size, page_size; + + ret =3D gstage_level_to_page_size(ptep_level, &page_size); + if (ret) + return; + + WARN_ON(addr & (page_size - 1)); + + if (!pte_val(ptep_get(ptep))) + return; + + if (ptep_level && !gstage_pte_leaf(ptep)) { + next_ptep =3D (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep)); + next_ptep_level =3D ptep_level - 1; + ret =3D gstage_level_to_page_size(next_ptep_level, &next_page_size); + if (ret) + return; + + if (op =3D=3D GSTAGE_OP_CLEAR) + set_pte(ptep, __pte(0)); + for (i =3D 0; i < PTRS_PER_PTE; i++) + kvm_riscv_gstage_op_pte(gstage, addr + i * next_page_size, + &next_ptep[i], next_ptep_level, op); + if (op =3D=3D GSTAGE_OP_CLEAR) + put_page(virt_to_page(next_ptep)); + } else { + old_pte =3D *ptep; + if (op =3D=3D GSTAGE_OP_CLEAR) + set_pte(ptep, __pte(0)); + else if (op =3D=3D GSTAGE_OP_WP) + set_pte(ptep, __pte(pte_val(ptep_get(ptep)) & ~_PAGE_WRITE)); + if (pte_val(*ptep) !=3D pte_val(old_pte)) + gstage_tlb_flush(gstage, ptep_level, addr); + } +} + +void kvm_riscv_gstage_unmap_range(struct kvm_gstage *gstage, + gpa_t start, gpa_t size, bool may_block) +{ + int ret; + pte_t *ptep; + u32 ptep_level; + bool found_leaf; + unsigned long page_size; + gpa_t addr =3D start, end =3D start + size; + + while (addr < end) { + found_leaf =3D kvm_riscv_gstage_get_leaf(gstage, addr, &ptep, &ptep_leve= l); + ret =3D gstage_level_to_page_size(ptep_level, &page_size); + if (ret) + break; + + if (!found_leaf) + goto next; + + if (!(addr & (page_size - 1)) && ((end - addr) >=3D page_size)) + kvm_riscv_gstage_op_pte(gstage, addr, ptep, + 
ptep_level, GSTAGE_OP_CLEAR); + +next: + addr +=3D page_size; + + /* + * If the range is too large, release the kvm->mmu_lock + * to prevent starvation and lockup detector warnings. + */ + if (!(gstage->flags & KVM_GSTAGE_FLAGS_LOCAL) && may_block && addr < end) + cond_resched_lock(&gstage->kvm->mmu_lock); + } +} + +void kvm_riscv_gstage_wp_range(struct kvm_gstage *gstage, gpa_t start, gpa= _t end) +{ + int ret; + pte_t *ptep; + u32 ptep_level; + bool found_leaf; + gpa_t addr =3D start; + unsigned long page_size; + + while (addr < end) { + found_leaf =3D kvm_riscv_gstage_get_leaf(gstage, addr, &ptep, &ptep_leve= l); + ret =3D gstage_level_to_page_size(ptep_level, &page_size); + if (ret) + break; + + if (!found_leaf) + goto next; + + if (!(addr & (page_size - 1)) && ((end - addr) >=3D page_size)) + kvm_riscv_gstage_op_pte(gstage, addr, ptep, + ptep_level, GSTAGE_OP_WP); + +next: + addr +=3D page_size; + } +} + +void __init kvm_riscv_gstage_mode_detect(void) +{ +#ifdef CONFIG_64BIT + /* Try Sv57x4 G-stage mode */ + csr_write(CSR_HGATP, HGATP_MODE_SV57X4 << HGATP_MODE_SHIFT); + if ((csr_read(CSR_HGATP) >> HGATP_MODE_SHIFT) =3D=3D HGATP_MODE_SV57X4) { + kvm_riscv_gstage_mode =3D HGATP_MODE_SV57X4; + kvm_riscv_gstage_pgd_levels =3D 5; + goto skip_sv48x4_test; + } + + /* Try Sv48x4 G-stage mode */ + csr_write(CSR_HGATP, HGATP_MODE_SV48X4 << HGATP_MODE_SHIFT); + if ((csr_read(CSR_HGATP) >> HGATP_MODE_SHIFT) =3D=3D HGATP_MODE_SV48X4) { + kvm_riscv_gstage_mode =3D HGATP_MODE_SV48X4; + kvm_riscv_gstage_pgd_levels =3D 4; + } +skip_sv48x4_test: + + csr_write(CSR_HGATP, 0); + kvm_riscv_local_hfence_gvma_all(); +#endif +} diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c index b861a5dd7bd9..67c876de74ef 100644 --- a/arch/riscv/kvm/main.c +++ b/arch/riscv/kvm/main.c @@ -135,7 +135,7 @@ static int __init riscv_kvm_init(void) (rc) ? 
slist : "no features"); } =20 - switch (kvm_riscv_gstage_mode()) { + switch (kvm_riscv_gstage_mode) { case HGATP_MODE_SV32X4: str =3D "Sv32x4"; break; diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c index 806614b3e46d..9f7dcd8cd741 100644 --- a/arch/riscv/kvm/mmu.c +++ b/arch/riscv/kvm/mmu.c @@ -6,9 +6,7 @@ * Anup Patel */ =20 -#include #include -#include #include #include #include @@ -17,342 +15,28 @@ #include #include #include -#include -#include - -#ifdef CONFIG_64BIT -static unsigned long gstage_mode __ro_after_init =3D (HGATP_MODE_SV39X4 <<= HGATP_MODE_SHIFT); -static unsigned long gstage_pgd_levels __ro_after_init =3D 3; -#define gstage_index_bits 9 -#else -static unsigned long gstage_mode __ro_after_init =3D (HGATP_MODE_SV32X4 <<= HGATP_MODE_SHIFT); -static unsigned long gstage_pgd_levels __ro_after_init =3D 2; -#define gstage_index_bits 10 -#endif - -#define gstage_pgd_xbits 2 -#define gstage_pgd_size (1UL << (HGATP_PAGE_SHIFT + gstage_pgd_xbits)) -#define gstage_gpa_bits (HGATP_PAGE_SHIFT + \ - (gstage_pgd_levels * gstage_index_bits) + \ - gstage_pgd_xbits) -#define gstage_gpa_size ((gpa_t)(1ULL << gstage_gpa_bits)) - -#define gstage_pte_leaf(__ptep) \ - (pte_val(*(__ptep)) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)) - -static inline unsigned long gstage_pte_index(gpa_t addr, u32 level) -{ - unsigned long mask; - unsigned long shift =3D HGATP_PAGE_SHIFT + (gstage_index_bits * level); - - if (level =3D=3D (gstage_pgd_levels - 1)) - mask =3D (PTRS_PER_PTE * (1UL << gstage_pgd_xbits)) - 1; - else - mask =3D PTRS_PER_PTE - 1; - - return (addr >> shift) & mask; -} =20 -static inline unsigned long gstage_pte_page_vaddr(pte_t pte) -{ - return (unsigned long)pfn_to_virt(__page_val_to_pfn(pte_val(pte))); -} - -static int gstage_page_size_to_level(unsigned long page_size, u32 *out_lev= el) -{ - u32 i; - unsigned long psz =3D 1UL << 12; - - for (i =3D 0; i < gstage_pgd_levels; i++) { - if (page_size =3D=3D (psz << (i * gstage_index_bits))) { - *out_level =3D i; - return 0; - } - } - - return -EINVAL; -} - -static int gstage_level_to_page_order(u32 level, unsigned long *out_pgorde= r) -{ - if (gstage_pgd_levels < level) - return -EINVAL; - - *out_pgorder =3D 12 + (level * gstage_index_bits); - return 0; -} - -static int gstage_level_to_page_size(u32 level, unsigned long *out_pgsize) -{ - int rc; - unsigned long page_order =3D PAGE_SHIFT; - - rc =3D gstage_level_to_page_order(level, &page_order); - if (rc) - return rc; - - *out_pgsize =3D BIT(page_order); - return 0; -} - -static bool gstage_get_leaf_entry(struct kvm *kvm, gpa_t addr, - pte_t **ptepp, u32 *ptep_level) -{ - pte_t *ptep; - u32 current_level =3D gstage_pgd_levels - 1; - - *ptep_level =3D current_level; - ptep =3D (pte_t *)kvm->arch.pgd; - ptep =3D &ptep[gstage_pte_index(addr, current_level)]; - while (ptep && pte_val(ptep_get(ptep))) { - if (gstage_pte_leaf(ptep)) { - *ptep_level =3D current_level; - *ptepp =3D ptep; - return true; - } - - if (current_level) { - current_level--; - *ptep_level =3D current_level; - ptep =3D (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep)); - ptep =3D &ptep[gstage_pte_index(addr, current_level)]; - } else { - ptep =3D NULL; - } - } - - return false; -} - -static void gstage_remote_tlb_flush(struct kvm *kvm, u32 level, gpa_t addr) -{ - unsigned long order =3D PAGE_SHIFT; - - if (gstage_level_to_page_order(level, &order)) - return; - addr &=3D ~(BIT(order) - 1); - - kvm_riscv_hfence_gvma_vmid_gpa(kvm, -1UL, 0, addr, BIT(order), order); -} - -static int gstage_set_pte(struct kvm *kvm, - struct 
kvm_mmu_memory_cache *pcache, - const struct kvm_gstage_mapping *map) -{ - u32 current_level =3D gstage_pgd_levels - 1; - pte_t *next_ptep =3D (pte_t *)kvm->arch.pgd; - pte_t *ptep =3D &next_ptep[gstage_pte_index(map->addr, current_level)]; - - if (current_level < map->level) - return -EINVAL; - - while (current_level !=3D map->level) { - if (gstage_pte_leaf(ptep)) - return -EEXIST; - - if (!pte_val(ptep_get(ptep))) { - if (!pcache) - return -ENOMEM; - next_ptep =3D kvm_mmu_memory_cache_alloc(pcache); - if (!next_ptep) - return -ENOMEM; - set_pte(ptep, pfn_pte(PFN_DOWN(__pa(next_ptep)), - __pgprot(_PAGE_TABLE))); - } else { - if (gstage_pte_leaf(ptep)) - return -EEXIST; - next_ptep =3D (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep)); - } - - current_level--; - ptep =3D &next_ptep[gstage_pte_index(map->addr, current_level)]; - } - - if (pte_val(*ptep) !=3D pte_val(map->pte)) { - set_pte(ptep, map->pte); - if (gstage_pte_leaf(ptep)) - gstage_remote_tlb_flush(kvm, current_level, map->addr); - } - - return 0; -} - -static int gstage_map_page(struct kvm *kvm, - struct kvm_mmu_memory_cache *pcache, - gpa_t gpa, phys_addr_t hpa, - unsigned long page_size, - bool page_rdonly, bool page_exec, - struct kvm_gstage_mapping *out_map) -{ - pgprot_t prot; - int ret; - - out_map->addr =3D gpa; - out_map->level =3D 0; - - ret =3D gstage_page_size_to_level(page_size, &out_map->level); - if (ret) - return ret; - - /* - * A RISC-V implementation can choose to either: - * 1) Update 'A' and 'D' PTE bits in hardware - * 2) Generate page fault when 'A' and/or 'D' bits are not set - * PTE so that software can update these bits. - * - * We support both options mentioned above. To achieve this, we - * always set 'A' and 'D' PTE bits at time of creating G-stage - * mapping. To support KVM dirty page logging with both options - * mentioned above, we will write-protect G-stage PTEs to track - * dirty pages. 
- */ - - if (page_exec) { - if (page_rdonly) - prot =3D PAGE_READ_EXEC; - else - prot =3D PAGE_WRITE_EXEC; - } else { - if (page_rdonly) - prot =3D PAGE_READ; - else - prot =3D PAGE_WRITE; - } - out_map->pte =3D pfn_pte(PFN_DOWN(hpa), prot); - out_map->pte =3D pte_mkdirty(out_map->pte); - - return gstage_set_pte(kvm, pcache, out_map); -} - -enum gstage_op { - GSTAGE_OP_NOP =3D 0, /* Nothing */ - GSTAGE_OP_CLEAR, /* Clear/Unmap */ - GSTAGE_OP_WP, /* Write-protect */ -}; - -static void gstage_op_pte(struct kvm *kvm, gpa_t addr, - pte_t *ptep, u32 ptep_level, enum gstage_op op) -{ - int i, ret; - pte_t old_pte, *next_ptep; - u32 next_ptep_level; - unsigned long next_page_size, page_size; - - ret =3D gstage_level_to_page_size(ptep_level, &page_size); - if (ret) - return; - - BUG_ON(addr & (page_size - 1)); - - if (!pte_val(ptep_get(ptep))) - return; - - if (ptep_level && !gstage_pte_leaf(ptep)) { - next_ptep =3D (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep)); - next_ptep_level =3D ptep_level - 1; - ret =3D gstage_level_to_page_size(next_ptep_level, - &next_page_size); - if (ret) - return; - - if (op =3D=3D GSTAGE_OP_CLEAR) - set_pte(ptep, __pte(0)); - for (i =3D 0; i < PTRS_PER_PTE; i++) - gstage_op_pte(kvm, addr + i * next_page_size, - &next_ptep[i], next_ptep_level, op); - if (op =3D=3D GSTAGE_OP_CLEAR) - put_page(virt_to_page(next_ptep)); - } else { - old_pte =3D *ptep; - if (op =3D=3D GSTAGE_OP_CLEAR) - set_pte(ptep, __pte(0)); - else if (op =3D=3D GSTAGE_OP_WP) - set_pte(ptep, __pte(pte_val(ptep_get(ptep)) & ~_PAGE_WRITE)); - if (pte_val(*ptep) !=3D pte_val(old_pte)) - gstage_remote_tlb_flush(kvm, ptep_level, addr); - } -} - -static void gstage_unmap_range(struct kvm *kvm, gpa_t start, - gpa_t size, bool may_block) -{ - int ret; - pte_t *ptep; - u32 ptep_level; - bool found_leaf; - unsigned long page_size; - gpa_t addr =3D start, end =3D start + size; - - while (addr < end) { - found_leaf =3D gstage_get_leaf_entry(kvm, addr, - &ptep, &ptep_level); - ret =3D gstage_level_to_page_size(ptep_level, &page_size); - if (ret) - break; - - if (!found_leaf) - goto next; - - if (!(addr & (page_size - 1)) && ((end - addr) >=3D page_size)) - gstage_op_pte(kvm, addr, ptep, - ptep_level, GSTAGE_OP_CLEAR); - -next: - addr +=3D page_size; - - /* - * If the range is too large, release the kvm->mmu_lock - * to prevent starvation and lockup detector warnings. 
- */ - if (may_block && addr < end) - cond_resched_lock(&kvm->mmu_lock); - } -} - -static void gstage_wp_range(struct kvm *kvm, gpa_t start, gpa_t end) -{ - int ret; - pte_t *ptep; - u32 ptep_level; - bool found_leaf; - gpa_t addr =3D start; - unsigned long page_size; - - while (addr < end) { - found_leaf =3D gstage_get_leaf_entry(kvm, addr, - &ptep, &ptep_level); - ret =3D gstage_level_to_page_size(ptep_level, &page_size); - if (ret) - break; - - if (!found_leaf) - goto next; - - if (!(addr & (page_size - 1)) && ((end - addr) >=3D page_size)) - gstage_op_pte(kvm, addr, ptep, - ptep_level, GSTAGE_OP_WP); - -next: - addr +=3D page_size; - } -} - -static void gstage_wp_memory_region(struct kvm *kvm, int slot) +static void mmu_wp_memory_region(struct kvm *kvm, int slot) { struct kvm_memslots *slots =3D kvm_memslots(kvm); struct kvm_memory_slot *memslot =3D id_to_memslot(slots, slot); phys_addr_t start =3D memslot->base_gfn << PAGE_SHIFT; phys_addr_t end =3D (memslot->base_gfn + memslot->npages) << PAGE_SHIFT; + struct kvm_gstage gstage; + + gstage.kvm =3D kvm; + gstage.flags =3D 0; + gstage.vmid =3D READ_ONCE(kvm->arch.vmid.vmid); + gstage.pgd =3D kvm->arch.pgd; =20 spin_lock(&kvm->mmu_lock); - gstage_wp_range(kvm, start, end); + kvm_riscv_gstage_wp_range(&gstage, start, end); spin_unlock(&kvm->mmu_lock); kvm_flush_remote_tlbs_memslot(kvm, memslot); } =20 -int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa, - phys_addr_t hpa, unsigned long size, - bool writable, bool in_atomic) +int kvm_riscv_mmu_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa, + unsigned long size, bool writable, bool in_atomic) { int ret =3D 0; unsigned long pfn; @@ -362,6 +46,12 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa, .gfp_zero =3D __GFP_ZERO, }; struct kvm_gstage_mapping map; + struct kvm_gstage gstage; + + gstage.kvm =3D kvm; + gstage.flags =3D 0; + gstage.vmid =3D READ_ONCE(kvm->arch.vmid.vmid); + gstage.pgd =3D kvm->arch.pgd; =20 end =3D (gpa + size + PAGE_SIZE - 1) & PAGE_MASK; pfn =3D __phys_to_pfn(hpa); @@ -374,12 +64,12 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gp= a, if (!writable) map.pte =3D pte_wrprotect(map.pte); =20 - ret =3D kvm_mmu_topup_memory_cache(&pcache, gstage_pgd_levels); + ret =3D kvm_mmu_topup_memory_cache(&pcache, kvm_riscv_gstage_pgd_levels); if (ret) goto out; =20 spin_lock(&kvm->mmu_lock); - ret =3D gstage_set_pte(kvm, &pcache, &map); + ret =3D kvm_riscv_gstage_set_pte(&gstage, &pcache, &map); spin_unlock(&kvm->mmu_lock); if (ret) goto out; @@ -392,10 +82,17 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gp= a, return ret; } =20 -void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa, unsigned long si= ze) +void kvm_riscv_mmu_iounmap(struct kvm *kvm, gpa_t gpa, unsigned long size) { + struct kvm_gstage gstage; + + gstage.kvm =3D kvm; + gstage.flags =3D 0; + gstage.vmid =3D READ_ONCE(kvm->arch.vmid.vmid); + gstage.pgd =3D kvm->arch.pgd; + spin_lock(&kvm->mmu_lock); - gstage_unmap_range(kvm, gpa, size, false); + kvm_riscv_gstage_unmap_range(&gstage, gpa, size, false); spin_unlock(&kvm->mmu_lock); } =20 @@ -407,8 +104,14 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kv= m *kvm, phys_addr_t base_gfn =3D slot->base_gfn + gfn_offset; phys_addr_t start =3D (base_gfn + __ffs(mask)) << PAGE_SHIFT; phys_addr_t end =3D (base_gfn + __fls(mask) + 1) << PAGE_SHIFT; + struct kvm_gstage gstage; + + gstage.kvm =3D kvm; + gstage.flags =3D 0; + gstage.vmid =3D READ_ONCE(kvm->arch.vmid.vmid); + gstage.pgd =3D kvm->arch.pgd; =20 - gstage_wp_range(kvm, 
start, end); + kvm_riscv_gstage_wp_range(&gstage, start, end); } =20 void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *mems= lot) @@ -425,7 +128,7 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) =20 void kvm_arch_flush_shadow_all(struct kvm *kvm) { - kvm_riscv_gstage_free_pgd(kvm); + kvm_riscv_mmu_free_pgd(kvm); } =20 void kvm_arch_flush_shadow_memslot(struct kvm *kvm, @@ -433,9 +136,15 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm, { gpa_t gpa =3D slot->base_gfn << PAGE_SHIFT; phys_addr_t size =3D slot->npages << PAGE_SHIFT; + struct kvm_gstage gstage; + + gstage.kvm =3D kvm; + gstage.flags =3D 0; + gstage.vmid =3D READ_ONCE(kvm->arch.vmid.vmid); + gstage.pgd =3D kvm->arch.pgd; =20 spin_lock(&kvm->mmu_lock); - gstage_unmap_range(kvm, gpa, size, false); + kvm_riscv_gstage_unmap_range(&gstage, gpa, size, false); spin_unlock(&kvm->mmu_lock); } =20 @@ -450,7 +159,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm, * the memory slot is write protected. */ if (change !=3D KVM_MR_DELETE && new->flags & KVM_MEM_LOG_DIRTY_PAGES) - gstage_wp_memory_region(kvm, new->id); + mmu_wp_memory_region(kvm, new->id); } =20 int kvm_arch_prepare_memory_region(struct kvm *kvm, @@ -472,7 +181,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, * space addressable by the KVM guest GPA space. */ if ((new->base_gfn + new->npages) >=3D - (gstage_gpa_size >> PAGE_SHIFT)) + (kvm_riscv_gstage_gpa_size >> PAGE_SHIFT)) return -EFAULT; =20 hva =3D new->userspace_addr; @@ -528,9 +237,8 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, goto out; } =20 - ret =3D kvm_riscv_gstage_ioremap(kvm, gpa, pa, - vm_end - vm_start, - writable, false); + ret =3D kvm_riscv_mmu_ioremap(kvm, gpa, pa, vm_end - vm_start, + writable, false); if (ret) break; } @@ -541,7 +249,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, goto out; =20 if (ret) - kvm_riscv_gstage_iounmap(kvm, base_gpa, size); + kvm_riscv_mmu_iounmap(kvm, base_gpa, size); =20 out: mmap_read_unlock(current->mm); @@ -550,12 +258,18 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, =20 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) { + struct kvm_gstage gstage; + if (!kvm->arch.pgd) return false; =20 - gstage_unmap_range(kvm, range->start << PAGE_SHIFT, - (range->end - range->start) << PAGE_SHIFT, - range->may_block); + gstage.kvm =3D kvm; + gstage.flags =3D 0; + gstage.vmid =3D READ_ONCE(kvm->arch.vmid.vmid); + gstage.pgd =3D kvm->arch.pgd; + kvm_riscv_gstage_unmap_range(&gstage, range->start << PAGE_SHIFT, + (range->end - range->start) << PAGE_SHIFT, + range->may_block); return false; } =20 @@ -564,14 +278,19 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_rang= e *range) pte_t *ptep; u32 ptep_level =3D 0; u64 size =3D (range->end - range->start) << PAGE_SHIFT; + struct kvm_gstage gstage; =20 if (!kvm->arch.pgd) return false; =20 WARN_ON(size !=3D PAGE_SIZE && size !=3D PMD_SIZE && size !=3D PUD_SIZE); =20 - if (!gstage_get_leaf_entry(kvm, range->start << PAGE_SHIFT, - &ptep, &ptep_level)) + gstage.kvm =3D kvm; + gstage.flags =3D 0; + gstage.vmid =3D READ_ONCE(kvm->arch.vmid.vmid); + gstage.pgd =3D kvm->arch.pgd; + if (!kvm_riscv_gstage_get_leaf(&gstage, range->start << PAGE_SHIFT, + &ptep, &ptep_level)) return false; =20 return ptep_test_and_clear_young(NULL, 0, ptep); @@ -582,23 +301,27 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn= _range *range) pte_t *ptep; u32 ptep_level =3D 0; u64 size =3D (range->end - range->start) << PAGE_SHIFT; + struct kvm_gstage gstage; =20 if 
(!kvm->arch.pgd) return false; =20 WARN_ON(size !=3D PAGE_SIZE && size !=3D PMD_SIZE && size !=3D PUD_SIZE); =20 - if (!gstage_get_leaf_entry(kvm, range->start << PAGE_SHIFT, - &ptep, &ptep_level)) + gstage.kvm =3D kvm; + gstage.flags =3D 0; + gstage.vmid =3D READ_ONCE(kvm->arch.vmid.vmid); + gstage.pgd =3D kvm->arch.pgd; + if (!kvm_riscv_gstage_get_leaf(&gstage, range->start << PAGE_SHIFT, + &ptep, &ptep_level)) return false; =20 return pte_young(ptep_get(ptep)); } =20 -int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, - struct kvm_memory_slot *memslot, - gpa_t gpa, unsigned long hva, bool is_write, - struct kvm_gstage_mapping *out_map) +int kvm_riscv_mmu_map(struct kvm_vcpu *vcpu, struct kvm_memory_slot *memsl= ot, + gpa_t gpa, unsigned long hva, bool is_write, + struct kvm_gstage_mapping *out_map) { int ret; kvm_pfn_t hfn; @@ -611,13 +334,19 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, bool logging =3D (memslot->dirty_bitmap && !(memslot->flags & KVM_MEM_READONLY)) ? true : false; unsigned long vma_pagesize, mmu_seq; + struct kvm_gstage gstage; struct page *page; =20 + gstage.kvm =3D kvm; + gstage.flags =3D 0; + gstage.vmid =3D READ_ONCE(kvm->arch.vmid.vmid); + gstage.pgd =3D kvm->arch.pgd; + /* Setup initial state of output mapping */ memset(out_map, 0, sizeof(*out_map)); =20 /* We need minimum second+third level pages */ - ret =3D kvm_mmu_topup_memory_cache(pcache, gstage_pgd_levels); + ret =3D kvm_mmu_topup_memory_cache(pcache, kvm_riscv_gstage_pgd_levels); if (ret) { kvm_err("Failed to topup G-stage cache\n"); return ret; @@ -684,11 +413,11 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, =20 if (writable) { mark_page_dirty(kvm, gfn); - ret =3D gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT, - vma_pagesize, false, true, out_map); + ret =3D kvm_riscv_gstage_map_page(&gstage, pcache, gpa, hfn << PAGE_SHIF= T, + vma_pagesize, false, true, out_map); } else { - ret =3D gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT, - vma_pagesize, true, true, out_map); + ret =3D kvm_riscv_gstage_map_page(&gstage, pcache, gpa, hfn << PAGE_SHIF= T, + vma_pagesize, true, true, out_map); } =20 if (ret) @@ -700,7 +429,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, return ret; } =20 -int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm) +int kvm_riscv_mmu_alloc_pgd(struct kvm *kvm) { struct page *pgd_page; =20 @@ -710,7 +439,7 @@ int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm) } =20 pgd_page =3D alloc_pages(GFP_KERNEL | __GFP_ZERO, - get_order(gstage_pgd_size)); + get_order(kvm_riscv_gstage_pgd_size)); if (!pgd_page) return -ENOMEM; kvm->arch.pgd =3D page_to_virt(pgd_page); @@ -719,13 +448,18 @@ int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm) return 0; } =20 -void kvm_riscv_gstage_free_pgd(struct kvm *kvm) +void kvm_riscv_mmu_free_pgd(struct kvm *kvm) { + struct kvm_gstage gstage; void *pgd =3D NULL; =20 spin_lock(&kvm->mmu_lock); if (kvm->arch.pgd) { - gstage_unmap_range(kvm, 0UL, gstage_gpa_size, false); + gstage.kvm =3D kvm; + gstage.flags =3D 0; + gstage.vmid =3D READ_ONCE(kvm->arch.vmid.vmid); + gstage.pgd =3D kvm->arch.pgd; + kvm_riscv_gstage_unmap_range(&gstage, 0UL, kvm_riscv_gstage_gpa_size, fa= lse); pgd =3D READ_ONCE(kvm->arch.pgd); kvm->arch.pgd =3D NULL; kvm->arch.pgd_phys =3D 0; @@ -733,12 +467,12 @@ void kvm_riscv_gstage_free_pgd(struct kvm *kvm) spin_unlock(&kvm->mmu_lock); =20 if (pgd) - free_pages((unsigned long)pgd, get_order(gstage_pgd_size)); + free_pages((unsigned long)pgd, get_order(kvm_riscv_gstage_pgd_size)); } =20 -void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu 
*vcpu) +void kvm_riscv_mmu_update_hgatp(struct kvm_vcpu *vcpu) { - unsigned long hgatp =3D gstage_mode; + unsigned long hgatp =3D kvm_riscv_gstage_mode << HGATP_MODE_SHIFT; struct kvm_arch *k =3D &vcpu->kvm->arch; =20 hgatp |=3D (READ_ONCE(k->vmid.vmid) << HGATP_VMID_SHIFT) & HGATP_VMID; @@ -749,37 +483,3 @@ void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vc= pu) if (!kvm_riscv_gstage_vmid_bits()) kvm_riscv_local_hfence_gvma_all(); } - -void __init kvm_riscv_gstage_mode_detect(void) -{ -#ifdef CONFIG_64BIT - /* Try Sv57x4 G-stage mode */ - csr_write(CSR_HGATP, HGATP_MODE_SV57X4 << HGATP_MODE_SHIFT); - if ((csr_read(CSR_HGATP) >> HGATP_MODE_SHIFT) =3D=3D HGATP_MODE_SV57X4) { - gstage_mode =3D (HGATP_MODE_SV57X4 << HGATP_MODE_SHIFT); - gstage_pgd_levels =3D 5; - goto skip_sv48x4_test; - } - - /* Try Sv48x4 G-stage mode */ - csr_write(CSR_HGATP, HGATP_MODE_SV48X4 << HGATP_MODE_SHIFT); - if ((csr_read(CSR_HGATP) >> HGATP_MODE_SHIFT) =3D=3D HGATP_MODE_SV48X4) { - gstage_mode =3D (HGATP_MODE_SV48X4 << HGATP_MODE_SHIFT); - gstage_pgd_levels =3D 4; - } -skip_sv48x4_test: - - csr_write(CSR_HGATP, 0); - kvm_riscv_local_hfence_gvma_all(); -#endif -} - -unsigned long __init kvm_riscv_gstage_mode(void) -{ - return gstage_mode >> HGATP_MODE_SHIFT; -} - -int kvm_riscv_gstage_gpa_bits(void) -{ - return gstage_gpa_bits; -} diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 8ad7b31f5939..fe028b4274df 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -632,7 +632,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) } } =20 - kvm_riscv_gstage_update_hgatp(vcpu); + kvm_riscv_mmu_update_hgatp(vcpu); =20 kvm_riscv_vcpu_timer_restore(vcpu); =20 @@ -717,7 +717,7 @@ static void kvm_riscv_check_vcpu_requests(struct kvm_vc= pu *vcpu) kvm_riscv_reset_vcpu(vcpu, true); =20 if (kvm_check_request(KVM_REQ_UPDATE_HGATP, vcpu)) - kvm_riscv_gstage_update_hgatp(vcpu); + kvm_riscv_mmu_update_hgatp(vcpu); =20 if (kvm_check_request(KVM_REQ_FENCE_I, vcpu)) kvm_riscv_fence_i_process(vcpu); diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c index 6b4694bc07ea..0bb0c51e3c89 100644 --- a/arch/riscv/kvm/vcpu_exit.c +++ b/arch/riscv/kvm/vcpu_exit.c @@ -43,8 +43,9 @@ static int gstage_page_fault(struct kvm_vcpu *vcpu, struc= t kvm_run *run, }; } =20 - ret =3D kvm_riscv_gstage_map(vcpu, memslot, fault_addr, hva, - (trap->scause =3D=3D EXC_STORE_GUEST_PAGE_FAULT) ? true : false, &host_m= ap); + ret =3D kvm_riscv_mmu_map(vcpu, memslot, fault_addr, hva, + (trap->scause =3D=3D EXC_STORE_GUEST_PAGE_FAULT) ? 
+				&host_map);
 	if (ret < 0)
 		return ret;
 
diff --git a/arch/riscv/kvm/vm.c b/arch/riscv/kvm/vm.c
index 8601cf29e5f8..66d91ae6e9b2 100644
--- a/arch/riscv/kvm/vm.c
+++ b/arch/riscv/kvm/vm.c
@@ -32,13 +32,13 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 {
 	int r;
 
-	r = kvm_riscv_gstage_alloc_pgd(kvm);
+	r = kvm_riscv_mmu_alloc_pgd(kvm);
 	if (r)
 		return r;
 
 	r = kvm_riscv_gstage_vmid_init(kvm);
 	if (r) {
-		kvm_riscv_gstage_free_pgd(kvm);
+		kvm_riscv_mmu_free_pgd(kvm);
 		return r;
 	}
 
@@ -200,7 +200,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		r = KVM_USER_MEM_SLOTS;
 		break;
 	case KVM_CAP_VM_GPA_BITS:
-		r = kvm_riscv_gstage_gpa_bits();
+		r = kvm_riscv_gstage_gpa_bits;
 		break;
 	default:
 		r = 0;
-- 
2.43.0

From nobody Fri Oct 10 15:59:47 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones,
 Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH v2 12/12] RISC-V: KVM: Pass VMID as parameter to
 kvm_riscv_hfence_xyz() APIs
Date: Fri, 13 Jun 2025 12:27:43 +0530
Message-ID: <20250613065743.737102-13-apatel@ventanamicro.com>
In-Reply-To: <20250613065743.737102-1-apatel@ventanamicro.com>
References: <20250613065743.737102-1-apatel@ventanamicro.com>

Currently, all kvm_riscv_hfence_xyz() APIs assume the VMID to be the
host VMID of the Guest/VM, which restricts these APIs to host TLB
maintenance only. Let's allow passing the VMID as a parameter to all
kvm_riscv_hfence_xyz() APIs so that they can be re-used for nested
virtualization related TLB maintenance.
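As an illustration of the resulting calling convention (an editor's
sketch, not code from this patch): a host-side caller now reads the
host VMID itself and passes it down explicitly. The helper name below
is hypothetical, and the hbase = -1UL, hmask = 0 pair assumes the
"all VCPUs" convention used by kvm_arch_flush_remote_tlbs_range() in
this patch.

  /* Hypothetical caller: flush all VVMA translations for one VM. */
  static void example_flush_all_vvma(struct kvm *kvm)
  {
  	/* Host TLB maintenance passes the VM's own host VMID... */
  	unsigned long vmid = READ_ONCE(kvm->arch.vmid.vmid);

  	/* ...a nested-virtualization user could pass a shadow VMID. */
  	kvm_riscv_hfence_vvma_all(kvm, -1UL, 0, vmid);
  }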
Signed-off-by: Anup Patel
---
 arch/riscv/include/asm/kvm_tlb.h  | 17 ++++++---
 arch/riscv/kvm/gstage.c           |  3 +-
 arch/riscv/kvm/tlb.c              | 61 ++++++++++++++++++++-----------
 arch/riscv/kvm/vcpu_sbi_replace.c | 17 +++++----
 arch/riscv/kvm/vcpu_sbi_v01.c     | 25 ++++++-------
 5 files changed, 73 insertions(+), 50 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_tlb.h b/arch/riscv/include/asm/kvm_tlb.h
index f67e03edeaec..38a2f933ad3a 100644
--- a/arch/riscv/include/asm/kvm_tlb.h
+++ b/arch/riscv/include/asm/kvm_tlb.h
@@ -11,9 +11,11 @@ enum kvm_riscv_hfence_type {
 	KVM_RISCV_HFENCE_UNKNOWN = 0,
 	KVM_RISCV_HFENCE_GVMA_VMID_GPA,
+	KVM_RISCV_HFENCE_GVMA_VMID_ALL,
 	KVM_RISCV_HFENCE_VVMA_ASID_GVA,
 	KVM_RISCV_HFENCE_VVMA_ASID_ALL,
 	KVM_RISCV_HFENCE_VVMA_GVA,
+	KVM_RISCV_HFENCE_VVMA_ALL
 };
 
 struct kvm_riscv_hfence {
@@ -59,21 +61,24 @@ void kvm_riscv_fence_i(struct kvm *kvm,
 void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
 				    gpa_t gpa, gpa_t gpsz,
-				    unsigned long order);
+				    unsigned long order, unsigned long vmid);
 void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask);
+				    unsigned long hbase, unsigned long hmask,
+				    unsigned long vmid);
 void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
 				    unsigned long gva, unsigned long gvsz,
-				    unsigned long order, unsigned long asid);
+				    unsigned long order, unsigned long asid,
+				    unsigned long vmid);
 void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
-				    unsigned long asid);
+				    unsigned long asid, unsigned long vmid);
 void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
 			       unsigned long hbase, unsigned long hmask,
 			       unsigned long gva, unsigned long gvsz,
-			       unsigned long order);
+			       unsigned long order, unsigned long vmid);
 void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
-			       unsigned long hbase, unsigned long hmask);
+			       unsigned long hbase, unsigned long hmask,
+			       unsigned long vmid);
 
 #endif
diff --git a/arch/riscv/kvm/gstage.c b/arch/riscv/kvm/gstage.c
index 9c7c44f09b05..24c270d6d0e2 100644
--- a/arch/riscv/kvm/gstage.c
+++ b/arch/riscv/kvm/gstage.c
@@ -117,7 +117,8 @@ static void gstage_tlb_flush(struct kvm_gstage *gstage, u32 level, gpa_t addr)
 	if (gstage->flags & KVM_GSTAGE_FLAGS_LOCAL)
 		kvm_riscv_local_hfence_gvma_vmid_gpa(gstage->vmid, addr, BIT(order), order);
 	else
-		kvm_riscv_hfence_gvma_vmid_gpa(gstage->kvm, -1UL, 0, addr, BIT(order), order);
+		kvm_riscv_hfence_gvma_vmid_gpa(gstage->kvm, -1UL, 0, addr, BIT(order), order,
+					       gstage->vmid);
 }
 
 int kvm_riscv_gstage_set_pte(struct kvm_gstage *gstage,
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index 349fcfc93f54..3c5a70a2b927 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -251,6 +251,12 @@ void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu)
 			kvm_riscv_local_hfence_gvma_vmid_gpa(d.vmid, d.addr,
 							     d.size, d.order);
 		break;
+	case KVM_RISCV_HFENCE_GVMA_VMID_ALL:
+		if (kvm_riscv_nacl_available())
+			nacl_hfence_gvma_vmid_all(nacl_shmem(), d.vmid);
+		else
+			kvm_riscv_local_hfence_gvma_vmid_all(d.vmid);
+		break;
 	case KVM_RISCV_HFENCE_VVMA_ASID_GVA:
 		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD);
 		if (kvm_riscv_nacl_available())
@@ -276,6 +282,13 @@ void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu)
 			kvm_riscv_local_hfence_vvma_gva(d.vmid, d.addr,
 							d.size, d.order);
 		break;
+	case KVM_RISCV_HFENCE_VVMA_ALL:
+		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_RCVD);
+		if (kvm_riscv_nacl_available())
+			nacl_hfence_vvma_all(nacl_shmem(), d.vmid);
+		else
+			kvm_riscv_local_hfence_vvma_all(d.vmid);
+		break;
 	default:
 		break;
 	}
@@ -328,14 +341,13 @@ void kvm_riscv_fence_i(struct kvm *kvm,
 void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
 				    gpa_t gpa, gpa_t gpsz,
-				    unsigned long order)
+				    unsigned long order, unsigned long vmid)
 {
-	struct kvm_vmid *v = &kvm->arch.vmid;
 	struct kvm_riscv_hfence data;
 
 	data.type = KVM_RISCV_HFENCE_GVMA_VMID_GPA;
 	data.asid = 0;
-	data.vmid = READ_ONCE(v->vmid);
+	data.vmid = vmid;
 	data.addr = gpa;
 	data.size = gpsz;
 	data.order = order;
@@ -344,23 +356,28 @@ void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
 }
 
 void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask)
+				    unsigned long hbase, unsigned long hmask,
+				    unsigned long vmid)
 {
-	make_xfence_request(kvm, hbase, hmask, KVM_REQ_TLB_FLUSH,
-			    KVM_REQ_TLB_FLUSH, NULL);
+	struct kvm_riscv_hfence data = {0};
+
+	data.type = KVM_RISCV_HFENCE_GVMA_VMID_ALL;
+	data.vmid = vmid;
+	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
+			    KVM_REQ_TLB_FLUSH, &data);
 }
 
 void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
 				    unsigned long gva, unsigned long gvsz,
-				    unsigned long order, unsigned long asid)
+				    unsigned long order, unsigned long asid,
+				    unsigned long vmid)
 {
-	struct kvm_vmid *v = &kvm->arch.vmid;
 	struct kvm_riscv_hfence data;
 
 	data.type = KVM_RISCV_HFENCE_VVMA_ASID_GVA;
 	data.asid = asid;
-	data.vmid = READ_ONCE(v->vmid);
+	data.vmid = vmid;
 	data.addr = gva;
 	data.size = gvsz;
 	data.order = order;
@@ -370,15 +387,13 @@ void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
 
 void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
-				    unsigned long asid)
+				    unsigned long asid, unsigned long vmid)
 {
-	struct kvm_vmid *v = &kvm->arch.vmid;
-	struct kvm_riscv_hfence data;
+	struct kvm_riscv_hfence data = {0};
 
 	data.type = KVM_RISCV_HFENCE_VVMA_ASID_ALL;
 	data.asid = asid;
-	data.vmid = READ_ONCE(v->vmid);
-	data.addr = data.size = data.order = 0;
+	data.vmid = vmid;
 	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
 			    KVM_REQ_HFENCE_VVMA_ALL, &data);
 }
@@ -386,14 +401,13 @@ void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
 void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
 			       unsigned long hbase, unsigned long hmask,
 			       unsigned long gva, unsigned long gvsz,
-			       unsigned long order)
+			       unsigned long order, unsigned long vmid)
 {
-	struct kvm_vmid *v = &kvm->arch.vmid;
 	struct kvm_riscv_hfence data;
 
 	data.type = KVM_RISCV_HFENCE_VVMA_GVA;
 	data.asid = 0;
-	data.vmid = READ_ONCE(v->vmid);
+	data.vmid = vmid;
 	data.addr = gva;
 	data.size = gvsz;
 	data.order = order;
@@ -402,16 +416,21 @@ void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
 }
 
 void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
-			       unsigned long hbase, unsigned long hmask)
+			       unsigned long hbase, unsigned long hmask,
+			       unsigned long vmid)
 {
-	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE_VVMA_ALL,
-			    KVM_REQ_HFENCE_VVMA_ALL, NULL);
+	struct kvm_riscv_hfence data = {0};
+
+	data.type = KVM_RISCV_HFENCE_VVMA_ALL;
+	data.vmid = vmid;
+	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
+			    KVM_REQ_HFENCE_VVMA_ALL, &data);
 }
 
 int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
 {
 	kvm_riscv_hfence_gvma_vmid_gpa(kvm, -1UL, 0, gfn << PAGE_SHIFT,
 				       nr_pages << PAGE_SHIFT,
-				       PAGE_SHIFT);
+				       PAGE_SHIFT, READ_ONCE(kvm->arch.vmid.vmid));
 	return 0;
 }
diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c
index b17fad091bab..b490ed1428a6 100644
--- a/arch/riscv/kvm/vcpu_sbi_replace.c
+++ b/arch/riscv/kvm/vcpu_sbi_replace.c
@@ -96,6 +96,7 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
 	unsigned long hmask = cp->a0;
 	unsigned long hbase = cp->a1;
 	unsigned long funcid = cp->a6;
+	unsigned long vmid;
 
 	switch (funcid) {
 	case SBI_EXT_RFENCE_REMOTE_FENCE_I:
@@ -103,22 +104,22 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
 		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_FENCE_I_SENT);
 		break;
 	case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA:
+		vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
 		if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
-			kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask);
+			kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask, vmid);
 		else
 			kvm_riscv_hfence_vvma_gva(vcpu->kvm, hbase, hmask,
-						  cp->a2, cp->a3, PAGE_SHIFT);
+						  cp->a2, cp->a3, PAGE_SHIFT, vmid);
 		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_SENT);
 		break;
 	case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID:
+		vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
 		if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
-			kvm_riscv_hfence_vvma_asid_all(vcpu->kvm,
-						       hbase, hmask, cp->a4);
+			kvm_riscv_hfence_vvma_asid_all(vcpu->kvm, hbase, hmask,
+						       cp->a4, vmid);
 		else
-			kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm,
-						       hbase, hmask,
-						       cp->a2, cp->a3,
-						       PAGE_SHIFT, cp->a4);
+			kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm, hbase, hmask, cp->a2,
+						       cp->a3, PAGE_SHIFT, cp->a4, vmid);
 		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_SENT);
 		break;
 	case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA:
diff --git a/arch/riscv/kvm/vcpu_sbi_v01.c b/arch/riscv/kvm/vcpu_sbi_v01.c
index 8f4c4fa16227..368dfddd23d9 100644
--- a/arch/riscv/kvm/vcpu_sbi_v01.c
+++ b/arch/riscv/kvm/vcpu_sbi_v01.c
@@ -23,6 +23,7 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
 	struct kvm_cpu_trap *utrap = retdata->utrap;
+	unsigned long vmid;
 
 	switch (cp->a7) {
 	case SBI_EXT_0_1_CONSOLE_GETCHAR:
@@ -78,25 +79,21 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		if (cp->a7 == SBI_EXT_0_1_REMOTE_FENCE_I)
 			kvm_riscv_fence_i(vcpu->kvm, 0, hmask);
 		else if (cp->a7 == SBI_EXT_0_1_REMOTE_SFENCE_VMA) {
+			vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
 			if (cp->a1 == 0 && cp->a2 == 0)
-				kvm_riscv_hfence_vvma_all(vcpu->kvm,
-							  0, hmask);
+				kvm_riscv_hfence_vvma_all(vcpu->kvm, 0, hmask, vmid);
 			else
-				kvm_riscv_hfence_vvma_gva(vcpu->kvm,
-							  0, hmask,
-							  cp->a1, cp->a2,
-							  PAGE_SHIFT);
+				kvm_riscv_hfence_vvma_gva(vcpu->kvm, 0, hmask, cp->a1,
+							  cp->a2, PAGE_SHIFT, vmid);
 		} else {
+			vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
 			if (cp->a1 == 0 && cp->a2 == 0)
-				kvm_riscv_hfence_vvma_asid_all(vcpu->kvm,
-							       0, hmask,
-							       cp->a3);
+				kvm_riscv_hfence_vvma_asid_all(vcpu->kvm, 0, hmask,
+							       cp->a3, vmid);
 			else
-				kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm,
-							       0, hmask,
-							       cp->a1, cp->a2,
-							       PAGE_SHIFT,
-							       cp->a3);
+				kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm, 0, hmask,
+							       cp->a1, cp->a2, PAGE_SHIFT,
+							       cp->a3, vmid);
 		}
 		break;
 	default:
-- 
2.43.0
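Both patches shown here have callers materialize a struct kvm_gstage
descriptor before invoking the shared G-stage helpers, open-coding the
same four assignments at every call site in arch/riscv/kvm/mmu.c. The
helper below is a hypothetical consolidation of that pattern (not part
of the series), shown only to make the convention explicit:

  /*
   * Hypothetical helper (not in the series, which open-codes these
   * assignments): initialize a host G-stage descriptor from a VM.
   */
  static void example_init_host_gstage(struct kvm *kvm,
  				     struct kvm_gstage *gstage)
  {
  	gstage->kvm = kvm;
  	gstage->flags = 0;	/* host walk, not KVM_GSTAGE_FLAGS_LOCAL */
  	gstage->vmid = READ_ONCE(kvm->arch.vmid.vmid);
  	gstage->pgd = kvm->arch.pgd;
  }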