From nobody Thu Oct 9 09:00:48 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH v3 01/12] RISC-V: KVM: Check kvm_riscv_vcpu_alloc_vector_context() return value
Date: Wed, 18 Jun 2025 17:05:21 +0530
Message-ID: <20250618113532.471448-2-apatel@ventanamicro.com>
In-Reply-To: <20250618113532.471448-1-apatel@ventanamicro.com>
References: <20250618113532.471448-1-apatel@ventanamicro.com>

kvm_riscv_vcpu_alloc_vector_context() returns an error code on failure, so don't ignore it in kvm_arch_vcpu_create().
Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
Reviewed-by: Nutty Liu
Tested-by: Atish Patra
---
 arch/riscv/kvm/vcpu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 303aa0a8a5a1..b467dc1f4c7f 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -148,8 +148,9 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 
         spin_lock_init(&vcpu->arch.reset_state.lock);
 
-        if (kvm_riscv_vcpu_alloc_vector_context(vcpu))
-                return -ENOMEM;
+        rc = kvm_riscv_vcpu_alloc_vector_context(vcpu);
+        if (rc)
+                return rc;
 
         /* Setup VCPU timer */
         kvm_riscv_vcpu_timer_init(vcpu);
-- 
2.43.0
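As a minimal sketch of the error-propagation pattern this patch adopts, here is a self-contained userspace model; the function names are hypothetical stand-ins, not the kernel code itself:

#include <errno.h>
#include <stdio.h>

/* Stand-in for kvm_riscv_vcpu_alloc_vector_context(): may fail for
 * more than one reason, so the caller should not assume -ENOMEM. */
static int alloc_vector_context(int fail_mode)
{
        if (fail_mode == 1)
                return -ENOMEM;         /* allocation failed */
        if (fail_mode == 2)
                return -EINVAL;         /* some other failure */
        return 0;
}

static int vcpu_create(int fail_mode)
{
        int rc = alloc_vector_context(fail_mode);

        if (rc)
                return rc;      /* before the fix, this was always -ENOMEM */
        return 0;
}

int main(void)
{
        printf("rc = %d\n", vcpu_create(2));    /* -EINVAL, not -ENOMEM */
        return 0;
}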
From nobody Thu Oct 9 09:00:48 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel, Nutty Liu
Subject: [PATCH v3 02/12] RISC-V: KVM: Drop the return value of kvm_riscv_vcpu_aia_init()
Date: Wed, 18 Jun 2025 17:05:22 +0530
Message-ID: <20250618113532.471448-3-apatel@ventanamicro.com>
In-Reply-To: <20250618113532.471448-1-apatel@ventanamicro.com>
References: <20250618113532.471448-1-apatel@ventanamicro.com>

kvm_riscv_vcpu_aia_init() has no failure path, so drop its return value, which is always zero.
Reviewed-by: Nutty Liu
Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
Tested-by: Atish Patra
---
 arch/riscv/include/asm/kvm_aia.h | 2 +-
 arch/riscv/kvm/aia_device.c      | 6 ++----
 arch/riscv/kvm/vcpu.c            | 4 +---
 3 files changed, 4 insertions(+), 8 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h
index 3b643b9efc07..0a0f12496f00 100644
--- a/arch/riscv/include/asm/kvm_aia.h
+++ b/arch/riscv/include/asm/kvm_aia.h
@@ -147,7 +147,7 @@ int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num,
 
 int kvm_riscv_vcpu_aia_update(struct kvm_vcpu *vcpu);
 void kvm_riscv_vcpu_aia_reset(struct kvm_vcpu *vcpu);
-int kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu);
+void kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu);
 void kvm_riscv_vcpu_aia_deinit(struct kvm_vcpu *vcpu);
 
 int kvm_riscv_aia_inject_msi_by_id(struct kvm *kvm, u32 hart_index,
diff --git a/arch/riscv/kvm/aia_device.c b/arch/riscv/kvm/aia_device.c
index 806c41931cde..b195a93add1c 100644
--- a/arch/riscv/kvm/aia_device.c
+++ b/arch/riscv/kvm/aia_device.c
@@ -509,12 +509,12 @@ void kvm_riscv_vcpu_aia_reset(struct kvm_vcpu *vcpu)
         kvm_riscv_vcpu_aia_imsic_reset(vcpu);
 }
 
-int kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu)
+void kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu)
 {
         struct kvm_vcpu_aia *vaia = &vcpu->arch.aia_context;
 
         if (!kvm_riscv_aia_available())
-                return 0;
+                return;
 
         /*
          * We don't do any memory allocations over here because these
@@ -526,8 +526,6 @@ int kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu)
         /* Initialize default values in AIA vcpu context */
         vaia->imsic_addr = KVM_RISCV_AIA_UNDEF_ADDR;
         vaia->hart_index = vcpu->vcpu_idx;
-
-        return 0;
 }
 
 void kvm_riscv_vcpu_aia_deinit(struct kvm_vcpu *vcpu)
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index b467dc1f4c7f..f9fb3dbbe0c3 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -159,9 +159,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
         kvm_riscv_vcpu_pmu_init(vcpu);
 
         /* Setup VCPU AIA */
-        rc = kvm_riscv_vcpu_aia_init(vcpu);
-        if (rc)
-                return rc;
+        kvm_riscv_vcpu_aia_init(vcpu);
 
         /*
          * Setup SBI extensions
-- 
2.43.0
From nobody Thu Oct 9 09:00:48 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel, Atish Patra, Nutty Liu
Subject: [PATCH v3 03/12] RISC-V: KVM: Rename and move kvm_riscv_local_tlb_sanitize()
Date: Wed, 18 Jun 2025 17:05:23 +0530
Message-ID: <20250618113532.471448-4-apatel@ventanamicro.com>
In-Reply-To: <20250618113532.471448-1-apatel@ventanamicro.com>
References: <20250618113532.471448-1-apatel@ventanamicro.com>

kvm_riscv_local_tlb_sanitize() sanitizes TLB mappings for the current VMID when a VCPU is moved from one host CPU to another. Move kvm_riscv_local_tlb_sanitize() to the VMID management sources and rename it to kvm_riscv_gstage_vmid_sanitize().

Reviewed-by: Atish Patra
Reviewed-by: Nutty Liu
Signed-off-by: Anup Patel
Tested-by: Atish Patra
---
 arch/riscv/include/asm/kvm_host.h |  3 +--
 arch/riscv/kvm/tlb.c              | 23 -----------------------
 arch/riscv/kvm/vcpu.c             |  4 ++--
 arch/riscv/kvm/vmid.c             | 23 +++++++++++++++++++++++
 4 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 9a617bf5363d..8aa705ac75a5 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -331,8 +331,6 @@ void kvm_riscv_local_hfence_vvma_gva(unsigned long vmid,
                                      unsigned long order);
 void kvm_riscv_local_hfence_vvma_all(unsigned long vmid);
 
-void kvm_riscv_local_tlb_sanitize(struct kvm_vcpu *vcpu);
-
 void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu);
 void kvm_riscv_hfence_gvma_vmid_all_process(struct kvm_vcpu *vcpu);
 void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu);
@@ -380,6 +378,7 @@ unsigned long kvm_riscv_gstage_vmid_bits(void);
 int kvm_riscv_gstage_vmid_init(struct kvm *kvm);
 bool kvm_riscv_gstage_vmid_ver_changed(struct kvm_vmid *vmid);
 void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu);
+void kvm_riscv_gstage_vmid_sanitize(struct kvm_vcpu *vcpu);
 
 int kvm_riscv_setup_default_irq_routing(struct kvm *kvm, u32 lines);
 
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index 2f91ea5f8493..b3461bfd9756 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -156,29 +156,6 @@ void kvm_riscv_local_hfence_vvma_all(unsigned long vmid)
         csr_write(CSR_HGATP, hgatp);
 }
 
-void kvm_riscv_local_tlb_sanitize(struct kvm_vcpu *vcpu)
-{
-        unsigned long vmid;
-
-        if (!kvm_riscv_gstage_vmid_bits() ||
-            vcpu->arch.last_exit_cpu == vcpu->cpu)
-                return;
-
-        /*
-         * On RISC-V platforms with hardware VMID support, we share same
-         * VMID for all VCPUs of a particular Guest/VM. This means we might
-         * have stale G-stage TLB entries on the current Host CPU due to
-         * some other VCPU of the same Guest which ran previously on the
-         * current Host CPU.
-         *
-         * To cleanup stale TLB entries, we simply flush all G-stage TLB
-         * entries by VMID whenever underlying Host CPU changes for a VCPU.
-         */
-
-        vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
-        kvm_riscv_local_hfence_gvma_vmid_all(vmid);
-}
-
 void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu)
 {
         kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_FENCE_I_RCVD);
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index f9fb3dbbe0c3..a2dd4161e5a4 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -962,12 +962,12 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
                 }
 
                 /*
-                 * Cleanup stale TLB enteries
+                 * Sanitize VMID mappings cached (TLB) on current CPU
                  *
                  * Note: This should be done after G-stage VMID has been
                  * updated using kvm_riscv_gstage_vmid_ver_changed()
                  */
-                kvm_riscv_local_tlb_sanitize(vcpu);
+                kvm_riscv_gstage_vmid_sanitize(vcpu);
 
                 trace_kvm_entry(vcpu);
 
diff --git a/arch/riscv/kvm/vmid.c b/arch/riscv/kvm/vmid.c
index ddc98714ce8e..92c01255f86f 100644
--- a/arch/riscv/kvm/vmid.c
+++ b/arch/riscv/kvm/vmid.c
@@ -122,3 +122,26 @@ void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu)
         kvm_for_each_vcpu(i, v, vcpu->kvm)
                 kvm_make_request(KVM_REQ_UPDATE_HGATP, v);
 }
+
+void kvm_riscv_gstage_vmid_sanitize(struct kvm_vcpu *vcpu)
+{
+        unsigned long vmid;
+
+        if (!kvm_riscv_gstage_vmid_bits() ||
+            vcpu->arch.last_exit_cpu == vcpu->cpu)
+                return;
+
+        /*
+         * On RISC-V platforms with hardware VMID support, we share same
+         * VMID for all VCPUs of a particular Guest/VM. This means we might
+         * have stale G-stage TLB entries on the current Host CPU due to
+         * some other VCPU of the same Guest which ran previously on the
+         * current Host CPU.
+         *
+         * To cleanup stale TLB entries, we simply flush all G-stage TLB
+         * entries by VMID whenever underlying Host CPU changes for a VCPU.
+         */
+
+        vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
+        kvm_riscv_local_hfence_gvma_vmid_all(vmid);
+}
-- 
2.43.0
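A minimal userspace model of the decision kvm_riscv_gstage_vmid_sanitize() makes, flushing only when the vCPU resumed on a different host CPU than it last exited on; all names are illustrative stand-ins, not the kernel code:

#include <stdio.h>

struct vcpu_model {
        int cpu;                /* host CPU the vCPU runs on now */
        int last_exit_cpu;      /* host CPU it last exited on */
};

/* Returns 1 when a flush-by-VMID is needed: hardware VMIDs exist and
 * the vCPU migrated, so another vCPU of the same VM may have left
 * stale G-stage TLB entries behind on this CPU. */
static int needs_vmid_flush(const struct vcpu_model *v, int vmid_bits)
{
        if (!vmid_bits || v->last_exit_cpu == v->cpu)
                return 0;       /* same CPU (or no HW VMIDs): nothing stale */
        return 1;
}

int main(void)
{
        struct vcpu_model v = { .cpu = 2, .last_exit_cpu = 1 };

        printf("flush? %d\n", needs_vmid_flush(&v, 7));  /* 1: migrated */
        v.last_exit_cpu = 2;
        printf("flush? %d\n", needs_vmid_flush(&v, 7));  /* 0: same CPU */
        return 0;
}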
From nobody Thu Oct 9 09:00:48 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel, Atish Patra
Subject: [PATCH v3 04/12] RISC-V: KVM: Replace KVM_REQ_HFENCE_GVMA_VMID_ALL with KVM_REQ_TLB_FLUSH
Date: Wed, 18 Jun 2025 17:05:24 +0530
Message-ID: <20250618113532.471448-5-apatel@ventanamicro.com>
In-Reply-To: <20250618113532.471448-1-apatel@ventanamicro.com>
References: <20250618113532.471448-1-apatel@ventanamicro.com>

KVM_REQ_HFENCE_GVMA_VMID_ALL is the same as KVM_REQ_TLB_FLUSH, so to avoid confusion replace KVM_REQ_HFENCE_GVMA_VMID_ALL with KVM_REQ_TLB_FLUSH. Also, rename kvm_riscv_hfence_gvma_vmid_all_process() to kvm_riscv_tlb_flush_process().

Reviewed-by: Atish Patra
Signed-off-by: Anup Patel
Reviewed-by: Nutty Liu
Tested-by: Atish Patra
---
 arch/riscv/include/asm/kvm_host.h | 4 ++--
 arch/riscv/kvm/tlb.c              | 8 ++++----
 arch/riscv/kvm/vcpu.c             | 8 ++------
 3 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 8aa705ac75a5..ff1f76d6f177 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -37,7 +37,6 @@
 #define KVM_REQ_UPDATE_HGATP            KVM_ARCH_REQ(2)
 #define KVM_REQ_FENCE_I                 \
         KVM_ARCH_REQ_FLAGS(3, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
-#define KVM_REQ_HFENCE_GVMA_VMID_ALL    KVM_REQ_TLB_FLUSH
 #define KVM_REQ_HFENCE_VVMA_ALL         \
         KVM_ARCH_REQ_FLAGS(4, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_HFENCE                  \
@@ -331,8 +330,9 @@ void kvm_riscv_local_hfence_vvma_gva(unsigned long vmid,
                                      unsigned long order);
 void kvm_riscv_local_hfence_vvma_all(unsigned long vmid);
 
+void kvm_riscv_tlb_flush_process(struct kvm_vcpu *vcpu);
+
 void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu);
-void kvm_riscv_hfence_gvma_vmid_all_process(struct kvm_vcpu *vcpu);
 void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu);
 void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu);
 
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index b3461bfd9756..da98ca801d31 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -162,7 +162,7 @@ void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu)
         local_flush_icache_all();
 }
 
-void kvm_riscv_hfence_gvma_vmid_all_process(struct kvm_vcpu *vcpu)
+void kvm_riscv_tlb_flush_process(struct kvm_vcpu *vcpu)
 {
         struct kvm_vmid *v = &vcpu->kvm->arch.vmid;
         unsigned long vmid = READ_ONCE(v->vmid);
@@ -342,14 +342,14 @@ void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
         data.size = gpsz;
         data.order = order;
         make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
-                            KVM_REQ_HFENCE_GVMA_VMID_ALL, &data);
+                            KVM_REQ_TLB_FLUSH, &data);
 }
 
 void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
                                     unsigned long hbase, unsigned long hmask)
 {
-        make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE_GVMA_VMID_ALL,
-                            KVM_REQ_HFENCE_GVMA_VMID_ALL, NULL);
+        make_xfence_request(kvm, hbase, hmask, KVM_REQ_TLB_FLUSH,
+                            KVM_REQ_TLB_FLUSH, NULL);
 }
 
 void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index a2dd4161e5a4..6eb11c913b13 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -721,12 +721,8 @@ static void kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu)
                 if (kvm_check_request(KVM_REQ_FENCE_I, vcpu))
                         kvm_riscv_fence_i_process(vcpu);
 
-                /*
-                 * The generic KVM_REQ_TLB_FLUSH is same as
-                 * KVM_REQ_HFENCE_GVMA_VMID_ALL
-                 */
-                if (kvm_check_request(KVM_REQ_HFENCE_GVMA_VMID_ALL, vcpu))
-                        kvm_riscv_hfence_gvma_vmid_all_process(vcpu);
+                if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
+                        kvm_riscv_tlb_flush_process(vcpu);
 
                 if (kvm_check_request(KVM_REQ_HFENCE_VVMA_ALL, vcpu))
                         kvm_riscv_hfence_vvma_all_process(vcpu);
-- 
2.43.0
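A minimal model of the per-vCPU request bits involved here: KVM_REQ_HFENCE_GVMA_VMID_ALL was merely an alias for the same bit as the generic KVM_REQ_TLB_FLUSH, so checking either name consumed the same request, hence the rename to one name. This assumes kvm_check_request()-style test-and-clear semantics; the bit values are purely illustrative:

#include <stdio.h>

#define REQ_TLB_FLUSH (1u << 0)   /* illustrative bit position */

static unsigned int requests;

/* Model of kvm_check_request(): test the bit and clear it. */
static int check_request(unsigned int req)
{
        if (requests & req) {
                requests &= ~req;
                return 1;
        }
        return 0;
}

int main(void)
{
        requests |= REQ_TLB_FLUSH;
        printf("tlb flush pending: %d\n", check_request(REQ_TLB_FLUSH)); /* 1 */
        printf("second check:      %d\n", check_request(REQ_TLB_FLUSH)); /* 0 */
        return 0;
}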
From nobody Thu Oct 9 09:00:48 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel, Atish Patra
Subject: [PATCH v3 05/12] RISC-V: KVM: Don't flush TLB when PTE is unchanged
Date: Wed, 18 Jun 2025 17:05:25 +0530
Message-ID: <20250618113532.471448-6-apatel@ventanamicro.com>
In-Reply-To: <20250618113532.471448-1-apatel@ventanamicro.com>
References: <20250618113532.471448-1-apatel@ventanamicro.com>

gstage_set_pte() and gstage_op_pte() should flush the TLB only when a leaf PTE actually changes, so that unnecessary TLB flushes are avoided.

Reviewed-by: Atish Patra
Signed-off-by: Anup Patel
Reviewed-by: Nutty Liu
Tested-by: Atish Patra
---
 arch/riscv/kvm/mmu.c | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 1087ea74567b..29f1bd853a66 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -167,9 +167,11 @@ static int gstage_set_pte(struct kvm *kvm, u32 level,
                 ptep = &next_ptep[gstage_pte_index(addr, current_level)];
         }
 
-        set_pte(ptep, *new_pte);
-        if (gstage_pte_leaf(ptep))
-                gstage_remote_tlb_flush(kvm, current_level, addr);
+        if (pte_val(*ptep) != pte_val(*new_pte)) {
+                set_pte(ptep, *new_pte);
+                if (gstage_pte_leaf(ptep))
+                        gstage_remote_tlb_flush(kvm, current_level, addr);
+        }
 
         return 0;
 }
@@ -229,7 +231,7 @@ static void gstage_op_pte(struct kvm *kvm, gpa_t addr,
                           pte_t *ptep, u32 ptep_level, enum gstage_op op)
 {
         int i, ret;
-        pte_t *next_ptep;
+        pte_t old_pte, *next_ptep;
         u32 next_ptep_level;
         unsigned long next_page_size, page_size;
 
@@ -258,11 +260,13 @@ static void gstage_op_pte(struct kvm *kvm, gpa_t addr,
                 if (op == GSTAGE_OP_CLEAR)
                         put_page(virt_to_page(next_ptep));
         } else {
+                old_pte = *ptep;
                 if (op == GSTAGE_OP_CLEAR)
                         set_pte(ptep, __pte(0));
                 else if (op == GSTAGE_OP_WP)
                         set_pte(ptep, __pte(pte_val(ptep_get(ptep)) & ~_PAGE_WRITE));
-                gstage_remote_tlb_flush(kvm, ptep_level, addr);
+                if (pte_val(*ptep) != pte_val(old_pte))
+                        gstage_remote_tlb_flush(kvm, ptep_level, addr);
         }
 }
 
-- 
2.43.0
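A minimal userspace model of the compare-before-flush optimization this patch introduces (simplified types, a counter instead of a real remote TLB flush; not kernel code):

#include <stdint.h>
#include <stdio.h>

typedef uint64_t pte_t;

static unsigned long flushes;

static void remote_tlb_flush(void) { flushes++; }

/* Only issue the (simulated) remote flush when the PTE value
 * actually changes -- the check gstage_set_pte() now performs. */
static void set_pte_and_flush(pte_t *ptep, pte_t new_pte)
{
        if (*ptep != new_pte) {
                *ptep = new_pte;
                remote_tlb_flush();
        }
}

int main(void)
{
        pte_t pte = 0x1000;

        set_pte_and_flush(&pte, 0x1000);        /* unchanged: no flush */
        set_pte_and_flush(&pte, 0x2000);        /* changed: one flush */
        printf("flushes = %lu\n", flushes);     /* prints 1 */
        return 0;
}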
From nobody Thu Oct 9 09:00:48 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel, Atish Patra
Subject: [PATCH v3 06/12] RISC-V: KVM: Implement kvm_arch_flush_remote_tlbs_range()
Date: Wed, 18 Jun 2025 17:05:26 +0530
Message-ID: <20250618113532.471448-7-apatel@ventanamicro.com>
In-Reply-To: <20250618113532.471448-1-apatel@ventanamicro.com>
References: <20250618113532.471448-1-apatel@ventanamicro.com>

The kvm_arch_flush_remote_tlbs_range() hook expected by the KVM core can be implemented for RISC-V using kvm_riscv_hfence_gvma_vmid_gpa(), so provide it. With kvm_arch_flush_remote_tlbs_range() available on RISC-V, mmu_wp_memory_region() can use kvm_flush_remote_tlbs_memslot() instead of kvm_flush_remote_tlbs().

Reviewed-by: Atish Patra
Signed-off-by: Anup Patel
Reviewed-by: Nutty Liu
Tested-by: Atish Patra
---
 arch/riscv/include/asm/kvm_host.h | 2 ++
 arch/riscv/kvm/mmu.c              | 2 +-
 arch/riscv/kvm/tlb.c              | 8 ++++++++
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index ff1f76d6f177..6162575e2177 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -43,6 +43,8 @@
         KVM_ARCH_REQ_FLAGS(5, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_STEAL_UPDATE            KVM_ARCH_REQ(6)
 
+#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
+
 #define KVM_HEDELEG_DEFAULT             (BIT(EXC_INST_MISALIGNED) | \
                                          BIT(EXC_BREAKPOINT)      | \
                                          BIT(EXC_SYSCALL)         | \
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 29f1bd853a66..a5387927a1c1 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -344,7 +344,7 @@ static void gstage_wp_memory_region(struct kvm *kvm, int slot)
         spin_lock(&kvm->mmu_lock);
         gstage_wp_range(kvm, start, end);
         spin_unlock(&kvm->mmu_lock);
-        kvm_flush_remote_tlbs(kvm);
+        kvm_flush_remote_tlbs_memslot(kvm, memslot);
 }
 
 int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index da98ca801d31..f46a27658c2e 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -403,3 +403,11 @@ void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
         make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE_VVMA_ALL,
                             KVM_REQ_HFENCE_VVMA_ALL, NULL);
 }
+
+int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
+{
+        kvm_riscv_hfence_gvma_vmid_gpa(kvm, -1UL, 0,
+                                       gfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT,
+                                       PAGE_SHIFT);
+        return 0;
+}
-- 
2.43.0
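A small self-contained sketch of the gfn/nr_pages to gpa/size conversion used above; guest frame numbers become byte addresses by shifting with PAGE_SHIFT, and the value 12 assumes 4 KiB pages:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12

int main(void)
{
        uint64_t gfn = 0x80000, nr_pages = 16;
        uint64_t gpa  = gfn << PAGE_SHIFT;       /* 0x80000000 */
        uint64_t size = nr_pages << PAGE_SHIFT;  /* 16 * 4 KiB = 64 KiB */

        printf("flush gpa=0x%llx size=0x%llx\n",
               (unsigned long long)gpa, (unsigned long long)size);
        return 0;
}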
9SldcqtbwBcSyzYNeqjviad5Hk3yaLpI9+Zkk06zCq/SebmTwc/NOWWMBbXSLPjSq2hCQjv1wrE g7SB29+SpoyzQFzPzTrjOiYf6N/Yhz7CiFts1DExnTk1lFuiS9OvH7r44mYNJGbARuCD2ImCLYH 0xjRv6yPYkJQcZWHFNlhblFlS5XYgLMhWL05j1FYsD6R+0Bz4nt8AGlArhvCwlXsbFm83E6POjI 9v5F6Rs19PXtyfoRpqw== X-Google-Smtp-Source: AGHT+IGvNMzA4JkJB3Lqq8VqvqJDedOuU5j6cPnr0JPK19aiVhzcYh8cLYvVgF2E++GIf5MLnFvaZw== X-Received: by 2002:a17:902:ccc8:b0:221:89e6:ccb6 with SMTP id d9443c01a7336-237c210feedmr34862225ad.25.1750246580053; Wed, 18 Jun 2025 04:36:20 -0700 (PDT) Received: from localhost.localdomain ([122.171.23.44]) by smtp.gmail.com with ESMTPSA id d9443c01a7336-237c57c63efsm9112475ad.172.2025.06.18.04.36.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 18 Jun 2025 04:36:19 -0700 (PDT) From: Anup Patel To: Atish Patra Cc: Palmer Dabbelt , Paul Walmsley , Alexandre Ghiti , Andrew Jones , Anup Patel , kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel , Atish Patra Subject: [PATCH v3 07/12] RISC-V: KVM: Use ncsr_xyz() in kvm_riscv_vcpu_trap_redirect() Date: Wed, 18 Jun 2025 17:05:27 +0530 Message-ID: <20250618113532.471448-8-apatel@ventanamicro.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250618113532.471448-1-apatel@ventanamicro.com> References: <20250618113532.471448-1-apatel@ventanamicro.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The H-extension CSRs accessed by kvm_riscv_vcpu_trap_redirect() will trap when KVM RISC-V is running as Guest/VM hence remove these traps by using ncsr_xyz() instead of csr_xyz(). Reviewed-by: Atish Patra Signed-off-by: Anup Patel Reviewed-by: Nutty Liu Tested-by: Atish Patra --- arch/riscv/kvm/vcpu_exit.c | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c index 6e0c18412795..85c43c83e3b9 100644 --- a/arch/riscv/kvm/vcpu_exit.c +++ b/arch/riscv/kvm/vcpu_exit.c @@ -9,6 +9,7 @@ #include #include #include +#include =20 static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run, struct kvm_cpu_trap *trap) @@ -135,7 +136,7 @@ unsigned long kvm_riscv_vcpu_unpriv_read(struct kvm_vcp= u *vcpu, void kvm_riscv_vcpu_trap_redirect(struct kvm_vcpu *vcpu, struct kvm_cpu_trap *trap) { - unsigned long vsstatus =3D csr_read(CSR_VSSTATUS); + unsigned long vsstatus =3D ncsr_read(CSR_VSSTATUS); =20 /* Change Guest SSTATUS.SPP bit */ vsstatus &=3D ~SR_SPP; @@ -151,15 +152,15 @@ void kvm_riscv_vcpu_trap_redirect(struct kvm_vcpu *vc= pu, vsstatus &=3D ~SR_SIE; =20 /* Update Guest SSTATUS */ - csr_write(CSR_VSSTATUS, vsstatus); + ncsr_write(CSR_VSSTATUS, vsstatus); =20 /* Update Guest SCAUSE, STVAL, and SEPC */ - csr_write(CSR_VSCAUSE, trap->scause); - csr_write(CSR_VSTVAL, trap->stval); - csr_write(CSR_VSEPC, trap->sepc); + ncsr_write(CSR_VSCAUSE, trap->scause); + ncsr_write(CSR_VSTVAL, trap->stval); + ncsr_write(CSR_VSEPC, trap->sepc); =20 /* Set Guest PC to Guest exception vector */ - vcpu->arch.guest_context.sepc =3D csr_read(CSR_VSTVEC); + vcpu->arch.guest_context.sepc =3D ncsr_read(CSR_VSTVEC); =20 /* Set Guest privilege mode to supervisor */ vcpu->arch.guest_context.sstatus |=3D SR_SPP; --=20 2.43.0 From nobody Thu Oct 9 09:00:48 2025 Received: from mail-pl1-f176.google.com (mail-pl1-f176.google.com [209.85.214.176]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 
From nobody Thu Oct 9 09:00:48 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel, Atish Patra
Subject: [PATCH v3 08/12] RISC-V: KVM: Factor-out MMU related declarations into separate headers
Date: Wed, 18 Jun 2025 17:05:28 +0530
Message-ID: <20250618113532.471448-9-apatel@ventanamicro.com>
In-Reply-To: <20250618113532.471448-1-apatel@ventanamicro.com>
References: <20250618113532.471448-1-apatel@ventanamicro.com>

The MMU, TLB, and VMID management for KVM RISC-V already exists as separate sources, so create separate headers along the same lines. This further simplifies the asm/kvm_host.h header.

Reviewed-by: Atish Patra
Signed-off-by: Anup Patel
Reviewed-by: Nutty Liu
Tested-by: Atish Patra
---
 arch/riscv/include/asm/kvm_host.h | 100 +-----------------------------
 arch/riscv/include/asm/kvm_mmu.h  |  26 ++++++++
 arch/riscv/include/asm/kvm_tlb.h  |  78 +++++++++++++++++++++++
 arch/riscv/include/asm/kvm_vmid.h |  27 ++++++++
 arch/riscv/kvm/aia_imsic.c        |   1 +
 arch/riscv/kvm/main.c             |   1 +
 arch/riscv/kvm/mmu.c              |   1 +
 arch/riscv/kvm/tlb.c              |   2 +
 arch/riscv/kvm/vcpu.c             |   1 +
 arch/riscv/kvm/vcpu_exit.c        |   1 +
 arch/riscv/kvm/vm.c               |   1 +
 arch/riscv/kvm/vmid.c             |   2 +
 12 files changed, 143 insertions(+), 98 deletions(-)
 create mode 100644 arch/riscv/include/asm/kvm_mmu.h
 create mode 100644 arch/riscv/include/asm/kvm_tlb.h
 create mode 100644 arch/riscv/include/asm/kvm_vmid.h

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 6162575e2177..bd5341efa127 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -16,6 +16,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
 #include
@@ -56,24 +58,6 @@
                                        BIT(IRQ_VS_TIMER) | \
                                        BIT(IRQ_VS_EXT))
 
-enum kvm_riscv_hfence_type {
-        KVM_RISCV_HFENCE_UNKNOWN = 0,
-        KVM_RISCV_HFENCE_GVMA_VMID_GPA,
-        KVM_RISCV_HFENCE_VVMA_ASID_GVA,
-        KVM_RISCV_HFENCE_VVMA_ASID_ALL,
-        KVM_RISCV_HFENCE_VVMA_GVA,
-};
-
-struct kvm_riscv_hfence {
-        enum kvm_riscv_hfence_type type;
-        unsigned long asid;
-        unsigned long order;
-        gpa_t addr;
-        gpa_t size;
-};
-
-#define KVM_RISCV_VCPU_MAX_HFENCE       64
-
 struct kvm_vm_stat {
         struct kvm_vm_stat_generic generic;
 };
@@ -99,15 +83,6 @@ struct kvm_vcpu_stat {
 struct kvm_arch_memory_slot {
 };
 
-struct kvm_vmid {
-        /*
-         * Writes to vmid_version and vmid happen with vmid_lock held
-         * whereas reads happen without any lock held.
-	 */
-	unsigned long vmid_version;
-	unsigned long vmid;
-};
-
 struct kvm_arch {
 	/* G-stage vmid */
 	struct kvm_vmid vmid;
@@ -311,77 +286,6 @@ static inline bool kvm_arch_pmi_in_guest(struct kvm_vcpu *vcpu)
 	return IS_ENABLED(CONFIG_GUEST_PERF_EVENTS) && !!vcpu;
 }
 
-#define KVM_RISCV_GSTAGE_TLB_MIN_ORDER		12
-
-void kvm_riscv_local_hfence_gvma_vmid_gpa(unsigned long vmid,
-					  gpa_t gpa, gpa_t gpsz,
-					  unsigned long order);
-void kvm_riscv_local_hfence_gvma_vmid_all(unsigned long vmid);
-void kvm_riscv_local_hfence_gvma_gpa(gpa_t gpa, gpa_t gpsz,
-				     unsigned long order);
-void kvm_riscv_local_hfence_gvma_all(void);
-void kvm_riscv_local_hfence_vvma_asid_gva(unsigned long vmid,
-					  unsigned long asid,
-					  unsigned long gva,
-					  unsigned long gvsz,
-					  unsigned long order);
-void kvm_riscv_local_hfence_vvma_asid_all(unsigned long vmid,
-					  unsigned long asid);
-void kvm_riscv_local_hfence_vvma_gva(unsigned long vmid,
-				     unsigned long gva, unsigned long gvsz,
-				     unsigned long order);
-void kvm_riscv_local_hfence_vvma_all(unsigned long vmid);
-
-void kvm_riscv_tlb_flush_process(struct kvm_vcpu *vcpu);
-
-void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu);
-void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu);
-void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu);
-
-void kvm_riscv_fence_i(struct kvm *kvm,
-		       unsigned long hbase, unsigned long hmask);
-void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask,
-				    gpa_t gpa, gpa_t gpsz,
-				    unsigned long order);
-void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask);
-void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask,
-				    unsigned long gva, unsigned long gvsz,
-				    unsigned long order, unsigned long asid);
-void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask,
-				    unsigned long asid);
-void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
-			       unsigned long hbase, unsigned long hmask,
-			       unsigned long gva, unsigned long gvsz,
-			       unsigned long order);
-void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
-			       unsigned long hbase, unsigned long hmask);
-
-int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
-			     phys_addr_t hpa, unsigned long size,
-			     bool writable, bool in_atomic);
-void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa,
-			      unsigned long size);
-int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
-			 struct kvm_memory_slot *memslot,
-			 gpa_t gpa, unsigned long hva, bool is_write);
-int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm);
-void kvm_riscv_gstage_free_pgd(struct kvm *kvm);
-void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu);
-void __init kvm_riscv_gstage_mode_detect(void);
-unsigned long __init kvm_riscv_gstage_mode(void);
-int kvm_riscv_gstage_gpa_bits(void);
-
-void __init kvm_riscv_gstage_vmid_detect(void);
-unsigned long kvm_riscv_gstage_vmid_bits(void);
-int kvm_riscv_gstage_vmid_init(struct kvm *kvm);
-bool kvm_riscv_gstage_vmid_ver_changed(struct kvm_vmid *vmid);
-void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu);
-void kvm_riscv_gstage_vmid_sanitize(struct kvm_vcpu *vcpu);
-
 int kvm_riscv_setup_default_irq_routing(struct kvm *kvm, u32 lines);
 
 void __kvm_riscv_unpriv_trap(void);
diff --git a/arch/riscv/include/asm/kvm_mmu.h b/arch/riscv/include/asm/kvm_mmu.h
new file mode 100644
index 000000000000..4e1654282ee4
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_mmu.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2025 Ventana Micro Systems Inc.
+ */
+
+#ifndef __RISCV_KVM_MMU_H_
+#define __RISCV_KVM_MMU_H_
+
+#include
+
+int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
+			     phys_addr_t hpa, unsigned long size,
+			     bool writable, bool in_atomic);
+void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa,
+			      unsigned long size);
+int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
+			 struct kvm_memory_slot *memslot,
+			 gpa_t gpa, unsigned long hva, bool is_write);
+int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm);
+void kvm_riscv_gstage_free_pgd(struct kvm *kvm);
+void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu);
+void kvm_riscv_gstage_mode_detect(void);
+unsigned long kvm_riscv_gstage_mode(void);
+int kvm_riscv_gstage_gpa_bits(void);
+
+#endif
diff --git a/arch/riscv/include/asm/kvm_tlb.h b/arch/riscv/include/asm/kvm_tlb.h
new file mode 100644
index 000000000000..cd00c9a46cb1
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_tlb.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2025 Ventana Micro Systems Inc.
+ */
+
+#ifndef __RISCV_KVM_TLB_H_
+#define __RISCV_KVM_TLB_H_
+
+#include
+
+enum kvm_riscv_hfence_type {
+	KVM_RISCV_HFENCE_UNKNOWN = 0,
+	KVM_RISCV_HFENCE_GVMA_VMID_GPA,
+	KVM_RISCV_HFENCE_VVMA_ASID_GVA,
+	KVM_RISCV_HFENCE_VVMA_ASID_ALL,
+	KVM_RISCV_HFENCE_VVMA_GVA,
+};
+
+struct kvm_riscv_hfence {
+	enum kvm_riscv_hfence_type type;
+	unsigned long asid;
+	unsigned long order;
+	gpa_t addr;
+	gpa_t size;
+};
+
+#define KVM_RISCV_VCPU_MAX_HFENCE	64
+
+#define KVM_RISCV_GSTAGE_TLB_MIN_ORDER		12
+
+void kvm_riscv_local_hfence_gvma_vmid_gpa(unsigned long vmid,
+					  gpa_t gpa, gpa_t gpsz,
+					  unsigned long order);
+void kvm_riscv_local_hfence_gvma_vmid_all(unsigned long vmid);
+void kvm_riscv_local_hfence_gvma_gpa(gpa_t gpa, gpa_t gpsz,
+				     unsigned long order);
+void kvm_riscv_local_hfence_gvma_all(void);
+void kvm_riscv_local_hfence_vvma_asid_gva(unsigned long vmid,
+					  unsigned long asid,
+					  unsigned long gva,
+					  unsigned long gvsz,
+					  unsigned long order);
+void kvm_riscv_local_hfence_vvma_asid_all(unsigned long vmid,
+					  unsigned long asid);
+void kvm_riscv_local_hfence_vvma_gva(unsigned long vmid,
+				     unsigned long gva, unsigned long gvsz,
+				     unsigned long order);
+void kvm_riscv_local_hfence_vvma_all(unsigned long vmid);
+
+void kvm_riscv_tlb_flush_process(struct kvm_vcpu *vcpu);
+
+void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu);
+void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu);
+void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu);
+
+void kvm_riscv_fence_i(struct kvm *kvm,
+		       unsigned long hbase, unsigned long hmask);
+void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
+				    unsigned long hbase, unsigned long hmask,
+				    gpa_t gpa, gpa_t gpsz,
+				    unsigned long order);
+void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
+				    unsigned long hbase, unsigned long hmask);
+void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
+				    unsigned long hbase, unsigned long hmask,
+				    unsigned long gva, unsigned long gvsz,
+				    unsigned long order, unsigned long asid);
+void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
+				    unsigned long hbase, unsigned long hmask,
+				    unsigned long asid);
+void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
+			       unsigned long hbase, unsigned long hmask,
+			       unsigned long gva, unsigned long gvsz,
+			       unsigned long order);
+void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
+			       unsigned long hbase, unsigned long hmask);
+
+#endif
diff --git a/arch/riscv/include/asm/kvm_vmid.h b/arch/riscv/include/asm/kvm_vmid.h
new file mode 100644
index 000000000000..ab98e1434fb7
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_vmid.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2025 Ventana Micro Systems Inc.
+ */
+
+#ifndef __RISCV_KVM_VMID_H_
+#define __RISCV_KVM_VMID_H_
+
+#include
+
+struct kvm_vmid {
+	/*
+	 * Writes to vmid_version and vmid happen with vmid_lock held
+	 * whereas reads happen without any lock held.
+	 */
+	unsigned long vmid_version;
+	unsigned long vmid;
+};
+
+void __init kvm_riscv_gstage_vmid_detect(void);
+unsigned long kvm_riscv_gstage_vmid_bits(void);
+int kvm_riscv_gstage_vmid_init(struct kvm *kvm);
+bool kvm_riscv_gstage_vmid_ver_changed(struct kvm_vmid *vmid);
+void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu);
+void kvm_riscv_gstage_vmid_sanitize(struct kvm_vcpu *vcpu);
+
+#endif
diff --git a/arch/riscv/kvm/aia_imsic.c b/arch/riscv/kvm/aia_imsic.c
index 29ef9c2133a9..40b469c0a01f 100644
--- a/arch/riscv/kvm/aia_imsic.c
+++ b/arch/riscv/kvm/aia_imsic.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 
 #define IMSIC_MAX_EIX	(IMSIC_MAX_ID / BITS_PER_TYPE(u64))
 
diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
index 4b24705dc63a..b861a5dd7bd9 100644
--- a/arch/riscv/kvm/main.c
+++ b/arch/riscv/kvm/main.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index a5387927a1c1..c1a3eb076df3 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index f46a27658c2e..6fc4361c3d75 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -15,6 +15,8 @@
 #include
 #include
 #include
+#include
+#include
 
 #define has_svinval()	riscv_has_extension_unlikely(RISCV_ISA_EXT_SVINVAL)
 
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 6eb11c913b13..8ad7b31f5939 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c
index 85c43c83e3b9..965df528de90 100644
--- a/arch/riscv/kvm/vcpu_exit.c
+++ b/arch/riscv/kvm/vcpu_exit.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 #include
 
 static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
diff --git a/arch/riscv/kvm/vm.c b/arch/riscv/kvm/vm.c
index b27ec8f96697..8601cf29e5f8 100644
--- a/arch/riscv/kvm/vm.c
+++ b/arch/riscv/kvm/vm.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 
 const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
 	KVM_GENERIC_VM_STATS()
diff --git a/arch/riscv/kvm/vmid.c b/arch/riscv/kvm/vmid.c
index 92c01255f86f..3b426c800480 100644
--- a/arch/riscv/kvm/vmid.c
+++ b/arch/riscv/kvm/vmid.c
@@ -14,6 +14,8 @@
 #include
 #include
 #include
+#include
+#include
 
 static unsigned long vmid_version = 1;
 static unsigned long vmid_next;
-- 
2.43.0
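The locking comment in struct kvm_vmid above implies the usual single-writer pattern: writes to vmid and vmid_version happen under vmid_lock, while readers take no lock and snapshot the value with READ_ONCE(), as later patches in this series do. A minimal sketch of that read side; snapshot_vmid() is a hypothetical helper, not part of the series:

/*
 * Sketch of the lock-free read side implied by the struct kvm_vmid
 * locking comment; snapshot_vmid() is a hypothetical name.
 */
static unsigned long snapshot_vmid(struct kvm *kvm)
{
	struct kvm_vmid *v = &kvm->arch.vmid;

	/* Readers take no lock; READ_ONCE() gives a single, stable load. */
	return READ_ONCE(v->vmid);
}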
From nobody Thu Oct 9 09:00:48 2025

From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Anup Patel, Atish Patra
Subject: [PATCH v3 09/12] RISC-V: KVM: Introduce struct kvm_gstage_mapping
Date: Wed, 18 Jun 2025 17:05:29 +0530
Message-ID: <20250618113532.471448-10-apatel@ventanamicro.com>
In-Reply-To: <20250618113532.471448-1-apatel@ventanamicro.com>
References: <20250618113532.471448-1-apatel@ventanamicro.com>

Introduce struct kvm_gstage_mapping which represents a g-stage mapping
at a particular g-stage page table level. Also, update
kvm_riscv_gstage_map() to return the g-stage mapping upon success.

Reviewed-by: Atish Patra
Signed-off-by: Anup Patel
Reviewed-by: Nutty Liu
Tested-by: Atish Patra
---
 arch/riscv/include/asm/kvm_mmu.h |  9 ++++-
 arch/riscv/kvm/mmu.c             | 58 ++++++++++++++++++--------------
 arch/riscv/kvm/vcpu_exit.c       |  3 +-
 3 files changed, 43 insertions(+), 27 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_mmu.h b/arch/riscv/include/asm/kvm_mmu.h
index 4e1654282ee4..91c11e692dc7 100644
--- a/arch/riscv/include/asm/kvm_mmu.h
+++ b/arch/riscv/include/asm/kvm_mmu.h
@@ -8,6 +8,12 @@
 
 #include
 
+struct kvm_gstage_mapping {
+	gpa_t addr;
+	pte_t pte;
+	u32 level;
+};
+
 int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
 			     phys_addr_t hpa, unsigned long size,
 			     bool writable, bool in_atomic);
@@ -15,7 +21,8 @@ void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa,
 			      unsigned long size);
 int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 			 struct kvm_memory_slot *memslot,
-			 gpa_t gpa, unsigned long hva, bool is_write);
+			 gpa_t gpa, unsigned long hva, bool is_write,
+			 struct kvm_gstage_mapping *out_map);
 int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm);
 void kvm_riscv_gstage_free_pgd(struct kvm *kvm);
 void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu);
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index c1a3eb076df3..806614b3e46d 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -135,18 +135,18 @@ static void gstage_remote_tlb_flush(struct kvm *kvm, u32 level, gpa_t addr)
 	kvm_riscv_hfence_gvma_vmid_gpa(kvm, -1UL, 0, addr, BIT(order), order);
 }
 
-static int gstage_set_pte(struct kvm *kvm, u32 level,
-			  struct kvm_mmu_memory_cache *pcache,
-			  gpa_t addr, const pte_t *new_pte)
+static int gstage_set_pte(struct kvm *kvm,
+			  struct kvm_mmu_memory_cache *pcache,
+			  const struct kvm_gstage_mapping *map)
 {
 	u32 current_level = gstage_pgd_levels - 1;
 	pte_t *next_ptep = (pte_t *)kvm->arch.pgd;
-	pte_t *ptep = &next_ptep[gstage_pte_index(addr, current_level)];
+	pte_t *ptep = &next_ptep[gstage_pte_index(map->addr, current_level)];
 
-	if (current_level < level)
+	if (current_level < map->level)
 		return -EINVAL;
 
-	while (current_level != level) {
+	while (current_level != map->level) {
 		if (gstage_pte_leaf(ptep))
 			return -EEXIST;
 
@@ -165,13 +165,13 @@ static int gstage_set_pte(struct kvm *kvm, u32 level,
 		}
 
 		current_level--;
-		ptep = &next_ptep[gstage_pte_index(addr, current_level)];
+		ptep = &next_ptep[gstage_pte_index(map->addr, current_level)];
 	}
 
-	if (pte_val(*ptep) != pte_val(*new_pte)) {
-		set_pte(ptep, *new_pte);
+	if (pte_val(*ptep) != pte_val(map->pte)) {
+		set_pte(ptep, map->pte);
 		if (gstage_pte_leaf(ptep))
-			gstage_remote_tlb_flush(kvm, current_level, addr);
+			gstage_remote_tlb_flush(kvm, current_level, map->addr);
 	}
 
 	return 0;
@@ -181,14 +181,16 @@ static int gstage_map_page(struct kvm *kvm,
 			   struct kvm_mmu_memory_cache *pcache,
 			   gpa_t gpa, phys_addr_t hpa,
 			   unsigned long page_size,
-			   bool page_rdonly, bool page_exec)
+			   bool page_rdonly, bool page_exec,
+			   struct kvm_gstage_mapping *out_map)
 {
-	int ret;
-	u32 level = 0;
-	pte_t new_pte;
 	pgprot_t prot;
+	int ret;
 
-	ret = gstage_page_size_to_level(page_size, &level);
+	out_map->addr = gpa;
+	out_map->level = 0;
+
+	ret = gstage_page_size_to_level(page_size, &out_map->level);
 	if (ret)
 		return ret;
 
@@ -216,10 +218,10 @@ static int gstage_map_page(struct kvm *kvm,
 		else
 			prot = PAGE_WRITE;
 	}
-	new_pte = pfn_pte(PFN_DOWN(hpa), prot);
-	new_pte = pte_mkdirty(new_pte);
+	out_map->pte = pfn_pte(PFN_DOWN(hpa), prot);
+	out_map->pte = pte_mkdirty(out_map->pte);
 
-	return gstage_set_pte(kvm, level, pcache, gpa, &new_pte);
+	return gstage_set_pte(kvm, pcache, out_map);
 }
 
 enum gstage_op {
@@ -352,7 +354,6 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
 			     phys_addr_t hpa, unsigned long size,
 			     bool writable, bool in_atomic)
 {
-	pte_t pte;
 	int ret = 0;
 	unsigned long pfn;
 	phys_addr_t addr, end;
@@ -360,22 +361,25 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
 		.gfp_custom = (in_atomic) ? GFP_ATOMIC | __GFP_ACCOUNT : 0,
 		.gfp_zero = __GFP_ZERO,
 	};
+	struct kvm_gstage_mapping map;
 
 	end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK;
 	pfn = __phys_to_pfn(hpa);
 
 	for (addr = gpa; addr < end; addr += PAGE_SIZE) {
-		pte = pfn_pte(pfn, PAGE_KERNEL_IO);
+		map.addr = addr;
+		map.pte = pfn_pte(pfn, PAGE_KERNEL_IO);
+		map.level = 0;
 
 		if (!writable)
-			pte = pte_wrprotect(pte);
+			map.pte = pte_wrprotect(map.pte);
 
 		ret = kvm_mmu_topup_memory_cache(&pcache, gstage_pgd_levels);
 		if (ret)
 			goto out;
 
 		spin_lock(&kvm->mmu_lock);
-		ret = gstage_set_pte(kvm, 0, &pcache, addr, &pte);
+		ret = gstage_set_pte(kvm, &pcache, &map);
 		spin_unlock(&kvm->mmu_lock);
 		if (ret)
 			goto out;
@@ -593,7 +597,8 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 
 int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 			 struct kvm_memory_slot *memslot,
-			 gpa_t gpa, unsigned long hva, bool is_write)
+			 gpa_t gpa, unsigned long hva, bool is_write,
+			 struct kvm_gstage_mapping *out_map)
 {
 	int ret;
 	kvm_pfn_t hfn;
@@ -608,6 +613,9 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 	unsigned long vma_pagesize, mmu_seq;
 	struct page *page;
 
+	/* Setup initial state of output mapping */
+	memset(out_map, 0, sizeof(*out_map));
+
 	/* We need minimum second+third level pages */
 	ret = kvm_mmu_topup_memory_cache(pcache, gstage_pgd_levels);
 	if (ret) {
@@ -677,10 +685,10 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 	if (writable) {
 		mark_page_dirty(kvm, gfn);
 		ret = gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT,
-				      vma_pagesize, false, true);
+				      vma_pagesize, false, true, out_map);
 	} else {
 		ret = gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT,
-				      vma_pagesize, true, true);
+				      vma_pagesize, true, true, out_map);
 	}
 
 	if (ret)
diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c
index 965df528de90..6b4694bc07ea 100644
--- a/arch/riscv/kvm/vcpu_exit.c
+++ b/arch/riscv/kvm/vcpu_exit.c
@@ -15,6 +15,7 @@ static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
 			     struct kvm_cpu_trap *trap)
 {
+	struct kvm_gstage_mapping host_map;
 	struct kvm_memory_slot *memslot;
 	unsigned long hva, fault_addr;
 	bool writable;
@@ -43,7 +44,7 @@ static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	}
 
 	ret = kvm_riscv_gstage_map(vcpu, memslot, fault_addr, hva,
-				   (trap->scause == EXC_STORE_GUEST_PAGE_FAULT) ? true : false);
+				   (trap->scause == EXC_STORE_GUEST_PAGE_FAULT) ? true : false, &host_map);
 	if (ret < 0)
 		return ret;
 
-- 
2.43.0
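The new out_map argument lets a caller see exactly which g-stage mapping was installed (GPA, PTE, and page table level). A hypothetical caller sketch, modeled on the gstage_page_fault() change above (the function name here is illustrative, not part of the series):

/*
 * Sketch only: map a faulting GPA and inspect the mapping that
 * kvm_riscv_gstage_map() filled in on success.
 */
static int example_map_and_inspect(struct kvm_vcpu *vcpu,
				   struct kvm_memory_slot *memslot,
				   gpa_t gpa, unsigned long hva, bool is_write)
{
	struct kvm_gstage_mapping map;
	int ret;

	ret = kvm_riscv_gstage_map(vcpu, memslot, gpa, hva, is_write, &map);
	if (ret < 0)
		return ret;

	/*
	 * map.addr, map.pte and map.level now describe the installed
	 * mapping; map.level > 0 indicates a huge-page mapping.
	 */
	return ret;
}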
From nobody Thu Oct 9 09:00:48 2025

From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Anup Patel, Atish Patra
Subject: [PATCH v3 10/12] RISC-V: KVM: Add vmid field to struct kvm_riscv_hfence
Date: Wed, 18 Jun 2025 17:05:30 +0530
Message-ID: <20250618113532.471448-11-apatel@ventanamicro.com>
In-Reply-To: <20250618113532.471448-1-apatel@ventanamicro.com>
References: <20250618113532.471448-1-apatel@ventanamicro.com>

Currently, struct kvm_riscv_hfence does not have a vmid field, and the
various hfence processing functions always pick the vmid assigned to
the guest/VM. This prevents hfence operations on an arbitrary vmid, so
add a vmid field to struct kvm_riscv_hfence and use it wherever
applicable.
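A minimal sketch of the enqueue-side pattern this enables, assuming the vmid field added in the diff below: the requester snapshots the VM's current vmid into the request, and kvm_riscv_hfence_process() then consumes d.vmid instead of re-reading it. The helper name is hypothetical:

/*
 * Sketch only: build an hfence request carrying an explicit vmid, as
 * the updated kvm_riscv_hfence_gvma_vmid_gpa() below does.
 */
static void example_queue_gvma_hfence(struct kvm *kvm, gpa_t gpa,
				      gpa_t gpsz, unsigned long order)
{
	struct kvm_riscv_hfence data;

	data.type = KVM_RISCV_HFENCE_GVMA_VMID_GPA;
	data.asid = 0;
	/* Snapshot the vmid once; the processing side now uses d.vmid. */
	data.vmid = READ_ONCE(kvm->arch.vmid.vmid);
	data.addr = gpa;
	data.size = gpsz;
	data.order = order;
	/* ...then enqueue via make_xfence_request(), as in the diff below. */
}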
Reviewed-by: Atish Patra
Signed-off-by: Anup Patel
Reviewed-by: Nutty Liu
Tested-by: Atish Patra
---
 arch/riscv/include/asm/kvm_tlb.h |  1 +
 arch/riscv/kvm/tlb.c             | 30 ++++++++++++++++--------------
 2 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_tlb.h b/arch/riscv/include/asm/kvm_tlb.h
index cd00c9a46cb1..f67e03edeaec 100644
--- a/arch/riscv/include/asm/kvm_tlb.h
+++ b/arch/riscv/include/asm/kvm_tlb.h
@@ -19,6 +19,7 @@ enum kvm_riscv_hfence_type {
 struct kvm_riscv_hfence {
 	enum kvm_riscv_hfence_type type;
 	unsigned long asid;
+	unsigned long vmid;
 	unsigned long order;
 	gpa_t addr;
 	gpa_t size;
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index 6fc4361c3d75..349fcfc93f54 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -237,49 +237,43 @@ static bool vcpu_hfence_enqueue(struct kvm_vcpu *vcpu,
 
 void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu)
 {
-	unsigned long vmid;
 	struct kvm_riscv_hfence d = { 0 };
-	struct kvm_vmid *v = &vcpu->kvm->arch.vmid;
 
 	while (vcpu_hfence_dequeue(vcpu, &d)) {
 		switch (d.type) {
 		case KVM_RISCV_HFENCE_UNKNOWN:
 			break;
 		case KVM_RISCV_HFENCE_GVMA_VMID_GPA:
-			vmid = READ_ONCE(v->vmid);
 			if (kvm_riscv_nacl_available())
-				nacl_hfence_gvma_vmid(nacl_shmem(), vmid,
+				nacl_hfence_gvma_vmid(nacl_shmem(), d.vmid,
 						      d.addr, d.size, d.order);
 			else
-				kvm_riscv_local_hfence_gvma_vmid_gpa(vmid, d.addr,
+				kvm_riscv_local_hfence_gvma_vmid_gpa(d.vmid, d.addr,
 								     d.size, d.order);
 			break;
 		case KVM_RISCV_HFENCE_VVMA_ASID_GVA:
 			kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD);
-			vmid = READ_ONCE(v->vmid);
 			if (kvm_riscv_nacl_available())
-				nacl_hfence_vvma_asid(nacl_shmem(), vmid, d.asid,
+				nacl_hfence_vvma_asid(nacl_shmem(), d.vmid, d.asid,
 						      d.addr, d.size, d.order);
 			else
-				kvm_riscv_local_hfence_vvma_asid_gva(vmid, d.asid, d.addr,
+				kvm_riscv_local_hfence_vvma_asid_gva(d.vmid, d.asid, d.addr,
 								     d.size, d.order);
 			break;
 		case KVM_RISCV_HFENCE_VVMA_ASID_ALL:
 			kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD);
-			vmid = READ_ONCE(v->vmid);
 			if (kvm_riscv_nacl_available())
-				nacl_hfence_vvma_asid_all(nacl_shmem(), vmid, d.asid);
+				nacl_hfence_vvma_asid_all(nacl_shmem(), d.vmid, d.asid);
 			else
-				kvm_riscv_local_hfence_vvma_asid_all(vmid, d.asid);
+				kvm_riscv_local_hfence_vvma_asid_all(d.vmid, d.asid);
 			break;
 		case KVM_RISCV_HFENCE_VVMA_GVA:
 			kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_RCVD);
-			vmid = READ_ONCE(v->vmid);
 			if (kvm_riscv_nacl_available())
-				nacl_hfence_vvma(nacl_shmem(), vmid,
+				nacl_hfence_vvma(nacl_shmem(), d.vmid,
 						 d.addr, d.size, d.order);
 			else
-				kvm_riscv_local_hfence_vvma_gva(vmid, d.addr,
+				kvm_riscv_local_hfence_vvma_gva(d.vmid, d.addr,
 								d.size, d.order);
 			break;
 		default:
@@ -336,10 +330,12 @@ void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
 				    gpa_t gpa, gpa_t gpsz,
 				    unsigned long order)
 {
+	struct kvm_vmid *v = &kvm->arch.vmid;
 	struct kvm_riscv_hfence data;
 
 	data.type = KVM_RISCV_HFENCE_GVMA_VMID_GPA;
 	data.asid = 0;
+	data.vmid = READ_ONCE(v->vmid);
 	data.addr = gpa;
 	data.size = gpsz;
 	data.order = order;
@@ -359,10 +355,12 @@ void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
 				    unsigned long gva, unsigned long gvsz,
 				    unsigned long order, unsigned long asid)
 {
+	struct kvm_vmid *v = &kvm->arch.vmid;
 	struct kvm_riscv_hfence data;
 
 	data.type = KVM_RISCV_HFENCE_VVMA_ASID_GVA;
 	data.asid = asid;
+	data.vmid = READ_ONCE(v->vmid);
 	data.addr = gva;
 	data.size = gvsz;
 	data.order = order;
@@ -374,10 +372,12 @@ void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
 				    unsigned long asid)
 {
+	struct kvm_vmid *v = &kvm->arch.vmid;
 	struct kvm_riscv_hfence data;
 
 	data.type = KVM_RISCV_HFENCE_VVMA_ASID_ALL;
 	data.asid = asid;
+	data.vmid = READ_ONCE(v->vmid);
 	data.addr = data.size = data.order = 0;
 	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
 			    KVM_REQ_HFENCE_VVMA_ALL, &data);
@@ -388,10 +388,12 @@ void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
 			       unsigned long gva, unsigned long gvsz,
 			       unsigned long order)
 {
+	struct kvm_vmid *v = &kvm->arch.vmid;
 	struct kvm_riscv_hfence data;
 
 	data.type = KVM_RISCV_HFENCE_VVMA_GVA;
 	data.asid = 0;
+	data.vmid = READ_ONCE(v->vmid);
 	data.addr = gva;
 	data.size = gvsz;
 	data.order = order;
-- 
2.43.0
From nobody Thu Oct 9 09:00:48 2025

From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Anup Patel
Subject: [PATCH v3 11/12] RISC-V: KVM: Factor-out g-stage page table management
Date: Wed, 18 Jun 2025 17:05:31 +0530
Message-ID: <20250618113532.471448-12-apatel@ventanamicro.com>
In-Reply-To: <20250618113532.471448-1-apatel@ventanamicro.com>
References: <20250618113532.471448-1-apatel@ventanamicro.com>

The upcoming nested virtualization can share g-stage page table
management with the current host g-stage implementation, so factor out
g-stage page table management into separate sources and use the
"kvm_riscv_mmu_" prefix for the host g-stage functions.
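The struct kvm_gstage introduced below bundles the state a page table walk needs (kvm, flags, vmid, pgd), so host and nested callers can share one set of helpers. A minimal sketch, mirroring how the mmu.c hunks below instantiate it for the host g-stage; the helper name is hypothetical:

/*
 * Sketch based on the mmu.c hunks below: describe the host g-stage so
 * the shared kvm_riscv_gstage_*() helpers can operate on it.
 */
static void example_init_host_gstage(struct kvm *kvm, struct kvm_gstage *gstage)
{
	gstage->kvm = kvm;
	gstage->flags = 0;	/* 0 = remote fences; KVM_GSTAGE_FLAGS_LOCAL for local-only */
	gstage->vmid = READ_ONCE(kvm->arch.vmid.vmid);
	gstage->pgd = kvm->arch.pgd;
}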
Signed-off-by: Anup Patel
Reviewed-by: Nutty Liu
Tested-by: Atish Patra
---
 arch/riscv/include/asm/kvm_gstage.h |  72 ++++
 arch/riscv/include/asm/kvm_mmu.h    |  32 +-
 arch/riscv/kvm/Makefile             |   1 +
 arch/riscv/kvm/aia_imsic.c          |  11 +-
 arch/riscv/kvm/gstage.c             | 337 +++++++++++++++++++
 arch/riscv/kvm/main.c               |   2 +-
 arch/riscv/kvm/mmu.c                | 492 ++++++----------------------
 arch/riscv/kvm/vcpu.c               |   4 +-
 arch/riscv/kvm/vcpu_exit.c          |   5 +-
 arch/riscv/kvm/vm.c                 |   6 +-
 10 files changed, 530 insertions(+), 432 deletions(-)
 create mode 100644 arch/riscv/include/asm/kvm_gstage.h
 create mode 100644 arch/riscv/kvm/gstage.c

diff --git a/arch/riscv/include/asm/kvm_gstage.h b/arch/riscv/include/asm/kvm_gstage.h
new file mode 100644
index 000000000000..595e2183173e
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_gstage.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2019 Western Digital Corporation or its affiliates.
+ * Copyright (c) 2025 Ventana Micro Systems Inc.
+ */
+
+#ifndef __RISCV_KVM_GSTAGE_H_
+#define __RISCV_KVM_GSTAGE_H_
+
+#include
+
+struct kvm_gstage {
+	struct kvm *kvm;
+	unsigned long flags;
+#define KVM_GSTAGE_FLAGS_LOCAL		BIT(0)
+	unsigned long vmid;
+	pgd_t *pgd;
+};
+
+struct kvm_gstage_mapping {
+	gpa_t addr;
+	pte_t pte;
+	u32 level;
+};
+
+#ifdef CONFIG_64BIT
+#define kvm_riscv_gstage_index_bits	9
+#else
+#define kvm_riscv_gstage_index_bits	10
+#endif
+
+extern unsigned long kvm_riscv_gstage_mode;
+extern unsigned long kvm_riscv_gstage_pgd_levels;
+
+#define kvm_riscv_gstage_pgd_xbits	2
+#define kvm_riscv_gstage_pgd_size	(1UL << (HGATP_PAGE_SHIFT + kvm_riscv_gstage_pgd_xbits))
+#define kvm_riscv_gstage_gpa_bits	(HGATP_PAGE_SHIFT + \
+					 (kvm_riscv_gstage_pgd_levels * \
+					  kvm_riscv_gstage_index_bits) + \
+					 kvm_riscv_gstage_pgd_xbits)
+#define kvm_riscv_gstage_gpa_size	((gpa_t)(1ULL << kvm_riscv_gstage_gpa_bits))
+
+bool kvm_riscv_gstage_get_leaf(struct kvm_gstage *gstage, gpa_t addr,
+			       pte_t **ptepp, u32 *ptep_level);
+
+int kvm_riscv_gstage_set_pte(struct kvm_gstage *gstage,
+			     struct kvm_mmu_memory_cache *pcache,
+			     const struct kvm_gstage_mapping *map);
+
+int kvm_riscv_gstage_map_page(struct kvm_gstage *gstage,
+			      struct kvm_mmu_memory_cache *pcache,
+			      gpa_t gpa, phys_addr_t hpa, unsigned long page_size,
+			      bool page_rdonly, bool page_exec,
+			      struct kvm_gstage_mapping *out_map);
+
+enum kvm_riscv_gstage_op {
+	GSTAGE_OP_NOP = 0,	/* Nothing */
+	GSTAGE_OP_CLEAR,	/* Clear/Unmap */
+	GSTAGE_OP_WP,		/* Write-protect */
+};
+
+void kvm_riscv_gstage_op_pte(struct kvm_gstage *gstage, gpa_t addr,
+			     pte_t *ptep, u32 ptep_level, enum kvm_riscv_gstage_op op);
+
+void kvm_riscv_gstage_unmap_range(struct kvm_gstage *gstage,
+				  gpa_t start, gpa_t size, bool may_block);
+
+void kvm_riscv_gstage_wp_range(struct kvm_gstage *gstage, gpa_t start, gpa_t end);
+
+void kvm_riscv_gstage_mode_detect(void);
+
+#endif
diff --git a/arch/riscv/include/asm/kvm_mmu.h b/arch/riscv/include/asm/kvm_mmu.h
index 91c11e692dc7..5439e76f0a96 100644
--- a/arch/riscv/include/asm/kvm_mmu.h
+++ b/arch/riscv/include/asm/kvm_mmu.h
@@ -6,28 +6,16 @@
 #ifndef __RISCV_KVM_MMU_H_
 #define __RISCV_KVM_MMU_H_
 
-#include
+#include
 
-struct kvm_gstage_mapping {
-	gpa_t addr;
-	pte_t pte;
-	u32 level;
-};
-
-int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
-			     phys_addr_t hpa, unsigned long size,
-			     bool writable, bool in_atomic);
-void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa,
-			      unsigned long size);
-int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
-			 struct kvm_memory_slot *memslot,
-			 gpa_t gpa, unsigned long hva, bool is_write,
-			 struct kvm_gstage_mapping *out_map);
-int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm);
-void kvm_riscv_gstage_free_pgd(struct kvm *kvm);
-void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu);
-void kvm_riscv_gstage_mode_detect(void);
-unsigned long kvm_riscv_gstage_mode(void);
-int kvm_riscv_gstage_gpa_bits(void);
+int kvm_riscv_mmu_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa,
+			  unsigned long size, bool writable, bool in_atomic);
+void kvm_riscv_mmu_iounmap(struct kvm *kvm, gpa_t gpa, unsigned long size);
+int kvm_riscv_mmu_map(struct kvm_vcpu *vcpu, struct kvm_memory_slot *memslot,
+		      gpa_t gpa, unsigned long hva, bool is_write,
+		      struct kvm_gstage_mapping *out_map);
+int kvm_riscv_mmu_alloc_pgd(struct kvm *kvm);
+void kvm_riscv_mmu_free_pgd(struct kvm *kvm);
+void kvm_riscv_mmu_update_hgatp(struct kvm_vcpu *vcpu);
 
 #endif
diff --git a/arch/riscv/kvm/Makefile b/arch/riscv/kvm/Makefile
index 06e2d52a9b88..07197395750e 100644
--- a/arch/riscv/kvm/Makefile
+++ b/arch/riscv/kvm/Makefile
@@ -14,6 +14,7 @@ kvm-y += aia.o
 kvm-y += aia_aplic.o
 kvm-y += aia_device.o
 kvm-y += aia_imsic.o
+kvm-y += gstage.o
 kvm-y += main.o
 kvm-y += mmu.o
 kvm-y += nacl.o
diff --git a/arch/riscv/kvm/aia_imsic.c b/arch/riscv/kvm/aia_imsic.c
index 40b469c0a01f..ea1a36836d9c 100644
--- a/arch/riscv/kvm/aia_imsic.c
+++ b/arch/riscv/kvm/aia_imsic.c
@@ -704,9 +704,8 @@ void kvm_riscv_vcpu_aia_imsic_release(struct kvm_vcpu *vcpu)
 	 */
 
 	/* Purge the G-stage mapping */
-	kvm_riscv_gstage_iounmap(vcpu->kvm,
-				 vcpu->arch.aia_context.imsic_addr,
-				 IMSIC_MMIO_PAGE_SZ);
+	kvm_riscv_mmu_iounmap(vcpu->kvm, vcpu->arch.aia_context.imsic_addr,
+			      IMSIC_MMIO_PAGE_SZ);
 
 	/* TODO: Purge the IOMMU mapping ??? */
 
@@ -786,9 +785,9 @@ int kvm_riscv_vcpu_aia_imsic_update(struct kvm_vcpu *vcpu)
 	imsic_vsfile_local_clear(new_vsfile_hgei, imsic->nr_hw_eix);
 
 	/* Update G-stage mapping for the new IMSIC VS-file */
-	ret = kvm_riscv_gstage_ioremap(kvm, vcpu->arch.aia_context.imsic_addr,
-				       new_vsfile_pa, IMSIC_MMIO_PAGE_SZ,
-				       true, true);
+	ret = kvm_riscv_mmu_ioremap(kvm, vcpu->arch.aia_context.imsic_addr,
+				    new_vsfile_pa, IMSIC_MMIO_PAGE_SZ,
+				    true, true);
 	if (ret)
 		goto fail_free_vsfile_hgei;
 
diff --git a/arch/riscv/kvm/gstage.c b/arch/riscv/kvm/gstage.c
new file mode 100644
index 000000000000..9c7c44f09b05
--- /dev/null
+++ b/arch/riscv/kvm/gstage.c
@@ -0,0 +1,337 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2019 Western Digital Corporation or its affiliates.
+ * Copyright (c) 2025 Ventana Micro Systems Inc.
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+#ifdef CONFIG_64BIT
+unsigned long kvm_riscv_gstage_mode __ro_after_init = HGATP_MODE_SV39X4;
+unsigned long kvm_riscv_gstage_pgd_levels __ro_after_init = 3;
+#else
+unsigned long kvm_riscv_gstage_mode __ro_after_init = HGATP_MODE_SV32X4;
+unsigned long kvm_riscv_gstage_pgd_levels __ro_after_init = 2;
+#endif
+
+#define gstage_pte_leaf(__ptep)	\
+	(pte_val(*(__ptep)) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC))
+
+static inline unsigned long gstage_pte_index(gpa_t addr, u32 level)
+{
+	unsigned long mask;
+	unsigned long shift = HGATP_PAGE_SHIFT + (kvm_riscv_gstage_index_bits * level);
+
+	if (level == (kvm_riscv_gstage_pgd_levels - 1))
+		mask = (PTRS_PER_PTE * (1UL << kvm_riscv_gstage_pgd_xbits)) - 1;
+	else
+		mask = PTRS_PER_PTE - 1;
+
+	return (addr >> shift) & mask;
+}
+
+static inline unsigned long gstage_pte_page_vaddr(pte_t pte)
+{
+	return (unsigned long)pfn_to_virt(__page_val_to_pfn(pte_val(pte)));
+}
+
+static int gstage_page_size_to_level(unsigned long page_size, u32 *out_level)
+{
+	u32 i;
+	unsigned long psz = 1UL << 12;
+
+	for (i = 0; i < kvm_riscv_gstage_pgd_levels; i++) {
+		if (page_size == (psz << (i * kvm_riscv_gstage_index_bits))) {
+			*out_level = i;
+			return 0;
+		}
+	}
+
+	return -EINVAL;
+}
+
+static int gstage_level_to_page_order(u32 level, unsigned long *out_pgorder)
+{
+	if (kvm_riscv_gstage_pgd_levels < level)
+		return -EINVAL;
+
+	*out_pgorder = 12 + (level * kvm_riscv_gstage_index_bits);
+	return 0;
+}
+
+static int gstage_level_to_page_size(u32 level, unsigned long *out_pgsize)
+{
+	int rc;
+	unsigned long page_order = PAGE_SHIFT;
+
+	rc = gstage_level_to_page_order(level, &page_order);
+	if (rc)
+		return rc;
+
+	*out_pgsize = BIT(page_order);
+	return 0;
+}
+
+bool kvm_riscv_gstage_get_leaf(struct kvm_gstage *gstage, gpa_t addr,
+			       pte_t **ptepp, u32 *ptep_level)
+{
+	pte_t *ptep;
+	u32 current_level = kvm_riscv_gstage_pgd_levels - 1;
+
+	*ptep_level = current_level;
+	ptep = (pte_t *)gstage->pgd;
+	ptep = &ptep[gstage_pte_index(addr, current_level)];
+	while (ptep && pte_val(ptep_get(ptep))) {
+		if (gstage_pte_leaf(ptep)) {
+			*ptep_level = current_level;
+			*ptepp = ptep;
+			return true;
+		}
+
+		if (current_level) {
+			current_level--;
+			*ptep_level = current_level;
+			ptep = (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep));
+			ptep = &ptep[gstage_pte_index(addr, current_level)];
+		} else {
+			ptep = NULL;
+		}
+	}
+
+	return false;
+}
+
+static void gstage_tlb_flush(struct kvm_gstage *gstage, u32 level, gpa_t addr)
+{
+	unsigned long order = PAGE_SHIFT;
+
+	if (gstage_level_to_page_order(level, &order))
+		return;
+	addr &= ~(BIT(order) - 1);
+
+	if (gstage->flags & KVM_GSTAGE_FLAGS_LOCAL)
+		kvm_riscv_local_hfence_gvma_vmid_gpa(gstage->vmid, addr, BIT(order), order);
+	else
+		kvm_riscv_hfence_gvma_vmid_gpa(gstage->kvm, -1UL, 0, addr, BIT(order), order);
+}
+
+int kvm_riscv_gstage_set_pte(struct kvm_gstage *gstage,
+			     struct kvm_mmu_memory_cache *pcache,
+			     const struct kvm_gstage_mapping *map)
+{
+	u32 current_level = kvm_riscv_gstage_pgd_levels - 1;
+	pte_t *next_ptep = (pte_t *)gstage->pgd;
+	pte_t *ptep = &next_ptep[gstage_pte_index(map->addr, current_level)];
+
+	if (current_level < map->level)
+		return -EINVAL;
+
+	while (current_level != map->level) {
+		if (gstage_pte_leaf(ptep))
+			return -EEXIST;
+
+		if (!pte_val(ptep_get(ptep))) {
+			if (!pcache)
+				return -ENOMEM;
+			next_ptep = kvm_mmu_memory_cache_alloc(pcache);
+			if (!next_ptep)
+				return -ENOMEM;
+			set_pte(ptep, pfn_pte(PFN_DOWN(__pa(next_ptep)),
+					      __pgprot(_PAGE_TABLE)));
+		} else {
+			if (gstage_pte_leaf(ptep))
+				return -EEXIST;
+			next_ptep = (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep));
+		}
+
+		current_level--;
+		ptep = &next_ptep[gstage_pte_index(map->addr, current_level)];
+	}
+
+	if (pte_val(*ptep) != pte_val(map->pte)) {
+		set_pte(ptep, map->pte);
+		if (gstage_pte_leaf(ptep))
+			gstage_tlb_flush(gstage, current_level, map->addr);
+	}
+
+	return 0;
+}
+
+int kvm_riscv_gstage_map_page(struct kvm_gstage *gstage,
+			      struct kvm_mmu_memory_cache *pcache,
+			      gpa_t gpa, phys_addr_t hpa, unsigned long page_size,
+			      bool page_rdonly, bool page_exec,
+			      struct kvm_gstage_mapping *out_map)
+{
+	pgprot_t prot;
+	int ret;
+
+	out_map->addr = gpa;
+	out_map->level = 0;
+
+	ret = gstage_page_size_to_level(page_size, &out_map->level);
+	if (ret)
+		return ret;
+
+	/*
+	 * A RISC-V implementation can choose to either:
+	 * 1) Update 'A' and 'D' PTE bits in hardware
+	 * 2) Generate page fault when 'A' and/or 'D' bits are not set
+	 *    PTE so that software can update these bits.
+	 *
+	 * We support both options mentioned above. To achieve this, we
+	 * always set 'A' and 'D' PTE bits at time of creating G-stage
+	 * mapping. To support KVM dirty page logging with both options
+	 * mentioned above, we will write-protect G-stage PTEs to track
+	 * dirty pages.
+	 */
+
+	if (page_exec) {
+		if (page_rdonly)
+			prot = PAGE_READ_EXEC;
+		else
+			prot = PAGE_WRITE_EXEC;
+	} else {
+		if (page_rdonly)
+			prot = PAGE_READ;
+		else
+			prot = PAGE_WRITE;
+	}
+	out_map->pte = pfn_pte(PFN_DOWN(hpa), prot);
+	out_map->pte = pte_mkdirty(out_map->pte);
+
+	return kvm_riscv_gstage_set_pte(gstage, pcache, out_map);
+}
+
+void kvm_riscv_gstage_op_pte(struct kvm_gstage *gstage, gpa_t addr,
+			     pte_t *ptep, u32 ptep_level, enum kvm_riscv_gstage_op op)
+{
+	int i, ret;
+	pte_t old_pte, *next_ptep;
+	u32 next_ptep_level;
+	unsigned long next_page_size, page_size;
+
+	ret = gstage_level_to_page_size(ptep_level, &page_size);
+	if (ret)
+		return;
+
+	WARN_ON(addr & (page_size - 1));
+
+	if (!pte_val(ptep_get(ptep)))
+		return;
+
+	if (ptep_level && !gstage_pte_leaf(ptep)) {
+		next_ptep = (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep));
+		next_ptep_level = ptep_level - 1;
+		ret = gstage_level_to_page_size(next_ptep_level, &next_page_size);
+		if (ret)
+			return;
+
+		if (op == GSTAGE_OP_CLEAR)
+			set_pte(ptep, __pte(0));
+		for (i = 0; i < PTRS_PER_PTE; i++)
+			kvm_riscv_gstage_op_pte(gstage, addr + i * next_page_size,
+						&next_ptep[i], next_ptep_level, op);
+		if (op == GSTAGE_OP_CLEAR)
+			put_page(virt_to_page(next_ptep));
+	} else {
+		old_pte = *ptep;
+		if (op == GSTAGE_OP_CLEAR)
+			set_pte(ptep, __pte(0));
+		else if (op == GSTAGE_OP_WP)
+			set_pte(ptep, __pte(pte_val(ptep_get(ptep)) & ~_PAGE_WRITE));
+		if (pte_val(*ptep) != pte_val(old_pte))
+			gstage_tlb_flush(gstage, ptep_level, addr);
+	}
+}
+
+void kvm_riscv_gstage_unmap_range(struct kvm_gstage *gstage,
+				  gpa_t start, gpa_t size, bool may_block)
+{
+	int ret;
+	pte_t *ptep;
+	u32 ptep_level;
+	bool found_leaf;
+	unsigned long page_size;
+	gpa_t addr = start, end = start + size;
+
+	while (addr < end) {
+		found_leaf = kvm_riscv_gstage_get_leaf(gstage, addr, &ptep, &ptep_level);
+		ret = gstage_level_to_page_size(ptep_level, &page_size);
+		if (ret)
+			break;
+
+		if (!found_leaf)
+			goto next;
+
+		if (!(addr & (page_size - 1)) && ((end - addr) >= page_size))
+			kvm_riscv_gstage_op_pte(gstage, addr, ptep,
+						ptep_level, GSTAGE_OP_CLEAR);
+
+next:
+		addr += page_size;
+
+		/*
+		 * If the range is too large, release the kvm->mmu_lock
+		 * to prevent starvation and lockup detector warnings.
+		 */
+		if (!(gstage->flags & KVM_GSTAGE_FLAGS_LOCAL) && may_block && addr < end)
+			cond_resched_lock(&gstage->kvm->mmu_lock);
+	}
+}
+
+void kvm_riscv_gstage_wp_range(struct kvm_gstage *gstage, gpa_t start, gpa_t end)
+{
+	int ret;
+	pte_t *ptep;
+	u32 ptep_level;
+	bool found_leaf;
+	gpa_t addr = start;
+	unsigned long page_size;
+
+	while (addr < end) {
+		found_leaf = kvm_riscv_gstage_get_leaf(gstage, addr, &ptep, &ptep_level);
+		ret = gstage_level_to_page_size(ptep_level, &page_size);
+		if (ret)
+			break;
+
+		if (!found_leaf)
+			goto next;
+
+		if (!(addr & (page_size - 1)) && ((end - addr) >= page_size))
+			kvm_riscv_gstage_op_pte(gstage, addr, ptep,
+						ptep_level, GSTAGE_OP_WP);
+
+next:
+		addr += page_size;
+	}
+}
+
+void __init kvm_riscv_gstage_mode_detect(void)
+{
+#ifdef CONFIG_64BIT
+	/* Try Sv57x4 G-stage mode */
+	csr_write(CSR_HGATP, HGATP_MODE_SV57X4 << HGATP_MODE_SHIFT);
+	if ((csr_read(CSR_HGATP) >> HGATP_MODE_SHIFT) == HGATP_MODE_SV57X4) {
+		kvm_riscv_gstage_mode = HGATP_MODE_SV57X4;
+		kvm_riscv_gstage_pgd_levels = 5;
+		goto skip_sv48x4_test;
+	}
+
+	/* Try Sv48x4 G-stage mode */
+	csr_write(CSR_HGATP, HGATP_MODE_SV48X4 << HGATP_MODE_SHIFT);
+	if ((csr_read(CSR_HGATP) >> HGATP_MODE_SHIFT) == HGATP_MODE_SV48X4) {
+		kvm_riscv_gstage_mode = HGATP_MODE_SV48X4;
+		kvm_riscv_gstage_pgd_levels = 4;
+	}
+skip_sv48x4_test:
+
+	csr_write(CSR_HGATP, 0);
+	kvm_riscv_local_hfence_gvma_all();
+#endif
+}
diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
index b861a5dd7bd9..67c876de74ef 100644
--- a/arch/riscv/kvm/main.c
+++ b/arch/riscv/kvm/main.c
@@ -135,7 +135,7 @@ static int __init riscv_kvm_init(void)
 			  (rc) ? slist : "no features");
 	}
 
-	switch (kvm_riscv_gstage_mode()) {
+	switch (kvm_riscv_gstage_mode) {
 	case HGATP_MODE_SV32X4:
 		str = "Sv32x4";
 		break;
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 806614b3e46d..9f7dcd8cd741 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -6,9 +6,7 @@
  *     Anup Patel
  */
 
-#include
 #include
-#include
 #include
 #include
 #include
@@ -17,342 +15,28 @@
 #include
 #include
 #include
-#include
-#include
-
-#ifdef CONFIG_64BIT
-static unsigned long gstage_mode __ro_after_init = (HGATP_MODE_SV39X4 << HGATP_MODE_SHIFT);
-static unsigned long gstage_pgd_levels __ro_after_init = 3;
-#define gstage_index_bits	9
-#else
-static unsigned long gstage_mode __ro_after_init = (HGATP_MODE_SV32X4 << HGATP_MODE_SHIFT);
-static unsigned long gstage_pgd_levels __ro_after_init = 2;
-#define gstage_index_bits	10
-#endif
-
-#define gstage_pgd_xbits	2
-#define gstage_pgd_size	(1UL << (HGATP_PAGE_SHIFT + gstage_pgd_xbits))
-#define gstage_gpa_bits	(HGATP_PAGE_SHIFT + \
-			 (gstage_pgd_levels * gstage_index_bits) + \
-			 gstage_pgd_xbits)
-#define gstage_gpa_size	((gpa_t)(1ULL << gstage_gpa_bits))
-
-#define gstage_pte_leaf(__ptep)	\
-	(pte_val(*(__ptep)) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC))
-
-static inline unsigned long gstage_pte_index(gpa_t addr, u32 level)
-{
-	unsigned long mask;
-	unsigned long shift = HGATP_PAGE_SHIFT + (gstage_index_bits * level);
-
-	if (level == (gstage_pgd_levels - 1))
-		mask = (PTRS_PER_PTE * (1UL << gstage_pgd_xbits)) - 1;
-	else
-		mask = PTRS_PER_PTE - 1;
-
-	return (addr >> shift) & mask;
-}
-
-static inline unsigned long gstage_pte_page_vaddr(pte_t pte)
-{
-	return (unsigned long)pfn_to_virt(__page_val_to_pfn(pte_val(pte)));
-}
-
-static int gstage_page_size_to_level(unsigned long page_size, u32 *out_level)
-{
-	u32 i;
-	unsigned long psz = 1UL << 12;
-
-	for (i = 0; i < gstage_pgd_levels; i++) {
-		if (page_size == (psz << (i * gstage_index_bits))) {
-			*out_level = i;
-			return 0;
-		}
-	}
-
-	return -EINVAL;
-}
-
-static int gstage_level_to_page_order(u32 level, unsigned long *out_pgorder)
-{
-	if (gstage_pgd_levels < level)
-		return -EINVAL;
-
-	*out_pgorder = 12 + (level * gstage_index_bits);
-	return 0;
-}
-
-static int gstage_level_to_page_size(u32 level, unsigned long *out_pgsize)
-{
-	int rc;
-	unsigned long page_order = PAGE_SHIFT;
-
-	rc = gstage_level_to_page_order(level, &page_order);
-	if (rc)
-		return rc;
-
-	*out_pgsize = BIT(page_order);
-	return 0;
-}
-
-static bool gstage_get_leaf_entry(struct kvm *kvm, gpa_t addr,
-				  pte_t **ptepp, u32 *ptep_level)
-{
-	pte_t *ptep;
-	u32 current_level = gstage_pgd_levels - 1;
-
-	*ptep_level = current_level;
-	ptep = (pte_t *)kvm->arch.pgd;
-	ptep = &ptep[gstage_pte_index(addr, current_level)];
-	while (ptep && pte_val(ptep_get(ptep))) {
-		if (gstage_pte_leaf(ptep)) {
-			*ptep_level = current_level;
-			*ptepp = ptep;
-			return true;
-		}
-
-		if (current_level) {
-			current_level--;
-			*ptep_level = current_level;
-			ptep = (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep));
-			ptep = &ptep[gstage_pte_index(addr, current_level)];
-		} else {
-			ptep = NULL;
-		}
-	}
-
-	return false;
-}
-
-static void gstage_remote_tlb_flush(struct kvm *kvm, u32 level, gpa_t addr)
-{
-	unsigned long order = PAGE_SHIFT;
-
-	if (gstage_level_to_page_order(level, &order))
-		return;
-	addr &= ~(BIT(order) - 1);
-
-	kvm_riscv_hfence_gvma_vmid_gpa(kvm, -1UL, 0, addr, BIT(order), order);
-}
-
-static int gstage_set_pte(struct kvm *kvm,
-			  struct kvm_mmu_memory_cache *pcache,
-			  const struct kvm_gstage_mapping *map)
-{
-	u32 current_level = gstage_pgd_levels - 1;
-	pte_t *next_ptep = (pte_t *)kvm->arch.pgd;
-	pte_t *ptep = &next_ptep[gstage_pte_index(map->addr, current_level)];
-
-	if (current_level < map->level)
-		return -EINVAL;
-
-	while (current_level != map->level) {
-		if (gstage_pte_leaf(ptep))
-			return -EEXIST;
-
-		if (!pte_val(ptep_get(ptep))) {
-			if (!pcache)
-				return -ENOMEM;
-			next_ptep = kvm_mmu_memory_cache_alloc(pcache);
-			if (!next_ptep)
-				return -ENOMEM;
-			set_pte(ptep, pfn_pte(PFN_DOWN(__pa(next_ptep)),
-					      __pgprot(_PAGE_TABLE)));
-		} else {
-			if (gstage_pte_leaf(ptep))
-				return -EEXIST;
-			next_ptep = (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep));
-		}
-
-		current_level--;
-		ptep = &next_ptep[gstage_pte_index(map->addr, current_level)];
-	}
-
-	if (pte_val(*ptep) != pte_val(map->pte)) {
-		set_pte(ptep, map->pte);
-		if (gstage_pte_leaf(ptep))
-			gstage_remote_tlb_flush(kvm, current_level, map->addr);
-	}
-
-	return 0;
-}
-
-static int gstage_map_page(struct kvm *kvm,
-			   struct kvm_mmu_memory_cache *pcache,
-			   gpa_t gpa, phys_addr_t hpa,
-			   unsigned long page_size,
-			   bool page_rdonly, bool page_exec,
-			   struct kvm_gstage_mapping *out_map)
-{
-	pgprot_t prot;
-	int ret;
-
-	out_map->addr = gpa;
-	out_map->level = 0;
-
-	ret = gstage_page_size_to_level(page_size, &out_map->level);
-	if (ret)
-		return ret;
-
-	/*
-	 * A RISC-V implementation can choose to either:
-	 * 1) Update 'A' and 'D' PTE bits in hardware
-	 * 2) Generate page fault when 'A' and/or 'D' bits are not set
-	 *    PTE so that software can update these bits.
-	 *
-	 * We support both options mentioned above. To achieve this, we
-	 * always set 'A' and 'D' PTE bits at time of creating G-stage
-	 * mapping. To support KVM dirty page logging with both options
-	 * mentioned above, we will write-protect G-stage PTEs to track
-	 * dirty pages.
- */ - - if (page_exec) { - if (page_rdonly) - prot =3D PAGE_READ_EXEC; - else - prot =3D PAGE_WRITE_EXEC; - } else { - if (page_rdonly) - prot =3D PAGE_READ; - else - prot =3D PAGE_WRITE; - } - out_map->pte =3D pfn_pte(PFN_DOWN(hpa), prot); - out_map->pte =3D pte_mkdirty(out_map->pte); - - return gstage_set_pte(kvm, pcache, out_map); -} - -enum gstage_op { - GSTAGE_OP_NOP =3D 0, /* Nothing */ - GSTAGE_OP_CLEAR, /* Clear/Unmap */ - GSTAGE_OP_WP, /* Write-protect */ -}; - -static void gstage_op_pte(struct kvm *kvm, gpa_t addr, - pte_t *ptep, u32 ptep_level, enum gstage_op op) -{ - int i, ret; - pte_t old_pte, *next_ptep; - u32 next_ptep_level; - unsigned long next_page_size, page_size; - - ret =3D gstage_level_to_page_size(ptep_level, &page_size); - if (ret) - return; - - BUG_ON(addr & (page_size - 1)); - - if (!pte_val(ptep_get(ptep))) - return; - - if (ptep_level && !gstage_pte_leaf(ptep)) { - next_ptep =3D (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep)); - next_ptep_level =3D ptep_level - 1; - ret =3D gstage_level_to_page_size(next_ptep_level, - &next_page_size); - if (ret) - return; - - if (op =3D=3D GSTAGE_OP_CLEAR) - set_pte(ptep, __pte(0)); - for (i =3D 0; i < PTRS_PER_PTE; i++) - gstage_op_pte(kvm, addr + i * next_page_size, - &next_ptep[i], next_ptep_level, op); - if (op =3D=3D GSTAGE_OP_CLEAR) - put_page(virt_to_page(next_ptep)); - } else { - old_pte =3D *ptep; - if (op =3D=3D GSTAGE_OP_CLEAR) - set_pte(ptep, __pte(0)); - else if (op =3D=3D GSTAGE_OP_WP) - set_pte(ptep, __pte(pte_val(ptep_get(ptep)) & ~_PAGE_WRITE)); - if (pte_val(*ptep) !=3D pte_val(old_pte)) - gstage_remote_tlb_flush(kvm, ptep_level, addr); - } -} - -static void gstage_unmap_range(struct kvm *kvm, gpa_t start, - gpa_t size, bool may_block) -{ - int ret; - pte_t *ptep; - u32 ptep_level; - bool found_leaf; - unsigned long page_size; - gpa_t addr =3D start, end =3D start + size; - - while (addr < end) { - found_leaf =3D gstage_get_leaf_entry(kvm, addr, - &ptep, &ptep_level); - ret =3D gstage_level_to_page_size(ptep_level, &page_size); - if (ret) - break; - - if (!found_leaf) - goto next; - - if (!(addr & (page_size - 1)) && ((end - addr) >=3D page_size)) - gstage_op_pte(kvm, addr, ptep, - ptep_level, GSTAGE_OP_CLEAR); - -next: - addr +=3D page_size; - - /* - * If the range is too large, release the kvm->mmu_lock - * to prevent starvation and lockup detector warnings. 
- */ - if (may_block && addr < end) - cond_resched_lock(&kvm->mmu_lock); - } -} - -static void gstage_wp_range(struct kvm *kvm, gpa_t start, gpa_t end) -{ - int ret; - pte_t *ptep; - u32 ptep_level; - bool found_leaf; - gpa_t addr =3D start; - unsigned long page_size; - - while (addr < end) { - found_leaf =3D gstage_get_leaf_entry(kvm, addr, - &ptep, &ptep_level); - ret =3D gstage_level_to_page_size(ptep_level, &page_size); - if (ret) - break; - - if (!found_leaf) - goto next; - - if (!(addr & (page_size - 1)) && ((end - addr) >=3D page_size)) - gstage_op_pte(kvm, addr, ptep, - ptep_level, GSTAGE_OP_WP); - -next: - addr +=3D page_size; - } -} - -static void gstage_wp_memory_region(struct kvm *kvm, int slot) +static void mmu_wp_memory_region(struct kvm *kvm, int slot) { struct kvm_memslots *slots =3D kvm_memslots(kvm); struct kvm_memory_slot *memslot =3D id_to_memslot(slots, slot); phys_addr_t start =3D memslot->base_gfn << PAGE_SHIFT; phys_addr_t end =3D (memslot->base_gfn + memslot->npages) << PAGE_SHIFT; + struct kvm_gstage gstage; + + gstage.kvm =3D kvm; + gstage.flags =3D 0; + gstage.vmid =3D READ_ONCE(kvm->arch.vmid.vmid); + gstage.pgd =3D kvm->arch.pgd; =20 spin_lock(&kvm->mmu_lock); - gstage_wp_range(kvm, start, end); + kvm_riscv_gstage_wp_range(&gstage, start, end); spin_unlock(&kvm->mmu_lock); kvm_flush_remote_tlbs_memslot(kvm, memslot); } =20 -int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa, - phys_addr_t hpa, unsigned long size, - bool writable, bool in_atomic) +int kvm_riscv_mmu_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa, + unsigned long size, bool writable, bool in_atomic) { int ret =3D 0; unsigned long pfn; @@ -362,6 +46,12 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa, .gfp_zero =3D __GFP_ZERO, }; struct kvm_gstage_mapping map; + struct kvm_gstage gstage; + + gstage.kvm =3D kvm; + gstage.flags =3D 0; + gstage.vmid =3D READ_ONCE(kvm->arch.vmid.vmid); + gstage.pgd =3D kvm->arch.pgd; =20 end =3D (gpa + size + PAGE_SIZE - 1) & PAGE_MASK; pfn =3D __phys_to_pfn(hpa); @@ -374,12 +64,12 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gp= a, if (!writable) map.pte =3D pte_wrprotect(map.pte); =20 - ret =3D kvm_mmu_topup_memory_cache(&pcache, gstage_pgd_levels); + ret =3D kvm_mmu_topup_memory_cache(&pcache, kvm_riscv_gstage_pgd_levels); if (ret) goto out; =20 spin_lock(&kvm->mmu_lock); - ret =3D gstage_set_pte(kvm, &pcache, &map); + ret =3D kvm_riscv_gstage_set_pte(&gstage, &pcache, &map); spin_unlock(&kvm->mmu_lock); if (ret) goto out; @@ -392,10 +82,17 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gp= a, return ret; } =20 -void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa, unsigned long si= ze) +void kvm_riscv_mmu_iounmap(struct kvm *kvm, gpa_t gpa, unsigned long size) { + struct kvm_gstage gstage; + + gstage.kvm =3D kvm; + gstage.flags =3D 0; + gstage.vmid =3D READ_ONCE(kvm->arch.vmid.vmid); + gstage.pgd =3D kvm->arch.pgd; + spin_lock(&kvm->mmu_lock); - gstage_unmap_range(kvm, gpa, size, false); + kvm_riscv_gstage_unmap_range(&gstage, gpa, size, false); spin_unlock(&kvm->mmu_lock); } =20 @@ -407,8 +104,14 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kv= m *kvm, phys_addr_t base_gfn =3D slot->base_gfn + gfn_offset; phys_addr_t start =3D (base_gfn + __ffs(mask)) << PAGE_SHIFT; phys_addr_t end =3D (base_gfn + __fls(mask) + 1) << PAGE_SHIFT; + struct kvm_gstage gstage; + + gstage.kvm =3D kvm; + gstage.flags =3D 0; + gstage.vmid =3D READ_ONCE(kvm->arch.vmid.vmid); + gstage.pgd =3D kvm->arch.pgd; =20 - gstage_wp_range(kvm, 
start, end); + kvm_riscv_gstage_wp_range(&gstage, start, end); } =20 void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *mems= lot) @@ -425,7 +128,7 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen) =20 void kvm_arch_flush_shadow_all(struct kvm *kvm) { - kvm_riscv_gstage_free_pgd(kvm); + kvm_riscv_mmu_free_pgd(kvm); } =20 void kvm_arch_flush_shadow_memslot(struct kvm *kvm, @@ -433,9 +136,15 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm, { gpa_t gpa =3D slot->base_gfn << PAGE_SHIFT; phys_addr_t size =3D slot->npages << PAGE_SHIFT; + struct kvm_gstage gstage; + + gstage.kvm =3D kvm; + gstage.flags =3D 0; + gstage.vmid =3D READ_ONCE(kvm->arch.vmid.vmid); + gstage.pgd =3D kvm->arch.pgd; =20 spin_lock(&kvm->mmu_lock); - gstage_unmap_range(kvm, gpa, size, false); + kvm_riscv_gstage_unmap_range(&gstage, gpa, size, false); spin_unlock(&kvm->mmu_lock); } =20 @@ -450,7 +159,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm, * the memory slot is write protected. */ if (change !=3D KVM_MR_DELETE && new->flags & KVM_MEM_LOG_DIRTY_PAGES) - gstage_wp_memory_region(kvm, new->id); + mmu_wp_memory_region(kvm, new->id); } =20 int kvm_arch_prepare_memory_region(struct kvm *kvm, @@ -472,7 +181,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, * space addressable by the KVM guest GPA space. */ if ((new->base_gfn + new->npages) >=3D - (gstage_gpa_size >> PAGE_SHIFT)) + (kvm_riscv_gstage_gpa_size >> PAGE_SHIFT)) return -EFAULT; =20 hva =3D new->userspace_addr; @@ -528,9 +237,8 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, goto out; } =20 - ret =3D kvm_riscv_gstage_ioremap(kvm, gpa, pa, - vm_end - vm_start, - writable, false); + ret =3D kvm_riscv_mmu_ioremap(kvm, gpa, pa, vm_end - vm_start, + writable, false); if (ret) break; } @@ -541,7 +249,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, goto out; =20 if (ret) - kvm_riscv_gstage_iounmap(kvm, base_gpa, size); + kvm_riscv_mmu_iounmap(kvm, base_gpa, size); =20 out: mmap_read_unlock(current->mm); @@ -550,12 +258,18 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm, =20 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) { + struct kvm_gstage gstage; + if (!kvm->arch.pgd) return false; =20 - gstage_unmap_range(kvm, range->start << PAGE_SHIFT, - (range->end - range->start) << PAGE_SHIFT, - range->may_block); + gstage.kvm =3D kvm; + gstage.flags =3D 0; + gstage.vmid =3D READ_ONCE(kvm->arch.vmid.vmid); + gstage.pgd =3D kvm->arch.pgd; + kvm_riscv_gstage_unmap_range(&gstage, range->start << PAGE_SHIFT, + (range->end - range->start) << PAGE_SHIFT, + range->may_block); return false; } =20 @@ -564,14 +278,19 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_rang= e *range) pte_t *ptep; u32 ptep_level =3D 0; u64 size =3D (range->end - range->start) << PAGE_SHIFT; + struct kvm_gstage gstage; =20 if (!kvm->arch.pgd) return false; =20 WARN_ON(size !=3D PAGE_SIZE && size !=3D PMD_SIZE && size !=3D PUD_SIZE); =20 - if (!gstage_get_leaf_entry(kvm, range->start << PAGE_SHIFT, - &ptep, &ptep_level)) + gstage.kvm =3D kvm; + gstage.flags =3D 0; + gstage.vmid =3D READ_ONCE(kvm->arch.vmid.vmid); + gstage.pgd =3D kvm->arch.pgd; + if (!kvm_riscv_gstage_get_leaf(&gstage, range->start << PAGE_SHIFT, + &ptep, &ptep_level)) return false; =20 return ptep_test_and_clear_young(NULL, 0, ptep); @@ -582,23 +301,27 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn= _range *range) pte_t *ptep; u32 ptep_level =3D 0; u64 size =3D (range->end - range->start) << PAGE_SHIFT; + struct kvm_gstage gstage; =20 if 
(!kvm->arch.pgd) return false; =20 WARN_ON(size !=3D PAGE_SIZE && size !=3D PMD_SIZE && size !=3D PUD_SIZE); =20 - if (!gstage_get_leaf_entry(kvm, range->start << PAGE_SHIFT, - &ptep, &ptep_level)) + gstage.kvm =3D kvm; + gstage.flags =3D 0; + gstage.vmid =3D READ_ONCE(kvm->arch.vmid.vmid); + gstage.pgd =3D kvm->arch.pgd; + if (!kvm_riscv_gstage_get_leaf(&gstage, range->start << PAGE_SHIFT, + &ptep, &ptep_level)) return false; =20 return pte_young(ptep_get(ptep)); } =20 -int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, - struct kvm_memory_slot *memslot, - gpa_t gpa, unsigned long hva, bool is_write, - struct kvm_gstage_mapping *out_map) +int kvm_riscv_mmu_map(struct kvm_vcpu *vcpu, struct kvm_memory_slot *memsl= ot, + gpa_t gpa, unsigned long hva, bool is_write, + struct kvm_gstage_mapping *out_map) { int ret; kvm_pfn_t hfn; @@ -611,13 +334,19 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, bool logging =3D (memslot->dirty_bitmap && !(memslot->flags & KVM_MEM_READONLY)) ? true : false; unsigned long vma_pagesize, mmu_seq; + struct kvm_gstage gstage; struct page *page; =20 + gstage.kvm =3D kvm; + gstage.flags =3D 0; + gstage.vmid =3D READ_ONCE(kvm->arch.vmid.vmid); + gstage.pgd =3D kvm->arch.pgd; + /* Setup initial state of output mapping */ memset(out_map, 0, sizeof(*out_map)); =20 /* We need minimum second+third level pages */ - ret =3D kvm_mmu_topup_memory_cache(pcache, gstage_pgd_levels); + ret =3D kvm_mmu_topup_memory_cache(pcache, kvm_riscv_gstage_pgd_levels); if (ret) { kvm_err("Failed to topup G-stage cache\n"); return ret; @@ -684,11 +413,11 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, =20 if (writable) { mark_page_dirty(kvm, gfn); - ret =3D gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT, - vma_pagesize, false, true, out_map); + ret =3D kvm_riscv_gstage_map_page(&gstage, pcache, gpa, hfn << PAGE_SHIF= T, + vma_pagesize, false, true, out_map); } else { - ret =3D gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT, - vma_pagesize, true, true, out_map); + ret =3D kvm_riscv_gstage_map_page(&gstage, pcache, gpa, hfn << PAGE_SHIF= T, + vma_pagesize, true, true, out_map); } =20 if (ret) @@ -700,7 +429,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, return ret; } =20 -int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm) +int kvm_riscv_mmu_alloc_pgd(struct kvm *kvm) { struct page *pgd_page; =20 @@ -710,7 +439,7 @@ int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm) } =20 pgd_page =3D alloc_pages(GFP_KERNEL | __GFP_ZERO, - get_order(gstage_pgd_size)); + get_order(kvm_riscv_gstage_pgd_size)); if (!pgd_page) return -ENOMEM; kvm->arch.pgd =3D page_to_virt(pgd_page); @@ -719,13 +448,18 @@ int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm) return 0; } =20 -void kvm_riscv_gstage_free_pgd(struct kvm *kvm) +void kvm_riscv_mmu_free_pgd(struct kvm *kvm) { + struct kvm_gstage gstage; void *pgd =3D NULL; =20 spin_lock(&kvm->mmu_lock); if (kvm->arch.pgd) { - gstage_unmap_range(kvm, 0UL, gstage_gpa_size, false); + gstage.kvm =3D kvm; + gstage.flags =3D 0; + gstage.vmid =3D READ_ONCE(kvm->arch.vmid.vmid); + gstage.pgd =3D kvm->arch.pgd; + kvm_riscv_gstage_unmap_range(&gstage, 0UL, kvm_riscv_gstage_gpa_size, fa= lse); pgd =3D READ_ONCE(kvm->arch.pgd); kvm->arch.pgd =3D NULL; kvm->arch.pgd_phys =3D 0; @@ -733,12 +467,12 @@ void kvm_riscv_gstage_free_pgd(struct kvm *kvm) spin_unlock(&kvm->mmu_lock); =20 if (pgd) - free_pages((unsigned long)pgd, get_order(gstage_pgd_size)); + free_pages((unsigned long)pgd, get_order(kvm_riscv_gstage_pgd_size)); } =20 -void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu 
*vcpu) +void kvm_riscv_mmu_update_hgatp(struct kvm_vcpu *vcpu) { - unsigned long hgatp =3D gstage_mode; + unsigned long hgatp =3D kvm_riscv_gstage_mode << HGATP_MODE_SHIFT; struct kvm_arch *k =3D &vcpu->kvm->arch; =20 hgatp |=3D (READ_ONCE(k->vmid.vmid) << HGATP_VMID_SHIFT) & HGATP_VMID; @@ -749,37 +483,3 @@ void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vc= pu) if (!kvm_riscv_gstage_vmid_bits()) kvm_riscv_local_hfence_gvma_all(); } - -void __init kvm_riscv_gstage_mode_detect(void) -{ -#ifdef CONFIG_64BIT - /* Try Sv57x4 G-stage mode */ - csr_write(CSR_HGATP, HGATP_MODE_SV57X4 << HGATP_MODE_SHIFT); - if ((csr_read(CSR_HGATP) >> HGATP_MODE_SHIFT) =3D=3D HGATP_MODE_SV57X4) { - gstage_mode =3D (HGATP_MODE_SV57X4 << HGATP_MODE_SHIFT); - gstage_pgd_levels =3D 5; - goto skip_sv48x4_test; - } - - /* Try Sv48x4 G-stage mode */ - csr_write(CSR_HGATP, HGATP_MODE_SV48X4 << HGATP_MODE_SHIFT); - if ((csr_read(CSR_HGATP) >> HGATP_MODE_SHIFT) =3D=3D HGATP_MODE_SV48X4) { - gstage_mode =3D (HGATP_MODE_SV48X4 << HGATP_MODE_SHIFT); - gstage_pgd_levels =3D 4; - } -skip_sv48x4_test: - - csr_write(CSR_HGATP, 0); - kvm_riscv_local_hfence_gvma_all(); -#endif -} - -unsigned long __init kvm_riscv_gstage_mode(void) -{ - return gstage_mode >> HGATP_MODE_SHIFT; -} - -int kvm_riscv_gstage_gpa_bits(void) -{ - return gstage_gpa_bits; -} diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c index 8ad7b31f5939..fe028b4274df 100644 --- a/arch/riscv/kvm/vcpu.c +++ b/arch/riscv/kvm/vcpu.c @@ -632,7 +632,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) } } =20 - kvm_riscv_gstage_update_hgatp(vcpu); + kvm_riscv_mmu_update_hgatp(vcpu); =20 kvm_riscv_vcpu_timer_restore(vcpu); =20 @@ -717,7 +717,7 @@ static void kvm_riscv_check_vcpu_requests(struct kvm_vc= pu *vcpu) kvm_riscv_reset_vcpu(vcpu, true); =20 if (kvm_check_request(KVM_REQ_UPDATE_HGATP, vcpu)) - kvm_riscv_gstage_update_hgatp(vcpu); + kvm_riscv_mmu_update_hgatp(vcpu); =20 if (kvm_check_request(KVM_REQ_FENCE_I, vcpu)) kvm_riscv_fence_i_process(vcpu); diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c index 6b4694bc07ea..0bb0c51e3c89 100644 --- a/arch/riscv/kvm/vcpu_exit.c +++ b/arch/riscv/kvm/vcpu_exit.c @@ -43,8 +43,9 @@ static int gstage_page_fault(struct kvm_vcpu *vcpu, struc= t kvm_run *run, }; } =20 - ret =3D kvm_riscv_gstage_map(vcpu, memslot, fault_addr, hva, - (trap->scause =3D=3D EXC_STORE_GUEST_PAGE_FAULT) ? true : false, &host_m= ap); + ret =3D kvm_riscv_mmu_map(vcpu, memslot, fault_addr, hva, + (trap->scause =3D=3D EXC_STORE_GUEST_PAGE_FAULT) ? 
+				&host_map);
 	if (ret < 0)
 		return ret;
 
diff --git a/arch/riscv/kvm/vm.c b/arch/riscv/kvm/vm.c
index 8601cf29e5f8..66d91ae6e9b2 100644
--- a/arch/riscv/kvm/vm.c
+++ b/arch/riscv/kvm/vm.c
@@ -32,13 +32,13 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 {
 	int r;
 
-	r = kvm_riscv_gstage_alloc_pgd(kvm);
+	r = kvm_riscv_mmu_alloc_pgd(kvm);
 	if (r)
 		return r;
 
 	r = kvm_riscv_gstage_vmid_init(kvm);
 	if (r) {
-		kvm_riscv_gstage_free_pgd(kvm);
+		kvm_riscv_mmu_free_pgd(kvm);
 		return r;
 	}
 
@@ -200,7 +200,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		r = KVM_USER_MEM_SLOTS;
 		break;
 	case KVM_CAP_VM_GPA_BITS:
-		r = kvm_riscv_gstage_gpa_bits();
+		r = kvm_riscv_gstage_gpa_bits;
 		break;
 	default:
 		r = 0;
--
2.43.0

From nobody Thu Oct 9 09:00:48 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH v3 12/12] RISC-V: KVM: Pass VMID as parameter to kvm_riscv_hfence_xyz() APIs
Date: Wed, 18 Jun 2025 17:05:32 +0530
Message-ID: <20250618113532.471448-13-apatel@ventanamicro.com>
In-Reply-To: <20250618113532.471448-1-apatel@ventanamicro.com>
References: <20250618113532.471448-1-apatel@ventanamicro.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Currently, all kvm_riscv_hfence_xyz() APIs assume the VMID to be the
host VMID of the Guest/VM, which restricts these APIs to host TLB
maintenance only. Let's allow passing the VMID as a parameter to all
kvm_riscv_hfence_xyz() APIs so that they can be re-used for nested
virtualization related TLB maintenance.
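For illustration, a minimal sketch of the new calling convention for the
host-side case (hbase and hmask stand in for whatever hart mask the
caller already holds):

	/*
	 * Host TLB maintenance: resolve the host VMID explicitly and
	 * hand it to the fence helper. READ_ONCE() is used because the
	 * VMID allocator may update the VMID concurrently.
	 */
	unsigned long vmid = READ_ONCE(kvm->arch.vmid.vmid);

	kvm_riscv_hfence_vvma_all(kvm, hbase, hmask, vmid);

A nested virtualization caller can pass a shadow VMID through the same
parameter without any further change to these APIs.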
Signed-off-by: Anup Patel
Reviewed-by: Nutty Liu
Tested-by: Atish Patra
---
 arch/riscv/include/asm/kvm_tlb.h  | 17 ++++++---
 arch/riscv/kvm/gstage.c           |  3 +-
 arch/riscv/kvm/tlb.c              | 61 ++++++++++++++++++++-----------
 arch/riscv/kvm/vcpu_sbi_replace.c | 17 +++++----
 arch/riscv/kvm/vcpu_sbi_v01.c     | 25 ++++++-------
 5 files changed, 73 insertions(+), 50 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_tlb.h b/arch/riscv/include/asm/kvm_tlb.h
index f67e03edeaec..38a2f933ad3a 100644
--- a/arch/riscv/include/asm/kvm_tlb.h
+++ b/arch/riscv/include/asm/kvm_tlb.h
@@ -11,9 +11,11 @@
 enum kvm_riscv_hfence_type {
 	KVM_RISCV_HFENCE_UNKNOWN = 0,
 	KVM_RISCV_HFENCE_GVMA_VMID_GPA,
+	KVM_RISCV_HFENCE_GVMA_VMID_ALL,
 	KVM_RISCV_HFENCE_VVMA_ASID_GVA,
 	KVM_RISCV_HFENCE_VVMA_ASID_ALL,
 	KVM_RISCV_HFENCE_VVMA_GVA,
+	KVM_RISCV_HFENCE_VVMA_ALL
 };
 
 struct kvm_riscv_hfence {
@@ -59,21 +61,24 @@ void kvm_riscv_fence_i(struct kvm *kvm,
 void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
 				    gpa_t gpa, gpa_t gpsz,
-				    unsigned long order);
+				    unsigned long order, unsigned long vmid);
 void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask);
+				    unsigned long hbase, unsigned long hmask,
+				    unsigned long vmid);
 void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
 				    unsigned long gva, unsigned long gvsz,
-				    unsigned long order, unsigned long asid);
+				    unsigned long order, unsigned long asid,
+				    unsigned long vmid);
 void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
-				    unsigned long asid);
+				    unsigned long asid, unsigned long vmid);
 void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
 			       unsigned long hbase, unsigned long hmask,
 			       unsigned long gva, unsigned long gvsz,
-			       unsigned long order);
+			       unsigned long order, unsigned long vmid);
 void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
-			       unsigned long hbase, unsigned long hmask);
+			       unsigned long hbase, unsigned long hmask,
+			       unsigned long vmid);
 
 #endif
diff --git a/arch/riscv/kvm/gstage.c b/arch/riscv/kvm/gstage.c
index 9c7c44f09b05..24c270d6d0e2 100644
--- a/arch/riscv/kvm/gstage.c
+++ b/arch/riscv/kvm/gstage.c
@@ -117,7 +117,8 @@ static void gstage_tlb_flush(struct kvm_gstage *gstage, u32 level, gpa_t addr)
 	if (gstage->flags & KVM_GSTAGE_FLAGS_LOCAL)
 		kvm_riscv_local_hfence_gvma_vmid_gpa(gstage->vmid, addr, BIT(order), order);
 	else
-		kvm_riscv_hfence_gvma_vmid_gpa(gstage->kvm, -1UL, 0, addr, BIT(order), order);
+		kvm_riscv_hfence_gvma_vmid_gpa(gstage->kvm, -1UL, 0, addr, BIT(order), order,
+					       gstage->vmid);
 }
 
 int kvm_riscv_gstage_set_pte(struct kvm_gstage *gstage,
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index 349fcfc93f54..3c5a70a2b927 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -251,6 +251,12 @@ void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu)
 				kvm_riscv_local_hfence_gvma_vmid_gpa(d.vmid, d.addr,
 								     d.size, d.order);
 			break;
+		case KVM_RISCV_HFENCE_GVMA_VMID_ALL:
+			if (kvm_riscv_nacl_available())
+				nacl_hfence_gvma_vmid_all(nacl_shmem(), d.vmid);
+			else
+				kvm_riscv_local_hfence_gvma_vmid_all(d.vmid);
+			break;
 		case KVM_RISCV_HFENCE_VVMA_ASID_GVA:
 			kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD);
 			if (kvm_riscv_nacl_available())
@@ -276,6 +282,13 @@ void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu)
 				kvm_riscv_local_hfence_vvma_gva(d.vmid, d.addr,
 								d.size, d.order);
 			break;
+		case KVM_RISCV_HFENCE_VVMA_ALL:
+			kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_RCVD);
+			if (kvm_riscv_nacl_available())
+				nacl_hfence_vvma_all(nacl_shmem(), d.vmid);
+			else
+				kvm_riscv_local_hfence_vvma_all(d.vmid);
+			break;
 		default:
 			break;
 		}
@@ -328,14 +341,13 @@ void kvm_riscv_fence_i(struct kvm *kvm,
 void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
 				    gpa_t gpa, gpa_t gpsz,
-				    unsigned long order)
+				    unsigned long order, unsigned long vmid)
 {
-	struct kvm_vmid *v = &kvm->arch.vmid;
 	struct kvm_riscv_hfence data;
 
 	data.type = KVM_RISCV_HFENCE_GVMA_VMID_GPA;
 	data.asid = 0;
-	data.vmid = READ_ONCE(v->vmid);
+	data.vmid = vmid;
 	data.addr = gpa;
 	data.size = gpsz;
 	data.order = order;
@@ -344,23 +356,28 @@ void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
 }
 
 void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask)
+				    unsigned long hbase, unsigned long hmask,
+				    unsigned long vmid)
 {
-	make_xfence_request(kvm, hbase, hmask, KVM_REQ_TLB_FLUSH,
-			    KVM_REQ_TLB_FLUSH, NULL);
+	struct kvm_riscv_hfence data = {0};
+
+	data.type = KVM_RISCV_HFENCE_GVMA_VMID_ALL;
+	data.vmid = vmid;
+	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
+			    KVM_REQ_TLB_FLUSH, &data);
 }
 
 void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
 				    unsigned long gva, unsigned long gvsz,
-				    unsigned long order, unsigned long asid)
+				    unsigned long order, unsigned long asid,
+				    unsigned long vmid)
 {
-	struct kvm_vmid *v = &kvm->arch.vmid;
 	struct kvm_riscv_hfence data;
 
 	data.type = KVM_RISCV_HFENCE_VVMA_ASID_GVA;
 	data.asid = asid;
-	data.vmid = READ_ONCE(v->vmid);
+	data.vmid = vmid;
 	data.addr = gva;
 	data.size = gvsz;
 	data.order = order;
@@ -370,15 +387,13 @@ void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
 
 void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
-				    unsigned long asid)
+				    unsigned long asid, unsigned long vmid)
 {
-	struct kvm_vmid *v = &kvm->arch.vmid;
-	struct kvm_riscv_hfence data;
+	struct kvm_riscv_hfence data = {0};
 
 	data.type = KVM_RISCV_HFENCE_VVMA_ASID_ALL;
 	data.asid = asid;
-	data.vmid = READ_ONCE(v->vmid);
-	data.addr = data.size = data.order = 0;
+	data.vmid = vmid;
 	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
 			    KVM_REQ_HFENCE_VVMA_ALL, &data);
 }
@@ -386,14 +401,13 @@ void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
 void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
 			       unsigned long hbase, unsigned long hmask,
 			       unsigned long gva, unsigned long gvsz,
-			       unsigned long order)
+			       unsigned long order, unsigned long vmid)
 {
-	struct kvm_vmid *v = &kvm->arch.vmid;
 	struct kvm_riscv_hfence data;
 
 	data.type = KVM_RISCV_HFENCE_VVMA_GVA;
 	data.asid = 0;
-	data.vmid = READ_ONCE(v->vmid);
+	data.vmid = vmid;
 	data.addr = gva;
 	data.size = gvsz;
 	data.order = order;
@@ -402,16 +416,21 @@ void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
 }
 
 void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
-			       unsigned long hbase, unsigned long hmask)
+			       unsigned long hbase, unsigned long hmask,
+			       unsigned long vmid)
 {
-	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE_VVMA_ALL,
-			    KVM_REQ_HFENCE_VVMA_ALL, NULL);
+	struct kvm_riscv_hfence data = {0};
+
+	data.type = KVM_RISCV_HFENCE_VVMA_ALL;
+	data.vmid = vmid;
+	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
+			    KVM_REQ_HFENCE_VVMA_ALL, &data);
 }
 
 int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
 {
 	kvm_riscv_hfence_gvma_vmid_gpa(kvm, -1UL, 0, gfn << PAGE_SHIFT,
 				       nr_pages << PAGE_SHIFT,
-				       PAGE_SHIFT);
+				       PAGE_SHIFT, READ_ONCE(kvm->arch.vmid.vmid));
 	return 0;
 }
diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c
index b17fad091bab..b490ed1428a6 100644
--- a/arch/riscv/kvm/vcpu_sbi_replace.c
+++ b/arch/riscv/kvm/vcpu_sbi_replace.c
@@ -96,6 +96,7 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
 	unsigned long hmask = cp->a0;
 	unsigned long hbase = cp->a1;
 	unsigned long funcid = cp->a6;
+	unsigned long vmid;
 
 	switch (funcid) {
 	case SBI_EXT_RFENCE_REMOTE_FENCE_I:
@@ -103,22 +104,22 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
 		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_FENCE_I_SENT);
 		break;
 	case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA:
+		vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
 		if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
-			kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask);
+			kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask, vmid);
 		else
 			kvm_riscv_hfence_vvma_gva(vcpu->kvm, hbase, hmask,
-						  cp->a2, cp->a3, PAGE_SHIFT);
+						  cp->a2, cp->a3, PAGE_SHIFT, vmid);
 		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_SENT);
 		break;
 	case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID:
+		vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
 		if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
-			kvm_riscv_hfence_vvma_asid_all(vcpu->kvm,
-						       hbase, hmask, cp->a4);
+			kvm_riscv_hfence_vvma_asid_all(vcpu->kvm, hbase, hmask,
+						       cp->a4, vmid);
 		else
-			kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm,
-						       hbase, hmask,
-						       cp->a2, cp->a3,
-						       PAGE_SHIFT, cp->a4);
+			kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm, hbase, hmask, cp->a2,
+						       cp->a3, PAGE_SHIFT, cp->a4, vmid);
 		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_SENT);
 		break;
 	case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA:
diff --git a/arch/riscv/kvm/vcpu_sbi_v01.c b/arch/riscv/kvm/vcpu_sbi_v01.c
index 8f4c4fa16227..368dfddd23d9 100644
--- a/arch/riscv/kvm/vcpu_sbi_v01.c
+++ b/arch/riscv/kvm/vcpu_sbi_v01.c
@@ -23,6 +23,7 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
 	struct kvm_cpu_trap *utrap = retdata->utrap;
+	unsigned long vmid;
 
 	switch (cp->a7) {
 	case SBI_EXT_0_1_CONSOLE_GETCHAR:
@@ -78,25 +79,21 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		if (cp->a7 == SBI_EXT_0_1_REMOTE_FENCE_I)
 			kvm_riscv_fence_i(vcpu->kvm, 0, hmask);
 		else if (cp->a7 == SBI_EXT_0_1_REMOTE_SFENCE_VMA) {
+			vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
 			if (cp->a1 == 0 && cp->a2 == 0)
-				kvm_riscv_hfence_vvma_all(vcpu->kvm,
-							  0, hmask);
+				kvm_riscv_hfence_vvma_all(vcpu->kvm, 0, hmask, vmid);
 			else
-				kvm_riscv_hfence_vvma_gva(vcpu->kvm,
-							  0, hmask,
-							  cp->a1, cp->a2,
-							  PAGE_SHIFT);
+				kvm_riscv_hfence_vvma_gva(vcpu->kvm, 0, hmask, cp->a1,
+							  cp->a2, PAGE_SHIFT, vmid);
 		} else {
+			vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
 			if (cp->a1 == 0 && cp->a2 == 0)
-				kvm_riscv_hfence_vvma_asid_all(vcpu->kvm,
-							       0, hmask,
-							       cp->a3);
+				kvm_riscv_hfence_vvma_asid_all(vcpu->kvm, 0, hmask,
+							       cp->a3, vmid);
 			else
-				kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm,
-							       0, hmask,
-							       cp->a1, cp->a2,
-							       PAGE_SHIFT,
-							       cp->a3);
+				kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm, 0, hmask,
+							       cp->a1, cp->a2, PAGE_SHIFT,
+							       cp->a3, vmid);
 		}
 		break;
 	default:
--
2.43.0
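
The host kvm_gstage context used throughout mmu.c above is always built
the same way; a minimal sketch of a helper that could consolidate that
pattern (the name mmu_init_host_gstage is hypothetical and not part of
this series):

	/*
	 * Hypothetical helper: populate the host G-stage context that the
	 * kvm_riscv_gstage_*() functions expect, mirroring the pattern
	 * open-coded at each call site in arch/riscv/kvm/mmu.c.
	 */
	static void mmu_init_host_gstage(struct kvm *kvm,
					 struct kvm_gstage *gstage)
	{
		gstage->kvm = kvm;
		gstage->flags = 0;
		gstage->vmid = READ_ONCE(kvm->arch.vmid.vmid);
		gstage->pgd = kvm->arch.pgd;
	}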