From nobody Fri Dec 19 19:19:04 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH 01/13] RISC-V: KVM: Fix the size parameter check in SBI SFENCE calls
Date: Thu, 5 Jun 2025 11:44:46 +0530
Message-ID: <20250605061458.196003-2-apatel@ventanamicro.com>
In-Reply-To: <20250605061458.196003-1-apatel@ventanamicro.com>
References: <20250605061458.196003-1-apatel@ventanamicro.com>

As per the SBI specification, an SBI remote fence operation applies to
the entire address space if either:

1) start_addr and size are both 0, or
2) size is equal to 2^XLEN-1.

Of these, only #1 is currently checked by the SBI SFENCE calls, so fix
the size parameter check in the SBI SFENCE calls to cover #2 as well.
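For illustration, a minimal user-space sketch of the fixed predicate
(hypothetical helper name, not the kernel code); it also shows why
comparing size against -1UL covers case #2, since -1UL is the all-ones
value, i.e. 2^XLEN - 1, on an XLEN-bit hart:

  #include <stdbool.h>
  #include <stdio.h>

  /* SBI rule: the fence covers the whole address space when either
   * start and size are both 0, or size is 2^XLEN - 1 (== -1UL). */
  static bool sbi_fence_covers_all(unsigned long start, unsigned long size)
  {
          return (start == 0 && size == 0) || size == -1UL;
  }

  int main(void)
  {
          printf("%d\n", sbi_fence_covers_all(0, 0));           /* 1 */
          printf("%d\n", sbi_fence_covers_all(0, -1UL));        /* 1: case missed before */
          printf("%d\n", sbi_fence_covers_all(0x1000, 0x2000)); /* 0: ranged fence */
          return 0;
  }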
Fixes: 13acfec2dbcc ("RISC-V: KVM: Add remote HFENCE functions based on VCPU requests")
Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/kvm/vcpu_sbi_replace.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c
index 5fbf3f94f1e8..9752d2ffff68 100644
--- a/arch/riscv/kvm/vcpu_sbi_replace.c
+++ b/arch/riscv/kvm/vcpu_sbi_replace.c
@@ -103,7 +103,7 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
         kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_FENCE_I_SENT);
         break;
     case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA:
-        if (cp->a2 == 0 && cp->a3 == 0)
+        if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
             kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask);
         else
             kvm_riscv_hfence_vvma_gva(vcpu->kvm, hbase, hmask,
@@ -111,7 +111,7 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
         kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_SENT);
         break;
     case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID:
-        if (cp->a2 == 0 && cp->a3 == 0)
+        if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
             kvm_riscv_hfence_vvma_asid_all(vcpu->kvm, hbase,
                                            hmask, cp->a4);
         else
-- 
2.43.0
From nobody Fri Dec 19 19:19:04 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH 02/13] RISC-V: KVM: Don't treat SBI HFENCE calls as NOPs
Date: Thu, 5 Jun 2025 11:44:47 +0530
Message-ID: <20250605061458.196003-3-apatel@ventanamicro.com>
In-Reply-To: <20250605061458.196003-1-apatel@ventanamicro.com>
References: <20250605061458.196003-1-apatel@ventanamicro.com>

The SBI specification clearly states that SBI HFENCE calls should
return SBI_ERR_NOT_SUPPORTED when one of the target harts doesn't
support the hypervisor extension (aka nested virtualization in the
case of KVM RISC-V).
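From the guest's point of view, the visible change is the SBI return
value; a rough guest-side sketch (assuming the standard SBI calling
convention and Linux's sbi_ecall() wrapper; the probe itself is
hypothetical, not part of this patch):

  unsigned long hart_mask = 1UL << 1, hart_mask_base = 0;
  struct sbiret ret;

  /* A remote HFENCE that used to "succeed" as a NOP now reports
   * SBI_ERR_NOT_SUPPORTED (-2 per the SBI spec), so a guest can
   * detect that nested virtualization is unavailable. */
  ret = sbi_ecall(SBI_EXT_RFENCE, SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA,
                  hart_mask, hart_mask_base, 0, 0, 0, 0);
  if (ret.error == SBI_ERR_NOT_SUPPORTED)
          pr_info("HFENCE rejected: no hypervisor extension on target harts\n");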
Fixes: c7fa3c48de86 ("RISC-V: KVM: Treat SBI HFENCE calls as NOPs")
Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/kvm/vcpu_sbi_replace.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c
index 9752d2ffff68..b17fad091bab 100644
--- a/arch/riscv/kvm/vcpu_sbi_replace.c
+++ b/arch/riscv/kvm/vcpu_sbi_replace.c
@@ -127,9 +127,9 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
     case SBI_EXT_RFENCE_REMOTE_HFENCE_VVMA_ASID:
         /*
          * Until nested virtualization is implemented, the
-         * SBI HFENCE calls should be treated as NOPs
+         * SBI HFENCE calls should return not supported
+         * hence fallthrough.
          */
-        break;
     default:
         retdata->err_val = SBI_ERR_NOT_SUPPORTED;
     }
-- 
2.43.0
From nobody Fri Dec 19 19:19:04 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH 03/13] RISC-V: KVM: Check kvm_riscv_vcpu_alloc_vector_context() return value
Date: Thu, 5 Jun 2025 11:44:48 +0530
Message-ID: <20250605061458.196003-4-apatel@ventanamicro.com>
In-Reply-To: <20250605061458.196003-1-apatel@ventanamicro.com>
References: <20250605061458.196003-1-apatel@ventanamicro.com>

The kvm_riscv_vcpu_alloc_vector_context() function returns an error
code upon failure, so don't ignore it in kvm_arch_vcpu_create().
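The general pattern, as a sketch with an illustrative callee name
(not the kernel code): propagate whatever error the callee reports
instead of substituting a fixed one, so the real failure reason is
not masked:

  /* Before: any failure is reported as -ENOMEM, even if the
   * callee actually failed with a different error code. */
  if (alloc_context(vcpu))
          return -ENOMEM;

  /* After: the caller returns exactly what the callee reported. */
  rc = alloc_context(vcpu);
  if (rc)
          return rc;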
Signed-off-by: Anup Patel
---
 arch/riscv/kvm/vcpu.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index e0a01af426ff..6a1914b21ec3 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -148,8 +148,10 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 
     spin_lock_init(&vcpu->arch.reset_state.lock);
 
-    if (kvm_riscv_vcpu_alloc_vector_context(vcpu))
-        return -ENOMEM;
+    /* Setup VCPU vector context */
+    rc = kvm_riscv_vcpu_alloc_vector_context(vcpu);
+    if (rc)
+        return rc;
 
     /* Setup VCPU timer */
     kvm_riscv_vcpu_timer_init(vcpu);
-- 
2.43.0
From nobody Fri Dec 19 19:19:04 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH 04/13] RISC-V: KVM: Drop the return value of kvm_riscv_vcpu_aia_init()
Date: Thu, 5 Jun 2025 11:44:49 +0530
Message-ID: <20250605061458.196003-5-apatel@ventanamicro.com>
In-Reply-To: <20250605061458.196003-1-apatel@ventanamicro.com>
References: <20250605061458.196003-1-apatel@ventanamicro.com>

The kvm_riscv_vcpu_aia_init() function cannot fail, so drop its
return value, which is always zero.
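This is the converse of the previous fix; a compilable sketch with
illustrative names (not the kernel code) of why a void return type
is preferable when there is no failure path:

  struct foo { int ready; };

  /* Cannot fail: no allocation, no I/O. Returning void documents
   * this and removes dead error handling at every call site. */
  static void init_foo(struct foo *f)
  {
          f->ready = 1;
  }

  int main(void)
  {
          struct foo f;

          init_foo(&f);   /* no dead "if (rc) return rc;" needed */
          return f.ready ? 0 : 1;
  }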
Signed-off-by: Anup Patel
Reviewed-by: Nutty Liu
---
 arch/riscv/include/asm/kvm_aia.h | 2 +-
 arch/riscv/kvm/aia_device.c      | 6 ++----
 arch/riscv/kvm/vcpu.c            | 4 +---
 3 files changed, 4 insertions(+), 8 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_aia.h b/arch/riscv/include/asm/kvm_aia.h
index 3b643b9efc07..0a0f12496f00 100644
--- a/arch/riscv/include/asm/kvm_aia.h
+++ b/arch/riscv/include/asm/kvm_aia.h
@@ -147,7 +147,7 @@ int kvm_riscv_vcpu_aia_rmw_ireg(struct kvm_vcpu *vcpu, unsigned int csr_num,
 
 int kvm_riscv_vcpu_aia_update(struct kvm_vcpu *vcpu);
 void kvm_riscv_vcpu_aia_reset(struct kvm_vcpu *vcpu);
-int kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu);
+void kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu);
 void kvm_riscv_vcpu_aia_deinit(struct kvm_vcpu *vcpu);
 
 int kvm_riscv_aia_inject_msi_by_id(struct kvm *kvm, u32 hart_index,
diff --git a/arch/riscv/kvm/aia_device.c b/arch/riscv/kvm/aia_device.c
index 43e472ff3e1a..5b7ed2d987db 100644
--- a/arch/riscv/kvm/aia_device.c
+++ b/arch/riscv/kvm/aia_device.c
@@ -539,12 +539,12 @@ void kvm_riscv_vcpu_aia_reset(struct kvm_vcpu *vcpu)
     kvm_riscv_vcpu_aia_imsic_reset(vcpu);
 }
 
-int kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu)
+void kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu)
 {
     struct kvm_vcpu_aia *vaia = &vcpu->arch.aia_context;
 
     if (!kvm_riscv_aia_available())
-        return 0;
+        return;
 
     /*
      * We don't do any memory allocations over here because these
@@ -556,8 +556,6 @@ int kvm_riscv_vcpu_aia_init(struct kvm_vcpu *vcpu)
     /* Initialize default values in AIA vcpu context */
     vaia->imsic_addr = KVM_RISCV_AIA_UNDEF_ADDR;
     vaia->hart_index = vcpu->vcpu_idx;
-
-    return 0;
 }
 
 void kvm_riscv_vcpu_aia_deinit(struct kvm_vcpu *vcpu)
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 6a1914b21ec3..f98a1894d55b 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -160,9 +160,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
     kvm_riscv_vcpu_pmu_init(vcpu);
 
     /* Setup VCPU AIA */
-    rc = kvm_riscv_vcpu_aia_init(vcpu);
-    if (rc)
-        return rc;
+    kvm_riscv_vcpu_aia_init(vcpu);
 
     /*
      * Setup SBI extensions
-- 
2.43.0
From nobody Fri Dec 19 19:19:04 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH 05/13] RISC-V: KVM: Rename and move kvm_riscv_local_tlb_sanitize()
Date: Thu, 5 Jun 2025 11:44:50 +0530
Message-ID: <20250605061458.196003-6-apatel@ventanamicro.com>
In-Reply-To: <20250605061458.196003-1-apatel@ventanamicro.com>
References: <20250605061458.196003-1-apatel@ventanamicro.com>

The kvm_riscv_local_tlb_sanitize() function deals with sanitizing
TLB mappings for the current VMID when a VCPU is moved from one host
CPU to another. Let's move kvm_riscv_local_tlb_sanitize() to the VMID
management sources and rename it to kvm_riscv_gstage_vmid_sanitize().

Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
Reviewed-by: Nutty Liu
---
 arch/riscv/include/asm/kvm_host.h |  3 +--
 arch/riscv/kvm/tlb.c              | 23 -----------------------
 arch/riscv/kvm/vcpu.c             |  4 ++--
 arch/riscv/kvm/vmid.c             | 23 +++++++++++++++++++++++
 4 files changed, 26 insertions(+), 27 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 85cfebc32e4c..134adc30af52 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -327,8 +327,6 @@ void kvm_riscv_local_hfence_vvma_gva(unsigned long vmid,
                                      unsigned long order);
 void kvm_riscv_local_hfence_vvma_all(unsigned long vmid);
 
-void kvm_riscv_local_tlb_sanitize(struct kvm_vcpu *vcpu);
-
 void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu);
 void kvm_riscv_hfence_gvma_vmid_all_process(struct kvm_vcpu *vcpu);
 void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu);
@@ -376,6 +374,7 @@ unsigned long kvm_riscv_gstage_vmid_bits(void);
 int kvm_riscv_gstage_vmid_init(struct kvm *kvm);
 bool kvm_riscv_gstage_vmid_ver_changed(struct kvm_vmid *vmid);
 void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu);
+void kvm_riscv_gstage_vmid_sanitize(struct kvm_vcpu *vcpu);
 
 int kvm_riscv_setup_default_irq_routing(struct kvm *kvm, u32 lines);
 
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index 2f91ea5f8493..b3461bfd9756 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -156,29 +156,6 @@ void kvm_riscv_local_hfence_vvma_all(unsigned long vmid)
     csr_write(CSR_HGATP, hgatp);
 }
 
-void kvm_riscv_local_tlb_sanitize(struct kvm_vcpu *vcpu)
-{
-    unsigned long vmid;
-
-    if (!kvm_riscv_gstage_vmid_bits() ||
-        vcpu->arch.last_exit_cpu == vcpu->cpu)
-        return;
-
-    /*
-     * On RISC-V platforms with hardware VMID support, we share same
-     * VMID for all VCPUs of a particular Guest/VM. This means we might
-     * have stale G-stage TLB entries on the current Host CPU due to
-     * some other VCPU of the same Guest which ran previously on the
-     * current Host CPU.
-     *
-     * To cleanup stale TLB entries, we simply flush all G-stage TLB
-     * entries by VMID whenever underlying Host CPU changes for a VCPU.
-     */
-
-    vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
-    kvm_riscv_local_hfence_gvma_vmid_all(vmid);
-}
-
 void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu)
 {
     kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_FENCE_I_RCVD);
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index f98a1894d55b..cc7d00bcf345 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -961,12 +961,12 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
     }
 
     /*
-     * Cleanup stale TLB enteries
+     * Sanitize VMID mappings cached (TLB) on current CPU
      *
      * Note: This should be done after G-stage VMID has been
      * updated using kvm_riscv_gstage_vmid_ver_changed()
      */
-    kvm_riscv_local_tlb_sanitize(vcpu);
+    kvm_riscv_gstage_vmid_sanitize(vcpu);
 
     trace_kvm_entry(vcpu);
 
diff --git a/arch/riscv/kvm/vmid.c b/arch/riscv/kvm/vmid.c
index ddc98714ce8e..92c01255f86f 100644
--- a/arch/riscv/kvm/vmid.c
+++ b/arch/riscv/kvm/vmid.c
@@ -122,3 +122,26 @@ void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu)
     kvm_for_each_vcpu(i, v, vcpu->kvm)
         kvm_make_request(KVM_REQ_UPDATE_HGATP, v);
 }
+
+void kvm_riscv_gstage_vmid_sanitize(struct kvm_vcpu *vcpu)
+{
+    unsigned long vmid;
+
+    if (!kvm_riscv_gstage_vmid_bits() ||
+        vcpu->arch.last_exit_cpu == vcpu->cpu)
+        return;
+
+    /*
+     * On RISC-V platforms with hardware VMID support, we share same
+     * VMID for all VCPUs of a particular Guest/VM. This means we might
+     * have stale G-stage TLB entries on the current Host CPU due to
+     * some other VCPU of the same Guest which ran previously on the
+     * current Host CPU.
+     *
+     * To cleanup stale TLB entries, we simply flush all G-stage TLB
+     * entries by VMID whenever underlying Host CPU changes for a VCPU.
+     */
+
+    vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
+    kvm_riscv_local_hfence_gvma_vmid_all(vmid);
+}
-- 
2.43.0
From nobody Fri Dec 19 19:19:04 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH 06/13] RISC-V: KVM: Replace KVM_REQ_HFENCE_GVMA_VMID_ALL with KVM_REQ_TLB_FLUSH
Date: Thu, 5 Jun 2025 11:44:51 +0530
Message-ID: <20250605061458.196003-7-apatel@ventanamicro.com>
In-Reply-To: <20250605061458.196003-1-apatel@ventanamicro.com>
References: <20250605061458.196003-1-apatel@ventanamicro.com>

KVM_REQ_HFENCE_GVMA_VMID_ALL is the same as KVM_REQ_TLB_FLUSH, so to
avoid confusion let's replace KVM_REQ_HFENCE_GVMA_VMID_ALL with
KVM_REQ_TLB_FLUSH. Also, rename kvm_riscv_hfence_gvma_vmid_all_process()
to kvm_riscv_tlb_flush_process().
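The old name was a pure preprocessor alias, so this is a rename with
no functional change; a compile-time sketch (illustrative request
values only, not the real kvm_host.h encodings, which come from
KVM_ARCH_REQ()/KVM_ARCH_REQ_FLAGS()):

  #define KVM_REQ_TLB_FLUSH               0
  #define KVM_REQ_HFENCE_GVMA_VMID_ALL    KVM_REQ_TLB_FLUSH

  /* Both names select the same request bit, so a request raised
   * under one name is consumed under the other; keeping two names
   * for one bit only invites confusion about what is pending. */
  _Static_assert(KVM_REQ_HFENCE_GVMA_VMID_ALL == KVM_REQ_TLB_FLUSH,
                 "one request bit, one name");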
Signed-off-by: Anup Patel
Reviewed-by: Atish Patra
---
 arch/riscv/include/asm/kvm_host.h | 4 ++--
 arch/riscv/kvm/tlb.c              | 8 ++++----
 arch/riscv/kvm/vcpu.c             | 8 ++------
 3 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index 134adc30af52..afaf25f2c5ab 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -36,7 +36,6 @@
 #define KVM_REQ_UPDATE_HGATP        KVM_ARCH_REQ(2)
 #define KVM_REQ_FENCE_I             \
     KVM_ARCH_REQ_FLAGS(3, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
-#define KVM_REQ_HFENCE_GVMA_VMID_ALL    KVM_REQ_TLB_FLUSH
 #define KVM_REQ_HFENCE_VVMA_ALL     \
     KVM_ARCH_REQ_FLAGS(4, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP)
 #define KVM_REQ_HFENCE              \
@@ -327,8 +326,9 @@ void kvm_riscv_local_hfence_vvma_gva(unsigned long vmid,
                                      unsigned long order);
 void kvm_riscv_local_hfence_vvma_all(unsigned long vmid);
 
+void kvm_riscv_tlb_flush_process(struct kvm_vcpu *vcpu);
+
 void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu);
-void kvm_riscv_hfence_gvma_vmid_all_process(struct kvm_vcpu *vcpu);
 void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu);
 void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu);
 
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index b3461bfd9756..da98ca801d31 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -162,7 +162,7 @@ void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu)
     local_flush_icache_all();
 }
 
-void kvm_riscv_hfence_gvma_vmid_all_process(struct kvm_vcpu *vcpu)
+void kvm_riscv_tlb_flush_process(struct kvm_vcpu *vcpu)
 {
     struct kvm_vmid *v = &vcpu->kvm->arch.vmid;
     unsigned long vmid = READ_ONCE(v->vmid);
@@ -342,14 +342,14 @@ void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
     data.size = gpsz;
     data.order = order;
     make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
-                KVM_REQ_HFENCE_GVMA_VMID_ALL, &data);
+                KVM_REQ_TLB_FLUSH, &data);
 }
 
 void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
                     unsigned long hbase, unsigned long hmask)
 {
-    make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE_GVMA_VMID_ALL,
-                KVM_REQ_HFENCE_GVMA_VMID_ALL, NULL);
+    make_xfence_request(kvm, hbase, hmask, KVM_REQ_TLB_FLUSH,
+                KVM_REQ_TLB_FLUSH, NULL);
 }
 
 void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index cc7d00bcf345..684efaf5cee9 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -720,12 +720,8 @@ static void kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu)
         if (kvm_check_request(KVM_REQ_FENCE_I, vcpu))
             kvm_riscv_fence_i_process(vcpu);
 
-        /*
-         * The generic KVM_REQ_TLB_FLUSH is same as
-         * KVM_REQ_HFENCE_GVMA_VMID_ALL
-         */
-        if (kvm_check_request(KVM_REQ_HFENCE_GVMA_VMID_ALL, vcpu))
-            kvm_riscv_hfence_gvma_vmid_all_process(vcpu);
+        if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
+            kvm_riscv_tlb_flush_process(vcpu);
 
         if (kvm_check_request(KVM_REQ_HFENCE_VVMA_ALL, vcpu))
             kvm_riscv_hfence_vvma_all_process(vcpu);
-- 
2.43.0
From nobody Fri Dec 19 19:19:04 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH 07/13] RISC-V: KVM: Don't flush TLB in gstage_set_pte() when PTE is unchanged
Date: Thu, 5 Jun 2025 11:44:52 +0530
Message-ID: <20250605061458.196003-8-apatel@ventanamicro.com>
In-Reply-To: <20250605061458.196003-1-apatel@ventanamicro.com>
References: <20250605061458.196003-1-apatel@ventanamicro.com>

The gstage_set_pte() function should flush the remote TLB only when
a leaf PTE changes, so that unnecessary remote TLB flushes are avoided.

Signed-off-by: Anup Patel
---
 arch/riscv/kvm/mmu.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 1087ea74567b..d4eb1999b794 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -167,9 +167,11 @@ static int gstage_set_pte(struct kvm *kvm, u32 level,
         ptep = &next_ptep[gstage_pte_index(addr, current_level)];
     }
 
-    set_pte(ptep, *new_pte);
-    if (gstage_pte_leaf(ptep))
-        gstage_remote_tlb_flush(kvm, current_level, addr);
+    if (pte_val(*ptep) != pte_val(*new_pte)) {
+        set_pte(ptep, *new_pte);
+        if (gstage_pte_leaf(ptep))
+            gstage_remote_tlb_flush(kvm, current_level, addr);
+    }
 
     return 0;
 }
-- 
2.43.0
From nobody Fri Dec 19 19:19:04 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones, Anup Patel,
    kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, Anup Patel
Subject: [PATCH 08/13] RISC-V: KVM: Implement kvm_arch_flush_remote_tlbs_range()
Date: Thu, 5 Jun 2025 11:44:53 +0530
Message-ID: <20250605061458.196003-9-apatel@ventanamicro.com>
In-Reply-To: <20250605061458.196003-1-apatel@ventanamicro.com>
References: <20250605061458.196003-1-apatel@ventanamicro.com>

The kvm_arch_flush_remote_tlbs_range() hook expected by the KVM core
can be easily implemented for RISC-V using
kvm_riscv_hfence_gvma_vmid_gpa(), hence provide it.

Also, with kvm_arch_flush_remote_tlbs_range() available for RISC-V,
gstage_wp_memory_region() can happily use kvm_flush_remote_tlbs_memslot()
instead of kvm_flush_remote_tlbs().
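The KVM core hands this hook a guest frame number and a page count,
while kvm_riscv_hfence_gvma_vmid_gpa() takes a byte address and a
byte size, hence the PAGE_SHIFT conversions; a small stand-alone
illustration (4 KiB pages assumed, for illustration only):

  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_SHIFT 12  /* 4 KiB pages assumed */

  int main(void)
  {
          uint64_t gfn = 0x80000, nr_pages = 16;

          /* gfn 0x80000 -> gpa 0x80000000; 16 pages -> 0x10000 bytes */
          printf("gpa=0x%llx size=0x%llx\n",
                 (unsigned long long)(gfn << PAGE_SHIFT),
                 (unsigned long long)(nr_pages << PAGE_SHIFT));
          return 0;
  }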
From nobody Fri Dec 19 19:19:04 2025
From: Anup Patel <apatel@ventanamicro.com>
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones,
 kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 09/13] RISC-V: KVM: Factor-out MMU related declarations into
 separate headers
Date: Thu, 5 Jun 2025 11:44:54 +0530
Message-ID: <20250605061458.196003-10-apatel@ventanamicro.com>
In-Reply-To: <20250605061458.196003-1-apatel@ventanamicro.com>
References: <20250605061458.196003-1-apatel@ventanamicro.com>

The MMU, TLB, and VMID management for KVM RISC-V already exist as
separate sources, so create separate headers along those lines. This
further simplifies the asm/kvm_host.h header.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
 arch/riscv/include/asm/kvm_host.h | 100 +-----------------------------
 arch/riscv/include/asm/kvm_mmu.h  |  26 ++++++++
 arch/riscv/include/asm/kvm_tlb.h  |  78 +++++++++++++++++++++++
 arch/riscv/include/asm/kvm_vmid.h |  27 ++++++++
 arch/riscv/kvm/aia_imsic.c        |   1 +
 arch/riscv/kvm/main.c             |   1 +
 arch/riscv/kvm/mmu.c              |   1 +
 arch/riscv/kvm/tlb.c              |   2 +
 arch/riscv/kvm/vcpu.c             |   1 +
 arch/riscv/kvm/vcpu_exit.c        |   1 +
 arch/riscv/kvm/vm.c               |   1 +
 arch/riscv/kvm/vmid.c             |   2 +
 12 files changed, 143 insertions(+), 98 deletions(-)
 create mode 100644 arch/riscv/include/asm/kvm_mmu.h
 create mode 100644 arch/riscv/include/asm/kvm_tlb.h
 create mode 100644 arch/riscv/include/asm/kvm_vmid.h

diff --git a/arch/riscv/include/asm/kvm_host.h b/arch/riscv/include/asm/kvm_host.h
index b9e241c46209..8d7a59274386 100644
--- a/arch/riscv/include/asm/kvm_host.h
+++ b/arch/riscv/include/asm/kvm_host.h
@@ -16,6 +16,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
 #include
@@ -55,24 +57,6 @@
 				 BIT(IRQ_VS_TIMER) | \
 				 BIT(IRQ_VS_EXT))
 
-enum kvm_riscv_hfence_type {
-	KVM_RISCV_HFENCE_UNKNOWN = 0,
-	KVM_RISCV_HFENCE_GVMA_VMID_GPA,
-	KVM_RISCV_HFENCE_VVMA_ASID_GVA,
-	KVM_RISCV_HFENCE_VVMA_ASID_ALL,
-	KVM_RISCV_HFENCE_VVMA_GVA,
-};
-
-struct kvm_riscv_hfence {
-	enum kvm_riscv_hfence_type type;
-	unsigned long asid;
-	unsigned long order;
-	gpa_t addr;
-	gpa_t size;
-};
-
-#define KVM_RISCV_VCPU_MAX_HFENCE	64
-
 struct kvm_vm_stat {
 	struct kvm_vm_stat_generic generic;
 };
@@ -98,15 +82,6 @@ struct kvm_vcpu_stat {
 struct kvm_arch_memory_slot {
 };
 
-struct kvm_vmid {
-	/*
-	 * Writes to vmid_version and vmid happen with vmid_lock held
-	 * whereas reads happen without any lock held.
-	 */
-	unsigned long vmid_version;
-	unsigned long vmid;
-};
-
 struct kvm_arch {
 	/* G-stage vmid */
 	struct kvm_vmid vmid;
@@ -307,77 +282,6 @@ static inline bool kvm_arch_pmi_in_guest(struct kvm_vcpu *vcpu)
 	return IS_ENABLED(CONFIG_GUEST_PERF_EVENTS) && !!vcpu;
 }
 
-#define KVM_RISCV_GSTAGE_TLB_MIN_ORDER		12
-
-void kvm_riscv_local_hfence_gvma_vmid_gpa(unsigned long vmid,
-					  gpa_t gpa, gpa_t gpsz,
-					  unsigned long order);
-void kvm_riscv_local_hfence_gvma_vmid_all(unsigned long vmid);
-void kvm_riscv_local_hfence_gvma_gpa(gpa_t gpa, gpa_t gpsz,
-				     unsigned long order);
-void kvm_riscv_local_hfence_gvma_all(void);
-void kvm_riscv_local_hfence_vvma_asid_gva(unsigned long vmid,
-					  unsigned long asid,
-					  unsigned long gva,
-					  unsigned long gvsz,
-					  unsigned long order);
-void kvm_riscv_local_hfence_vvma_asid_all(unsigned long vmid,
-					  unsigned long asid);
-void kvm_riscv_local_hfence_vvma_gva(unsigned long vmid,
-				     unsigned long gva, unsigned long gvsz,
-				     unsigned long order);
-void kvm_riscv_local_hfence_vvma_all(unsigned long vmid);
-
-void kvm_riscv_tlb_flush_process(struct kvm_vcpu *vcpu);
-
-void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu);
-void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu);
-void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu);
-
-void kvm_riscv_fence_i(struct kvm *kvm,
-		       unsigned long hbase, unsigned long hmask);
-void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask,
-				    gpa_t gpa, gpa_t gpsz,
-				    unsigned long order);
-void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask);
-void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask,
-				    unsigned long gva, unsigned long gvsz,
-				    unsigned long order, unsigned long asid);
-void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask,
-				    unsigned long asid);
-void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
-			       unsigned long hbase, unsigned long hmask,
-			       unsigned long gva, unsigned long gvsz,
-			       unsigned long order);
-void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
-			       unsigned long hbase, unsigned long hmask);
-
-int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
-			     phys_addr_t hpa, unsigned long size,
-			     bool writable, bool in_atomic);
-void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa,
-			      unsigned long size);
-int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
-			 struct kvm_memory_slot *memslot,
-			 gpa_t gpa, unsigned long hva, bool is_write);
-int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm);
-void kvm_riscv_gstage_free_pgd(struct kvm *kvm);
-void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu);
-void __init kvm_riscv_gstage_mode_detect(void);
-unsigned long __init kvm_riscv_gstage_mode(void);
-int kvm_riscv_gstage_gpa_bits(void);
-
-void __init kvm_riscv_gstage_vmid_detect(void);
-unsigned long kvm_riscv_gstage_vmid_bits(void);
-int kvm_riscv_gstage_vmid_init(struct kvm *kvm);
-bool kvm_riscv_gstage_vmid_ver_changed(struct kvm_vmid *vmid);
-void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu);
-void kvm_riscv_gstage_vmid_sanitize(struct kvm_vcpu *vcpu);
-
 int kvm_riscv_setup_default_irq_routing(struct kvm *kvm, u32 lines);
 
 void __kvm_riscv_unpriv_trap(void);
diff --git a/arch/riscv/include/asm/kvm_mmu.h b/arch/riscv/include/asm/kvm_mmu.h
new file mode 100644
index 000000000000..4e1654282ee4
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_mmu.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2025 Ventana Micro Systems Inc.
+ */
+
+#ifndef __RISCV_KVM_MMU_H_
+#define __RISCV_KVM_MMU_H_
+
+#include
+
+int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
+			     phys_addr_t hpa, unsigned long size,
+			     bool writable, bool in_atomic);
+void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa,
+			      unsigned long size);
+int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
+			 struct kvm_memory_slot *memslot,
+			 gpa_t gpa, unsigned long hva, bool is_write);
+int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm);
+void kvm_riscv_gstage_free_pgd(struct kvm *kvm);
+void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu);
+void kvm_riscv_gstage_mode_detect(void);
+unsigned long kvm_riscv_gstage_mode(void);
+int kvm_riscv_gstage_gpa_bits(void);
+
+#endif
diff --git a/arch/riscv/include/asm/kvm_tlb.h b/arch/riscv/include/asm/kvm_tlb.h
new file mode 100644
index 000000000000..cd00c9a46cb1
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_tlb.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2025 Ventana Micro Systems Inc.
+ */
+
+#ifndef __RISCV_KVM_TLB_H_
+#define __RISCV_KVM_TLB_H_
+
+#include
+
+enum kvm_riscv_hfence_type {
+	KVM_RISCV_HFENCE_UNKNOWN = 0,
+	KVM_RISCV_HFENCE_GVMA_VMID_GPA,
+	KVM_RISCV_HFENCE_VVMA_ASID_GVA,
+	KVM_RISCV_HFENCE_VVMA_ASID_ALL,
+	KVM_RISCV_HFENCE_VVMA_GVA,
+};
+
+struct kvm_riscv_hfence {
+	enum kvm_riscv_hfence_type type;
+	unsigned long asid;
+	unsigned long order;
+	gpa_t addr;
+	gpa_t size;
+};
+
+#define KVM_RISCV_VCPU_MAX_HFENCE	64
+
+#define KVM_RISCV_GSTAGE_TLB_MIN_ORDER	12
+
+void kvm_riscv_local_hfence_gvma_vmid_gpa(unsigned long vmid,
+					  gpa_t gpa, gpa_t gpsz,
+					  unsigned long order);
+void kvm_riscv_local_hfence_gvma_vmid_all(unsigned long vmid);
+void kvm_riscv_local_hfence_gvma_gpa(gpa_t gpa, gpa_t gpsz,
+				     unsigned long order);
+void kvm_riscv_local_hfence_gvma_all(void);
+void kvm_riscv_local_hfence_vvma_asid_gva(unsigned long vmid,
+					  unsigned long asid,
+					  unsigned long gva,
+					  unsigned long gvsz,
+					  unsigned long order);
+void kvm_riscv_local_hfence_vvma_asid_all(unsigned long vmid,
+					  unsigned long asid);
+void kvm_riscv_local_hfence_vvma_gva(unsigned long vmid,
+				     unsigned long gva, unsigned long gvsz,
+				     unsigned long order);
+void kvm_riscv_local_hfence_vvma_all(unsigned long vmid);
+
+void kvm_riscv_tlb_flush_process(struct kvm_vcpu *vcpu);
+
+void kvm_riscv_fence_i_process(struct kvm_vcpu *vcpu);
+void kvm_riscv_hfence_vvma_all_process(struct kvm_vcpu *vcpu);
+void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu);
+
+void kvm_riscv_fence_i(struct kvm *kvm,
+		       unsigned long hbase, unsigned long hmask);
+void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
+				    unsigned long hbase, unsigned long hmask,
+				    gpa_t gpa, gpa_t gpsz,
+				    unsigned long order);
+void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
+				    unsigned long hbase, unsigned long hmask);
+void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
+				    unsigned long hbase, unsigned long hmask,
+				    unsigned long gva, unsigned long gvsz,
+				    unsigned long order, unsigned long asid);
+void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
+				    unsigned long hbase, unsigned long hmask,
+				    unsigned long asid);
+void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
+			       unsigned long hbase, unsigned long hmask,
+			       unsigned long gva, unsigned long gvsz,
+			       unsigned long order);
+void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
+			       unsigned long hbase, unsigned long hmask);
+
+#endif
diff --git a/arch/riscv/include/asm/kvm_vmid.h b/arch/riscv/include/asm/kvm_vmid.h
new file mode 100644
index 000000000000..ab98e1434fb7
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_vmid.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2025 Ventana Micro Systems Inc.
+ */
+
+#ifndef __RISCV_KVM_VMID_H_
+#define __RISCV_KVM_VMID_H_
+
+#include
+
+struct kvm_vmid {
+	/*
+	 * Writes to vmid_version and vmid happen with vmid_lock held
+	 * whereas reads happen without any lock held.
+	 */
+	unsigned long vmid_version;
+	unsigned long vmid;
+};
+
+void __init kvm_riscv_gstage_vmid_detect(void);
+unsigned long kvm_riscv_gstage_vmid_bits(void);
+int kvm_riscv_gstage_vmid_init(struct kvm *kvm);
+bool kvm_riscv_gstage_vmid_ver_changed(struct kvm_vmid *vmid);
+void kvm_riscv_gstage_vmid_update(struct kvm_vcpu *vcpu);
+void kvm_riscv_gstage_vmid_sanitize(struct kvm_vcpu *vcpu);
+
+#endif
diff --git a/arch/riscv/kvm/aia_imsic.c b/arch/riscv/kvm/aia_imsic.c
index 29ef9c2133a9..40b469c0a01f 100644
--- a/arch/riscv/kvm/aia_imsic.c
+++ b/arch/riscv/kvm/aia_imsic.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include
 
 #define IMSIC_MAX_EIX	(IMSIC_MAX_ID / BITS_PER_TYPE(u64))
 
diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c
index 4b24705dc63a..b861a5dd7bd9 100644
--- a/arch/riscv/kvm/main.c
+++ b/arch/riscv/kvm/main.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index 834d855b0478..c9d87e7472fb 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -15,6 +15,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index f46a27658c2e..6fc4361c3d75 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -15,6 +15,8 @@
 #include
 #include
 #include
+#include
+#include
 
 #define has_svinval()	riscv_has_extension_unlikely(RISCV_ISA_EXT_SVINVAL)
 
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index 684efaf5cee9..bfe4d1369b24 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c
index 6e0c18412795..cc82bbab0e24 100644
--- a/arch/riscv/kvm/vcpu_exit.c
+++ b/arch/riscv/kvm/vcpu_exit.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 
 static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
 			     struct kvm_cpu_trap *trap)
diff --git a/arch/riscv/kvm/vm.c b/arch/riscv/kvm/vm.c
index b27ec8f96697..8601cf29e5f8 100644
--- a/arch/riscv/kvm/vm.c
+++ b/arch/riscv/kvm/vm.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 
 const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
 	KVM_GENERIC_VM_STATS()
diff --git a/arch/riscv/kvm/vmid.c b/arch/riscv/kvm/vmid.c
index 92c01255f86f..3b426c800480 100644
--- a/arch/riscv/kvm/vmid.c
+++ b/arch/riscv/kvm/vmid.c
@@ -14,6 +14,8 @@
 #include
 #include
 #include
+#include
+#include
 
 static unsigned long vmid_version = 1;
 static unsigned long vmid_next;
-- 
2.43.0
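The comment carried into the new asm/kvm_vmid.h header documents a
read-mostly convention: writers update vmid and vmid_version under
vmid_lock, while readers take a lockless snapshot. A minimal sketch of a
conforming reader, matching the READ_ONCE() usage visible in
arch/riscv/kvm/tlb.c later in this series (the function name is
illustrative only, not part of the patch):

/* Illustrative only: lockless snapshot of the VM's current VMID,
 * per the locking convention documented in struct kvm_vmid. */
static unsigned long example_read_vmid(struct kvm *kvm)
{
	return READ_ONCE(kvm->arch.vmid.vmid);
}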
From nobody Fri Dec 19 19:19:04 2025
From: Anup Patel <apatel@ventanamicro.com>
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones,
 kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 10/13] RISC-V: KVM: Introduce struct kvm_gstage_mapping
Date: Thu, 5 Jun 2025 11:44:55 +0530
Message-ID: <20250605061458.196003-11-apatel@ventanamicro.com>
In-Reply-To: <20250605061458.196003-1-apatel@ventanamicro.com>
References: <20250605061458.196003-1-apatel@ventanamicro.com>

Introduce struct kvm_gstage_mapping, which represents a g-stage mapping
at a particular g-stage page table level. Also update
kvm_riscv_gstage_map() to return the g-stage mapping upon success.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
 arch/riscv/include/asm/kvm_mmu.h |  9 ++++-
 arch/riscv/kvm/mmu.c             | 58 ++++++++++++++++++--------------
 arch/riscv/kvm/vcpu_exit.c       |  3 +-
 3 files changed, 43 insertions(+), 27 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_mmu.h b/arch/riscv/include/asm/kvm_mmu.h
index 4e1654282ee4..91c11e692dc7 100644
--- a/arch/riscv/include/asm/kvm_mmu.h
+++ b/arch/riscv/include/asm/kvm_mmu.h
@@ -8,6 +8,12 @@
 
 #include
 
+struct kvm_gstage_mapping {
+	gpa_t addr;
+	pte_t pte;
+	u32 level;
+};
+
 int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
 			     phys_addr_t hpa, unsigned long size,
 			     bool writable, bool in_atomic);
@@ -15,7 +21,8 @@ void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa,
 			      unsigned long size);
 int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 			 struct kvm_memory_slot *memslot,
-			 gpa_t gpa, unsigned long hva, bool is_write);
+			 gpa_t gpa, unsigned long hva, bool is_write,
+			 struct kvm_gstage_mapping *out_map);
 int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm);
 void kvm_riscv_gstage_free_pgd(struct kvm *kvm);
 void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu);
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index c9d87e7472fb..934c97c21130 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -135,18 +135,18 @@ static void gstage_remote_tlb_flush(struct kvm *kvm, u32 level, gpa_t addr)
 	kvm_riscv_hfence_gvma_vmid_gpa(kvm, -1UL, 0, addr, BIT(order), order);
 }
 
-static int gstage_set_pte(struct kvm *kvm, u32 level,
-			  struct kvm_mmu_memory_cache *pcache,
-			  gpa_t addr, const pte_t *new_pte)
+static int gstage_set_pte(struct kvm *kvm,
+			  struct kvm_mmu_memory_cache *pcache,
+			  const struct kvm_gstage_mapping *map)
 {
 	u32 current_level = gstage_pgd_levels - 1;
 	pte_t *next_ptep = (pte_t *)kvm->arch.pgd;
-	pte_t *ptep = &next_ptep[gstage_pte_index(addr, current_level)];
+	pte_t *ptep = &next_ptep[gstage_pte_index(map->addr, current_level)];
 
-	if (current_level < level)
+	if (current_level < map->level)
 		return -EINVAL;
 
-	while (current_level != level) {
+	while (current_level != map->level) {
 		if (gstage_pte_leaf(ptep))
 			return -EEXIST;
 
@@ -165,13 +165,13 @@ static
int gstage_set_pte(struct kvm *kvm, u32 level, } =20 current_level--; - ptep =3D &next_ptep[gstage_pte_index(addr, current_level)]; + ptep =3D &next_ptep[gstage_pte_index(map->addr, current_level)]; } =20 - if (pte_val(*ptep) !=3D pte_val(*new_pte)) { - set_pte(ptep, *new_pte); + if (pte_val(*ptep) !=3D pte_val(map->pte)) { + set_pte(ptep, map->pte); if (gstage_pte_leaf(ptep)) - gstage_remote_tlb_flush(kvm, current_level, addr); + gstage_remote_tlb_flush(kvm, current_level, map->addr); } =20 return 0; @@ -181,14 +181,16 @@ static int gstage_map_page(struct kvm *kvm, struct kvm_mmu_memory_cache *pcache, gpa_t gpa, phys_addr_t hpa, unsigned long page_size, - bool page_rdonly, bool page_exec) + bool page_rdonly, bool page_exec, + struct kvm_gstage_mapping *out_map) { - int ret; - u32 level =3D 0; - pte_t new_pte; pgprot_t prot; + int ret; =20 - ret =3D gstage_page_size_to_level(page_size, &level); + out_map->addr =3D gpa; + out_map->level =3D 0; + + ret =3D gstage_page_size_to_level(page_size, &out_map->level); if (ret) return ret; =20 @@ -216,10 +218,10 @@ static int gstage_map_page(struct kvm *kvm, else prot =3D PAGE_WRITE; } - new_pte =3D pfn_pte(PFN_DOWN(hpa), prot); - new_pte =3D pte_mkdirty(new_pte); + out_map->pte =3D pfn_pte(PFN_DOWN(hpa), prot); + out_map->pte =3D pte_mkdirty(out_map->pte); =20 - return gstage_set_pte(kvm, level, pcache, gpa, &new_pte); + return gstage_set_pte(kvm, pcache, out_map); } =20 enum gstage_op { @@ -350,7 +352,6 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa, unsigned long size, bool writable, bool in_atomic) { - pte_t pte; int ret =3D 0; unsigned long pfn; phys_addr_t addr, end; @@ -358,22 +359,25 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t g= pa, .gfp_custom =3D (in_atomic) ? 
GFP_ATOMIC | __GFP_ACCOUNT : 0, .gfp_zero =3D __GFP_ZERO, }; + struct kvm_gstage_mapping map; =20 end =3D (gpa + size + PAGE_SIZE - 1) & PAGE_MASK; pfn =3D __phys_to_pfn(hpa); =20 for (addr =3D gpa; addr < end; addr +=3D PAGE_SIZE) { - pte =3D pfn_pte(pfn, PAGE_KERNEL_IO); + map.addr =3D addr; + map.pte =3D pfn_pte(pfn, PAGE_KERNEL_IO); + map.level =3D 0; =20 if (!writable) - pte =3D pte_wrprotect(pte); + map.pte =3D pte_wrprotect(map.pte); =20 ret =3D kvm_mmu_topup_memory_cache(&pcache, gstage_pgd_levels); if (ret) goto out; =20 spin_lock(&kvm->mmu_lock); - ret =3D gstage_set_pte(kvm, 0, &pcache, addr, &pte); + ret =3D gstage_set_pte(kvm, &pcache, &map); spin_unlock(&kvm->mmu_lock); if (ret) goto out; @@ -591,7 +595,8 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_r= ange *range) =20 int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, struct kvm_memory_slot *memslot, - gpa_t gpa, unsigned long hva, bool is_write) + gpa_t gpa, unsigned long hva, bool is_write, + struct kvm_gstage_mapping *out_map) { int ret; kvm_pfn_t hfn; @@ -606,6 +611,9 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, unsigned long vma_pagesize, mmu_seq; struct page *page; =20 + /* Setup initial state of output mapping */ + memset(out_map, 0, sizeof(*out_map)); + /* We need minimum second+third level pages */ ret =3D kvm_mmu_topup_memory_cache(pcache, gstage_pgd_levels); if (ret) { @@ -675,10 +683,10 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, if (writable) { mark_page_dirty(kvm, gfn); ret =3D gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT, - vma_pagesize, false, true); + vma_pagesize, false, true, out_map); } else { ret =3D gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT, - vma_pagesize, true, true); + vma_pagesize, true, true, out_map); } =20 if (ret) diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c index cc82bbab0e24..4fadf2bcd070 100644 --- a/arch/riscv/kvm/vcpu_exit.c +++ b/arch/riscv/kvm/vcpu_exit.c @@ -14,6 +14,7 @@ static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run, struct kvm_cpu_trap *trap) { + struct kvm_gstage_mapping host_map; struct kvm_memory_slot *memslot; unsigned long hva, fault_addr; bool writable; @@ -42,7 +43,7 @@ static int gstage_page_fault(struct kvm_vcpu *vcpu, struc= t kvm_run *run, } =20 ret =3D kvm_riscv_gstage_map(vcpu, memslot, fault_addr, hva, - (trap->scause =3D=3D EXC_STORE_GUEST_PAGE_FAULT) ? true : false); + (trap->scause =3D=3D EXC_STORE_GUEST_PAGE_FAULT) ? 
true : false, &host_map);
 	if (ret < 0)
 		return ret;
 
-- 
2.43.0
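The returned mapping descriptor tells the caller what was actually
installed, and in particular the level of the leaf, which translates
into an effective page size. A minimal sketch, assuming the 64-bit
g-stage geometry used elsewhere in this series (12-bit page offset plus
9 index bits per level); the helper name is illustrative, not part of
the patch:

/* Illustrative only: size of the leaf mapping described by 'map'.
 * Inverse of gstage_page_size_to_level() for the 64-bit case. */
static unsigned long example_gstage_mapping_size(const struct kvm_gstage_mapping *map)
{
	return 1UL << (12 + 9 * map->level);
}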
From nobody Fri Dec 19 19:19:04 2025
From: Anup Patel <apatel@ventanamicro.com>
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones,
 kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 11/13] RISC-V: KVM: Add vmid field to struct kvm_riscv_hfence
Date: Thu, 5 Jun 2025 11:44:56 +0530
Message-ID: <20250605061458.196003-12-apatel@ventanamicro.com>
In-Reply-To: <20250605061458.196003-1-apatel@ventanamicro.com>
References: <20250605061458.196003-1-apatel@ventanamicro.com>

Currently, struct kvm_riscv_hfence has no vmid field, and the various
hfence processing functions always pick the VMID assigned to the
guest/VM. This prevents hfence operations on an arbitrary VMID, so add
a vmid field to struct kvm_riscv_hfence and use it wherever applicable.
Signed-off-by: Anup Patel --- arch/riscv/include/asm/kvm_tlb.h | 1 + arch/riscv/kvm/tlb.c | 30 ++++++++++++++++-------------- 2 files changed, 17 insertions(+), 14 deletions(-) diff --git a/arch/riscv/include/asm/kvm_tlb.h b/arch/riscv/include/asm/kvm_= tlb.h index cd00c9a46cb1..f67e03edeaec 100644 --- a/arch/riscv/include/asm/kvm_tlb.h +++ b/arch/riscv/include/asm/kvm_tlb.h @@ -19,6 +19,7 @@ enum kvm_riscv_hfence_type { struct kvm_riscv_hfence { enum kvm_riscv_hfence_type type; unsigned long asid; + unsigned long vmid; unsigned long order; gpa_t addr; gpa_t size; diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c index 6fc4361c3d75..349fcfc93f54 100644 --- a/arch/riscv/kvm/tlb.c +++ b/arch/riscv/kvm/tlb.c @@ -237,49 +237,43 @@ static bool vcpu_hfence_enqueue(struct kvm_vcpu *vcpu, =20 void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu) { - unsigned long vmid; struct kvm_riscv_hfence d =3D { 0 }; - struct kvm_vmid *v =3D &vcpu->kvm->arch.vmid; =20 while (vcpu_hfence_dequeue(vcpu, &d)) { switch (d.type) { case KVM_RISCV_HFENCE_UNKNOWN: break; case KVM_RISCV_HFENCE_GVMA_VMID_GPA: - vmid =3D READ_ONCE(v->vmid); if (kvm_riscv_nacl_available()) - nacl_hfence_gvma_vmid(nacl_shmem(), vmid, + nacl_hfence_gvma_vmid(nacl_shmem(), d.vmid, d.addr, d.size, d.order); else - kvm_riscv_local_hfence_gvma_vmid_gpa(vmid, d.addr, + kvm_riscv_local_hfence_gvma_vmid_gpa(d.vmid, d.addr, d.size, d.order); break; case KVM_RISCV_HFENCE_VVMA_ASID_GVA: kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD); - vmid =3D READ_ONCE(v->vmid); if (kvm_riscv_nacl_available()) - nacl_hfence_vvma_asid(nacl_shmem(), vmid, d.asid, + nacl_hfence_vvma_asid(nacl_shmem(), d.vmid, d.asid, d.addr, d.size, d.order); else - kvm_riscv_local_hfence_vvma_asid_gva(vmid, d.asid, d.addr, + kvm_riscv_local_hfence_vvma_asid_gva(d.vmid, d.asid, d.addr, d.size, d.order); break; case KVM_RISCV_HFENCE_VVMA_ASID_ALL: kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD); - vmid =3D READ_ONCE(v->vmid); if (kvm_riscv_nacl_available()) - nacl_hfence_vvma_asid_all(nacl_shmem(), vmid, d.asid); + nacl_hfence_vvma_asid_all(nacl_shmem(), d.vmid, d.asid); else - kvm_riscv_local_hfence_vvma_asid_all(vmid, d.asid); + kvm_riscv_local_hfence_vvma_asid_all(d.vmid, d.asid); break; case KVM_RISCV_HFENCE_VVMA_GVA: kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_RCVD); - vmid =3D READ_ONCE(v->vmid); if (kvm_riscv_nacl_available()) - nacl_hfence_vvma(nacl_shmem(), vmid, + nacl_hfence_vvma(nacl_shmem(), d.vmid, d.addr, d.size, d.order); else - kvm_riscv_local_hfence_vvma_gva(vmid, d.addr, + kvm_riscv_local_hfence_vvma_gva(d.vmid, d.addr, d.size, d.order); break; default: @@ -336,10 +330,12 @@ void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm, gpa_t gpa, gpa_t gpsz, unsigned long order) { + struct kvm_vmid *v =3D &kvm->arch.vmid; struct kvm_riscv_hfence data; =20 data.type =3D KVM_RISCV_HFENCE_GVMA_VMID_GPA; data.asid =3D 0; + data.vmid =3D READ_ONCE(v->vmid); data.addr =3D gpa; data.size =3D gpsz; data.order =3D order; @@ -359,10 +355,12 @@ void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm, unsigned long gva, unsigned long gvsz, unsigned long order, unsigned long asid) { + struct kvm_vmid *v =3D &kvm->arch.vmid; struct kvm_riscv_hfence data; =20 data.type =3D KVM_RISCV_HFENCE_VVMA_ASID_GVA; data.asid =3D asid; + data.vmid =3D READ_ONCE(v->vmid); data.addr =3D gva; data.size =3D gvsz; data.order =3D order; @@ -374,10 +372,12 @@ void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm, unsigned long hbase, unsigned long hmask, 
 				    unsigned long asid)
 {
+	struct kvm_vmid *v = &kvm->arch.vmid;
 	struct kvm_riscv_hfence data;
 
 	data.type = KVM_RISCV_HFENCE_VVMA_ASID_ALL;
 	data.asid = asid;
+	data.vmid = READ_ONCE(v->vmid);
 	data.addr = data.size = data.order = 0;
 	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
 			    KVM_REQ_HFENCE_VVMA_ALL, &data);
@@ -388,10 +388,12 @@ void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
 				unsigned long gva, unsigned long gvsz,
 				unsigned long order)
 {
+	struct kvm_vmid *v = &kvm->arch.vmid;
 	struct kvm_riscv_hfence data;
 
 	data.type = KVM_RISCV_HFENCE_VVMA_GVA;
 	data.asid = 0;
+	data.vmid = READ_ONCE(v->vmid);
 	data.addr = gva;
 	data.size = gvsz;
 	data.order = order;
-- 
2.43.0
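The point of carrying the VMID inside the request is that a future
caller, such as the nested virtualization code this series prepares for,
can target a VMID other than the one currently assigned to the VM. A
hypothetical sketch of such a caller; the function below is illustrative
only (it is not part of the series), and it assumes it lives in
arch/riscv/kvm/tlb.c since make_xfence_request() is file-local there:

/* Hypothetical illustration: queue a GVMA fence against an explicit
 * VMID instead of snapshotting the VM's current one. */
static void example_hfence_gvma_explicit_vmid(struct kvm *kvm,
					      unsigned long hbase,
					      unsigned long hmask,
					      unsigned long vmid,
					      gpa_t gpa, gpa_t gpsz,
					      unsigned long order)
{
	struct kvm_riscv_hfence data;

	data.type = KVM_RISCV_HFENCE_GVMA_VMID_GPA;
	data.asid = 0;
	data.vmid = vmid;	/* explicit VMID, not READ_ONCE(v->vmid) */
	data.addr = gpa;
	data.size = gpsz;
	data.order = order;
	/* Queue it the same way kvm_riscv_hfence_gvma_vmid_gpa() does. */
	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
			    KVM_REQ_HFENCE_GVMA_VMID_ALL, &data);
}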
From nobody Fri Dec 19 19:19:04 2025
From: Anup Patel <apatel@ventanamicro.com>
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones,
 kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [PATCH 12/13] RISC-V: KVM: Factor-out g-stage page table management
Date: Thu, 5 Jun 2025 11:44:57 +0530
Message-ID: <20250605061458.196003-13-apatel@ventanamicro.com>
In-Reply-To: <20250605061458.196003-1-apatel@ventanamicro.com>
References: <20250605061458.196003-1-apatel@ventanamicro.com>

The upcoming nested virtualization support can share g-stage page table
management with the current host g-stage implementation, so factor-out
the g-stage page table management into separate sources and use a
"kvm_riscv_mmu_" prefix for the host g-stage functions.

Signed-off-by: Anup Patel <apatel@ventanamicro.com>
---
 arch/riscv/include/asm/kvm_gstage.h |  72 ++++
 arch/riscv/include/asm/kvm_mmu.h    |  32 +-
 arch/riscv/kvm/Makefile             |   1 +
 arch/riscv/kvm/aia_imsic.c          |  11 +-
 arch/riscv/kvm/gstage.c             | 335 +++++++++++++++++++
 arch/riscv/kvm/main.c               |   2 +-
 arch/riscv/kvm/mmu.c                | 490 ++++++----------------------
 arch/riscv/kvm/vcpu.c               |   4 +-
 arch/riscv/kvm/vcpu_exit.c          |   5 +-
 arch/riscv/kvm/vm.c                 |   6 +-
 10 files changed, 528 insertions(+), 430 deletions(-)
 create mode 100644 arch/riscv/include/asm/kvm_gstage.h
 create mode 100644 arch/riscv/kvm/gstage.c

diff --git a/arch/riscv/include/asm/kvm_gstage.h b/arch/riscv/include/asm/kvm_gstage.h
new file mode 100644
index 000000000000..595e2183173e
--- /dev/null
+++ b/arch/riscv/include/asm/kvm_gstage.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2019 Western Digital Corporation or its affiliates.
+ * Copyright (c) 2025 Ventana Micro Systems Inc. + */ + +#ifndef __RISCV_KVM_GSTAGE_H_ +#define __RISCV_KVM_GSTAGE_H_ + +#include + +struct kvm_gstage { + struct kvm *kvm; + unsigned long flags; +#define KVM_GSTAGE_FLAGS_LOCAL BIT(0) + unsigned long vmid; + pgd_t *pgd; +}; + +struct kvm_gstage_mapping { + gpa_t addr; + pte_t pte; + u32 level; +}; + +#ifdef CONFIG_64BIT +#define kvm_riscv_gstage_index_bits 9 +#else +#define kvm_riscv_gstage_index_bits 10 +#endif + +extern unsigned long kvm_riscv_gstage_mode; +extern unsigned long kvm_riscv_gstage_pgd_levels; + +#define kvm_riscv_gstage_pgd_xbits 2 +#define kvm_riscv_gstage_pgd_size (1UL << (HGATP_PAGE_SHIFT + kvm_riscv_gs= tage_pgd_xbits)) +#define kvm_riscv_gstage_gpa_bits (HGATP_PAGE_SHIFT + \ + (kvm_riscv_gstage_pgd_levels * \ + kvm_riscv_gstage_index_bits) + \ + kvm_riscv_gstage_pgd_xbits) +#define kvm_riscv_gstage_gpa_size ((gpa_t)(1ULL << kvm_riscv_gstage_gpa_bi= ts)) + +bool kvm_riscv_gstage_get_leaf(struct kvm_gstage *gstage, gpa_t addr, + pte_t **ptepp, u32 *ptep_level); + +int kvm_riscv_gstage_set_pte(struct kvm_gstage *gstage, + struct kvm_mmu_memory_cache *pcache, + const struct kvm_gstage_mapping *map); + +int kvm_riscv_gstage_map_page(struct kvm_gstage *gstage, + struct kvm_mmu_memory_cache *pcache, + gpa_t gpa, phys_addr_t hpa, unsigned long page_size, + bool page_rdonly, bool page_exec, + struct kvm_gstage_mapping *out_map); + +enum kvm_riscv_gstage_op { + GSTAGE_OP_NOP =3D 0, /* Nothing */ + GSTAGE_OP_CLEAR, /* Clear/Unmap */ + GSTAGE_OP_WP, /* Write-protect */ +}; + +void kvm_riscv_gstage_op_pte(struct kvm_gstage *gstage, gpa_t addr, + pte_t *ptep, u32 ptep_level, enum kvm_riscv_gstage_op op); + +void kvm_riscv_gstage_unmap_range(struct kvm_gstage *gstage, + gpa_t start, gpa_t size, bool may_block); + +void kvm_riscv_gstage_wp_range(struct kvm_gstage *gstage, gpa_t start, gpa= _t end); + +void kvm_riscv_gstage_mode_detect(void); + +#endif diff --git a/arch/riscv/include/asm/kvm_mmu.h b/arch/riscv/include/asm/kvm_= mmu.h index 91c11e692dc7..5439e76f0a96 100644 --- a/arch/riscv/include/asm/kvm_mmu.h +++ b/arch/riscv/include/asm/kvm_mmu.h @@ -6,28 +6,16 @@ #ifndef __RISCV_KVM_MMU_H_ #define __RISCV_KVM_MMU_H_ =20 -#include +#include =20 -struct kvm_gstage_mapping { - gpa_t addr; - pte_t pte; - u32 level; -}; - -int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa, - phys_addr_t hpa, unsigned long size, - bool writable, bool in_atomic); -void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa, - unsigned long size); -int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu, - struct kvm_memory_slot *memslot, - gpa_t gpa, unsigned long hva, bool is_write, - struct kvm_gstage_mapping *out_map); -int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm); -void kvm_riscv_gstage_free_pgd(struct kvm *kvm); -void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu); -void kvm_riscv_gstage_mode_detect(void); -unsigned long kvm_riscv_gstage_mode(void); -int kvm_riscv_gstage_gpa_bits(void); +int kvm_riscv_mmu_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa, + unsigned long size, bool writable, bool in_atomic); +void kvm_riscv_mmu_iounmap(struct kvm *kvm, gpa_t gpa, unsigned long size); +int kvm_riscv_mmu_map(struct kvm_vcpu *vcpu, struct kvm_memory_slot *memsl= ot, + gpa_t gpa, unsigned long hva, bool is_write, + struct kvm_gstage_mapping *out_map); +int kvm_riscv_mmu_alloc_pgd(struct kvm *kvm); +void kvm_riscv_mmu_free_pgd(struct kvm *kvm); +void kvm_riscv_mmu_update_hgatp(struct kvm_vcpu *vcpu); =20 #endif diff --git a/arch/riscv/kvm/Makefile 
b/arch/riscv/kvm/Makefile index 4e0bba91d284..4b199dc3e58b 100644 --- a/arch/riscv/kvm/Makefile +++ b/arch/riscv/kvm/Makefile @@ -14,6 +14,7 @@ kvm-y +=3D aia.o kvm-y +=3D aia_aplic.o kvm-y +=3D aia_device.o kvm-y +=3D aia_imsic.o +kvm-y +=3D gstage.o kvm-y +=3D main.o kvm-y +=3D mmu.o kvm-y +=3D nacl.o diff --git a/arch/riscv/kvm/aia_imsic.c b/arch/riscv/kvm/aia_imsic.c index 40b469c0a01f..ea1a36836d9c 100644 --- a/arch/riscv/kvm/aia_imsic.c +++ b/arch/riscv/kvm/aia_imsic.c @@ -704,9 +704,8 @@ void kvm_riscv_vcpu_aia_imsic_release(struct kvm_vcpu *= vcpu) */ =20 /* Purge the G-stage mapping */ - kvm_riscv_gstage_iounmap(vcpu->kvm, - vcpu->arch.aia_context.imsic_addr, - IMSIC_MMIO_PAGE_SZ); + kvm_riscv_mmu_iounmap(vcpu->kvm, vcpu->arch.aia_context.imsic_addr, + IMSIC_MMIO_PAGE_SZ); =20 /* TODO: Purge the IOMMU mapping ??? */ =20 @@ -786,9 +785,9 @@ int kvm_riscv_vcpu_aia_imsic_update(struct kvm_vcpu *vc= pu) imsic_vsfile_local_clear(new_vsfile_hgei, imsic->nr_hw_eix); =20 /* Update G-stage mapping for the new IMSIC VS-file */ - ret =3D kvm_riscv_gstage_ioremap(kvm, vcpu->arch.aia_context.imsic_addr, - new_vsfile_pa, IMSIC_MMIO_PAGE_SZ, - true, true); + ret =3D kvm_riscv_mmu_ioremap(kvm, vcpu->arch.aia_context.imsic_addr, + new_vsfile_pa, IMSIC_MMIO_PAGE_SZ, + true, true); if (ret) goto fail_free_vsfile_hgei; =20 diff --git a/arch/riscv/kvm/gstage.c b/arch/riscv/kvm/gstage.c new file mode 100644 index 000000000000..c7d61f14f6be --- /dev/null +++ b/arch/riscv/kvm/gstage.c @@ -0,0 +1,335 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2019 Western Digital Corporation or its affiliates. + * Copyright (c) 2025 Ventana Micro Systems Inc. + */ + +#include +#include +#include +#include +#include +#include + +#ifdef CONFIG_64BIT +unsigned long kvm_riscv_gstage_mode __ro_after_init =3D HGATP_MODE_SV39X4; +unsigned long kvm_riscv_gstage_pgd_levels __ro_after_init =3D 3; +#else +unsigned long kvm_riscv_gstage_mode __ro_after_init =3D HGATP_MODE_SV32X4; +unsigned long kvm_riscv_gstage_pgd_levels __ro_after_init =3D 2; +#endif + +#define gstage_pte_leaf(__ptep) \ + (pte_val(*(__ptep)) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)) + +static inline unsigned long gstage_pte_index(gpa_t addr, u32 level) +{ + unsigned long mask; + unsigned long shift =3D HGATP_PAGE_SHIFT + (kvm_riscv_gstage_index_bits *= level); + + if (level =3D=3D (kvm_riscv_gstage_pgd_levels - 1)) + mask =3D (PTRS_PER_PTE * (1UL << kvm_riscv_gstage_pgd_xbits)) - 1; + else + mask =3D PTRS_PER_PTE - 1; + + return (addr >> shift) & mask; +} + +static inline unsigned long gstage_pte_page_vaddr(pte_t pte) +{ + return (unsigned long)pfn_to_virt(__page_val_to_pfn(pte_val(pte))); +} + +static int gstage_page_size_to_level(unsigned long page_size, u32 *out_lev= el) +{ + u32 i; + unsigned long psz =3D 1UL << 12; + + for (i =3D 0; i < kvm_riscv_gstage_pgd_levels; i++) { + if (page_size =3D=3D (psz << (i * kvm_riscv_gstage_index_bits))) { + *out_level =3D i; + return 0; + } + } + + return -EINVAL; +} + +static int gstage_level_to_page_order(u32 level, unsigned long *out_pgorde= r) +{ + if (kvm_riscv_gstage_pgd_levels < level) + return -EINVAL; + + *out_pgorder =3D 12 + (level * kvm_riscv_gstage_index_bits); + return 0; +} + +static int gstage_level_to_page_size(u32 level, unsigned long *out_pgsize) +{ + int rc; + unsigned long page_order =3D PAGE_SHIFT; + + rc =3D gstage_level_to_page_order(level, &page_order); + if (rc) + return rc; + + *out_pgsize =3D BIT(page_order); + return 0; +} + +bool kvm_riscv_gstage_get_leaf(struct kvm_gstage 
*gstage, gpa_t addr, + pte_t **ptepp, u32 *ptep_level) +{ + pte_t *ptep; + u32 current_level =3D kvm_riscv_gstage_pgd_levels - 1; + + *ptep_level =3D current_level; + ptep =3D (pte_t *)gstage->pgd; + ptep =3D &ptep[gstage_pte_index(addr, current_level)]; + while (ptep && pte_val(ptep_get(ptep))) { + if (gstage_pte_leaf(ptep)) { + *ptep_level =3D current_level; + *ptepp =3D ptep; + return true; + } + + if (current_level) { + current_level--; + *ptep_level =3D current_level; + ptep =3D (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep)); + ptep =3D &ptep[gstage_pte_index(addr, current_level)]; + } else { + ptep =3D NULL; + } + } + + return false; +} + +static void gstage_tlb_flush(struct kvm_gstage *gstage, u32 level, gpa_t a= ddr) +{ + unsigned long order =3D PAGE_SHIFT; + + if (gstage_level_to_page_order(level, &order)) + return; + addr &=3D ~(BIT(order) - 1); + + if (gstage->flags & KVM_GSTAGE_FLAGS_LOCAL) + kvm_riscv_local_hfence_gvma_vmid_gpa(gstage->vmid, addr, BIT(order), ord= er); + else + kvm_riscv_hfence_gvma_vmid_gpa(gstage->kvm, -1UL, 0, addr, BIT(order), o= rder); +} + +int kvm_riscv_gstage_set_pte(struct kvm_gstage *gstage, + struct kvm_mmu_memory_cache *pcache, + const struct kvm_gstage_mapping *map) +{ + u32 current_level =3D kvm_riscv_gstage_pgd_levels - 1; + pte_t *next_ptep =3D (pte_t *)gstage->pgd; + pte_t *ptep =3D &next_ptep[gstage_pte_index(map->addr, current_level)]; + + if (current_level < map->level) + return -EINVAL; + + while (current_level !=3D map->level) { + if (gstage_pte_leaf(ptep)) + return -EEXIST; + + if (!pte_val(ptep_get(ptep))) { + if (!pcache) + return -ENOMEM; + next_ptep =3D kvm_mmu_memory_cache_alloc(pcache); + if (!next_ptep) + return -ENOMEM; + set_pte(ptep, pfn_pte(PFN_DOWN(__pa(next_ptep)), + __pgprot(_PAGE_TABLE))); + } else { + if (gstage_pte_leaf(ptep)) + return -EEXIST; + next_ptep =3D (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep)); + } + + current_level--; + ptep =3D &next_ptep[gstage_pte_index(map->addr, current_level)]; + } + + if (pte_val(*ptep) !=3D pte_val(map->pte)) { + set_pte(ptep, map->pte); + if (gstage_pte_leaf(ptep)) + gstage_tlb_flush(gstage, current_level, map->addr); + } + + return 0; +} + +int kvm_riscv_gstage_map_page(struct kvm_gstage *gstage, + struct kvm_mmu_memory_cache *pcache, + gpa_t gpa, phys_addr_t hpa, unsigned long page_size, + bool page_rdonly, bool page_exec, + struct kvm_gstage_mapping *out_map) +{ + pgprot_t prot; + int ret; + + out_map->addr =3D gpa; + out_map->level =3D 0; + + ret =3D gstage_page_size_to_level(page_size, &out_map->level); + if (ret) + return ret; + + /* + * A RISC-V implementation can choose to either: + * 1) Update 'A' and 'D' PTE bits in hardware + * 2) Generate page fault when 'A' and/or 'D' bits are not set + * PTE so that software can update these bits. + * + * We support both options mentioned above. To achieve this, we + * always set 'A' and 'D' PTE bits at time of creating G-stage + * mapping. To support KVM dirty page logging with both options + * mentioned above, we will write-protect G-stage PTEs to track + * dirty pages. 
+ */ + + if (page_exec) { + if (page_rdonly) + prot =3D PAGE_READ_EXEC; + else + prot =3D PAGE_WRITE_EXEC; + } else { + if (page_rdonly) + prot =3D PAGE_READ; + else + prot =3D PAGE_WRITE; + } + out_map->pte =3D pfn_pte(PFN_DOWN(hpa), prot); + out_map->pte =3D pte_mkdirty(out_map->pte); + + return kvm_riscv_gstage_set_pte(gstage, pcache, out_map); +} + +void kvm_riscv_gstage_op_pte(struct kvm_gstage *gstage, gpa_t addr, + pte_t *ptep, u32 ptep_level, enum kvm_riscv_gstage_op op) +{ + int i, ret; + pte_t *next_ptep; + u32 next_ptep_level; + unsigned long next_page_size, page_size; + + ret =3D gstage_level_to_page_size(ptep_level, &page_size); + if (ret) + return; + + WARN_ON(addr & (page_size - 1)); + + if (!pte_val(ptep_get(ptep))) + return; + + if (ptep_level && !gstage_pte_leaf(ptep)) { + next_ptep =3D (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep)); + next_ptep_level =3D ptep_level - 1; + ret =3D gstage_level_to_page_size(next_ptep_level, &next_page_size); + if (ret) + return; + + if (op =3D=3D GSTAGE_OP_CLEAR) + set_pte(ptep, __pte(0)); + for (i =3D 0; i < PTRS_PER_PTE; i++) + kvm_riscv_gstage_op_pte(gstage, addr + i * next_page_size, + &next_ptep[i], next_ptep_level, op); + if (op =3D=3D GSTAGE_OP_CLEAR) + put_page(virt_to_page(next_ptep)); + } else { + if (op =3D=3D GSTAGE_OP_CLEAR) + set_pte(ptep, __pte(0)); + else if (op =3D=3D GSTAGE_OP_WP) + set_pte(ptep, __pte(pte_val(ptep_get(ptep)) & ~_PAGE_WRITE)); + gstage_tlb_flush(gstage, ptep_level, addr); + } +} + +void kvm_riscv_gstage_unmap_range(struct kvm_gstage *gstage, + gpa_t start, gpa_t size, bool may_block) +{ + int ret; + pte_t *ptep; + u32 ptep_level; + bool found_leaf; + unsigned long page_size; + gpa_t addr =3D start, end =3D start + size; + + while (addr < end) { + found_leaf =3D kvm_riscv_gstage_get_leaf(gstage, addr, &ptep, &ptep_leve= l); + ret =3D gstage_level_to_page_size(ptep_level, &page_size); + if (ret) + break; + + if (!found_leaf) + goto next; + + if (!(addr & (page_size - 1)) && ((end - addr) >=3D page_size)) + kvm_riscv_gstage_op_pte(gstage, addr, ptep, + ptep_level, GSTAGE_OP_CLEAR); + +next: + addr +=3D page_size; + + /* + * If the range is too large, release the kvm->mmu_lock + * to prevent starvation and lockup detector warnings. 
+ */ + if (!(gstage->flags & KVM_GSTAGE_FLAGS_LOCAL) && may_block && addr < end) + cond_resched_lock(&gstage->kvm->mmu_lock); + } +} + +void kvm_riscv_gstage_wp_range(struct kvm_gstage *gstage, gpa_t start, gpa= _t end) +{ + int ret; + pte_t *ptep; + u32 ptep_level; + bool found_leaf; + gpa_t addr =3D start; + unsigned long page_size; + + while (addr < end) { + found_leaf =3D kvm_riscv_gstage_get_leaf(gstage, addr, &ptep, &ptep_leve= l); + ret =3D gstage_level_to_page_size(ptep_level, &page_size); + if (ret) + break; + + if (!found_leaf) + goto next; + + if (!(addr & (page_size - 1)) && ((end - addr) >=3D page_size)) + kvm_riscv_gstage_op_pte(gstage, addr, ptep, + ptep_level, GSTAGE_OP_WP); + +next: + addr +=3D page_size; + } +} + +void __init kvm_riscv_gstage_mode_detect(void) +{ +#ifdef CONFIG_64BIT + /* Try Sv57x4 G-stage mode */ + csr_write(CSR_HGATP, HGATP_MODE_SV57X4 << HGATP_MODE_SHIFT); + if ((csr_read(CSR_HGATP) >> HGATP_MODE_SHIFT) =3D=3D HGATP_MODE_SV57X4) { + kvm_riscv_gstage_mode =3D HGATP_MODE_SV57X4; + kvm_riscv_gstage_pgd_levels =3D 5; + goto skip_sv48x4_test; + } + + /* Try Sv48x4 G-stage mode */ + csr_write(CSR_HGATP, HGATP_MODE_SV48X4 << HGATP_MODE_SHIFT); + if ((csr_read(CSR_HGATP) >> HGATP_MODE_SHIFT) =3D=3D HGATP_MODE_SV48X4) { + kvm_riscv_gstage_mode =3D HGATP_MODE_SV48X4; + kvm_riscv_gstage_pgd_levels =3D 4; + } +skip_sv48x4_test: + + csr_write(CSR_HGATP, 0); + kvm_riscv_local_hfence_gvma_all(); +#endif +} diff --git a/arch/riscv/kvm/main.c b/arch/riscv/kvm/main.c index b861a5dd7bd9..67c876de74ef 100644 --- a/arch/riscv/kvm/main.c +++ b/arch/riscv/kvm/main.c @@ -135,7 +135,7 @@ static int __init riscv_kvm_init(void) (rc) ? slist : "no features"); } =20 - switch (kvm_riscv_gstage_mode()) { + switch (kvm_riscv_gstage_mode) { case HGATP_MODE_SV32X4: str =3D "Sv32x4"; break; diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c index 934c97c21130..9f7dcd8cd741 100644 --- a/arch/riscv/kvm/mmu.c +++ b/arch/riscv/kvm/mmu.c @@ -6,9 +6,7 @@ * Anup Patel */ =20 -#include #include -#include #include #include #include @@ -17,340 +15,28 @@ #include #include #include -#include -#include - -#ifdef CONFIG_64BIT -static unsigned long gstage_mode __ro_after_init =3D (HGATP_MODE_SV39X4 <<= HGATP_MODE_SHIFT); -static unsigned long gstage_pgd_levels __ro_after_init =3D 3; -#define gstage_index_bits 9 -#else -static unsigned long gstage_mode __ro_after_init =3D (HGATP_MODE_SV32X4 <<= HGATP_MODE_SHIFT); -static unsigned long gstage_pgd_levels __ro_after_init =3D 2; -#define gstage_index_bits 10 -#endif - -#define gstage_pgd_xbits 2 -#define gstage_pgd_size (1UL << (HGATP_PAGE_SHIFT + gstage_pgd_xbits)) -#define gstage_gpa_bits (HGATP_PAGE_SHIFT + \ - (gstage_pgd_levels * gstage_index_bits) + \ - gstage_pgd_xbits) -#define gstage_gpa_size ((gpa_t)(1ULL << gstage_gpa_bits)) - -#define gstage_pte_leaf(__ptep) \ - (pte_val(*(__ptep)) & (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)) - -static inline unsigned long gstage_pte_index(gpa_t addr, u32 level) -{ - unsigned long mask; - unsigned long shift =3D HGATP_PAGE_SHIFT + (gstage_index_bits * level); - - if (level =3D=3D (gstage_pgd_levels - 1)) - mask =3D (PTRS_PER_PTE * (1UL << gstage_pgd_xbits)) - 1; - else - mask =3D PTRS_PER_PTE - 1; - - return (addr >> shift) & mask; -} =20 -static inline unsigned long gstage_pte_page_vaddr(pte_t pte) -{ - return (unsigned long)pfn_to_virt(__page_val_to_pfn(pte_val(pte))); -} - -static int gstage_page_size_to_level(unsigned long page_size, u32 *out_lev= el) -{ - u32 i; - unsigned long psz =3D 1UL << 12; - - 
-	for (i = 0; i < gstage_pgd_levels; i++) {
-		if (page_size == (psz << (i * gstage_index_bits))) {
-			*out_level = i;
-			return 0;
-		}
-	}
-
-	return -EINVAL;
-}
-
-static int gstage_level_to_page_order(u32 level, unsigned long *out_pgorder)
-{
-	if (gstage_pgd_levels < level)
-		return -EINVAL;
-
-	*out_pgorder = 12 + (level * gstage_index_bits);
-	return 0;
-}
-
-static int gstage_level_to_page_size(u32 level, unsigned long *out_pgsize)
-{
-	int rc;
-	unsigned long page_order = PAGE_SHIFT;
-
-	rc = gstage_level_to_page_order(level, &page_order);
-	if (rc)
-		return rc;
-
-	*out_pgsize = BIT(page_order);
-	return 0;
-}
-
-static bool gstage_get_leaf_entry(struct kvm *kvm, gpa_t addr,
-				  pte_t **ptepp, u32 *ptep_level)
-{
-	pte_t *ptep;
-	u32 current_level = gstage_pgd_levels - 1;
-
-	*ptep_level = current_level;
-	ptep = (pte_t *)kvm->arch.pgd;
-	ptep = &ptep[gstage_pte_index(addr, current_level)];
-	while (ptep && pte_val(ptep_get(ptep))) {
-		if (gstage_pte_leaf(ptep)) {
-			*ptep_level = current_level;
-			*ptepp = ptep;
-			return true;
-		}
-
-		if (current_level) {
-			current_level--;
-			*ptep_level = current_level;
-			ptep = (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep));
-			ptep = &ptep[gstage_pte_index(addr, current_level)];
-		} else {
-			ptep = NULL;
-		}
-	}
-
-	return false;
-}
-
-static void gstage_remote_tlb_flush(struct kvm *kvm, u32 level, gpa_t addr)
-{
-	unsigned long order = PAGE_SHIFT;
-
-	if (gstage_level_to_page_order(level, &order))
-		return;
-	addr &= ~(BIT(order) - 1);
-
-	kvm_riscv_hfence_gvma_vmid_gpa(kvm, -1UL, 0, addr, BIT(order), order);
-}
-
-static int gstage_set_pte(struct kvm *kvm,
-			  struct kvm_mmu_memory_cache *pcache,
-			  const struct kvm_gstage_mapping *map)
-{
-	u32 current_level = gstage_pgd_levels - 1;
-	pte_t *next_ptep = (pte_t *)kvm->arch.pgd;
-	pte_t *ptep = &next_ptep[gstage_pte_index(map->addr, current_level)];
-
-	if (current_level < map->level)
-		return -EINVAL;
-
-	while (current_level != map->level) {
-		if (gstage_pte_leaf(ptep))
-			return -EEXIST;
-
-		if (!pte_val(ptep_get(ptep))) {
-			if (!pcache)
-				return -ENOMEM;
-			next_ptep = kvm_mmu_memory_cache_alloc(pcache);
-			if (!next_ptep)
-				return -ENOMEM;
-			set_pte(ptep, pfn_pte(PFN_DOWN(__pa(next_ptep)),
-					      __pgprot(_PAGE_TABLE)));
-		} else {
-			if (gstage_pte_leaf(ptep))
-				return -EEXIST;
-			next_ptep = (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep));
-		}
-
-		current_level--;
-		ptep = &next_ptep[gstage_pte_index(map->addr, current_level)];
-	}
-
-	if (pte_val(*ptep) != pte_val(map->pte)) {
-		set_pte(ptep, map->pte);
-		if (gstage_pte_leaf(ptep))
-			gstage_remote_tlb_flush(kvm, current_level, map->addr);
-	}
-
-	return 0;
-}
-
-static int gstage_map_page(struct kvm *kvm,
-			   struct kvm_mmu_memory_cache *pcache,
-			   gpa_t gpa, phys_addr_t hpa,
-			   unsigned long page_size,
-			   bool page_rdonly, bool page_exec,
-			   struct kvm_gstage_mapping *out_map)
-{
-	pgprot_t prot;
-	int ret;
-
-	out_map->addr = gpa;
-	out_map->level = 0;
-
-	ret = gstage_page_size_to_level(page_size, &out_map->level);
-	if (ret)
-		return ret;
-
-	/*
-	 * A RISC-V implementation can choose to either:
-	 * 1) Update 'A' and 'D' PTE bits in hardware
-	 * 2) Generate page fault when 'A' and/or 'D' bits are not set
-	 *    PTE so that software can update these bits.
-	 *
-	 * We support both options mentioned above. To achieve this, we
-	 * always set 'A' and 'D' PTE bits at time of creating G-stage
-	 * mapping. To support KVM dirty page logging with both options
-	 * mentioned above, we will write-protect G-stage PTEs to track
-	 * dirty pages.
-	 */
-
-	if (page_exec) {
-		if (page_rdonly)
-			prot = PAGE_READ_EXEC;
-		else
-			prot = PAGE_WRITE_EXEC;
-	} else {
-		if (page_rdonly)
-			prot = PAGE_READ;
-		else
-			prot = PAGE_WRITE;
-	}
-	out_map->pte = pfn_pte(PFN_DOWN(hpa), prot);
-	out_map->pte = pte_mkdirty(out_map->pte);
-
-	return gstage_set_pte(kvm, pcache, out_map);
-}
-
-enum gstage_op {
-	GSTAGE_OP_NOP = 0,	/* Nothing */
-	GSTAGE_OP_CLEAR,	/* Clear/Unmap */
-	GSTAGE_OP_WP,		/* Write-protect */
-};
-
-static void gstage_op_pte(struct kvm *kvm, gpa_t addr,
-			  pte_t *ptep, u32 ptep_level, enum gstage_op op)
-{
-	int i, ret;
-	pte_t *next_ptep;
-	u32 next_ptep_level;
-	unsigned long next_page_size, page_size;
-
-	ret = gstage_level_to_page_size(ptep_level, &page_size);
-	if (ret)
-		return;
-
-	BUG_ON(addr & (page_size - 1));
-
-	if (!pte_val(ptep_get(ptep)))
-		return;
-
-	if (ptep_level && !gstage_pte_leaf(ptep)) {
-		next_ptep = (pte_t *)gstage_pte_page_vaddr(ptep_get(ptep));
-		next_ptep_level = ptep_level - 1;
-		ret = gstage_level_to_page_size(next_ptep_level,
-						&next_page_size);
-		if (ret)
-			return;
-
-		if (op == GSTAGE_OP_CLEAR)
-			set_pte(ptep, __pte(0));
-		for (i = 0; i < PTRS_PER_PTE; i++)
-			gstage_op_pte(kvm, addr + i * next_page_size,
-				      &next_ptep[i], next_ptep_level, op);
-		if (op == GSTAGE_OP_CLEAR)
-			put_page(virt_to_page(next_ptep));
-	} else {
-		if (op == GSTAGE_OP_CLEAR)
-			set_pte(ptep, __pte(0));
-		else if (op == GSTAGE_OP_WP)
-			set_pte(ptep, __pte(pte_val(ptep_get(ptep)) & ~_PAGE_WRITE));
-		gstage_remote_tlb_flush(kvm, ptep_level, addr);
-	}
-}
-
-static void gstage_unmap_range(struct kvm *kvm, gpa_t start,
-			       gpa_t size, bool may_block)
-{
-	int ret;
-	pte_t *ptep;
-	u32 ptep_level;
-	bool found_leaf;
-	unsigned long page_size;
-	gpa_t addr = start, end = start + size;
-
-	while (addr < end) {
-		found_leaf = gstage_get_leaf_entry(kvm, addr,
-						   &ptep, &ptep_level);
-		ret = gstage_level_to_page_size(ptep_level, &page_size);
-		if (ret)
-			break;
-
-		if (!found_leaf)
-			goto next;
-
-		if (!(addr & (page_size - 1)) && ((end - addr) >= page_size))
-			gstage_op_pte(kvm, addr, ptep,
-				      ptep_level, GSTAGE_OP_CLEAR);
-
-next:
-		addr += page_size;
-
-		/*
-		 * If the range is too large, release the kvm->mmu_lock
-		 * to prevent starvation and lockup detector warnings.
-		 */
-		if (may_block && addr < end)
-			cond_resched_lock(&kvm->mmu_lock);
-	}
-}
-
-static void gstage_wp_range(struct kvm *kvm, gpa_t start, gpa_t end)
-{
-	int ret;
-	pte_t *ptep;
-	u32 ptep_level;
-	bool found_leaf;
-	gpa_t addr = start;
-	unsigned long page_size;
-
-	while (addr < end) {
-		found_leaf = gstage_get_leaf_entry(kvm, addr,
-						   &ptep, &ptep_level);
-		ret = gstage_level_to_page_size(ptep_level, &page_size);
-		if (ret)
-			break;
-
-		if (!found_leaf)
-			goto next;
-
-		if (!(addr & (page_size - 1)) && ((end - addr) >= page_size))
-			gstage_op_pte(kvm, addr, ptep,
-				      ptep_level, GSTAGE_OP_WP);
-
-next:
-		addr += page_size;
-	}
-}
-
-static void gstage_wp_memory_region(struct kvm *kvm, int slot)
+static void mmu_wp_memory_region(struct kvm *kvm, int slot)
 {
 	struct kvm_memslots *slots = kvm_memslots(kvm);
 	struct kvm_memory_slot *memslot = id_to_memslot(slots, slot);
 	phys_addr_t start = memslot->base_gfn << PAGE_SHIFT;
 	phys_addr_t end = (memslot->base_gfn + memslot->npages) << PAGE_SHIFT;
+	struct kvm_gstage gstage;
+
+	gstage.kvm = kvm;
+	gstage.flags = 0;
+	gstage.vmid = READ_ONCE(kvm->arch.vmid.vmid);
+	gstage.pgd = kvm->arch.pgd;
 
 	spin_lock(&kvm->mmu_lock);
-	gstage_wp_range(kvm, start, end);
+	kvm_riscv_gstage_wp_range(&gstage, start, end);
 	spin_unlock(&kvm->mmu_lock);
 	kvm_flush_remote_tlbs_memslot(kvm, memslot);
 }
 
-int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
-			     phys_addr_t hpa, unsigned long size,
-			     bool writable, bool in_atomic)
+int kvm_riscv_mmu_ioremap(struct kvm *kvm, gpa_t gpa, phys_addr_t hpa,
+			  unsigned long size, bool writable, bool in_atomic)
 {
 	int ret = 0;
 	unsigned long pfn;
@@ -360,6 +46,12 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
 		.gfp_zero = __GFP_ZERO,
 	};
 	struct kvm_gstage_mapping map;
+	struct kvm_gstage gstage;
+
+	gstage.kvm = kvm;
+	gstage.flags = 0;
+	gstage.vmid = READ_ONCE(kvm->arch.vmid.vmid);
+	gstage.pgd = kvm->arch.pgd;
 
 	end = (gpa + size + PAGE_SIZE - 1) & PAGE_MASK;
 	pfn = __phys_to_pfn(hpa);
@@ -372,12 +64,12 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
 		if (!writable)
 			map.pte = pte_wrprotect(map.pte);
 
-		ret = kvm_mmu_topup_memory_cache(&pcache, gstage_pgd_levels);
+		ret = kvm_mmu_topup_memory_cache(&pcache, kvm_riscv_gstage_pgd_levels);
 		if (ret)
 			goto out;
 
 		spin_lock(&kvm->mmu_lock);
-		ret = gstage_set_pte(kvm, &pcache, &map);
+		ret = kvm_riscv_gstage_set_pte(&gstage, &pcache, &map);
 		spin_unlock(&kvm->mmu_lock);
 		if (ret)
 			goto out;
@@ -390,10 +82,17 @@ int kvm_riscv_gstage_ioremap(struct kvm *kvm, gpa_t gpa,
 	return ret;
 }
 
-void kvm_riscv_gstage_iounmap(struct kvm *kvm, gpa_t gpa, unsigned long size)
+void kvm_riscv_mmu_iounmap(struct kvm *kvm, gpa_t gpa, unsigned long size)
 {
+	struct kvm_gstage gstage;
+
+	gstage.kvm = kvm;
+	gstage.flags = 0;
+	gstage.vmid = READ_ONCE(kvm->arch.vmid.vmid);
+	gstage.pgd = kvm->arch.pgd;
+
 	spin_lock(&kvm->mmu_lock);
-	gstage_unmap_range(kvm, gpa, size, false);
+	kvm_riscv_gstage_unmap_range(&gstage, gpa, size, false);
 	spin_unlock(&kvm->mmu_lock);
 }
 
@@ -405,8 +104,14 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 	phys_addr_t base_gfn = slot->base_gfn + gfn_offset;
 	phys_addr_t start = (base_gfn + __ffs(mask)) << PAGE_SHIFT;
 	phys_addr_t end = (base_gfn + __fls(mask) + 1) << PAGE_SHIFT;
+	struct kvm_gstage gstage;
+
+	gstage.kvm = kvm;
+	gstage.flags = 0;
+	gstage.vmid = READ_ONCE(kvm->arch.vmid.vmid);
+	gstage.pgd = kvm->arch.pgd;
 
-	gstage_wp_range(kvm, start, end);
+	kvm_riscv_gstage_wp_range(&gstage, start, end);
 }
 
 void kvm_arch_sync_dirty_log(struct kvm *kvm, struct kvm_memory_slot *memslot)
@@ -423,7 +128,7 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
 
 void kvm_arch_flush_shadow_all(struct kvm *kvm)
 {
-	kvm_riscv_gstage_free_pgd(kvm);
+	kvm_riscv_mmu_free_pgd(kvm);
 }
 
 void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
@@ -431,9 +136,15 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 {
 	gpa_t gpa = slot->base_gfn << PAGE_SHIFT;
 	phys_addr_t size = slot->npages << PAGE_SHIFT;
+	struct kvm_gstage gstage;
+
+	gstage.kvm = kvm;
+	gstage.flags = 0;
+	gstage.vmid = READ_ONCE(kvm->arch.vmid.vmid);
+	gstage.pgd = kvm->arch.pgd;
 
 	spin_lock(&kvm->mmu_lock);
-	gstage_unmap_range(kvm, gpa, size, false);
+	kvm_riscv_gstage_unmap_range(&gstage, gpa, size, false);
 	spin_unlock(&kvm->mmu_lock);
 }
 
@@ -448,7 +159,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 	 * the memory slot is write protected.
 	 */
 	if (change != KVM_MR_DELETE && new->flags & KVM_MEM_LOG_DIRTY_PAGES)
-		gstage_wp_memory_region(kvm, new->id);
+		mmu_wp_memory_region(kvm, new->id);
 }
 
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
@@ -470,7 +181,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 	 * space addressable by the KVM guest GPA space.
 	 */
 	if ((new->base_gfn + new->npages) >=
-	    (gstage_gpa_size >> PAGE_SHIFT))
+	    (kvm_riscv_gstage_gpa_size >> PAGE_SHIFT))
 		return -EFAULT;
 
 	hva = new->userspace_addr;
@@ -526,9 +237,8 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 				goto out;
 			}
 
-			ret = kvm_riscv_gstage_ioremap(kvm, gpa, pa,
-						       vm_end - vm_start,
-						       writable, false);
+			ret = kvm_riscv_mmu_ioremap(kvm, gpa, pa, vm_end - vm_start,
+						    writable, false);
 			if (ret)
 				break;
 		}
@@ -539,7 +249,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 		goto out;
 
 	if (ret)
-		kvm_riscv_gstage_iounmap(kvm, base_gpa, size);
+		kvm_riscv_mmu_iounmap(kvm, base_gpa, size);
 
 out:
 	mmap_read_unlock(current->mm);
@@ -548,12 +258,18 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 
 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 {
+	struct kvm_gstage gstage;
+
 	if (!kvm->arch.pgd)
 		return false;
 
-	gstage_unmap_range(kvm, range->start << PAGE_SHIFT,
-			   (range->end - range->start) << PAGE_SHIFT,
-			   range->may_block);
+	gstage.kvm = kvm;
+	gstage.flags = 0;
+	gstage.vmid = READ_ONCE(kvm->arch.vmid.vmid);
+	gstage.pgd = kvm->arch.pgd;
+	kvm_riscv_gstage_unmap_range(&gstage, range->start << PAGE_SHIFT,
+				     (range->end - range->start) << PAGE_SHIFT,
+				     range->may_block);
 	return false;
 }
 
@@ -562,14 +278,19 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	pte_t *ptep;
 	u32 ptep_level = 0;
 	u64 size = (range->end - range->start) << PAGE_SHIFT;
+	struct kvm_gstage gstage;
 
 	if (!kvm->arch.pgd)
 		return false;
 
 	WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PUD_SIZE);
 
-	if (!gstage_get_leaf_entry(kvm, range->start << PAGE_SHIFT,
-				   &ptep, &ptep_level))
+	gstage.kvm = kvm;
+	gstage.flags = 0;
+	gstage.vmid = READ_ONCE(kvm->arch.vmid.vmid);
+	gstage.pgd = kvm->arch.pgd;
+	if (!kvm_riscv_gstage_get_leaf(&gstage, range->start << PAGE_SHIFT,
+				       &ptep, &ptep_level))
 		return false;
 
 	return ptep_test_and_clear_young(NULL, 0, ptep);
@@ -580,23 +301,27 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	pte_t *ptep;
 	u32 ptep_level = 0;
 	u64 size = (range->end - range->start) << PAGE_SHIFT;
+	struct kvm_gstage gstage;
 
 	if (!kvm->arch.pgd)
 		return false;
 
 	WARN_ON(size != PAGE_SIZE && size != PMD_SIZE && size != PUD_SIZE);
 
-	if (!gstage_get_leaf_entry(kvm, range->start << PAGE_SHIFT,
-				   &ptep, &ptep_level))
+	gstage.kvm = kvm;
+	gstage.flags = 0;
+	gstage.vmid = READ_ONCE(kvm->arch.vmid.vmid);
+	gstage.pgd = kvm->arch.pgd;
+	if (!kvm_riscv_gstage_get_leaf(&gstage, range->start << PAGE_SHIFT,
+				       &ptep, &ptep_level))
 		return false;
 
 	return pte_young(ptep_get(ptep));
 }
 
-int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
-			 struct kvm_memory_slot *memslot,
-			 gpa_t gpa, unsigned long hva, bool is_write,
-			 struct kvm_gstage_mapping *out_map)
+int kvm_riscv_mmu_map(struct kvm_vcpu *vcpu, struct kvm_memory_slot *memslot,
+		      gpa_t gpa, unsigned long hva, bool is_write,
+		      struct kvm_gstage_mapping *out_map)
 {
 	int ret;
 	kvm_pfn_t hfn;
@@ -609,13 +334,19 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 	bool logging = (memslot->dirty_bitmap &&
 			!(memslot->flags & KVM_MEM_READONLY)) ? true : false;
 	unsigned long vma_pagesize, mmu_seq;
+	struct kvm_gstage gstage;
 	struct page *page;
 
+	gstage.kvm = kvm;
+	gstage.flags = 0;
+	gstage.vmid = READ_ONCE(kvm->arch.vmid.vmid);
+	gstage.pgd = kvm->arch.pgd;
+
 	/* Setup initial state of output mapping */
 	memset(out_map, 0, sizeof(*out_map));
 
 	/* We need minimum second+third level pages */
-	ret = kvm_mmu_topup_memory_cache(pcache, gstage_pgd_levels);
+	ret = kvm_mmu_topup_memory_cache(pcache, kvm_riscv_gstage_pgd_levels);
 	if (ret) {
 		kvm_err("Failed to topup G-stage cache\n");
 		return ret;
@@ -682,11 +413,11 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 
 	if (writable) {
 		mark_page_dirty(kvm, gfn);
-		ret = gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT,
-				      vma_pagesize, false, true, out_map);
+		ret = kvm_riscv_gstage_map_page(&gstage, pcache, gpa, hfn << PAGE_SHIFT,
+						vma_pagesize, false, true, out_map);
 	} else {
-		ret = gstage_map_page(kvm, pcache, gpa, hfn << PAGE_SHIFT,
-				      vma_pagesize, true, true, out_map);
+		ret = kvm_riscv_gstage_map_page(&gstage, pcache, gpa, hfn << PAGE_SHIFT,
+						vma_pagesize, true, true, out_map);
 	}
 
 	if (ret)
@@ -698,7 +429,7 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
 	return ret;
 }
 
-int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm)
+int kvm_riscv_mmu_alloc_pgd(struct kvm *kvm)
 {
 	struct page *pgd_page;
 
@@ -708,7 +439,7 @@ int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm)
 	}
 
 	pgd_page = alloc_pages(GFP_KERNEL | __GFP_ZERO,
-			       get_order(gstage_pgd_size));
+			       get_order(kvm_riscv_gstage_pgd_size));
 	if (!pgd_page)
 		return -ENOMEM;
 	kvm->arch.pgd = page_to_virt(pgd_page);
@@ -717,13 +448,18 @@ int kvm_riscv_gstage_alloc_pgd(struct kvm *kvm)
 	return 0;
 }
 
-void kvm_riscv_gstage_free_pgd(struct kvm *kvm)
+void kvm_riscv_mmu_free_pgd(struct kvm *kvm)
 {
+	struct kvm_gstage gstage;
 	void *pgd = NULL;
 
 	spin_lock(&kvm->mmu_lock);
 	if (kvm->arch.pgd) {
-		gstage_unmap_range(kvm, 0UL, gstage_gpa_size, false);
+		gstage.kvm = kvm;
+		gstage.flags = 0;
+		gstage.vmid = READ_ONCE(kvm->arch.vmid.vmid);
+		gstage.pgd = kvm->arch.pgd;
+		kvm_riscv_gstage_unmap_range(&gstage, 0UL, kvm_riscv_gstage_gpa_size, false);
 		pgd = READ_ONCE(kvm->arch.pgd);
 		kvm->arch.pgd = NULL;
 		kvm->arch.pgd_phys = 0;
@@ -731,12 +467,12 @@ void kvm_riscv_gstage_free_pgd(struct kvm *kvm)
 	spin_unlock(&kvm->mmu_lock);
 
 	if (pgd)
-		free_pages((unsigned long)pgd, get_order(gstage_pgd_size));
+		free_pages((unsigned long)pgd, get_order(kvm_riscv_gstage_pgd_size));
 }
 
-void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu)
+void kvm_riscv_mmu_update_hgatp(struct kvm_vcpu *vcpu)
 {
-	unsigned long hgatp = gstage_mode;
+	unsigned long hgatp = kvm_riscv_gstage_mode << HGATP_MODE_SHIFT;
 	struct kvm_arch *k = &vcpu->kvm->arch;
 
 	hgatp |= (READ_ONCE(k->vmid.vmid) << HGATP_VMID_SHIFT) & HGATP_VMID;
@@ -747,37 +483,3 @@ void kvm_riscv_gstage_update_hgatp(struct kvm_vcpu *vcpu)
 	if (!kvm_riscv_gstage_vmid_bits())
 		kvm_riscv_local_hfence_gvma_all();
 }
-
-void __init kvm_riscv_gstage_mode_detect(void)
-{
-#ifdef CONFIG_64BIT
-	/* Try Sv57x4 G-stage mode */
-	csr_write(CSR_HGATP, HGATP_MODE_SV57X4 << HGATP_MODE_SHIFT);
-	if ((csr_read(CSR_HGATP) >> HGATP_MODE_SHIFT) == HGATP_MODE_SV57X4) {
-		gstage_mode = (HGATP_MODE_SV57X4 << HGATP_MODE_SHIFT);
-		gstage_pgd_levels = 5;
-		goto skip_sv48x4_test;
-	}
-
-	/* Try Sv48x4 G-stage mode */
-	csr_write(CSR_HGATP, HGATP_MODE_SV48X4 << HGATP_MODE_SHIFT);
-	if ((csr_read(CSR_HGATP) >> HGATP_MODE_SHIFT) == HGATP_MODE_SV48X4) {
-		gstage_mode = (HGATP_MODE_SV48X4 << HGATP_MODE_SHIFT);
-		gstage_pgd_levels = 4;
-	}
-skip_sv48x4_test:
-
-	csr_write(CSR_HGATP, 0);
-	kvm_riscv_local_hfence_gvma_all();
-#endif
-}
-
-unsigned long __init kvm_riscv_gstage_mode(void)
-{
-	return gstage_mode >> HGATP_MODE_SHIFT;
-}
-
-int kvm_riscv_gstage_gpa_bits(void)
-{
-	return gstage_gpa_bits;
-}
diff --git a/arch/riscv/kvm/vcpu.c b/arch/riscv/kvm/vcpu.c
index bfe4d1369b24..834405b03862 100644
--- a/arch/riscv/kvm/vcpu.c
+++ b/arch/riscv/kvm/vcpu.c
@@ -631,7 +631,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 		}
 	}
 
-	kvm_riscv_gstage_update_hgatp(vcpu);
+	kvm_riscv_mmu_update_hgatp(vcpu);
 
 	kvm_riscv_vcpu_timer_restore(vcpu);
 
@@ -716,7 +716,7 @@ static void kvm_riscv_check_vcpu_requests(struct kvm_vcpu *vcpu)
 		kvm_riscv_reset_vcpu(vcpu, true);
 
 	if (kvm_check_request(KVM_REQ_UPDATE_HGATP, vcpu))
-		kvm_riscv_gstage_update_hgatp(vcpu);
+		kvm_riscv_mmu_update_hgatp(vcpu);
 
 	if (kvm_check_request(KVM_REQ_FENCE_I, vcpu))
 		kvm_riscv_fence_i_process(vcpu);
diff --git a/arch/riscv/kvm/vcpu_exit.c b/arch/riscv/kvm/vcpu_exit.c
index 4fadf2bcd070..02da4695e0c8 100644
--- a/arch/riscv/kvm/vcpu_exit.c
+++ b/arch/riscv/kvm/vcpu_exit.c
@@ -42,8 +42,9 @@ static int gstage_page_fault(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		};
 	}
 
-	ret = kvm_riscv_gstage_map(vcpu, memslot, fault_addr, hva,
-				   (trap->scause == EXC_STORE_GUEST_PAGE_FAULT) ? true : false, &host_map);
+	ret = kvm_riscv_mmu_map(vcpu, memslot, fault_addr, hva,
+				(trap->scause == EXC_STORE_GUEST_PAGE_FAULT) ? true : false,
+				&host_map);
 	if (ret < 0)
 		return ret;
 
diff --git a/arch/riscv/kvm/vm.c b/arch/riscv/kvm/vm.c
index 8601cf29e5f8..66d91ae6e9b2 100644
--- a/arch/riscv/kvm/vm.c
+++ b/arch/riscv/kvm/vm.c
@@ -32,13 +32,13 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 {
 	int r;
 
-	r = kvm_riscv_gstage_alloc_pgd(kvm);
+	r = kvm_riscv_mmu_alloc_pgd(kvm);
 	if (r)
 		return r;
 
 	r = kvm_riscv_gstage_vmid_init(kvm);
 	if (r) {
-		kvm_riscv_gstage_free_pgd(kvm);
+		kvm_riscv_mmu_free_pgd(kvm);
 		return r;
 	}
 
@@ -200,7 +200,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 		r = KVM_USER_MEM_SLOTS;
 		break;
	case KVM_CAP_VM_GPA_BITS:
-		r = kvm_riscv_gstage_gpa_bits();
+		r = kvm_riscv_gstage_gpa_bits;
 		break;
 	default:
 		r = 0;
-- 
2.43.0
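A side note on the mmu.c hunks above: every converted caller opens with the
same four assignments describing the host G-stage before invoking the shared
gstage.c helpers. A minimal sketch of that repeated idiom as a helper; the
function name example_init_host_gstage() is hypothetical and not something
this series adds, it only makes the pattern explicit:

static inline void example_init_host_gstage(struct kvm *kvm,
					    struct kvm_gstage *gstage)
{
	/* Describe the host G-stage for the shared gstage.c helpers. */
	gstage->kvm = kvm;
	gstage->flags = 0;	/* not KVM_GSTAGE_FLAGS_LOCAL: use remote HFENCEs */
	gstage->vmid = READ_ONCE(kvm->arch.vmid.vmid);
	gstage->pgd = kvm->arch.pgd;
}

The point of routing everything through a struct kvm_gstage descriptor is
that the helpers no longer assume kvm->arch.pgd; a later user can hand them
a different page table (for example a nested one) with different flags and
VMID.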
From nobody Fri Dec 19 19:19:04 2025
From: Anup Patel
To: Atish Patra
Cc: Palmer Dabbelt, Paul Walmsley, Alexandre Ghiti, Andrew Jones,
    Anup Patel, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    Anup Patel
Subject: [PATCH 13/13] RISC-V: KVM: Pass VMID as parameter to kvm_riscv_hfence_xyz() APIs
Date: Thu, 5 Jun 2025 11:44:58 +0530
Message-ID: <20250605061458.196003-14-apatel@ventanamicro.com>
In-Reply-To: <20250605061458.196003-1-apatel@ventanamicro.com>
References: <20250605061458.196003-1-apatel@ventanamicro.com>

Currently, all kvm_riscv_hfence_xyz() APIs assume the VMID to be the
host VMID of the Guest/VM, which restricts these APIs to host TLB
maintenance only.

Let's allow passing the VMID as a parameter to all kvm_riscv_hfence_xyz()
APIs so that they can be re-used for nested virtualization related TLB
maintenance.
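For orientation, a minimal sketch of the kind of caller this change enables;
the helper example_flush_nested_gstage() and its shadow_vmid argument are
hypothetical, since nested virtualization support itself is not part of this
series:

/*
 * Flush all G-stage mappings tagged with a nested guest's shadow VMID
 * on all host harts; before this patch the VMID was always read from
 * kvm->arch.vmid inside the hfence helpers.
 */
static void example_flush_nested_gstage(struct kvm *kvm,
					unsigned long shadow_vmid)
{
	kvm_riscv_hfence_gvma_vmid_all(kvm, -1UL, 0, shadow_vmid);
}

Here the -1UL/0 hbase/hmask pair is the same "all vCPUs" convention the
series already uses, e.g. in kvm_arch_flush_remote_tlbs_range() below.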
Signed-off-by: Anup Patel
---
 arch/riscv/include/asm/kvm_tlb.h  | 17 ++++++---
 arch/riscv/kvm/gstage.c           |  3 +-
 arch/riscv/kvm/tlb.c              | 61 ++++++++++++++++++++-----------
 arch/riscv/kvm/vcpu_sbi_replace.c | 17 +++++----
 arch/riscv/kvm/vcpu_sbi_v01.c     | 25 ++++++-------
 5 files changed, 73 insertions(+), 50 deletions(-)

diff --git a/arch/riscv/include/asm/kvm_tlb.h b/arch/riscv/include/asm/kvm_tlb.h
index f67e03edeaec..38a2f933ad3a 100644
--- a/arch/riscv/include/asm/kvm_tlb.h
+++ b/arch/riscv/include/asm/kvm_tlb.h
@@ -11,9 +11,11 @@
 enum kvm_riscv_hfence_type {
 	KVM_RISCV_HFENCE_UNKNOWN = 0,
 	KVM_RISCV_HFENCE_GVMA_VMID_GPA,
+	KVM_RISCV_HFENCE_GVMA_VMID_ALL,
 	KVM_RISCV_HFENCE_VVMA_ASID_GVA,
 	KVM_RISCV_HFENCE_VVMA_ASID_ALL,
 	KVM_RISCV_HFENCE_VVMA_GVA,
+	KVM_RISCV_HFENCE_VVMA_ALL
 };
 
 struct kvm_riscv_hfence {
@@ -59,21 +61,24 @@ void kvm_riscv_fence_i(struct kvm *kvm,
 void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
 				    gpa_t gpa, gpa_t gpsz,
-				    unsigned long order);
+				    unsigned long order, unsigned long vmid);
 void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask);
+				    unsigned long hbase, unsigned long hmask,
+				    unsigned long vmid);
 void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
 				    unsigned long gva, unsigned long gvsz,
-				    unsigned long order, unsigned long asid);
+				    unsigned long order, unsigned long asid,
+				    unsigned long vmid);
 void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
-				    unsigned long asid);
+				    unsigned long asid, unsigned long vmid);
 void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
 			       unsigned long hbase, unsigned long hmask,
 			       unsigned long gva, unsigned long gvsz,
-			       unsigned long order);
+			       unsigned long order, unsigned long vmid);
 void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
-			       unsigned long hbase, unsigned long hmask);
+			       unsigned long hbase, unsigned long hmask,
+			       unsigned long vmid);
 
 #endif
diff --git a/arch/riscv/kvm/gstage.c b/arch/riscv/kvm/gstage.c
index c7d61f14f6be..c5dc47b156c4 100644
--- a/arch/riscv/kvm/gstage.c
+++ b/arch/riscv/kvm/gstage.c
@@ -117,7 +117,8 @@ static void gstage_tlb_flush(struct kvm_gstage *gstage, u32 level, gpa_t addr)
 	if (gstage->flags & KVM_GSTAGE_FLAGS_LOCAL)
 		kvm_riscv_local_hfence_gvma_vmid_gpa(gstage->vmid, addr, BIT(order), order);
 	else
-		kvm_riscv_hfence_gvma_vmid_gpa(gstage->kvm, -1UL, 0, addr, BIT(order), order);
+		kvm_riscv_hfence_gvma_vmid_gpa(gstage->kvm, -1UL, 0, addr, BIT(order), order,
+					       gstage->vmid);
 }
 
 int kvm_riscv_gstage_set_pte(struct kvm_gstage *gstage,
diff --git a/arch/riscv/kvm/tlb.c b/arch/riscv/kvm/tlb.c
index 349fcfc93f54..3c5a70a2b927 100644
--- a/arch/riscv/kvm/tlb.c
+++ b/arch/riscv/kvm/tlb.c
@@ -251,6 +251,12 @@ void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu)
 			kvm_riscv_local_hfence_gvma_vmid_gpa(d.vmid, d.addr,
 							     d.size, d.order);
 		break;
+	case KVM_RISCV_HFENCE_GVMA_VMID_ALL:
+		if (kvm_riscv_nacl_available())
+			nacl_hfence_gvma_vmid_all(nacl_shmem(), d.vmid);
+		else
+			kvm_riscv_local_hfence_gvma_vmid_all(d.vmid);
+		break;
 	case KVM_RISCV_HFENCE_VVMA_ASID_GVA:
 		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_RCVD);
 		if (kvm_riscv_nacl_available())
@@ -276,6 +282,13 @@ void kvm_riscv_hfence_process(struct kvm_vcpu *vcpu)
 			kvm_riscv_local_hfence_vvma_gva(d.vmid, d.addr,
 							d.size, d.order);
 		break;
+	case KVM_RISCV_HFENCE_VVMA_ALL:
+		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_RCVD);
+		if (kvm_riscv_nacl_available())
+			nacl_hfence_vvma_all(nacl_shmem(), d.vmid);
+		else
+			kvm_riscv_local_hfence_vvma_all(d.vmid);
+		break;
 	default:
 		break;
 	}
@@ -328,14 +341,13 @@ void kvm_riscv_fence_i(struct kvm *kvm,
 void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
 				    gpa_t gpa, gpa_t gpsz,
-				    unsigned long order)
+				    unsigned long order, unsigned long vmid)
 {
-	struct kvm_vmid *v = &kvm->arch.vmid;
 	struct kvm_riscv_hfence data;
 
 	data.type = KVM_RISCV_HFENCE_GVMA_VMID_GPA;
 	data.asid = 0;
-	data.vmid = READ_ONCE(v->vmid);
+	data.vmid = vmid;
 	data.addr = gpa;
 	data.size = gpsz;
 	data.order = order;
@@ -344,23 +356,28 @@ void kvm_riscv_hfence_gvma_vmid_gpa(struct kvm *kvm,
 }
 
 void kvm_riscv_hfence_gvma_vmid_all(struct kvm *kvm,
-				    unsigned long hbase, unsigned long hmask)
+				    unsigned long hbase, unsigned long hmask,
+				    unsigned long vmid)
 {
-	make_xfence_request(kvm, hbase, hmask, KVM_REQ_TLB_FLUSH,
-			    KVM_REQ_TLB_FLUSH, NULL);
+	struct kvm_riscv_hfence data = {0};
+
+	data.type = KVM_RISCV_HFENCE_GVMA_VMID_ALL;
+	data.vmid = vmid;
+	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
+			    KVM_REQ_TLB_FLUSH, &data);
 }
 
 void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
 				    unsigned long gva, unsigned long gvsz,
-				    unsigned long order, unsigned long asid)
+				    unsigned long order, unsigned long asid,
+				    unsigned long vmid)
 {
-	struct kvm_vmid *v = &kvm->arch.vmid;
 	struct kvm_riscv_hfence data;
 
 	data.type = KVM_RISCV_HFENCE_VVMA_ASID_GVA;
 	data.asid = asid;
-	data.vmid = READ_ONCE(v->vmid);
+	data.vmid = vmid;
 	data.addr = gva;
 	data.size = gvsz;
 	data.order = order;
@@ -370,15 +387,13 @@ void kvm_riscv_hfence_vvma_asid_gva(struct kvm *kvm,
 
 void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
 				    unsigned long hbase, unsigned long hmask,
-				    unsigned long asid)
+				    unsigned long asid, unsigned long vmid)
 {
-	struct kvm_vmid *v = &kvm->arch.vmid;
-	struct kvm_riscv_hfence data;
+	struct kvm_riscv_hfence data = {0};
 
 	data.type = KVM_RISCV_HFENCE_VVMA_ASID_ALL;
 	data.asid = asid;
-	data.vmid = READ_ONCE(v->vmid);
-	data.addr = data.size = data.order = 0;
+	data.vmid = vmid;
 	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
 			    KVM_REQ_HFENCE_VVMA_ALL, &data);
 }
@@ -386,14 +401,13 @@ void kvm_riscv_hfence_vvma_asid_all(struct kvm *kvm,
 void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
 			       unsigned long hbase, unsigned long hmask,
 			       unsigned long gva, unsigned long gvsz,
-			       unsigned long order)
+			       unsigned long order, unsigned long vmid)
 {
-	struct kvm_vmid *v = &kvm->arch.vmid;
 	struct kvm_riscv_hfence data;
 
 	data.type = KVM_RISCV_HFENCE_VVMA_GVA;
 	data.asid = 0;
-	data.vmid = READ_ONCE(v->vmid);
+	data.vmid = vmid;
 	data.addr = gva;
 	data.size = gvsz;
 	data.order = order;
@@ -402,16 +416,21 @@ void kvm_riscv_hfence_vvma_gva(struct kvm *kvm,
 }
 
 void kvm_riscv_hfence_vvma_all(struct kvm *kvm,
-			       unsigned long hbase, unsigned long hmask)
+			       unsigned long hbase, unsigned long hmask,
+			       unsigned long vmid)
 {
-	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE_VVMA_ALL,
-			    KVM_REQ_HFENCE_VVMA_ALL, NULL);
+	struct kvm_riscv_hfence data = {0};
+
+	data.type = KVM_RISCV_HFENCE_VVMA_ALL;
+	data.vmid = vmid;
+	make_xfence_request(kvm, hbase, hmask, KVM_REQ_HFENCE,
+			    KVM_REQ_HFENCE_VVMA_ALL, &data);
 }
 
 int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
 {
 	kvm_riscv_hfence_gvma_vmid_gpa(kvm, -1UL, 0, gfn << PAGE_SHIFT,
 				       nr_pages << PAGE_SHIFT,
-				       PAGE_SHIFT);
+				       PAGE_SHIFT, READ_ONCE(kvm->arch.vmid.vmid));
 	return 0;
 }
diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c
index b17fad091bab..b490ed1428a6 100644
--- a/arch/riscv/kvm/vcpu_sbi_replace.c
+++ b/arch/riscv/kvm/vcpu_sbi_replace.c
@@ -96,6 +96,7 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
 	unsigned long hmask = cp->a0;
 	unsigned long hbase = cp->a1;
 	unsigned long funcid = cp->a6;
+	unsigned long vmid;
 
 	switch (funcid) {
 	case SBI_EXT_RFENCE_REMOTE_FENCE_I:
@@ -103,22 +104,22 @@ static int kvm_sbi_ext_rfence_handler(struct kvm_vcpu *vcpu, struct kvm_run *run
 		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_FENCE_I_SENT);
 		break;
 	case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA:
+		vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
 		if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
-			kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask);
+			kvm_riscv_hfence_vvma_all(vcpu->kvm, hbase, hmask, vmid);
 		else
 			kvm_riscv_hfence_vvma_gva(vcpu->kvm, hbase, hmask,
-						  cp->a2, cp->a3, PAGE_SHIFT);
+						  cp->a2, cp->a3, PAGE_SHIFT, vmid);
 		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_SENT);
 		break;
 	case SBI_EXT_RFENCE_REMOTE_SFENCE_VMA_ASID:
+		vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
 		if ((cp->a2 == 0 && cp->a3 == 0) || cp->a3 == -1UL)
-			kvm_riscv_hfence_vvma_asid_all(vcpu->kvm,
-						       hbase, hmask, cp->a4);
+			kvm_riscv_hfence_vvma_asid_all(vcpu->kvm, hbase, hmask,
+						       cp->a4, vmid);
 		else
-			kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm,
-						       hbase, hmask,
-						       cp->a2, cp->a3,
-						       PAGE_SHIFT, cp->a4);
+			kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm, hbase, hmask, cp->a2,
+						       cp->a3, PAGE_SHIFT, cp->a4, vmid);
 		kvm_riscv_vcpu_pmu_incr_fw(vcpu, SBI_PMU_FW_HFENCE_VVMA_ASID_SENT);
 		break;
 	case SBI_EXT_RFENCE_REMOTE_HFENCE_GVMA:
diff --git a/arch/riscv/kvm/vcpu_sbi_v01.c b/arch/riscv/kvm/vcpu_sbi_v01.c
index 8f4c4fa16227..368dfddd23d9 100644
--- a/arch/riscv/kvm/vcpu_sbi_v01.c
+++ b/arch/riscv/kvm/vcpu_sbi_v01.c
@@ -23,6 +23,7 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 	struct kvm *kvm = vcpu->kvm;
 	struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
 	struct kvm_cpu_trap *utrap = retdata->utrap;
+	unsigned long vmid;
 
 	switch (cp->a7) {
 	case SBI_EXT_0_1_CONSOLE_GETCHAR:
@@ -78,25 +79,21 @@ static int kvm_sbi_ext_v01_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		if (cp->a7 == SBI_EXT_0_1_REMOTE_FENCE_I)
 			kvm_riscv_fence_i(vcpu->kvm, 0, hmask);
 		else if (cp->a7 == SBI_EXT_0_1_REMOTE_SFENCE_VMA) {
+			vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
 			if (cp->a1 == 0 && cp->a2 == 0)
-				kvm_riscv_hfence_vvma_all(vcpu->kvm,
-							  0, hmask);
+				kvm_riscv_hfence_vvma_all(vcpu->kvm, 0, hmask, vmid);
 			else
-				kvm_riscv_hfence_vvma_gva(vcpu->kvm,
-							  0, hmask,
-							  cp->a1, cp->a2,
-							  PAGE_SHIFT);
+				kvm_riscv_hfence_vvma_gva(vcpu->kvm, 0, hmask, cp->a1,
+							  cp->a2, PAGE_SHIFT, vmid);
 		} else {
+			vmid = READ_ONCE(vcpu->kvm->arch.vmid.vmid);
 			if (cp->a1 == 0 && cp->a2 == 0)
-				kvm_riscv_hfence_vvma_asid_all(vcpu->kvm,
-							       0, hmask,
-							       cp->a3);
+				kvm_riscv_hfence_vvma_asid_all(vcpu->kvm, 0, hmask,
+							       cp->a3, vmid);
 			else
-				kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm,
-							       0, hmask,
-							       cp->a1, cp->a2,
-							       PAGE_SHIFT,
-							       cp->a3);
+				kvm_riscv_hfence_vvma_asid_gva(vcpu->kvm, 0, hmask,
+							       cp->a1, cp->a2, PAGE_SHIFT,
+							       cp->a3, vmid);
 		}
 		break;
	default:
-- 
2.43.0
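To close, a sketch of the guest side of the SBI RFENCE path that the
kvm_sbi_ext_rfence_handler() changes above service. This is a hypothetical
raw-ecall illustration assuming the standard SBI calling convention (RFENCE
extension ID "RFNC", function ID 1 for remote SFENCE.VMA); real guest kernels
go through wrappers such as sbi_remote_sfence_vma() rather than open-coding
the ecall:

/*
 * Hypothetical guest-side example (not part of this series): issue an
 * SBI remote SFENCE.VMA. The a0..a3/a6/a7 registers here arrive in the
 * host handler as cp->a0..a3/a6/a7.
 */
static long example_guest_remote_sfence_vma(unsigned long hart_mask,
					    unsigned long hart_mask_base,
					    unsigned long start,
					    unsigned long size)
{
	register unsigned long a0 asm("a0") = hart_mask;
	register unsigned long a1 asm("a1") = hart_mask_base;
	register unsigned long a2 asm("a2") = start;
	register unsigned long a3 asm("a3") = size;
	register unsigned long a6 asm("a6") = 1;          /* FID: REMOTE_SFENCE_VMA */
	register unsigned long a7 asm("a7") = 0x52464E43; /* EID: "RFNC" */

	asm volatile("ecall"
		     : "+r" (a0), "+r" (a1)
		     : "r" (a2), "r" (a3), "r" (a6), "r" (a7)
		     : "memory");

	return a0; /* SBI error code */
}

Passing start == 0 with size == 0, or size == -1UL, selects the
full-address-space kvm_riscv_hfence_vvma_all() path in the handler, per the
size-parameter check tightened in patch 01 of this series.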