From nobody Fri Apr 3 22:15:14 2026
From: Bibo Mao
To: Tianrui Zhao, Huacai Chen
Cc: kernel@xen0n.name, kvm@vger.kernel.org, loongarch@lists.linux.dev,
	linux-kernel@vger.kernel.org, Juergen Gross
Subject: [PATCH v3 1/4] LoongArch: KVM: Add kvm_request_pending checking in kvm_late_check_requests()
Date: Mon, 23 Mar 2026 10:56:10 +0800
Message-Id: <20260323025613.3260876-2-maobibo@loongson.cn>
In-Reply-To: <20260323025613.3260876-1-maobibo@loongson.cn>
References: <20260323025613.3260876-1-maobibo@loongson.cn>

Check kvm_request_pending() first in kvm_late_check_requests(). Most of
the time there is no pending request, so the individual pending-bit
checks can then be skipped. Also fold kvm_check_pmu() into
kvm_late_check_requests(), after the kvm_request_pending() check.

Signed-off-by: Bibo Mao
---
 arch/loongarch/kvm/vcpu.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index 8ffd50a470e6..028eb3d5a33b 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -149,14 +149,6 @@ static void kvm_lose_pmu(struct kvm_vcpu *vcpu)
 	kvm_restore_host_pmu(vcpu);
 }
 
-static void kvm_check_pmu(struct kvm_vcpu *vcpu)
-{
-	if (kvm_check_request(KVM_REQ_PMU, vcpu)) {
-		kvm_own_pmu(vcpu);
-		vcpu->arch.aux_inuse |= KVM_LARCH_PMU;
-	}
-}
-
 static void kvm_update_stolen_time(struct kvm_vcpu *vcpu)
 {
 	u32 version;
@@ -232,6 +224,14 @@ static int kvm_check_requests(struct kvm_vcpu *vcpu)
 static void kvm_late_check_requests(struct kvm_vcpu *vcpu)
 {
 	lockdep_assert_irqs_disabled();
+	if (!kvm_request_pending(vcpu))
+		return;
+
+	if (kvm_check_request(KVM_REQ_PMU, vcpu)) {
+		kvm_own_pmu(vcpu);
+		vcpu->arch.aux_inuse |= KVM_LARCH_PMU;
+	}
+
 	if (kvm_check_request(KVM_REQ_TLB_FLUSH_GPA, vcpu))
 		if (vcpu->arch.flush_gpa != INVALID_GPA) {
 			kvm_flush_tlb_gpa(vcpu, vcpu->arch.flush_gpa);
@@ -312,7 +312,6 @@ static int kvm_pre_enter_guest(struct kvm_vcpu *vcpu)
 	/* Make sure the vcpu mode has been written */
 	smp_store_mb(vcpu->mode, IN_GUEST_MODE);
 	kvm_check_vpid(vcpu);
-	kvm_check_pmu(vcpu);
 
 	/*
 	 * Called after function kvm_check_vpid()
-- 
2.39.3

From nobody Fri Apr 3 22:15:14 2026
From: Bibo Mao
To: Tianrui Zhao, Huacai Chen
Cc: kernel@xen0n.name, kvm@vger.kernel.org, loongarch@lists.linux.dev,
	linux-kernel@vger.kernel.org, Juergen Gross
Subject: [PATCH v3 2/4] LoongArch: KVM: Move host CSR_EENTRY save and restore in context switch
Date: Mon, 23 Mar 2026 10:56:11 +0800
Message-Id: <20260323025613.3260876-3-maobibo@loongson.cn>
In-Reply-To: <20260323025613.3260876-1-maobibo@loongson.cn>
References: <20260323025613.3260876-1-maobibo@loongson.cn>

The CSR register LOONGARCH_CSR_EENTRY is shared between the host CPU
and the guest vCPU, so KVM needs to save and restore it. Move the
saving of LOONGARCH_CSR_EENTRY into the vCPU context-switch path
rather than the VM-enter path: VM enter/exit is usually much more
frequent than vCPU thread context switches.
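The effect of this change can be modeled in plain user-space C. This is a
sketch only: read_eentry(), vcpu_ctx, and the counters below are made-up
illustrative names, not kernel code. It shows that caching the host state
at vCPU-load time turns one CSR read per VM entry into one per context
switch.

```c
static int csr_reads;			/* counts simulated CSR reads */

static unsigned long read_eentry(void)	/* stand-in for csr_read64() */
{
	csr_reads++;
	return 0x900UL;			/* dummy host exception entry */
}

struct vcpu_ctx {
	unsigned long host_eentry;
};

/* Before: the CSR is read on every VM entry. */
static void enter_guest_old(struct vcpu_ctx *v)
{
	v->host_eentry = read_eentry();
}

/* After: the CSR is read only when the vCPU is (re)loaded on a CPU. */
static void vcpu_load_new(struct vcpu_ctx *v)
{
	v->host_eentry = read_eentry();
}

static void enter_guest_new(struct vcpu_ctx *v)
{
	(void)v;			/* host_eentry already cached */
}

/* Run n VM entries under each scheme; return the number of CSR reads. */
static int reads_old(int n)
{
	struct vcpu_ctx v = { 0 };

	csr_reads = 0;
	for (int i = 0; i < n; i++)
		enter_guest_old(&v);
	return csr_reads;
}

static int reads_new(int n)
{
	struct vcpu_ctx v = { 0 };

	csr_reads = 0;
	vcpu_load_new(&v);		/* one context switch ... */
	for (int i = 0; i < n; i++)	/* ... then n VM entries */
		enter_guest_new(&v);
	return csr_reads;
}
```

With 1000 simulated VM entries the old scheme performs 1000 reads while
the new scheme performs a single read at load time.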
Signed-off-by: Bibo Mao
---
 arch/loongarch/kvm/vcpu.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index 028eb3d5a33b..e82b2c84ce17 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -319,7 +319,6 @@ static int kvm_pre_enter_guest(struct kvm_vcpu *vcpu)
 	 * and it may also clear KVM_REQ_TLB_FLUSH_GPA pending bit
 	 */
 	kvm_late_check_requests(vcpu);
-	vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);
 	/* Clear KVM_LARCH_SWCSR_LATEST as CSR will change when enter guest */
 	vcpu->arch.aux_inuse &= ~KVM_LARCH_SWCSR_LATEST;
 
@@ -1624,9 +1623,11 @@ static int _kvm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	 * If not, any old guest state from this vCPU will have been clobbered.
 	 */
 	context = per_cpu_ptr(vcpu->kvm->arch.vmcs, cpu);
-	if (migrated || (context->last_vcpu != vcpu))
+	if (migrated || (context->last_vcpu != vcpu)) {
 		vcpu->arch.aux_inuse &= ~KVM_LARCH_HWCSR_USABLE;
-	context->last_vcpu = vcpu;
+		context->last_vcpu = vcpu;
+		vcpu->arch.host_eentry = csr_read64(LOONGARCH_CSR_EENTRY);
+	}
 
 	/* Restore timer state regardless */
 	kvm_restore_timer(vcpu);
-- 
2.39.3

From nobody Fri Apr 3 22:15:14 2026
From: Bibo Mao
To: Tianrui Zhao, Huacai Chen
Cc: kernel@xen0n.name, kvm@vger.kernel.org, loongarch@lists.linux.dev,
	linux-kernel@vger.kernel.org, Juergen Gross
Subject: [PATCH v3 3/4] LoongArch: KVM: Move host CSR_GSTAT save and restore in context switch
Date: Mon, 23 Mar 2026 10:56:12 +0800
Message-Id: <20260323025613.3260876-4-maobibo@loongson.cn>
In-Reply-To: <20260323025613.3260876-1-maobibo@loongson.cn>
References: <20260323025613.3260876-1-maobibo@loongson.cn>

The CSR register CSR_GSTAT stores the guest VMID. In the existing
implementation the VMID is per vCPU, similar to an ASID in the kernel.
CSR_GSTAT is written at every VM entry even when the VMID has not
changed. Move the CSR_GSTAT save/restore into the vCPU context switch,
and update CSR_GSTAT at VM entry only when the VMID is updated: VM
enter/exit is usually much more frequent than vCPU thread context
switches.

Signed-off-by: Bibo Mao
---
 arch/loongarch/kvm/main.c | 8 ++++----
 arch/loongarch/kvm/vcpu.c | 2 ++
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/loongarch/kvm/main.c b/arch/loongarch/kvm/main.c
index 2c593ac7892f..304c83863e71 100644
--- a/arch/loongarch/kvm/main.c
+++ b/arch/loongarch/kvm/main.c
@@ -271,11 +271,11 @@ void kvm_check_vpid(struct kvm_vcpu *vcpu)
 		 * memory with new address is changed on other VCPUs.
 		 */
 		set_gcsr_llbctl(CSR_LLBCTL_WCLLB);
-	}
 
-	/* Restore GSTAT(0x50).vpid */
-	vpid = (vcpu->arch.vpid & vpid_mask) << CSR_GSTAT_GID_SHIFT;
-	change_csr_gstat(vpid_mask << CSR_GSTAT_GID_SHIFT, vpid);
+		/* Restore GSTAT(0x50).vpid */
+		vpid = (vcpu->arch.vpid & vpid_mask) << CSR_GSTAT_GID_SHIFT;
+		change_csr_gstat(vpid_mask << CSR_GSTAT_GID_SHIFT, vpid);
+	}
 }
 
 void kvm_init_vmcs(struct kvm *kvm)
diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
index e82b2c84ce17..8810fcd7e26e 100644
--- a/arch/loongarch/kvm/vcpu.c
+++ b/arch/loongarch/kvm/vcpu.c
@@ -1695,6 +1695,7 @@ static int _kvm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 
 	/* Restore Root.GINTC from unused Guest.GINTC register */
 	write_csr_gintc(csr->csrs[LOONGARCH_CSR_GINTC]);
+	write_csr_gstat(csr->csrs[LOONGARCH_CSR_GSTAT]);
 
 	/*
 	 * We should clear linked load bit to break interrupted atomics. This
@@ -1790,6 +1791,7 @@ static int _kvm_vcpu_put(struct kvm_vcpu *vcpu, int cpu)
 		kvm_save_hw_gcsr(csr, LOONGARCH_CSR_ISR3);
 	}
 
+	csr->csrs[LOONGARCH_CSR_GSTAT] = read_csr_gstat();
 	vcpu->arch.aux_inuse |= KVM_LARCH_SWCSR_LATEST;
 
 out:
-- 
2.39.3

From nobody Fri Apr 3 22:15:14 2026
From: Bibo Mao
To: Tianrui Zhao, Huacai Chen
Cc: kernel@xen0n.name, kvm@vger.kernel.org, loongarch@lists.linux.dev,
	linux-kernel@vger.kernel.org, Juergen Gross
Subject: [PATCH v3 4/4] LoongArch: KVM: Set vcpu_is_preempted() with macro rather than function
Date: Mon, 23 Mar 2026 10:56:13 +0800
Message-Id: <20260323025613.3260876-5-maobibo@loongson.cn>
In-Reply-To: <20260323025613.3260876-1-maobibo@loongson.cn>
References: <20260323025613.3260876-1-maobibo@loongson.cn>

vcpu_is_preempted() is performance sensitive; it is called from
osq_lock(). Implement it as a macro so that the cpu parameter is not
evaluated in the common case, which avoids cache-line thrashing across
NUMA nodes.

Here is part of the UnixBench result on a dual-way 3C5000 machine with
32 cores and 2 NUMA nodes:

          original   inline   with patch
  execl   7025.7     6991.2   7242.3
  fstime  474.6      703.1    1071

The macro method gives the best result, with some improvement over the
original out-of-line function.
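The argument-evaluation difference between the function and the macro can
be sketched in plain C. This is a hypothetical user-space model: the names
preempt_enabled (standing in for virt_preempt_key) and lookup_cpu() are
illustrative only, and the statement-expression macro relies on the same
GNU C extension the kernel uses.

```c
static int preempt_enabled;	/* models the static key (0 = fast path) */
static int cpu_evals;		/* how often the cpu argument was evaluated */

static int lookup_cpu(void)	/* models a cpu-number expression with a cost */
{
	cpu_evals++;
	return 0;
}

/* Function form: the argument is always evaluated at the call site. */
static int is_preempted_fn(int cpu)
{
	if (!preempt_enabled)
		return 0;
	return cpu & 1;		/* stand-in for the per-CPU read */
}

/* Macro form: the argument is evaluated only on the slow path. */
#define is_preempted_macro(cpu)				\
({							\
	int __val;					\
							\
	if (!preempt_enabled)				\
		__val = 0;				\
	else						\
		__val = (cpu) & 1;			\
	__val;						\
})

/* Fast path (key disabled): how many times did the argument run? */
static int evals_with_fn(void)
{
	cpu_evals = 0;
	preempt_enabled = 0;
	(void)is_preempted_fn(lookup_cpu());
	return cpu_evals;	/* argument evaluated despite the fast path */
}

static int evals_with_macro(void)
{
	cpu_evals = 0;
	preempt_enabled = 0;
	(void)is_preempted_macro(lookup_cpu());
	return cpu_evals;	/* argument skipped on the fast path */
}
```

On the disabled fast path the function form still evaluates its argument
once, while the macro form never touches it, which is the behavior the
patch relies on to avoid the remote per-CPU access.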
Signed-off-by: Bibo Mao
---
 arch/loongarch/include/asm/qspinlock.h | 27 +++++++++++++++++++++-----
 arch/loongarch/kernel/paravirt.c       | 15 ++-------------
 2 files changed, 24 insertions(+), 18 deletions(-)

diff --git a/arch/loongarch/include/asm/qspinlock.h b/arch/loongarch/include/asm/qspinlock.h
index 66244801db67..b5d7a038faf1 100644
--- a/arch/loongarch/include/asm/qspinlock.h
+++ b/arch/loongarch/include/asm/qspinlock.h
@@ -5,8 +5,10 @@
 #include
 
 #ifdef CONFIG_PARAVIRT
-
+#include
 DECLARE_STATIC_KEY_FALSE(virt_spin_lock_key);
+DECLARE_STATIC_KEY_FALSE(virt_preempt_key);
+DECLARE_PER_CPU(struct kvm_steal_time, steal_time);
 
 #define virt_spin_lock virt_spin_lock
 
@@ -34,10 +36,25 @@ static inline bool virt_spin_lock(struct qspinlock *lock)
 	return true;
 }
 
-#define vcpu_is_preempted vcpu_is_preempted
-
-bool vcpu_is_preempted(int cpu);
-
+/*
+ * Macro is better than inline function here
+ * With inline function, parameter cpu is parsed even though it is not used.
+ * This may cause cache line thrashing across NUMA node.
+ * With macro method, parameter cpu is parsed only when it is used.
+ */
+#define vcpu_is_preempted(cpu)						\
+({									\
+	bool __val;							\
+									\
+	if (!static_branch_unlikely(&virt_preempt_key))			\
+		__val = false;						\
+	else {								\
+		struct kvm_steal_time *src;				\
+		src = &per_cpu(steal_time, cpu);			\
+		__val = !!(READ_ONCE(src->preempted) & KVM_VCPU_PREEMPTED); \
+	}								\
+	__val;								\
+})
 #endif /* CONFIG_PARAVIRT */
 
 #include
diff --git a/arch/loongarch/kernel/paravirt.c b/arch/loongarch/kernel/paravirt.c
index b74fe6db49ab..2d1206e486e2 100644
--- a/arch/loongarch/kernel/paravirt.c
+++ b/arch/loongarch/kernel/paravirt.c
@@ -10,8 +10,8 @@
 #include
 
 static int has_steal_clock;
-static DEFINE_PER_CPU(struct kvm_steal_time, steal_time) __aligned(64);
-static DEFINE_STATIC_KEY_FALSE(virt_preempt_key);
+DEFINE_PER_CPU(struct kvm_steal_time, steal_time) __aligned(64);
+DEFINE_STATIC_KEY_FALSE(virt_preempt_key);
 DEFINE_STATIC_KEY_FALSE(virt_spin_lock_key);
 
 static bool steal_acc = true;
@@ -261,17 +261,6 @@ static int pv_time_cpu_down_prepare(unsigned int cpu)
 	return 0;
 }
 
-bool vcpu_is_preempted(int cpu)
-{
-	struct kvm_steal_time *src;
-
-	if (!static_branch_unlikely(&virt_preempt_key))
-		return false;
-
-	src = &per_cpu(steal_time, cpu);
-	return !!(src->preempted & KVM_VCPU_PREEMPTED);
-}
-EXPORT_SYMBOL(vcpu_is_preempted);
 #endif
 
 static void pv_cpu_reboot(void *unused)
-- 
2.39.3
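As an aside on patch 1/4 in this series: the kvm_request_pending() fast
path can be modeled in user space as below. The request bits, the counter,
and check_request() are hypothetical stand-ins, not the kernel's
kvm_check_request() implementation.

```c
enum {
	REQ_PMU		= 1u << 0,	/* illustrative bit layout */
	REQ_TLB_FLUSH_GPA = 1u << 1,
};

static int bit_checks;	/* counts per-bit test-and-clear operations */

/* Test-and-clear one request bit, counting the work done. */
static int check_request(unsigned int *reqs, unsigned int bit)
{
	bit_checks++;
	if (*reqs & bit) {
		*reqs &= ~bit;
		return 1;
	}
	return 0;
}

/* Late request check with the fast path: bail out when nothing pends. */
static void late_check_requests(unsigned int *reqs)
{
	if (!*reqs)		/* models kvm_request_pending() */
		return;

	if (check_request(reqs, REQ_PMU)) {
		/* would take PMU ownership here */
	}
	if (check_request(reqs, REQ_TLB_FLUSH_GPA)) {
		/* would flush the GPA here */
	}
}

/* How many per-bit checks run for a given pending mask? */
static int checks_for(unsigned int pending)
{
	unsigned int reqs = pending;

	bit_checks = 0;
	late_check_requests(&reqs);
	return bit_checks;
}
```

With no request pending, zero per-bit checks run; with any bit set, the
slow path walks every request bit as before.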