From: Heyi Guo
Date: Tue, 12 Mar 2019 15:57:19 +0800
Message-ID: <1552377439-24640-1-git-send-email-guoheyi@huawei.com>
X-Mailer: git-send-email 1.8.3.1
Subject: [Qemu-devel] [RFC] arm/cpu: fix soft lockup panic after resuming from stop
Cc: Heyi Guo, wanghaibin.wang@huawei.com, Peter Maydell

When we stop a VM for more than 30 seconds and then resume it, using the
qemu monitor commands "stop" and "cont", Linux in the guest complains of
"soft lockup - CPU#x stuck for xxs!" as below:

[ 2783.809517] watchdog: BUG: soft lockup - CPU#3 stuck for 2395s!
[ 2783.809559] watchdog: BUG: soft lockup - CPU#2 stuck for 2395s!
[ 2783.809561] watchdog: BUG: soft lockup - CPU#1 stuck for 2395s!
[ 2783.809563] Modules linked in...

This happens because the guest Linux kernel uses the generic timer
virtual counter as a software watchdog, and CNTVCT_EL0 does not stop
counting while the VM is stopped by qemu. This patch fixes the issue by
saving the value of CNTVCT_EL0 when the VM is stopped and restoring it
when the VM resumes.
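The failure mode above is simple arithmetic: the guest watchdog measures elapsed time with CNTVCT_EL0, which keeps counting across the pause, so the whole pause shows up as one uninterrupted stall. A minimal sketch of that check (the counter frequency and watchdog threshold are illustrative assumptions, not values from the report):

```python
# Illustrative sketch of why a long "stop" trips the guest soft-lockup
# watchdog. CNTFRQ_HZ and the threshold are assumed example values.
CNTFRQ_HZ = 50_000_000        # example generic-timer frequency (assumed)
SOFT_LOCKUP_THRESH_S = 30     # example soft-lockup threshold (assumed)

def observed_stall_seconds(cnt_at_pause, cnt_at_resume):
    # CNTVCT_EL0 keeps counting while the VM is stopped, so the guest
    # sees the entire pause as a single stall on this CPU.
    return (cnt_at_resume - cnt_at_pause) / CNTFRQ_HZ

pause_s = 2395                # length of the pause, as in the log above
stall = observed_stall_seconds(0, pause_s * CNTFRQ_HZ)
assert stall > SOFT_LOCKUP_THRESH_S   # watchdog fires: "stuck for 2395s!"
```

Saving the counter at "stop" and writing it back at "cont", as the patch below does, makes the guest-visible delta only cover time the VCPUs actually ran.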
Cc: Peter Maydell
Signed-off-by: Heyi Guo
---
 target/arm/kvm.c | 66 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 66 insertions(+)

diff --git a/target/arm/kvm.c b/target/arm/kvm.c
index 79a79f0..73b9ecb 100644
--- a/target/arm/kvm.c
+++ b/target/arm/kvm.c
@@ -39,11 +39,77 @@ static bool cap_has_inject_serror_esr;
 
 static ARMHostCPUFeatures arm_host_cpu_features;
 
+static int get_vcpu_timer_tick(CPUState *cs, uint64_t *tick_at_pause)
+{
+    int err;
+    struct kvm_one_reg reg;
+
+    reg.id = KVM_REG_ARM_TIMER_CNT;
+    reg.addr = (uintptr_t) tick_at_pause;
+
+    err = kvm_vcpu_ioctl(cs, KVM_GET_ONE_REG, &reg);
+    return err;
+}
+
+static int set_vcpu_timer_tick(CPUState *cs, uint64_t tick_at_pause)
+{
+    int err;
+    struct kvm_one_reg reg;
+
+    reg.id = KVM_REG_ARM_TIMER_CNT;
+    reg.addr = (uintptr_t) &tick_at_pause;
+
+    err = kvm_vcpu_ioctl(cs, KVM_SET_ONE_REG, &reg);
+    return err;
+}
+
+static void arch_timer_change_state_handler(void *opaque, int running,
+                                            RunState state)
+{
+    static uint64_t hw_ticks_at_paused;
+    static RunState pre_state = RUN_STATE__MAX;
+    int err;
+    CPUState *cs = (CPUState *)opaque;
+
+    switch (state) {
+    case RUN_STATE_PAUSED:
+        err = get_vcpu_timer_tick(cs, &hw_ticks_at_paused);
+        if (err) {
+            error_report("Get vcpu timer tick failed: %d", err);
+        }
+        break;
+    case RUN_STATE_RUNNING:
+        if (pre_state == RUN_STATE_PAUSED) {
+            err = set_vcpu_timer_tick(cs, hw_ticks_at_paused);
+            if (err) {
+                error_report("Resume vcpu timer tick failed: %d", err);
+            }
+        }
+        break;
+    default:
+        break;
+    }
+
+    pre_state = state;
+}
+
 int kvm_arm_vcpu_init(CPUState *cs)
 {
     ARMCPU *cpu = ARM_CPU(cs);
     struct kvm_vcpu_init init;
 
+    /*
+     * Only add the change state handler for the arch timer once, as KVM
+     * will help to synchronize the virtual timer of all VCPUs.
+     */
+    static bool arch_timer_change_state_handler_added;
+
+    if (!arch_timer_change_state_handler_added) {
+        qemu_add_vm_change_state_handler(arch_timer_change_state_handler, cs);
+        arch_timer_change_state_handler_added = true;
+    }
+
     init.target = cpu->kvm_target;
     memcpy(init.features, cpu->kvm_init_features, sizeof(init.features));
 
-- 
1.8.3.1