From: Pavel Dovgalyuk
To: qemu-devel@nongnu.org
Cc: kwolf@redhat.com, peter.maydell@linaro.org, pavel.dovgaluk@ispras.ru,
    mst@redhat.com, jasowang@redhat.com, quintela@redhat.com,
    zuban32s@gmail.com, maria.klimushenkova@ispras.ru, dovgaluk@ispras.ru,
    kraxel@redhat.com, boost.lists@gmail.com, pbonzini@redhat.com,
    alex.bennee@linaro.org
Subject: [Qemu-devel] [RFC PATCH v3 16/30] cpus: only take BQL for sleeping threads
Date: Thu, 11 Jan 2018 11:26:26 +0300
Message-ID: <20180111082626.27295.52919.stgit@pasha-VirtualBox>
In-Reply-To: <20180111082452.27295.85707.stgit@pasha-VirtualBox>
References: <20180111082452.27295.85707.stgit@pasha-VirtualBox>
User-Agent: StGit/0.17.1-dirty

From: Alex Bennée

Now the only real need to hold the BQL is when we sleep on the
cpu->halt_cond condition variable. The lock is actually dropped while
the thread sleeps, so the window for contention is pretty small. This
also means we can remove the special-case hack for exclusive work and
simply declare that work no longer has an implicit BQL held. This
isn't a major problem: async work generally only changes things in the
context of its own vCPU. If it needs to work across vCPUs it should
use the exclusive mechanism, or possibly take the lock itself.
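As a minimal sketch of the new rule (illustrative only, not part of the
patch): a work item queued with async_run_on_cpu() now runs with no
implicit BQL, so if it touches BQL-protected global state it must take
the lock itself; cross-vCPU changes should instead go through
async_safe_run_on_cpu() and the exclusive mechanism. The names
my_async_work and queue_example below are made up for the example, and
the include paths assume the tree layout of this era.

    #include "qemu/osdep.h"
    #include "qemu/main-loop.h"   /* qemu_mutex_lock_iothread() */
    #include "qom/cpu.h"          /* CPUState, async_run_on_cpu() */

    /* Hypothetical work item: it runs on cpu's own thread but, after
     * this patch, without the BQL held. */
    static void my_async_work(CPUState *cpu, run_on_cpu_data data)
    {
        /* State owned by this vCPU can be touched without further
         * locking. */

        /* Anything still guarded by the BQL must take it explicitly. */
        qemu_mutex_lock_iothread();
        /* ... update BQL-protected (device/global) state here ... */
        qemu_mutex_unlock_iothread();
    }

    static void queue_example(CPUState *cpu)
    {
        async_run_on_cpu(cpu, my_async_work, RUN_ON_CPU_NULL);
    }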
Signed-off-by: Alex Bennée
Tested-by: Pavel Dovgalyuk
---
 cpus-common.c |   13 +++++--------
 cpus.c        |   10 ++++------
 2 files changed, 9 insertions(+), 14 deletions(-)

diff --git a/cpus-common.c b/cpus-common.c
index 59f751e..64661c3 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -310,6 +310,11 @@ void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func,
     queue_work_on_cpu(cpu, wi);
 }
 
+/* Work items run outside of the BQL. This is essential for avoiding a
+ * deadlock for exclusive work but also applies to non-exclusive work.
+ * If the work requires cross-vCPU changes then it should use the
+ * exclusive mechanism.
+ */
 void process_queued_cpu_work(CPUState *cpu)
 {
     struct qemu_work_item *wi;
@@ -327,17 +332,9 @@ void process_queued_cpu_work(CPUState *cpu)
         }
         qemu_mutex_unlock(&cpu->work_mutex);
         if (wi->exclusive) {
-            /* Running work items outside the BQL avoids the following deadlock:
-             * 1) start_exclusive() is called with the BQL taken while another
-             * CPU is running; 2) cpu_exec in the other CPU tries to takes the
-             * BQL, so it goes to sleep; start_exclusive() is sleeping too, so
-             * neither CPU can proceed.
-             */
-            qemu_mutex_unlock_iothread();
             start_exclusive();
             wi->func(cpu, wi->data);
             end_exclusive();
-            qemu_mutex_lock_iothread();
         } else {
             wi->func(cpu, wi->data);
         }
diff --git a/cpus.c b/cpus.c
index 82dcbf8..79dda49 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1149,31 +1149,29 @@ static bool qemu_tcg_should_sleep(CPUState *cpu)
 
 static void qemu_tcg_wait_io_event(CPUState *cpu)
 {
-    qemu_mutex_lock_iothread();
 
     while (qemu_tcg_should_sleep(cpu)) {
+        qemu_mutex_lock_iothread();
         stop_tcg_kick_timer();
         qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
+        qemu_mutex_unlock_iothread();
     }
 
     start_tcg_kick_timer();
 
     qemu_wait_io_event_common(cpu);
-
-    qemu_mutex_unlock_iothread();
 }
 
 static void qemu_kvm_wait_io_event(CPUState *cpu)
 {
-    qemu_mutex_lock_iothread();
 
     while (cpu_thread_is_idle(cpu)) {
+        qemu_mutex_lock_iothread();
         qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
+        qemu_mutex_unlock_iothread();
     }
 
     qemu_wait_io_event_common(cpu);
-
-    qemu_mutex_unlock_iothread();
 }
 
 static void qemu_hvf_wait_io_event(CPUState *cpu)
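
For reference, the locking pattern both cpus.c wait functions converge on,
distilled into a standalone sketch (illustrative only, not part of the
patch; the wrapper name wait_io_event_sketch is made up, the helpers are
the existing ones shown in the hunks above):

    static void wait_io_event_sketch(CPUState *cpu)
    {
        while (qemu_tcg_should_sleep(cpu)) {
            /* The BQL is only taken around the sleep itself ... */
            qemu_mutex_lock_iothread();
            /* ... and qemu_cond_wait() releases qemu_global_mutex while
             * the thread is actually asleep, so the contention window is
             * just the brief lock/wait/unlock dance, not the whole
             * sleep. */
            qemu_cond_wait(cpu->halt_cond, &qemu_global_mutex);
            qemu_mutex_unlock_iothread();
        }

        /* Queued work now runs without the BQL held (see cpus-common.c). */
        qemu_wait_io_event_common(cpu);
    }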