From nobody Sat Nov 23 01:34:46 2024
Date: Thu, 14 Nov 2024 14:09:15 -0800
In-Reply-To: <20241114220921.2529905-1-saravanak@google.com>
Message-Id: <20241114220921.2529905-2-saravanak@google.com>
Mime-Version: 1.0
References: <20241114220921.2529905-1-saravanak@google.com>
Subject: [PATCH v1 1/5] PM: sleep: Fix runtime PM issue in dpm_resume()
From: Saravana Kannan
To: "Rafael J. Wysocki", Pavel Machek, Len Brown, Greg Kroah-Hartman,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider
Cc: Saravana Kannan, Geert Uytterhoeven, Marek Vasut, Bird@google.com, Tim,
	kernel-team@android.com, linux-pm@vger.kernel.org,
	linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

Some devices might have their is_suspended flag set to false. In these
cases, dpm_resume() should skip doing anything for those devices. However,
runtime PM is enabled and a few other steps are performed before this flag
is checked. Fix it so that the steps are done in the right order.

Signed-off-by: Saravana Kannan
---
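For quick reference, a condensed sketch of the check ordering this change
produces in device_resume() (illustrative only, with unrelated code elided;
the real hunks are in the diff below):

	static void device_resume(struct device *dev, ...)
	{
		...
		if (dev->power.syscore)
			goto Complete;

		/* Now checked before anything else is done for the device. */
		if (!dev->power.is_suspended)
			goto Unlock;

		if (dev->power.direct_complete) {
			/* Only reached for devices that were actually suspended. */
			pm_runtime_enable(dev);
			...
		}
		...
	}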
 drivers/base/power/main.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 4a67e83300e1..86e51b9fefab 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -913,6 +913,9 @@ static void device_resume(struct device *dev, pm_message_t state, bool async)
 	if (dev->power.syscore)
 		goto Complete;
 
+	if (!dev->power.is_suspended)
+		goto Unlock;
+
 	if (dev->power.direct_complete) {
 		/* Match the pm_runtime_disable() in __device_suspend(). */
 		pm_runtime_enable(dev);
@@ -931,9 +934,6 @@ static void device_resume(struct device *dev, pm_message_t state, bool async)
 	 */
 	dev->power.is_prepared = false;
 
-	if (!dev->power.is_suspended)
-		goto Unlock;
-
 	if (dev->pm_domain) {
 		info = "power domain ";
 		callback = pm_op(&dev->pm_domain->ops, state);
-- 
2.47.0.338.g60cca15819-goog

From nobody Sat Nov 23 01:34:46 2024
Date: Thu, 14 Nov 2024 14:09:16 -0800
In-Reply-To: <20241114220921.2529905-1-saravanak@google.com>
Message-Id: <20241114220921.2529905-3-saravanak@google.com>
Mime-Version: 1.0
References: <20241114220921.2529905-1-saravanak@google.com>
Subject: [PATCH v1 2/5] PM: sleep: Remove unnecessary mutex lock when
 waiting on parent
From: Saravana Kannan
To: "Rafael J. Wysocki", Pavel Machek, Len Brown, Greg Kroah-Hartman,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider
Cc: Saravana Kannan, Geert Uytterhoeven, Marek Vasut, Bird@google.com, Tim,
	kernel-team@android.com, linux-pm@vger.kernel.org,
	linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

Locking is not needed to do get_device(dev->parent). We either get NULL
(if the parent was cleared) or the actual parent. Also, when a device is
deleted (device_del()) and removed from the dpm_list, its completion
variable is complete_all()-ed, so we don't have to worry about waiting
indefinitely on a deleted parent device.

Signed-off-by: Saravana Kannan
---
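The pattern this relies on, as a minimal sketch (get_device(), put_device()
and dpm_wait() all tolerate a NULL device, which is why no lock is needed
just to take the parent reference):

	struct device *parent = get_device(dev->parent); /* NULL or a referenced parent */

	if (device_pm_initialized(dev))
		dpm_wait(parent, async);	/* dpm_wait(NULL, ...) is a no-op */

	put_device(parent);			/* put_device(NULL) is a no-op */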
 drivers/base/power/main.c | 13 ++-----------
 1 file changed, 2 insertions(+), 11 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 86e51b9fefab..9b9b6088e56a 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -284,18 +284,9 @@ static bool dpm_wait_for_superior(struct device *dev, bool async)
 	 * counting the parent once more unless the device has been deleted
 	 * already (in which case return right away).
 	 */
-	mutex_lock(&dpm_list_mtx);
-
-	if (!device_pm_initialized(dev)) {
-		mutex_unlock(&dpm_list_mtx);
-		return false;
-	}
-
 	parent = get_device(dev->parent);
-
-	mutex_unlock(&dpm_list_mtx);
-
-	dpm_wait(parent, async);
+	if (device_pm_initialized(dev))
+		dpm_wait(parent, async);
 	put_device(parent);
 
 	dpm_wait_for_suppliers(dev, async);
-- 
2.47.0.338.g60cca15819-goog

From nobody Sat Nov 23 01:34:46 2024
Date: Thu, 14 Nov 2024 14:09:17 -0800
In-Reply-To: <20241114220921.2529905-1-saravanak@google.com>
Message-Id: <20241114220921.2529905-4-saravanak@google.com>
Mime-Version: 1.0
References: <20241114220921.2529905-1-saravanak@google.com>
Subject: [PATCH v1 3/5] PM: sleep: Add helper functions to loop through
 superior/subordinate devs
From: Saravana Kannan
To: "Rafael J. Wysocki", Pavel Machek, Len Brown, Greg Kroah-Hartman,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider
Cc: Saravana Kannan, Geert Uytterhoeven, Marek Vasut, Bird@google.com, Tim,
	kernel-team@android.com, linux-pm@vger.kernel.org,
	linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

We have a lot of code that does, or will, loop through superior/subordinate
devices and take action on them. Refactor the code to pull the common
"loop through" functionality out into helper functions so the logic isn't
repeated.

Signed-off-by: Saravana Kannan
---
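As an example of how the new helpers are meant to be used, here is a
hypothetical callback (not part of the patch) whose signature matches the
dpm_for_each_superior()/dpm_for_each_subordinate() helpers in the diff
below:

	/* Count a device's superiors; returning non-zero would stop the walk early. */
	static int dpm_count_fn(struct device *dev, void *data)
	{
		(*(unsigned int *)data)++;
		return 0;
	}

	static unsigned int dpm_count_superiors(struct device *dev)
	{
		unsigned int count = 0;

		dpm_for_each_superior(dev, &count, dpm_count_fn);
		return count;
	}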
 drivers/base/power/main.c | 70 ++++++++++++++++++++++-----------------
 1 file changed, 40 insertions(+), 30 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 9b9b6088e56a..aa1470ef6ac0 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -247,15 +247,22 @@ static int dpm_wait_fn(struct device *dev, void *async_ptr)
 	return 0;
 }
 
-static void dpm_wait_for_children(struct device *dev, bool async)
-{
-	device_for_each_child(dev, &async, dpm_wait_fn);
-}
-
-static void dpm_wait_for_suppliers(struct device *dev, bool async)
+static int dpm_for_each_superior(struct device *dev, void *data,
+				 int (*fn)(struct device *dev, void *data))
 {
+	struct device *parent;
+	int ret = 0, idx;
 	struct device_link *link;
-	int idx;
+
+	if (!dev)
+		return 0;
+
+	parent = get_device(dev->parent);
+	if (parent)
+		ret = fn(parent, data);
+	put_device(parent);
+	if (ret)
+		return ret;
 
 	idx = device_links_read_lock();
 
@@ -267,29 +274,20 @@ static void dpm_wait_for_suppliers(struct device *dev, bool async)
 	 * walking.
 	 */
 	list_for_each_entry_rcu_locked(link, &dev->links.suppliers, c_node)
-		if (READ_ONCE(link->status) != DL_STATE_DORMANT)
-			dpm_wait(link->supplier, async);
+		if (READ_ONCE(link->status) != DL_STATE_DORMANT) {
+			ret = fn(link->supplier, data);
+			if (ret)
+				break;
+		}
 
 	device_links_read_unlock(idx);
+
+	return ret;
 }
 
 static bool dpm_wait_for_superior(struct device *dev, bool async)
 {
-	struct device *parent;
-
-	/*
-	 * If the device is resumed asynchronously and the parent's callback
-	 * deletes both the device and the parent itself, the parent object may
-	 * be freed while this function is running, so avoid that by reference
-	 * counting the parent once more unless the device has been deleted
-	 * already (in which case return right away).
-	 */
-	parent = get_device(dev->parent);
-	if (device_pm_initialized(dev))
-		dpm_wait(parent, async);
-	put_device(parent);
-
-	dpm_wait_for_suppliers(dev, async);
+	dpm_for_each_superior(dev, &async, dpm_wait_fn);
 
 	/*
 	 * If the parent's callback has deleted the device, attempting to resume
@@ -298,10 +296,18 @@ static bool dpm_wait_for_superior(struct device *dev, bool async)
 	 */
 	return device_pm_initialized(dev);
 }
 
-static void dpm_wait_for_consumers(struct device *dev, bool async)
+static int dpm_for_each_subordinate(struct device *dev, void *data,
+				    int (*fn)(struct device *dev, void *data))
 {
+	int ret, idx;
 	struct device_link *link;
-	int idx;
+
+	if (!dev)
+		return 0;
+
+	ret = device_for_each_child(dev, data, fn);
+	if (ret)
+		return ret;
 
 	idx = device_links_read_lock();
 
@@ -315,16 +321,20 @@ static void dpm_wait_for_consumers(struct device *dev, bool async)
 	 * unregistration).
 	 */
 	list_for_each_entry_rcu_locked(link, &dev->links.consumers, s_node)
-		if (READ_ONCE(link->status) != DL_STATE_DORMANT)
-			dpm_wait(link->consumer, async);
+		if (READ_ONCE(link->status) != DL_STATE_DORMANT) {
+			ret = fn(link->consumer, data);
+			if (ret)
+				break;
+		}
 
 	device_links_read_unlock(idx);
+
+	return ret;
 }
 
 static void dpm_wait_for_subordinate(struct device *dev, bool async)
 {
-	dpm_wait_for_children(dev, async);
-	dpm_wait_for_consumers(dev, async);
+	dpm_for_each_subordinate(dev, &async, dpm_wait_fn);
 }
 
 /**
-- 
2.47.0.338.g60cca15819-goog

From nobody Sat Nov 23 01:34:46 2024
Date: Thu, 14 Nov 2024 14:09:18 -0800
In-Reply-To: <20241114220921.2529905-1-saravanak@google.com>
Message-Id: <20241114220921.2529905-5-saravanak@google.com>
Mime-Version: 1.0
References: <20241114220921.2529905-1-saravanak@google.com>
Subject: [PATCH v1 4/5] PM: sleep: Do breadth first suspend/resume for
 async suspend/resume
From: Saravana Kannan
To: "Rafael J. Wysocki", Pavel Machek, Len Brown, Greg Kroah-Hartman,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider
Cc: Saravana Kannan, Geert Uytterhoeven, Marek Vasut, Bird@google.com, Tim,
	kernel-team@android.com, linux-pm@vger.kernel.org,
	linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

The dpm_list used for suspend/resume ensures that all superior devices
(parents and suppliers) precede subordinate devices (children and
consumers).

Current async resume logic:
---------------------------
* For each resume phase (except the "complete" phase, which is always
  serialized), the resume logic first queues all async devices in the
  dpm_list. It then loops through the dpm_list again to resume the sync
  devices one by one.
* Async devices wait for all their superior devices to resume before
  starting their own resume operation.
* This process results in multiple sleep and wake-up cycles before an
  async device actually resumes. The sleeping also leaves kworker threads
  stalled on pending work for a while, so the workqueue framework spins up
  more kworker threads to handle the other async devices.
* The end result is excessive thread creation, wake-ups, sleeps, and
  context switches for every async device. This overhead makes a full
  async resume (with all devices marked as async-capable) much slower than
  a synchronous resume.

Current async suspend logic:
----------------------------
* The async suspend logic differs from the async resume logic. The suspend
  logic loops through the dpm_list. When it finds an async device, it
  queues the work and moves on. However, if it encounters a sync device,
  it waits until the sync device (and all its subordinate devices) have
  suspended before proceeding to the next device. Therefore, an async
  suspend device can be left waiting on an unrelated device before even
  being queued.
* Once queued, an async device experiences the same inefficiencies as in
  the resume logic (thread creation, wake-ups, sleeps, and context
  switches).

On a Pixel 6, averaging over 100 suspend/resume cycles, the data is as
follows:

+---------------------------+-----------+------------+----------+
| Phase                     | Full sync | Full async | % change |
+---------------------------+-----------+------------+----------+
| Total dpm_suspend*() time |    107 ms |      72 ms |     -33% |
+---------------------------+-----------+------------+----------+
| Total dpm_resume*() time  |     75 ms |      90 ms |     +20% |
+---------------------------+-----------+------------+----------+
| Sum                       |    182 ms |     162 ms |     -11% |
+---------------------------+-----------+------------+----------+

This shows that full async suspend/resume is not a viable option. It makes
the user-visible resume phase slower and only improves the overall time by
11%.

To fix all this, this patch introduces new async suspend/resume logic.

New suspend/resume logic:
-------------------------
* For each suspend/resume phase (except "complete" and "prepare", which
  are always serialized), the logic first queues only the async devices
  that don't have to wait for any subordinates (for suspend) or superiors
  (for resume). It then loops through the dpm_list again to suspend/resume
  the sync devices one by one.
* When a device (sync or async) successfully suspends/resumes, it examines
  its superiors/subordinates and queues only the async devices that don't
  need to wait for any subordinates/superiors.

With this new logic:
* Queued async devices don't have to wait on any other device's completion
  and are always ready to perform their suspend/resume operation.
* The queue of async devices remains short.
* kworkers never sleep for extended periods, and the workqueue framework
  doesn't spin up many new threads to handle a backlog of async devices.
* The result is approximately NCPU kworker threads running in parallel
  without sleeping until all async devices finish.

On a Pixel 6, averaging over 100 suspend/resume cycles, the new logic
yields improved results:

+---------------------------+---------------+----------------+----------+
| Phase                     | Old full sync | New full async | % change |
+---------------------------+---------------+----------------+----------+
| Total dpm_suspend*() time |        107 ms |          60 ms |     -44% |
+---------------------------+---------------+----------------+----------+
| Total dpm_resume*() time  |         75 ms |          74 ms |      -1% |
+---------------------------+---------------+----------------+----------+
| Sum                       |        182 ms |         134 ms |     -26% |
+---------------------------+---------------+----------------+----------+

Signed-off-by: Saravana Kannan
---
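For readers skimming the diff, the two pieces below (condensed from the
patch itself) are what implement the breadth-first behaviour on the resume
side; the suspend side mirrors it with superiors and subordinates swapped:

	/* Queue an async device only once all of its superiors have completed. */
	static int dpm_async_queue_resume_ready_fn(struct device *dev, void *data)
	{
		if (!is_async(dev) || dpm_check_superior(dev))
			return 0;

		dpm_async_fn(dev, data);
		return 0;
	}

	static void async_resume(void *data, async_cookie_t cookie)
	{
		struct device *dev = data;

		device_resume(dev, pm_transition, true);
		put_device(dev);

		/* Propagate readiness: queue any subordinates that can now run. */
		dpm_for_each_subordinate(dev, async_resume,
					 dpm_async_queue_resume_ready_fn);
	}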
 drivers/base/power/main.c | 242 ++++++++++++++++++++++++++++++++------
 1 file changed, 205 insertions(+), 37 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index aa1470ef6ac0..65c195be48b8 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -121,6 +121,12 @@ void device_pm_unlock(void)
 	mutex_unlock(&dpm_list_mtx);
 }
 
+static bool is_async(struct device *dev)
+{
+	return dev->power.async_suspend && pm_async_enabled
+		&& !pm_trace_is_enabled();
+}
+
 /**
  * device_pm_add - Add a device to the PM core's list of active devices.
  * @dev: Device to add to the list.
@@ -287,6 +293,9 @@ static int dpm_for_each_superior(struct device *dev, void *data,
 
 static bool dpm_wait_for_superior(struct device *dev, bool async)
 {
+	if (is_async(dev))
+		return true;
+
 	dpm_for_each_superior(dev, &async, dpm_wait_fn);
 
 	/*
@@ -334,9 +343,22 @@ static int dpm_for_each_subordinate(struct device *dev, void *data,
 
 static void dpm_wait_for_subordinate(struct device *dev, bool async)
 {
+	if (is_async(dev))
+		return;
+
 	dpm_for_each_subordinate(dev, &async, dpm_wait_fn);
 }
 
+static int dpm_check_fn(struct device *dev, void *data)
+{
+	return completion_done(&dev->power.completion) ? 0 : -EBUSY;
+}
+
+static int dpm_check_subordinate(struct device *dev)
+{
+	return dpm_for_each_subordinate(dev, NULL, dpm_check_fn);
+}
+
 /**
  * pm_op - Return the PM operation appropriate for given PM event.
  * @ops: PM operations to choose from.
@@ -578,33 +600,39 @@ bool dev_pm_skip_resume(struct device *dev)
 	return !dev->power.must_resume;
 }
 
-static bool is_async(struct device *dev)
+static void dpm_async_fn(struct device *dev, async_func_t func)
 {
-	return dev->power.async_suspend && pm_async_enabled
-		&& !pm_trace_is_enabled();
-}
-
-static bool dpm_async_fn(struct device *dev, async_func_t func)
-{
-	reinit_completion(&dev->power.completion);
-
-	if (is_async(dev)) {
-		dev->power.async_in_progress = true;
-
-		get_device(dev);
+	/*
+	 * Multiple async devices could trigger the async queuing of this
+	 * device. Make sure not to double queue it.
+	 */
+	spin_lock(&dev->power.lock);
+	if (dev->power.async_in_progress) {
+		spin_unlock(&dev->power.lock);
+		return;
+	}
+	dev->power.async_in_progress = true;
+	spin_unlock(&dev->power.lock);
 
-		if (async_schedule_dev_nocall(func, dev))
-			return true;
+	get_device(dev);
 
-		put_device(dev);
-	}
 	/*
-	 * Because async_schedule_dev_nocall() above has returned false or it
-	 * has not been called at all, func() is not running and it is safe to
-	 * update the async_in_progress flag without extra synchronization.
+	 * We aren't going to call any callbacks if the device has none. Also,
+	 * direct_complete means all the resume and suspend callbacks should be
+	 * skipped and the device should go straight to dpm_complete(). In both
+	 * of these cases, calling func() synchronously will be a lot quicker
+	 * than even queuing the async work. So, do that.
 	 */
-	dev->power.async_in_progress = false;
-	return false;
+	if (dev->power.no_pm_callbacks || dev->power.direct_complete) {
+		func(dev, 0);
+		return;
+	}
+
+	if (async_schedule_dev_nocall(func, dev))
+		return;
+
+	WARN(1, "Unable to schedule async suspend/resume calls!\n");
+	put_device(dev);
 }
 
 /**
@@ -692,12 +720,47 @@ static void device_resume_noirq(struct device *dev, pm_message_t state, bool async)
 	}
 }
 
+static void dpm_reinit_dev_state(struct list_head *list)
+{
+	struct device *dev;
+
+	list_for_each_entry(dev, list, power.entry) {
+		reinit_completion(&dev->power.completion);
+		dev->power.async_in_progress = false;
+	}
+}
+
+static int dpm_check_superior(struct device *dev)
+{
+	return dpm_for_each_superior(dev, NULL, dpm_check_fn);
+}
+
+static int dpm_async_queue_resume_ready_fn(struct device *dev, void *data)
+{
+	if (!is_async(dev) || dpm_check_superior(dev))
+		return 0;
+
+	dpm_async_fn(dev, data);
+	return 0;
+}
+
+static void dpm_async_resume_loop(struct list_head *from, async_func_t func)
+{
+	struct device *dev;
+
+	list_for_each_entry(dev, from, power.entry)
+		dpm_async_queue_resume_ready_fn(dev, func);
+}
+
 static void async_resume_noirq(void *data, async_cookie_t cookie)
 {
 	struct device *dev = data;
 
 	device_resume_noirq(dev, pm_transition, true);
 	put_device(dev);
+
+	dpm_for_each_subordinate(dev, async_resume_noirq,
+				 dpm_async_queue_resume_ready_fn);
 }
 
 static void dpm_noirq_resume_devices(pm_message_t state)
@@ -712,23 +775,26 @@ static void dpm_noirq_resume_devices(pm_message_t state)
 
 	mutex_lock(&dpm_list_mtx);
 
+	dpm_reinit_dev_state(&dpm_noirq_list);
+
 	/*
 	 * Trigger the resume of "async" devices upfront so they don't have to
 	 * wait for the "non-async" ones they don't depend on.
 	 */
-	list_for_each_entry(dev, &dpm_noirq_list, power.entry)
-		dpm_async_fn(dev, async_resume_noirq);
+	dpm_async_resume_loop(&dpm_noirq_list, async_resume_noirq);
 
 	while (!list_empty(&dpm_noirq_list)) {
 		dev = to_device(dpm_noirq_list.next);
 		list_move_tail(&dev->power.entry, &dpm_late_early_list);
 
-		if (!dev->power.async_in_progress) {
+		if (!is_async(dev)) {
 			get_device(dev);
 
 			mutex_unlock(&dpm_list_mtx);
 
 			device_resume_noirq(dev, state, false);
+			dpm_for_each_subordinate(dev, async_resume_noirq,
+						 dpm_async_queue_resume_ready_fn);
 
 			put_device(dev);
 
@@ -834,6 +900,9 @@ static void async_resume_early(void *data, async_cookie_t cookie)
 
 	device_resume_early(dev, pm_transition, true);
 	put_device(dev);
+
+	dpm_for_each_subordinate(dev, async_resume_early,
+				 dpm_async_queue_resume_ready_fn);
 }
 
 /**
@@ -852,23 +921,26 @@ void dpm_resume_early(pm_message_t state)
 
 	mutex_lock(&dpm_list_mtx);
 
+	dpm_reinit_dev_state(&dpm_late_early_list);
+
 	/*
 	 * Trigger the resume of "async" devices upfront so they don't have to
 	 * wait for the "non-async" ones they don't depend on.
 	 */
-	list_for_each_entry(dev, &dpm_late_early_list, power.entry)
-		dpm_async_fn(dev, async_resume_early);
+	dpm_async_resume_loop(&dpm_late_early_list, async_resume_early);
 
 	while (!list_empty(&dpm_late_early_list)) {
 		dev = to_device(dpm_late_early_list.next);
 		list_move_tail(&dev->power.entry, &dpm_suspended_list);
 
-		if (!dev->power.async_in_progress) {
+		if (!is_async(dev)) {
 			get_device(dev);
 
 			mutex_unlock(&dpm_list_mtx);
 
 			device_resume_early(dev, state, false);
+			dpm_for_each_subordinate(dev, async_resume_early,
+						 dpm_async_queue_resume_ready_fn);
 
 			put_device(dev);
 
@@ -996,6 +1068,9 @@ static void async_resume(void *data, async_cookie_t cookie)
 
 	device_resume(dev, pm_transition, true);
 	put_device(dev);
+
+	dpm_for_each_subordinate(dev, async_resume,
+				 dpm_async_queue_resume_ready_fn);
 }
 
 /**
@@ -1018,23 +1093,26 @@ void dpm_resume(pm_message_t state)
 
 	mutex_lock(&dpm_list_mtx);
 
+	dpm_reinit_dev_state(&dpm_suspended_list);
+
 	/*
 	 * Trigger the resume of "async" devices upfront so they don't have to
 	 * wait for the "non-async" ones they don't depend on.
 	 */
-	list_for_each_entry(dev, &dpm_suspended_list, power.entry)
-		dpm_async_fn(dev, async_resume);
+	dpm_async_resume_loop(&dpm_suspended_list, async_resume);
 
 	while (!list_empty(&dpm_suspended_list)) {
 		dev = to_device(dpm_suspended_list.next);
 		list_move_tail(&dev->power.entry, &dpm_prepared_list);
 
-		if (!dev->power.async_in_progress) {
+		if (!is_async(dev)) {
 			get_device(dev);
 
 			mutex_unlock(&dpm_list_mtx);
 
 			device_resume(dev, state, false);
+			dpm_for_each_subordinate(dev, async_resume,
+						 dpm_async_queue_resume_ready_fn);
 
 			put_device(dev);
 
@@ -1274,12 +1352,35 @@ static int device_suspend_noirq(struct device *dev, pm_message_t state, bool async)
 	return error;
 }
 
+static int dpm_async_queue_suspend_ready_fn(struct device *dev, void *data)
+{
+	if (!is_async(dev) || dpm_check_subordinate(dev))
+		return 0;
+
+	dpm_async_fn(dev, data);
+	return 0;
+}
+
+static void dpm_async_suspend_loop(struct list_head *from, async_func_t func)
+{
+	struct device *dev;
+
+	list_for_each_entry_reverse(dev, from, power.entry)
+		dpm_async_queue_suspend_ready_fn(dev, func);
+}
+
 static void async_suspend_noirq(void *data, async_cookie_t cookie)
 {
 	struct device *dev = data;
 
 	device_suspend_noirq(dev, pm_transition, true);
 	put_device(dev);
+
+	if (async_error)
+		return;
+
+	dpm_for_each_superior(dev, async_suspend_noirq,
+			      dpm_async_queue_suspend_ready_fn);
 }
 
 static int dpm_noirq_suspend_devices(pm_message_t state)
@@ -1294,12 +1395,20 @@ static int dpm_noirq_suspend_devices(pm_message_t state)
 
 	mutex_lock(&dpm_list_mtx);
 
+	dpm_reinit_dev_state(&dpm_late_early_list);
+
+	/*
+	 * Trigger the "async" devices upfront so they don't have to wait for
+	 * the "non-async" ones they don't depend on.
+	 */
+	dpm_async_suspend_loop(&dpm_late_early_list, async_suspend_noirq);
+
 	while (!list_empty(&dpm_late_early_list)) {
 		struct device *dev = to_device(dpm_late_early_list.prev);
 
 		list_move(&dev->power.entry, &dpm_noirq_list);
 
-		if (dpm_async_fn(dev, async_suspend_noirq))
+		if (is_async(dev))
 			continue;
 
 		get_device(dev);
@@ -1307,13 +1416,20 @@ static int dpm_noirq_suspend_devices(pm_message_t state)
 		mutex_unlock(&dpm_list_mtx);
 
 		error = device_suspend_noirq(dev, state, false);
+		if (!error && !async_error)
+			dpm_for_each_superior(dev, async_suspend_noirq,
+					      dpm_async_queue_suspend_ready_fn);
 
 		put_device(dev);
 
 		mutex_lock(&dpm_list_mtx);
 
-		if (error || async_error)
+		if (error || async_error) {
+			/* See explanation in dpm_suspend() */
+			list_splice_init(&dpm_late_early_list, &dpm_noirq_list);
 			break;
+		}
+
 	}
 
 	mutex_unlock(&dpm_list_mtx);
@@ -1447,6 +1563,12 @@ static void async_suspend_late(void *data, async_cookie_t cookie)
 
 	device_suspend_late(dev, pm_transition, true);
 	put_device(dev);
+
+	if (async_error)
+		return;
+
+	dpm_for_each_superior(dev, async_suspend_late,
+			      dpm_async_queue_suspend_ready_fn);
 }
 
 /**
@@ -1467,12 +1589,20 @@ int dpm_suspend_late(pm_message_t state)
 
 	mutex_lock(&dpm_list_mtx);
 
+	dpm_reinit_dev_state(&dpm_suspended_list);
+
+	/*
+	 * Trigger the "async" devices upfront so they don't have to wait for
+	 * the "non-async" ones they don't depend on.
+	 */
+	dpm_async_suspend_loop(&dpm_suspended_list, async_suspend_late);
+
 	while (!list_empty(&dpm_suspended_list)) {
 		struct device *dev = to_device(dpm_suspended_list.prev);
 
 		list_move(&dev->power.entry, &dpm_late_early_list);
 
-		if (dpm_async_fn(dev, async_suspend_late))
+		if (is_async(dev))
 			continue;
 
 		get_device(dev);
@@ -1480,13 +1610,19 @@ int dpm_suspend_late(pm_message_t state)
 		mutex_unlock(&dpm_list_mtx);
 
 		error = device_suspend_late(dev, state, false);
+		if (!error && !async_error)
+			dpm_for_each_superior(dev, async_suspend_late,
+					      dpm_async_queue_suspend_ready_fn);
 
 		put_device(dev);
 
 		mutex_lock(&dpm_list_mtx);
 
-		if (error || async_error)
+		if (error || async_error) {
+			/* See explanation in dpm_suspend() */
+			list_splice_init(&dpm_suspended_list, &dpm_late_early_list);
 			break;
+		}
 	}
 
 	mutex_unlock(&dpm_list_mtx);
@@ -1712,6 +1848,12 @@ static void async_suspend(void *data, async_cookie_t cookie)
 
 	device_suspend(dev, pm_transition, true);
 	put_device(dev);
+
+	if (async_error)
+		return;
+
+	dpm_for_each_superior(dev, async_suspend,
+			      dpm_async_queue_suspend_ready_fn);
 }
 
 /**
@@ -1734,12 +1876,20 @@ int dpm_suspend(pm_message_t state)
 
 	mutex_lock(&dpm_list_mtx);
 
+	dpm_reinit_dev_state(&dpm_prepared_list);
+
+	/*
+	 * Trigger the "async" devices upfront so they don't have to wait for
+	 * the "non-async" ones they don't depend on.
+	 */
+	dpm_async_suspend_loop(&dpm_prepared_list, async_suspend);
+
 	while (!list_empty(&dpm_prepared_list)) {
 		struct device *dev = to_device(dpm_prepared_list.prev);
 
 		list_move(&dev->power.entry, &dpm_suspended_list);
 
-		if (dpm_async_fn(dev, async_suspend))
+		if (is_async(dev))
 			continue;
 
 		get_device(dev);
@@ -1747,13 +1897,31 @@ int dpm_suspend(pm_message_t state)
 		mutex_unlock(&dpm_list_mtx);
 
 		error = device_suspend(dev, state, false);
+		if (!error && !async_error)
+			dpm_for_each_superior(dev, async_suspend,
+					      dpm_async_queue_suspend_ready_fn);
 
 		put_device(dev);
 
 		mutex_lock(&dpm_list_mtx);
 
-		if (error || async_error)
+		if (error || async_error) {
+			/*
+			 * Async devices that come after the current sync device
+			 * in the list might have already suspended. We need to
+			 * make sure those async devices are resumed again. By
+			 * moving all devices to the next list, we make sure the
+			 * error handling path resumes all suspended devices.
+			 *
+			 * However, we also need to avoid resuming devices that
+			 * haven't been suspended yet. Fortunately, the existing
+			 * is_suspended/is_noirq_suspended/is_late_suspended
+			 * flags take care of skipping the resume of a device if
+			 * it hasn't been suspended yet.
+			 */
+			list_splice_init(&dpm_prepared_list, &dpm_suspended_list);
 			break;
+		}
 	}
 
 	mutex_unlock(&dpm_list_mtx);
-- 
2.47.0.338.g60cca15819-goog
From nobody Sat Nov 23 01:34:46 2024
Date: Thu, 14 Nov 2024 14:09:19 -0800
In-Reply-To: <20241114220921.2529905-1-saravanak@google.com>
Message-Id: <20241114220921.2529905-6-saravanak@google.com>
Mime-Version: 1.0
References: <20241114220921.2529905-1-saravanak@google.com>
Subject: [PATCH v1 5/5] PM: sleep: Spread out async kworker threads during
 dpm_resume*() phases
From: Saravana Kannan
To: "Rafael J. Wysocki", Pavel Machek, Len Brown, Greg Kroah-Hartman,
	Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Valentin Schneider
Cc: Saravana Kannan, Geert Uytterhoeven, Marek Vasut, Bird@google.com, Tim,
	kernel-team@android.com, linux-pm@vger.kernel.org,
	linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="utf-8"

As of today, the scheduler doesn't spread the kworker threads across all
the available CPUs during suspend/resume. This causes significant resume
latency during the dpm_resume*() phases.

System resume latency is a very user-visible event. Reducing the latency
is more important than trying to be energy aware during that period. Since
there are no userspace processes running during this time and it is a very
short time window, we can simply disable EAS during resume so that the
parallel resume of the devices is spread across all the CPUs.

On a Pixel 6, averaging over 100 suspend/resume cycles, the new logic plus
disabling EAS for resume yields significant improvements:

+---------------------------+---------------+----------------+----------+
| Phase                     | Old full sync | New full async | % change |
|                           |               | + EAS disabled |          |
+---------------------------+---------------+----------------+----------+
| Total dpm_suspend*() time |        107 ms |          62 ms |     -42% |
+---------------------------+---------------+----------------+----------+
| Total dpm_resume*() time  |         75 ms |          61 ms |     -19% |
+---------------------------+---------------+----------------+----------+
| Sum                       |        182 ms |         123 ms |     -32% |
+---------------------------+---------------+----------------+----------+

Signed-off-by: Saravana Kannan
---
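A condensed view of where the EAS toggle sits in the resume path, taken
from the hunks below with unrelated lines omitted (the two calls live in
suspend_enter() and suspend_devices_and_enter() respectively):

	platform_resume_noirq(state);
	sched_set_energy_aware(0);	/* spread async resume kworkers across all CPUs */
	dpm_resume_noirq(PMSG_RESUME);
	...
	dpm_resume_end(PMSG_RESUME);
	sched_set_energy_aware(1);	/* re-enable EAS once devices are resumed */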
 kernel/power/suspend.c  | 16 ++++++++++++++++
 kernel/sched/topology.c | 13 +++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c
index 09f8397bae15..7304dc39958f 100644
--- a/kernel/power/suspend.c
+++ b/kernel/power/suspend.c
@@ -393,6 +393,12 @@ void __weak arch_suspend_enable_irqs(void)
 	local_irq_enable();
 }
 
+/*
+ * Intentionally not part of a header file to avoid risk of abuse by other
+ * drivers.
+ */
+void sched_set_energy_aware(unsigned int enable);
+
 /**
  * suspend_enter - Make the system enter the given sleep state.
  * @state: System sleep state to enter.
@@ -468,6 +474,15 @@ static int suspend_enter(suspend_state_t state, bool *wakeup)
 
  Platform_wake:
 	platform_resume_noirq(state);
+	/*
+	 * We do this only for resume instead of suspend and resume for these
+	 * reasons:
+	 * - Performance is more important than power for resume.
+	 * - Power spent entering suspend is more important for suspend. Also,
+	 *   strangely, disabling EAS was making suspend a few milliseconds
+	 *   slower in my testing.
+	 */
+	sched_set_energy_aware(0);
 	dpm_resume_noirq(PMSG_RESUME);
 
 Platform_early_resume:
@@ -520,6 +535,7 @@ int suspend_devices_and_enter(suspend_state_t state)
 Resume_devices:
 	suspend_test_start();
 	dpm_resume_end(PMSG_RESUME);
+	sched_set_energy_aware(1);
 	suspend_test_finish("resume devices");
 	trace_suspend_resume(TPS("resume_console"), state, true);
 	resume_console();
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 9748a4c8d668..c069c0b17cbf 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -284,6 +284,19 @@ void rebuild_sched_domains_energy(void)
 	mutex_unlock(&sched_energy_mutex);
 }
 
+void sched_set_energy_aware(unsigned int enable)
+{
+	int state;
+
+	if (!sched_is_eas_possible(cpu_active_mask))
+		return;
+
+	sysctl_sched_energy_aware = enable;
+	state = static_branch_unlikely(&sched_energy_present);
+	if (state != sysctl_sched_energy_aware)
+		rebuild_sched_domains_energy();
+}
+
 #ifdef CONFIG_PROC_SYSCTL
 static int sched_energy_aware_handler(const struct ctl_table *table, int write,
 		void *buffer, size_t *lenp, loff_t *ppos)
-- 
2.47.0.338.g60cca15819-goog