From: Peter Newman
To: reinette.chatre@intel.com, fenghua.yu@intel.com
Cc: bp@alien8.de, derkling@google.com, eranian@google.com, hpa@zytor.com, james.morse@arm.com, jannh@google.com, kpsingh@google.com, linux-kernel@vger.kernel.org, mingo@redhat.com, tglx@linutronix.de, x86@kernel.org, Peter Newman
Subject: [PATCH v5 1/1] x86/resctrl: Fix task CLOSID/RMID update race
Date: Wed, 14 Dec 2022 12:44:47 +0100
Message-ID: <20221214114447.1935755-2-peternewman@google.com>
In-Reply-To: <20221214114447.1935755-1-peternewman@google.com>
References: <20221214114447.1935755-1-peternewman@google.com>
X-Mailer: git-send-email 2.39.0.rc1.256.g54fd8350bd-goog
When the user moves a running task to a new rdtgroup using the tasks
file interface or by deleting its rdtgroup, the resulting change in
CLOSID/RMID must be immediately propagated to the PQR_ASSOC MSR on the
task's CPU(s).

x86 allows reordering loads with prior stores, so the CPU can hoist the
task_curr() check above the CLOSID/RMID stores. If the task starts
running in that window, __rdtgroup_move_task() fails to determine that
the task needs to be interrupted to obtain the new CLOSID/RMID, and the
task keeps running with the old CLOSID/RMID until it is switched again.
Refer to the diagram below:

CPU 0                                   CPU 1
-----                                   -----
__rdtgroup_move_task():
  curr <- t1->cpu->rq->curr
                                        __schedule():
                                          rq->curr <- t1
                                        resctrl_sched_in():
                                          t1->{closid,rmid} -> {1,1}
  t1->{closid,rmid} <- {2,2}
  if (curr == t1) // false
    IPI(t1->cpu)

A similar race impacts rdt_move_group_tasks(), which updates tasks in a
deleted rdtgroup.

In both cases, use smp_mb() to order the task_struct::{closid,rmid}
stores before the loads in task_curr(). In particular, in the
rdt_move_group_tasks() case, simply execute an smp_mb() on every
iteration with a matching task.

It is possible to use a single smp_mb() in rdt_move_group_tasks(), but
this would require two passes and a means of remembering which
task_structs were updated in the first loop. However, the benchmark
results below show too little performance impact from the simple
approach to justify implementing the two-pass approach.

Times below were collected using `perf stat` to measure the time to
remove a group containing a 1600-task, parallel workload.

CPU: Intel(R) Xeon(R) Platinum P-8136 CPU @ 2.00GHz (112 threads)

  # mkdir /sys/fs/resctrl/test
  # echo $$ > /sys/fs/resctrl/test/tasks
  # perf bench sched messaging -g 40 -l 100000

task-clock time ranges collected using:

  # perf stat rmdir /sys/fs/resctrl/test

Baseline:                      1.54 - 1.60 ms
smp_mb() every matching task:  1.57 - 1.67 ms

Signed-off-by: Peter Newman <peternewman@google.com>
Reviewed-by: Reinette Chatre <reinette.chatre@intel.com>
---
 arch/x86/kernel/cpu/resctrl/rdtgroup.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
index e5a48f05e787..5993da21d822 100644
--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
@@ -580,8 +580,10 @@ static int __rdtgroup_move_task(struct task_struct *tsk,
 	/*
 	 * Ensure the task's closid and rmid are written before determining if
 	 * the task is current that will decide if it will be interrupted.
+	 * This pairs with the full barrier between the rq->curr update and
+	 * resctrl_sched_in() during context switch.
 	 */
-	barrier();
+	smp_mb();
 
 	/*
 	 * By now, the task's closid and rmid are set. If the task is current
@@ -2401,6 +2403,14 @@ static void rdt_move_group_tasks(struct rdtgroup *from, struct rdtgroup *to,
 		WRITE_ONCE(t->closid, to->closid);
 		WRITE_ONCE(t->rmid, to->mon.rmid);
 
+		/*
+		 * Order the closid/rmid stores above before the loads
+		 * in task_curr(). This pairs with the full barrier
+		 * between the rq->curr update and resctrl_sched_in()
+		 * during context switch.
+		 */
+		smp_mb();
+
 		/*
 		 * If the task is on a CPU, set the CPU in the mask.
 		 * The detection is inaccurate as tasks might move or
-- 
2.39.0.rc1.256.g54fd8350bd-goog
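For readers less familiar with the barrier pairing described in the
added comments, the sketch below restates it as a plain C11
store-buffering litmus test. This is an illustration only, not kernel
code: the mover/switcher threads and the closid/task_is_curr variables
are simplified, hypothetical stand-ins for __rdtgroup_move_task() and
the context-switch path. With both fences present, at least one thread
must observe the other's store, so either the mover sees that the task
is current (and would send the IPI) or the incoming task already sees
the new CLOSID; dropping either fence allows the stale outcome shown in
the diagram above.

/*
 * Userspace sketch of the ordering requirement (illustration only).
 * Build with: cc -std=c11 -pthread sb_litmus.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int closid;        /* set to 1 (old group) before start */
static atomic_int task_is_curr;  /* 0 = task not running on any CPU   */

/* Plays __rdtgroup_move_task(): store new CLOSID, then check task_curr(). */
static void *mover(void *arg)
{
	atomic_store_explicit(&closid, 2, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);  /* the added smp_mb() */
	if (atomic_load_explicit(&task_is_curr, memory_order_relaxed))
		puts("mover: task is current, would send IPI");
	return NULL;
}

/* Plays the context switch: mark task current, then load its CLOSID. */
static void *switcher(void *arg)
{
	atomic_store_explicit(&task_is_curr, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);  /* barrier in __schedule() */
	printf("switcher: scheduling in with closid %d\n",
	       atomic_load_explicit(&closid, memory_order_relaxed));
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	atomic_store(&closid, 1);
	pthread_create(&a, NULL, mover, NULL);
	pthread_create(&b, NULL, switcher, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}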