The HK_TYPE_DOMAIN isolation cpumask, and later the
HK_TYPE_KERNEL_NOISE cpumask, will be made modifiable at runtime in the
future.

The affected subsystems will need to synchronize against those cpumask
changes so that:

* The reader gets a coherent snapshot
* The housekeeping subsystem can safely propagate a cpumask update to
  the subsystems after it has been published.

Protect read sides that can sleep with a per-cpu rwsem. Updates are
expected to be very rare given that CPU isolation is a niche use case and
the related cpuset setup happens only as preparation work. On the other
hand, read sides can occur in more frequent paths.
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
 include/linux/sched/isolation.h |  7 +++++++
 kernel/sched/isolation.c        | 12 ++++++++++++
 kernel/sched/sched.h            |  1 +
3 files changed, 20 insertions(+)
diff --git a/include/linux/sched/isolation.h b/include/linux/sched/isolation.h
index f98ba0d71c52..8de4f625a5c1 100644
--- a/include/linux/sched/isolation.h
+++ b/include/linux/sched/isolation.h
@@ -41,6 +41,9 @@ static inline bool housekeeping_cpu(int cpu, enum hk_type type)
return true;
}
+extern void housekeeping_lock(void);
+extern void housekeeping_unlock(void);
+
extern void __init housekeeping_init(void);
#else
@@ -73,6 +76,8 @@ static inline bool housekeeping_cpu(int cpu, enum hk_type type)
return true;
}
+static inline void housekeeping_lock(void) { }
+static inline void housekeeping_unlock(void) { }
static inline void housekeeping_init(void) { }
#endif /* CONFIG_CPU_ISOLATION */
@@ -84,4 +89,6 @@ static inline bool cpu_is_isolated(int cpu)
cpuset_cpu_is_isolated(cpu);
}
+DEFINE_LOCK_GUARD_0(housekeeping, housekeeping_lock(), housekeeping_unlock())
+
#endif /* _LINUX_SCHED_ISOLATION_H */
diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
index 83cec3853864..8c02eeccea3b 100644
--- a/kernel/sched/isolation.c
+++ b/kernel/sched/isolation.c
@@ -18,12 +18,24 @@ static cpumask_var_t housekeeping_cpumasks[HK_TYPE_MAX];
unsigned long housekeeping_flags;
EXPORT_SYMBOL_GPL(housekeeping_flags);
+DEFINE_STATIC_PERCPU_RWSEM(housekeeping_pcpu_lock);
+
bool housekeeping_enabled(enum hk_type type)
{
return !!(housekeeping_flags & BIT(type));
}
EXPORT_SYMBOL_GPL(housekeeping_enabled);
+void housekeeping_lock(void)
+{
+ percpu_down_read(&housekeeping_pcpu_lock);
+}
+
+void housekeeping_unlock(void)
+{
+ percpu_up_read(&housekeeping_pcpu_lock);
+}
+
int housekeeping_any_cpu(enum hk_type type)
{
int cpu;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 475bb5998295..0cdb560ef2f3 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -46,6 +46,7 @@
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/mutex_api.h>
+#include <linux/percpu-rwsem.h>
#include <linux/plist.h>
#include <linux/poll.h>
#include <linux/proc_fs.h>
--
2.48.1
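[ Not part of the patch, but for illustration: a minimal sketch of how a
  sleepable read side might use the new interface. setup_on_cpu() is a
  made-up placeholder for the actual work; only housekeeping_lock(),
  housekeeping_unlock(), the housekeeping guard and housekeeping_any_cpu()
  come from the patch and the existing API. ]

#include <linux/cleanup.h>
#include <linux/sched/isolation.h>

/* Hypothetical payload, stands in for whatever the subsystem does. */
static int setup_on_cpu(int cpu);

/*
 * A sleepable read side takes the housekeeping lock while it looks at the
 * HK_TYPE_DOMAIN housekeeping CPUs, so a future runtime update cannot
 * change the mask in the middle of the sequence.
 */
static int isolation_aware_setup(void)
{
	int cpu, ret;

	housekeeping_lock();
	cpu = housekeeping_any_cpu(HK_TYPE_DOMAIN);
	ret = setup_on_cpu(cpu);	/* may sleep */
	housekeeping_unlock();

	return ret;
}

/* Equivalent form using the guard defined at the end of isolation.h: */
static int isolation_aware_setup_guarded(void)
{
	guard(housekeeping)();
	return setup_on_cpu(housekeeping_any_cpu(HK_TYPE_DOMAIN));
}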
On 6/20/25 11:22 AM, Frederic Weisbecker wrote:
> Protect read sides that can sleep with a per-cpu rwsem. Updates are
> expected to be very rare given that CPU isolation is a niche use case and
> the related cpuset setup happens only as preparation work. On the other
> hand, read sides can occur in more frequent paths.
>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>

Thanks for the patch series and it certainly has some good ideas. However I
am a bit concerned about the overhead of using percpu-rwsem for
synchronization, especially when the readers have to wait for completion on
the writer side. From my point of view, during the transition period when
new isolated CPUs are being added or old ones are being removed, the reader
will either get the old CPU data or the new one depending on the exact
timing. The effect of the CPU selection may persist for a while after the
end of the critical section.

Can we just rely on RCU to make sure that it gets either the new one or the
old one, but nothing in between, without the additional overhead?

My current thinking is to make use of CPU hotplug to enable better CPU
isolation. IOW, I would shut down the affected CPUs, change the housekeeping
masks and then bring them back online again. That means the writer side will
take a while to complete.

Cheers,
Longman
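[ For reference, a rough model of the RCU-only scheme Longman alludes to
  above. None of these symbols exist in the current code; the sketch only
  illustrates the "old mask or new mask, never a partial view" property. ]

#include <linux/cpumask.h>
#include <linux/mutex.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Purely illustrative: publish the mask through an RCU-managed pointer. */
static struct cpumask __rcu *hk_domain_mask;
static DEFINE_MUTEX(hk_update_mutex);

static int hk_any_domain_cpu(void)
{
	int cpu;

	rcu_read_lock();
	/* Readers see either the old mask or the new one, never a mix. */
	cpu = cpumask_any_and(rcu_dereference(hk_domain_mask), cpu_online_mask);
	rcu_read_unlock();

	return cpu;
}

static void hk_update_domain_mask(struct cpumask *new_mask)
{
	struct cpumask *old;

	mutex_lock(&hk_update_mutex);
	old = rcu_dereference_protected(hk_domain_mask,
					lockdep_is_held(&hk_update_mutex));
	rcu_assign_pointer(hk_domain_mask, new_mask);
	mutex_unlock(&hk_update_mutex);

	synchronize_rcu();	/* all readers now use new_mask */
	kfree(old);		/* assumes the old mask was kmalloc()ed */
}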
On Mon, Jun 23, 2025 at 01:34:58PM -0400, Waiman Long wrote:
> Thanks for the patch series and it certainly has some good ideas. However I
> am a bit concerned about the overhead of using percpu-rwsem for
> synchronization, especially when the readers have to wait for completion on
> the writer side. From my point of view, during the transition period when
> new isolated CPUs are being added or old ones are being removed, the reader
> will either get the old CPU data or the new one depending on the exact
> timing. The effect of the CPU selection may persist for a while after the
> end of the critical section.

It depends.

1) If the read side queues a work and waits for it (case of work_on_cpu()),
   we can protect the whole sequence under the same sleeping lock and there
   is no persistence beyond it.

2) But if the read side just queues some work or defines some cpumask for a
   future queue, then there is persistence and some action must be taken by
   housekeeping after the update to propagate the new cpumask (flush pending
   works, etc...).

> Can we just rely on RCU to make sure that it gets either the new one or the
> old one, but nothing in between, without the additional overhead?

This is the case as well and it is covered by 2) above. The sleeping parts
handled in 1) would require more thought.

> My current thinking is to make use of CPU hotplug to enable better CPU
> isolation. IOW, I would shut down the affected CPUs, change the housekeeping
> masks and then bring them back online again. That means the writer side will
> take a while to complete.

You mean that an isolated partition should only be set on offline CPUs?
That's the plan for nohz_full but it may be too late for domain isolation.

Thanks.

--
Frederic Weisbecker
SUSE Labs
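[ To make case 1) above concrete, a minimal sketch. pci_probe_fn() is a
  made-up stand-in for whatever payload the subsystem runs on the chosen
  CPU; the locking and work_on_cpu() calls are the real APIs. ]

#include <linux/sched/isolation.h>
#include <linux/workqueue.h>

/* Hypothetical payload, e.g. a PCI probe routine. */
static long pci_probe_fn(void *arg);

/*
 * Case 1) sketch: the CPU choice and the synchronous work_on_cpu() both sit
 * under the housekeeping lock, so nothing outlives the critical section and
 * a concurrent cpumask update simply waits for it to finish.
 */
static long probe_on_housekeeping_cpu(void *arg)
{
	long ret;

	housekeeping_lock();
	ret = work_on_cpu(housekeeping_any_cpu(HK_TYPE_DOMAIN),
			  pci_probe_fn, arg);
	housekeeping_unlock();

	return ret;
}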
On 6/25/25 10:18 AM, Frederic Weisbecker wrote:
> 1) If the read side queues a work and waits for it (case of work_on_cpu()),
>    we can protect the whole sequence under the same sleeping lock and there
>    is no persistence beyond it.
>
> 2) But if the read side just queues some work or defines some cpumask for a
>    future queue, then there is persistence and some action must be taken by
>    housekeeping after the update to propagate the new cpumask (flush pending
>    works, etc...).

I don't mind doing actions to make sure that the cpumask is properly
propagated after changing housekeeping cpumasks. I just don't want to
introduce too much latency on the reader, which could be a latency-sensitive
task running on an isolated CPU.

I would say it should be OK to have a grace period (reusing the RCU term)
after changing the housekeeping cpumasks, during which tasks running on the
CPUs affected by the cpumask change may or may not experience its full
effect. However, we should minimize the overhead on tasks that run on CPUs
unrelated to the cpumask change ASAP.

> > My current thinking is to make use of CPU hotplug to enable better CPU
> > isolation. IOW, I would shut down the affected CPUs, change the
> > housekeeping masks and then bring them back online again. That means the
> > writer side will take a while to complete.
>
> You mean that an isolated partition should only be set on offline CPUs?
> That's the plan for nohz_full but it may be too late for domain isolation.

Actually I was talking mainly about nohz_full, but we should handle changes
in the HK_TYPE_DOMAIN cpumask the same way.

Cheers,
Longman
Hi Waiman,

On Mon, Jun 23, 2025 at 01:34:58PM -0400 Waiman Long wrote:
> My current thinking is to make use of CPU hotplug to enable better CPU
> isolation. IOW, I would shut down the affected CPUs, change the housekeeping
> masks and then bring them back online again. That means the writer side will
> take a while to complete.

The problem with this approach is that offlining a cpu affects all the other
cpus and causes latency spikes on other low-latency tasks which may already
be running on other parts of the system.

I just don't want us to finally get to dynamic isolation and have it not be
usable for the use cases asking for it.

Cheers,
Phil
On Wed, Jun 25, 2025 at 08:18:50AM -0400, Phil Auld wrote:
> The problem with this approach is that offlining a cpu affects all the other
> cpus and causes latency spikes on other low-latency tasks which may already
> be running on other parts of the system.
>
> I just don't want us to finally get to dynamic isolation and have it not be
> usable for the use cases asking for it.

We'll have to discuss that eventually because that's the plan for nohz_full.
We can work around the stop machine rendez-vous on nohz_full if that's the
problem. If the issue is not to interrupt common RT tasks, then that's a
different problem for which I don't have a solution.

Thanks.

--
Frederic Weisbecker
SUSE Labs
On Wed, Jun 25, 2025 at 04:34:18PM +0200 Frederic Weisbecker wrote:
> We'll have to discuss that eventually because that's the plan for nohz_full.
> We can work around the stop machine rendez-vous on nohz_full if that's the
> problem. If the issue is not to interrupt common RT tasks, then that's a
> different problem for which I don't have a solution.

My understanding is that it's the stop machine issue. If you have a way
around that then great!

Cheers,
Phil
On 6/25/25 11:50 AM, Phil Auld wrote:
> My understanding is that it's the stop machine issue. If you have a way
> around that then great!

My current thinking is to just run a selected set of CPUHP teardown and
startup methods relevant to housekeeping cpumask usage, without calling the
full set from CPUHP_ONLINE to CPUHP_OFFLINE. I don't know if it is possible
or not, or how many additional changes will be needed to make that possible.
That will skip the CPUHP_TEARDOWN_CPU teardown method that is likely the
cause of most of the latency spikes experienced by other CPUs.

Cheers,
Longman
On Thu, Jun 26, 2025 at 08:11:54PM -0400 Waiman Long wrote:
> My current thinking is to just run a selected set of CPUHP teardown and
> startup methods relevant to housekeeping cpumask usage, without calling the
> full set from CPUHP_ONLINE to CPUHP_OFFLINE. I don't know if it is possible
> or not, or how many additional changes will be needed to make that possible.
> That will skip the CPUHP_TEARDOWN_CPU teardown method that is likely the
> cause of most of the latency spikes experienced by other CPUs.

Yes, CPUHP_TEARDOWN_CPU is the source of the stop_machine I believe.

It'll be interesting to see if you can safely use the cpuhp machinery
selectively like that :)

Cheers,
Phil
On Thu, Jun 26 2025 at 20:48, Phil Auld wrote:
> On Thu, Jun 26, 2025 at 08:11:54PM -0400 Waiman Long wrote:
>> My current thinking is to just run a selected set of CPUHP teardown and
>> startup methods relevant to housekeeping cpumask usage, without calling
>> the full set from CPUHP_ONLINE to CPUHP_OFFLINE.
>
> Yes, CPUHP_TEARDOWN_CPU is the source of the stop_machine I believe.

Correct.

> It'll be interesting to see if you can safely use the cpuhp machinery
> selectively like that :)

It is supposed to work that way and you can exercise it from userspace via
sysfs already. If it fails, then there are bugs in hotplug callbacks or
ordering or ..., which need to be fixed anyway :)

Thanks,

        tglx
Hello,

On Mon, Jun 23, 2025 at 01:34:58PM -0400, Waiman Long wrote:
> Thanks for the patch series and it certainly has some good ideas. However I
> am a bit concerned about the overhead of using percpu-rwsem for
> synchronization, especially when the readers have to wait for completion on
> the writer side.
>
> Can we just rely on RCU to make sure that it gets either the new one or the
> old one, but nothing in between, without the additional overhead?

So, I had a similar thought - ie. does this need full interlocking so that
the modification operation can wait for existing users to drain? It'd be
nice to explain that part a bit more. That said, the percpu_rwsem read path
is pretty cheap, so if that is a requirement, I doubt the overhead
difference between RCU access and percpu read locking would make a
meaningful difference.

Thanks.

--
tejun
On 6/23/25 1:39 PM, Tejun Heo wrote:
> So, I had a similar thought - ie. does this need full interlocking so that
> the modification operation can wait for existing users to drain? It'd be
> nice to explain that part a bit more. That said, the percpu_rwsem read path
> is pretty cheap, so if that is a requirement, I doubt the overhead
> difference between RCU access and percpu read locking would make a
> meaningful difference.

The percpu-rwsem does have a cheaper read side compared with an rwsem for
the typical use case where writer updates happen sparingly. However, once
the writer has successfully acquired the write lock, the readers do have to
wait until the writer issues a percpu_up_write() call before they can
proceed. It is the delay introduced by this wait that I am worried about.
Isolated partitions are typically set up to run RT applications that have a
strict latency requirement, so any possible latency spike should be avoided.

Cheers,
Longman
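[ For context, a rough sketch of what the writer side of such a runtime
  update would look like; this function is not part of the patch and only
  shows where readers would block. housekeeping_pcpu_lock and
  housekeeping_cpumasks are the symbols from kernel/sched/isolation.c. ]

/*
 * Illustrative only: a runtime cpumask update takes the write side, and
 * every housekeeping_lock() reader issued meanwhile blocks until the
 * percpu_up_write() below. That wait is the latency being discussed here.
 */
static void housekeeping_update(enum hk_type type,
				const struct cpumask *new_mask)
{
	percpu_down_write(&housekeeping_pcpu_lock);
	cpumask_copy(housekeeping_cpumasks[type], new_mask);
	percpu_up_write(&housekeeping_pcpu_lock);

	/* ...then propagate: flush pending works, kick subsystems, etc. */
}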
Hello,

On Mon, Jun 23, 2025 at 01:57:17PM -0400, Waiman Long wrote:
> The percpu-rwsem does have a cheaper read side compared with an rwsem for
> the typical use case where writer updates happen sparingly. However, once
> the writer has successfully acquired the write lock, the readers do have to
> wait until the writer issues a percpu_up_write() call before they can
> proceed. It is the delay introduced by this wait that I am worried about.
> Isolated partitions are typically set up to run RT applications that have a
> strict latency requirement, so any possible latency spike should be avoided.

I see. Hmm... this being the mechanism that establishes the isolation, it
doesn't seem too broken if things stutter a bit while isolation is being
updated. Let's see what Frederic says about why the strong interlocking is
needed.

Thanks.

--
tejun
On Mon, Jun 23, 2025 at 08:03:46AM -1000, Tejun Heo wrote:
> I see. Hmm... this being the mechanism that establishes the isolation, it
> doesn't seem too broken if things stutter a bit while isolation is being
> updated. Let's see what Frederic says about why the strong interlocking is
> needed.

I should be able to work around that. I think only PCI requires that rwsem
because it relies on work_on_cpu(). I can create a dedicated workqueue for
it that housekeeping can flush after the cpumask update.

--
Frederic Weisbecker
SUSE Labs
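[ A rough sketch of that idea, with all names made up: PCI-side works that
  depend on the housekeeping mask go through a dedicated workqueue that
  housekeeping flushes once a new mask has been published. Only
  alloc_workqueue()/flush_workqueue() are existing APIs. ]

#include <linux/workqueue.h>

/* Illustrative only: dedicated queue for housekeeping-affine PCI works. */
static struct workqueue_struct *pci_hk_wq;

static int __init pci_hk_wq_init(void)
{
	pci_hk_wq = alloc_workqueue("pci_housekeeping", WQ_UNBOUND, 0);
	return pci_hk_wq ? 0 : -ENOMEM;
}

/*
 * Hypothetical hook, called by housekeeping after publishing an updated
 * cpumask: once this returns, no PCI work queued against the old mask is
 * still pending or running.
 */
void pci_hk_flush_after_cpumask_update(void)
{
	flush_workqueue(pci_hk_wq);
}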