[PATCH v1 1/2] sched: Improve cache locality of RSEQ concurrency IDs for intermittent workloads

Posted by Mathieu Desnoyers 1 month, 3 weeks ago
commit 223baf9d17f25 ("sched: Fix performance regression introduced by mm_cid")
introduced a per-mm/cpu current concurrency id (mm_cid), which keeps
a reference to the concurrency id allocated for each CPU. This reference
expires shortly after a 100ms delay.

These per-CPU references keep the per-mm-cid data cache-local in
situations where threads are running at least once on each CPU within
each 100ms window, thus keeping the per-cpu reference alive.

However, intermittent workloads behaving in bursts spaced by more than
100ms on each CPU exhibit bad cache locality and degraded performance
compared to purely per-cpu data indexing, because concurrency IDs are
allocated over various CPUs and cores, therefore losing cache locality
of the associated data.

Introduce the following changes to improve per-mm-cid cache locality:

- Add a "recent_cid" field to the per-mm/cpu mm_cid structure to keep
  track of which mm_cid value was last used, and use it as a hint to
  attempt re-allocating the same concurrency ID the next time this
  mm/cpu needs to allocate a concurrency ID,

- Add a per-mm CPUs allowed mask, which keeps track of the union of
  CPUs allowed for all threads belonging to this mm. This cpumask is
  only set during the lifetime of the mm, never cleared, so it
  represents the union of all the CPUs allowed since the beginning of
  the mm lifetime. (note that the mm_cpumask() is really arch-specific
  and tailored to the TLB flush needs, and is thus _not_ a viable
  approach for this)

- Add a per-mm nr_cpus_allowed to keep track of the weight of the
  per-mm CPUs allowed mask (for fast access),

- Add a per-mm nr_cids_used to keep track of the highest concurrency
  ID allocated for the mm. This is used for expanding the concurrency ID
  allocation within the upper bound defined by:

    min(mm->nr_cpus_allowed, mm->mm_users)

  When the next unused CID value reaches this threshold, stop trying
  to expand the cid allocation and use the first available cid value
  instead.

Spreading allocation to use all the cid values within the range

  [ 0, min(mm->nr_cpus_allowed, mm->mm_users) - 1 ]

improves cache locality while preserving mm_cid compactness within the
expected user limits.

- In __mm_cid_try_get, only return cid values within the range
  [ 0, mm->nr_cpus_allowed ] rather than [ 0, nr_cpu_ids ]. This
  prevents allocating cids above the number of allowed cpus in
  rare scenarios where cid allocation races with a concurrent
  remote-clear of the per-mm/cpu cid. This improvement is made
  possible by the addition of the per-mm CPUs allowed mask.

- In sched_mm_cid_migrate_to, use mm->nr_cpus_allowed rather than
  t->nr_cpus_allowed. This criterion was really meant to compare
  the number of mm->mm_users to the number of CPUs allowed for the
  entire mm. Therefore, the prior comparison worked fine when all
  threads shared the same CPUs allowed mask, but not so much in
  scenarios where those threads have different masks (e.g. each
  thread pinned to a single CPU). This improvement is made
  possible by the addition of the per-mm CPUs allowed mask.
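
Informally, the resulting cid allocation order can be summarized by the
following stand-alone model (a single-threaded user-space sketch for
illustration only; the actual kernel implementation is __mm_cid_try_get()
in the diff below):

    #include <stdbool.h>

    /*
     * cidmask[] has at least nr_cpus_allowed entries; *max_nr_cid starts
     * at 0 and is shared by all allocations for the same "mm".
     */
    static int cid_alloc(bool *cidmask, int recent_cid, int *max_nr_cid,
                         int nr_cpus_allowed, int nr_threads)
    {
            int limit = nr_cpus_allowed < nr_threads ? nr_cpus_allowed : nr_threads;
            int cid;

            /* 1) Reuse the recent cid if it is still free (cache locality). */
            if (recent_cid != -1 && !cidmask[recent_cid]) {
                    cidmask[recent_cid] = true;
                    return recent_cid;
            }
            /* 2) Expand allocation up to min(nr_cpus_allowed, nr_threads). */
            while (*max_nr_cid < limit) {
                    cid = (*max_nr_cid)++;
                    if (!cidmask[cid]) {
                            cidmask[cid] = true;
                            return cid;
                    }
            }
            /* 3) Fall back to the first free cid below nr_cpus_allowed. */
            for (cid = 0; cid < nr_cpus_allowed; cid++) {
                    if (!cidmask[cid]) {
                            cidmask[cid] = true;
                            return cid;
                    }
            }
            return -1;
    }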

* Benchmarks

Each thread increments 16kB worth of 8-bit integers in bursts, with
a configurable delay between each thread's execution. The threads run
one after the other (no two threads run concurrently). The order of
thread execution in the sequence is random. The thread execution
sequence begins again after all threads have executed. The 16kB areas
are allocated with rseq_mempool and indexed by either cpu_id, mm_cid
(not cache-local), or cache-local mm_cid. Each thread is pinned to its
own core.
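
For reference, each burst is conceptually equivalent to the following
user-space sketch (illustration only; the actual harness allocates the
16kB areas with rseq_mempool and picks the index according to the
configuration under test):

    #include <stdint.h>
    #include <stddef.h>

    #define AREA_SIZE 16384   /* 16kB of 8-bit integers per thread */

    /* 'index' is either the cpu number or the rseq mm_cid. */
    static void burst(uint8_t *areas, size_t index)
    {
            uint8_t *area = areas + index * AREA_SIZE;
            size_t i;

            for (i = 0; i < AREA_SIZE; i++)
                    area[i]++;
    }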

Testing configurations:

8-core/1-L3:        Use 8 cores within a single L3
24-core/24-L3:      Use 24 cores, 1 core per L3
192-core/24-L3:     Use 192 cores (all cores in the system)
384-thread/24-L3:   Use 384 HW threads (all HW threads in the system)

Intermittent workload delays between threads: 200ms, 10ms.

Hardware:

CPU(s):                   384
  On-line CPU(s) list:    0-383
Vendor ID:                AuthenticAMD
  Model name:             AMD EPYC 9654 96-Core Processor
    Thread(s) per core:   2
    Core(s) per socket:   96
    Socket(s):            2
Caches (sum of all):
  L1d:                    6 MiB (192 instances)
  L1i:                    6 MiB (192 instances)
  L2:                     192 MiB (192 instances)
  L3:                     768 MiB (24 instances)

Each result is an average of 5 test runs. The cache-local speedup
is calculated as: (mm_cid) / (cache-local mm_cid). For example, for
8-core/1-L3 with a 200ms delay: 19289 ns / 1336 ns ~= 14.4x.

Intermittent workload delay: 200ms

                     per-cpu     mm_cid    cache-local mm_cid    cache-local speedup
                         (ns)      (ns)                  (ns)
8-core/1-L3             1374      19289                  1336            14.4x
24-core/24-L3           2423      26721                  1594            16.7x
192-core/24-L3          2291      15826                  2153             7.3x
384-thread/24-L3        1874      13234                  1907             6.9x

Intermittent workload delay: 10ms

                     per-cpu     mm_cid    cache-local mm_cid    cache-local speedup
                         (ns)      (ns)                  (ns)
8-core/1-L3               662       756                   686             1.1x
24-core/24-L3            1378      3648                  1035             3.5x
192-core/24-L3           1439     10833                  1482             7.3x
384-thread/24-L3         1503     10570                  1556             6.8x

[ This deprecates the prior "sched: NUMA-aware per-memory-map concurrency IDs"
  patch series with a simpler and more general approach. ]

Link: https://lore.kernel.org/lkml/20240823185946.418340-1-mathieu.desnoyers@efficios.com/
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Acked-by: Marco Elver <elver@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Valentin Schneider <vschneid@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Ben Segall <bsegall@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Marco Elver <elver@google.com>
Cc: Yury Norov <yury.norov@gmail.com>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
---
 fs/exec.c                |  2 +-
 include/linux/mm_types.h | 72 +++++++++++++++++++++++++++++++++++-----
 kernel/fork.c            |  2 +-
 kernel/sched/core.c      | 22 +++++++-----
 kernel/sched/sched.h     | 47 ++++++++++++++++++--------
 5 files changed, 111 insertions(+), 34 deletions(-)

diff --git a/fs/exec.c b/fs/exec.c
index 6c53920795c2..aaa605529a75 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -990,7 +990,7 @@ static int exec_mmap(struct mm_struct *mm)
 	active_mm = tsk->active_mm;
 	tsk->active_mm = mm;
 	tsk->mm = mm;
-	mm_init_cid(mm);
+	mm_init_cid(mm, tsk);
 	/*
 	 * This prevents preemption while active_mm is being loaded and
 	 * it and mm are being updated, which could cause problems for
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6e3bdf8e38bc..8b5a185b4d5a 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -782,6 +782,7 @@ struct vm_area_struct {
 struct mm_cid {
 	u64 time;
 	int cid;
+	int recent_cid;
 };
 #endif
 
@@ -852,6 +853,27 @@ struct mm_struct {
 		 * When the next mm_cid scan is due (in jiffies).
 		 */
 		unsigned long mm_cid_next_scan;
+		/**
+		 * @nr_cpus_allowed: Number of CPUs allowed for mm.
+		 *
+		 * Number of CPUs allowed in the union of all mm's
+		 * threads allowed CPUs.
+		 */
+		atomic_t nr_cpus_allowed;
+		/**
+		 * @nr_cids_used: Number of used concurrency IDs.
+		 *
+		 * Track the highest concurrency ID allocated for the
+		 * mm: nr_cids_used - 1.
+		 */
+		atomic_t nr_cids_used;
+		/**
+		 * @cpus_allowed_lock: Lock protecting mm cpus_allowed.
+		 *
+		 * Provide mutual exclusion for mm cpus_allowed and
+		 * mm nr_cpus_allowed updates.
+		 */
+		spinlock_t cpus_allowed_lock;
 #endif
 #ifdef CONFIG_MMU
 		atomic_long_t pgtables_bytes;	/* size of all page tables */
@@ -1170,18 +1192,30 @@ static inline int mm_cid_clear_lazy_put(int cid)
 	return cid & ~MM_CID_LAZY_PUT;
 }
 
+/*
+ * mm_cpus_allowed: Union of all mm's threads allowed CPUs.
+ */
+static inline cpumask_t *mm_cpus_allowed(struct mm_struct *mm)
+{
+	unsigned long bitmap = (unsigned long)mm;
+
+	bitmap += offsetof(struct mm_struct, cpu_bitmap);
+	/* Skip cpu_bitmap */
+	bitmap += cpumask_size();
+	return (struct cpumask *)bitmap;
+}
+
 /* Accessor for struct mm_struct's cidmask. */
 static inline cpumask_t *mm_cidmask(struct mm_struct *mm)
 {
-	unsigned long cid_bitmap = (unsigned long)mm;
+	unsigned long cid_bitmap = (unsigned long)mm_cpus_allowed(mm);
 
-	cid_bitmap += offsetof(struct mm_struct, cpu_bitmap);
-	/* Skip cpu_bitmap */
+	/* Skip mm_cpus_allowed */
 	cid_bitmap += cpumask_size();
 	return (struct cpumask *)cid_bitmap;
 }
 
-static inline void mm_init_cid(struct mm_struct *mm)
+static inline void mm_init_cid(struct mm_struct *mm, struct task_struct *p)
 {
 	int i;
 
@@ -1189,17 +1223,22 @@ static inline void mm_init_cid(struct mm_struct *mm)
 		struct mm_cid *pcpu_cid = per_cpu_ptr(mm->pcpu_cid, i);
 
 		pcpu_cid->cid = MM_CID_UNSET;
+		pcpu_cid->recent_cid = MM_CID_UNSET;
 		pcpu_cid->time = 0;
 	}
+	atomic_set(&mm->nr_cpus_allowed, p->nr_cpus_allowed);
+	atomic_set(&mm->nr_cids_used, 0);
+	spin_lock_init(&mm->cpus_allowed_lock);
+	cpumask_copy(mm_cpus_allowed(mm), p->cpus_ptr);
 	cpumask_clear(mm_cidmask(mm));
 }
 
-static inline int mm_alloc_cid_noprof(struct mm_struct *mm)
+static inline int mm_alloc_cid_noprof(struct mm_struct *mm, struct task_struct *p)
 {
 	mm->pcpu_cid = alloc_percpu_noprof(struct mm_cid);
 	if (!mm->pcpu_cid)
 		return -ENOMEM;
-	mm_init_cid(mm);
+	mm_init_cid(mm, p);
 	return 0;
 }
 #define mm_alloc_cid(...)	alloc_hooks(mm_alloc_cid_noprof(__VA_ARGS__))
@@ -1212,16 +1251,31 @@ static inline void mm_destroy_cid(struct mm_struct *mm)
 
 static inline unsigned int mm_cid_size(void)
 {
-	return cpumask_size();
+	return 2 * cpumask_size();	/* mm_cpus_allowed(), mm_cidmask(). */
+}
+
+static inline void mm_set_cpus_allowed(struct mm_struct *mm, const struct cpumask *cpumask)
+{
+	struct cpumask *mm_allowed = mm_cpus_allowed(mm);
+
+	if (!mm)
+		return;
+	/* The mm_cpus_allowed is the union of each thread allowed CPUs masks. */
+	spin_lock(&mm->cpus_allowed_lock);
+	cpumask_or(mm_allowed, mm_allowed, cpumask);
+	atomic_set(&mm->nr_cpus_allowed, cpumask_weight(mm_allowed));
+	spin_unlock(&mm->cpus_allowed_lock);
 }
 #else /* CONFIG_SCHED_MM_CID */
-static inline void mm_init_cid(struct mm_struct *mm) { }
-static inline int mm_alloc_cid(struct mm_struct *mm) { return 0; }
+static inline void mm_init_cid(struct mm_struct *mm, struct task_struct *p) { }
+static inline int mm_alloc_cid(struct mm_struct *mm, struct task_struct *p) { return 0; }
 static inline void mm_destroy_cid(struct mm_struct *mm) { }
+
 static inline unsigned int mm_cid_size(void)
 {
 	return 0;
 }
+static inline void mm_set_cpus_allowed(struct mm_struct *mm, const struct cpumask *cpumask) { }
 #endif /* CONFIG_SCHED_MM_CID */
 
 struct mmu_gather;
diff --git a/kernel/fork.c b/kernel/fork.c
index 60c0b4868fd4..18bf37ae73a5 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1298,7 +1298,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
 	if (init_new_context(p, mm))
 		goto fail_nocontext;
 
-	if (mm_alloc_cid(mm))
+	if (mm_alloc_cid(mm, p))
 		goto fail_cid;
 
 	if (percpu_counter_init_many(mm->rss_stat, 0, GFP_KERNEL_ACCOUNT,
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 43e453ab7e20..772a3daf784a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2691,6 +2691,7 @@ __do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
 		put_prev_task(rq, p);
 
 	p->sched_class->set_cpus_allowed(p, ctx);
+	mm_set_cpus_allowed(p->mm, ctx->new_mask);
 
 	if (queued)
 		enqueue_task(rq, p, ENQUEUE_RESTORE | ENQUEUE_NOCLOCK);
@@ -10228,6 +10229,7 @@ int __sched_mm_cid_migrate_from_try_steal_cid(struct rq *src_rq,
 	 */
 	if (!try_cmpxchg(&src_pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
 		return -1;
+	WRITE_ONCE(src_pcpu_cid->recent_cid, MM_CID_UNSET);
 	return src_cid;
 }
 
@@ -10240,7 +10242,8 @@ void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t)
 {
 	struct mm_cid *src_pcpu_cid, *dst_pcpu_cid;
 	struct mm_struct *mm = t->mm;
-	int src_cid, dst_cid, src_cpu;
+	int src_cid, src_cpu;
+	bool dst_cid_is_set;
 	struct rq *src_rq;
 
 	lockdep_assert_rq_held(dst_rq);
@@ -10257,9 +10260,9 @@ void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t)
 	 * allocation closest to 0 in cases where few threads migrate around
 	 * many CPUs.
 	 *
-	 * If destination cid is already set, we may have to just clear
-	 * the src cid to ensure compactness in frequent migrations
-	 * scenarios.
+	 * If destination cid or recent cid is already set, we may have
+	 * to just clear the src cid to ensure compactness in frequent
+	 * migrations scenarios.
 	 *
 	 * It is not useful to clear the src cid when the number of threads is
 	 * greater or equal to the number of allowed CPUs, because user-space
@@ -10267,9 +10270,9 @@ void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t)
 	 * allowed CPUs.
 	 */
 	dst_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(dst_rq));
-	dst_cid = READ_ONCE(dst_pcpu_cid->cid);
-	if (!mm_cid_is_unset(dst_cid) &&
-	    atomic_read(&mm->mm_users) >= t->nr_cpus_allowed)
+	dst_cid_is_set = !mm_cid_is_unset(READ_ONCE(dst_pcpu_cid->cid)) ||
+			 !mm_cid_is_unset(READ_ONCE(dst_pcpu_cid->recent_cid));
+	if (dst_cid_is_set && atomic_read(&mm->mm_users) >= atomic_read(&mm->nr_cpus_allowed))
 		return;
 	src_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, src_cpu);
 	src_rq = cpu_rq(src_cpu);
@@ -10280,13 +10283,14 @@ void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t)
 							    src_cid);
 	if (src_cid == -1)
 		return;
-	if (!mm_cid_is_unset(dst_cid)) {
+	if (dst_cid_is_set) {
 		__mm_cid_put(mm, src_cid);
 		return;
 	}
 	/* Move src_cid to dst cpu. */
 	mm_cid_snapshot_time(dst_rq, mm);
 	WRITE_ONCE(dst_pcpu_cid->cid, src_cid);
+	WRITE_ONCE(dst_pcpu_cid->recent_cid, src_cid);
 }
 
 static void sched_mm_cid_remote_clear(struct mm_struct *mm, struct mm_cid *pcpu_cid,
@@ -10523,7 +10527,7 @@ void sched_mm_cid_after_execve(struct task_struct *t)
 		 * Matches barrier in sched_mm_cid_remote_clear_old().
 		 */
 		smp_mb();
-		t->last_mm_cid = t->mm_cid = mm_cid_get(rq, mm);
+		t->last_mm_cid = t->mm_cid = mm_cid_get(rq, t, mm);
 	}
 	rseq_set_notify_resume(t);
 }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index b1c3588a8f00..f32b8571af5d 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3596,24 +3596,40 @@ static inline void mm_cid_put(struct mm_struct *mm)
 	__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
 }
 
-static inline int __mm_cid_try_get(struct mm_struct *mm)
+static inline int __mm_cid_try_get(struct task_struct *t, struct mm_struct *mm)
 {
-	struct cpumask *cpumask;
-	int cid;
+	struct cpumask *cidmask = mm_cidmask(mm);
+	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
+	int cid = __this_cpu_read(pcpu_cid->recent_cid);
 
-	cpumask = mm_cidmask(mm);
+	/* Try to re-use recent cid. This improves cache locality. */
+	if (!mm_cid_is_unset(cid) && !cpumask_test_and_set_cpu(cid, cidmask))
+		return cid;
+	/*
+	 * Expand cid allocation if used cids are below the number of cpus
+	 * allowed and the number of threads. Expanding cid allocation as
+	 * much as possible improves cache locality.
+	 */
+	cid = atomic_read(&mm->nr_cids_used);
+	while (cid < atomic_read(&mm->nr_cpus_allowed) && cid < atomic_read(&mm->mm_users)) {
+		if (!atomic_try_cmpxchg(&mm->nr_cids_used, &cid, cid + 1))
+			continue;
+		if (!cpumask_test_and_set_cpu(cid, cidmask))
+			return cid;
+	}
 	/*
+	 * Find the first available concurrency id.
 	 * Retry finding first zero bit if the mask is temporarily
 	 * filled. This only happens during concurrent remote-clear
 	 * which owns a cid without holding a rq lock.
 	 */
 	for (;;) {
-		cid = cpumask_first_zero(cpumask);
-		if (cid < nr_cpu_ids)
+		cid = cpumask_first_zero(cidmask);
+		if (cid < atomic_read(&mm->nr_cpus_allowed))
 			break;
 		cpu_relax();
 	}
-	if (cpumask_test_and_set_cpu(cid, cpumask))
+	if (cpumask_test_and_set_cpu(cid, cidmask))
 		return -1;
 
 	return cid;
@@ -3631,7 +3647,8 @@ static inline void mm_cid_snapshot_time(struct rq *rq, struct mm_struct *mm)
 	WRITE_ONCE(pcpu_cid->time, rq->clock);
 }
 
-static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm)
+static inline int __mm_cid_get(struct rq *rq, struct task_struct *t,
+			       struct mm_struct *mm)
 {
 	int cid;
 
@@ -3641,13 +3658,13 @@ static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm)
 	 * guarantee forward progress.
 	 */
 	if (!READ_ONCE(use_cid_lock)) {
-		cid = __mm_cid_try_get(mm);
+		cid = __mm_cid_try_get(t, mm);
 		if (cid >= 0)
 			goto end;
 		raw_spin_lock(&cid_lock);
 	} else {
 		raw_spin_lock(&cid_lock);
-		cid = __mm_cid_try_get(mm);
+		cid = __mm_cid_try_get(t, mm);
 		if (cid >= 0)
 			goto unlock;
 	}
@@ -3667,7 +3684,7 @@ static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm)
 	 * all newcoming allocations observe the use_cid_lock flag set.
 	 */
 	do {
-		cid = __mm_cid_try_get(mm);
+		cid = __mm_cid_try_get(t, mm);
 		cpu_relax();
 	} while (cid < 0);
 	/*
@@ -3684,7 +3701,8 @@ static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm)
 	return cid;
 }
 
-static inline int mm_cid_get(struct rq *rq, struct mm_struct *mm)
+static inline int mm_cid_get(struct rq *rq, struct task_struct *t,
+			     struct mm_struct *mm)
 {
 	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
 	struct cpumask *cpumask;
@@ -3701,8 +3719,9 @@ static inline int mm_cid_get(struct rq *rq, struct mm_struct *mm)
 		if (try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
 			__mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
 	}
-	cid = __mm_cid_get(rq, mm);
+	cid = __mm_cid_get(rq, t, mm);
 	__this_cpu_write(pcpu_cid->cid, cid);
+	__this_cpu_write(pcpu_cid->recent_cid, cid);
 
 	return cid;
 }
@@ -3755,7 +3774,7 @@ static inline void switch_mm_cid(struct rq *rq,
 		prev->mm_cid = -1;
 	}
 	if (next->mm_cid_active)
-		next->last_mm_cid = next->mm_cid = mm_cid_get(rq, next->mm);
+		next->last_mm_cid = next->mm_cid = mm_cid_get(rq, next, next->mm);
 }
 
 #else /* !CONFIG_SCHED_MM_CID: */
-- 
2.39.2
Re: [PATCH v1 1/2] sched: Improve cache locality of RSEQ concurrency IDs for intermittent workloads
Posted by Peter Zijlstra 1 month, 2 weeks ago
On Thu, Oct 03, 2024 at 08:44:38PM -0400, Mathieu Desnoyers wrote:
> commit 223baf9d17f25 ("sched: Fix performance regression introduced by mm_cid")
> introduced a per-mm/cpu current concurrency id (mm_cid), which keeps
> a reference to the concurrency id allocated for each CPU. This reference
> expires shortly after a 100ms delay.
> 
> These per-CPU references keep the per-mm-cid data cache-local in
> situations where threads are running at least once on each CPU within
> each 100ms window, thus keeping the per-cpu reference alive.
> 
> However, intermittent workloads behaving in bursts spaced by more than
> 100ms on each CPU exhibit bad cache locality and degraded performance
> compared to purely per-cpu data indexing, because concurrency IDs are
> allocated over various CPUs and cores, therefore losing cache locality
> of the associated data.
> 
> Introduce the following changes to improve per-mm-cid cache locality:
> 
> - Add a "recent_cid" field to the per-mm/cpu mm_cid structure to keep
>   track of which mm_cid value was last used, and use it as a hint to
>   attempt re-allocating the same concurrency ID the next time this
>   mm/cpu needs to allocate a concurrency ID,
> 
> - Add a per-mm CPUs allowed mask, which keeps track of the union of
>   CPUs allowed for all threads belonging to this mm. This cpumask is
>   only set during the lifetime of the mm, never cleared, so it
>   represents the union of all the CPUs allowed since the beginning of
>   the mm lifetime. (note that the mm_cpumask() is really arch-specific
>   and tailored to the TLB flush needs, and is thus _not_ a viable
>   approach for this)

Because my morning juice came with an excessive dose of pedantry --
the previous and next item end with a comma due to this being
an enumeration; but this one has a full stop, suggesting the iteration
is at an end.

> - Add a per-mm nr_cpus_allowed to keep track of the weight of the
>   per-mm CPUs allowed mask (for fast access),
> 
> - Add a per-mm nr_cids_used to keep track of the highest concurrency
>   ID allocated for the mm. This is used for expanding the concurrency ID
>   allocation within the upper bound defined by:

The description and naming disagree -- while from vague memories they
end up being similar -- it is a stumbling block this morning. The
description seems to suggest this should be called max_cid or somesuch.

Also, is it actually used for anything? I found the tracking code in
__mm_cid_try_get(), but it's not actually doing anything?

>     min(mm->nr_cpus_allowed, mm->mm_users)
> 
>   When the next unused CID value reaches this threshold, stop trying
>   to expand the cid allocation and use the first available cid value
>   instead.
> 
> Spreading allocation to use all the cid values within the range
> 
>   [ 0, min(mm->nr_cpus_allowed, mm->mm_users) - 1 ]
> 
> improves cache locality while preserving mm_cid compactness within the
> expected user limits.

This paragraph seems to rudely interrupt the iteration? Or has (Fred)
Colon gone missing again to start a new iteration?

(Damn, and now I need me a Nobby reference somehow)

Anyway, I have vague memories I strongly suggested keeping the CID space
dense at some point :-)

> - In __mm_cid_try_get, only return cid values within the range
>   [ 0, mm->nr_cpus_allowed ] rather than [ 0, nr_cpu_ids ]. This
>   prevents allocating cids above the number of allowed cpus in
>   rare scenarios where cid allocation races with a concurrent
>   remote-clear of the per-mm/cpu cid. This improvement is made
>   possible by the addition of the per-mm CPUs allowed mask.

and no comma to continue the iteration.

> - In sched_mm_cid_migrate_to, use mm->nr_cpus_allowed rather than
>   t->nr_cpus_allowed. This criterion was really meant to compare
>   the number of mm->mm_users to the number of CPUs allowed for the
>   entire mm. Therefore, the prior comparison worked fine when all
>   threads shared the same CPUs allowed mask, but not so much in
>   scenarios where those threads have different masks (e.g. each
>   thread pinned to a single CPU). This improvement is made
>   possible by the addition of the per-mm CPUs allowed mask.
> 

> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 6e3bdf8e38bc..8b5a185b4d5a 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -782,6 +782,7 @@ struct vm_area_struct {
>  struct mm_cid {
>  	u64 time;
>  	int cid;
> +	int recent_cid;
>  };
>  #endif
>  
> @@ -852,6 +853,27 @@ struct mm_struct {
>  		 * When the next mm_cid scan is due (in jiffies).
>  		 */
>  		unsigned long mm_cid_next_scan;
> +		/**
> +		 * @nr_cpus_allowed: Number of CPUs allowed for mm.
> +		 *
> +		 * Number of CPUs allowed in the union of all mm's
> +		 * threads allowed CPUs.
> +		 */
> +		atomic_t nr_cpus_allowed;
> +		/**
> +		 * @nr_cids_used: Number of used concurrency IDs.
> +		 *
> +		 * Track the highest concurrency ID allocated for the
> +		 * mm: nr_cids_used - 1.
> +		 */
> +		atomic_t nr_cids_used;
> +		/**
> +		 * @cpus_allowed_lock: Lock protecting mm cpus_allowed.
> +		 *
> +		 * Provide mutual exclusion for mm cpus_allowed and
> +		 * mm nr_cpus_allowed updates.

If nr_cpus_allowed update is serialized by this here thing, why is it an
atomic_t? A quick search seems to suggest you're only using atomic_set()
/ atomic_read() on it, which is a big fat clue it shouldn't be atomic_t.

Am I missing something?

> +		 */
> +		spinlock_t cpus_allowed_lock;
>  #endif
>  #ifdef CONFIG_MMU
>  		atomic_long_t pgtables_bytes;	/* size of all page tables */
> @@ -1170,18 +1192,30 @@ static inline int mm_cid_clear_lazy_put(int cid)
>  	return cid & ~MM_CID_LAZY_PUT;
>  }
>  
> +/*
> + * mm_cpus_allowed: Union of all mm's threads allowed CPUs.
> + */
> +static inline cpumask_t *mm_cpus_allowed(struct mm_struct *mm)
> +{
> +	unsigned long bitmap = (unsigned long)mm;
> +
> +	bitmap += offsetof(struct mm_struct, cpu_bitmap);
> +	/* Skip cpu_bitmap */
> +	bitmap += cpumask_size();
> +	return (struct cpumask *)bitmap;
> +}
> +
>  /* Accessor for struct mm_struct's cidmask. */
>  static inline cpumask_t *mm_cidmask(struct mm_struct *mm)
>  {
> -	unsigned long cid_bitmap = (unsigned long)mm;
> +	unsigned long cid_bitmap = (unsigned long)mm_cpus_allowed(mm);
>  
> -	cid_bitmap += offsetof(struct mm_struct, cpu_bitmap);
> -	/* Skip cpu_bitmap */
> +	/* Skip mm_cpus_allowed */
>  	cid_bitmap += cpumask_size();
>  	return (struct cpumask *)cid_bitmap;
>  }
>  
> -static inline void mm_init_cid(struct mm_struct *mm)
> +static inline void mm_init_cid(struct mm_struct *mm, struct task_struct *p)
>  {
>  	int i;
>  
> @@ -1189,17 +1223,22 @@ static inline void mm_init_cid(struct mm_struct *mm)
>  		struct mm_cid *pcpu_cid = per_cpu_ptr(mm->pcpu_cid, i);
>  
>  		pcpu_cid->cid = MM_CID_UNSET;
> +		pcpu_cid->recent_cid = MM_CID_UNSET;
>  		pcpu_cid->time = 0;
>  	}
> +	atomic_set(&mm->nr_cpus_allowed, p->nr_cpus_allowed);
> +	atomic_set(&mm->nr_cids_used, 0);
> +	spin_lock_init(&mm->cpus_allowed_lock);
> +	cpumask_copy(mm_cpus_allowed(mm), p->cpus_ptr);

Should that not be using p->cpus_mask? I mean, it is unlikely this code
is run during migrate_disable(), but just in case that ever does
happen, we'll be getting a spurious single-CPU mask.

>  	cpumask_clear(mm_cidmask(mm));
>  }
>  
> -static inline int mm_alloc_cid_noprof(struct mm_struct *mm)
> +static inline int mm_alloc_cid_noprof(struct mm_struct *mm, struct task_struct *p)
>  {
>  	mm->pcpu_cid = alloc_percpu_noprof(struct mm_cid);
>  	if (!mm->pcpu_cid)
>  		return -ENOMEM;
> -	mm_init_cid(mm);
> +	mm_init_cid(mm, p);
>  	return 0;
>  }
>  #define mm_alloc_cid(...)	alloc_hooks(mm_alloc_cid_noprof(__VA_ARGS__))
> @@ -1212,16 +1251,31 @@ static inline void mm_destroy_cid(struct mm_struct *mm)
>  
>  static inline unsigned int mm_cid_size(void)
>  {
> -	return cpumask_size();
> +	return 2 * cpumask_size();	/* mm_cpus_allowed(), mm_cidmask(). */
> +}
> +
> +static inline void mm_set_cpus_allowed(struct mm_struct *mm, const struct cpumask *cpumask)
> +{
> +	struct cpumask *mm_allowed = mm_cpus_allowed(mm);
> +
> +	if (!mm)
> +		return;
> +	/* The mm_cpus_allowed is the union of each thread allowed CPUs masks. */
> +	spin_lock(&mm->cpus_allowed_lock);
> +	cpumask_or(mm_allowed, mm_allowed, cpumask);
> +	atomic_set(&mm->nr_cpus_allowed, cpumask_weight(mm_allowed));
> +	spin_unlock(&mm->cpus_allowed_lock);

We're having a problem here: you call this from __do_set_cpus_allowed(),
which is holding rq->lock, which is a raw_spinlock_t.

>  }
>  #else /* CONFIG_SCHED_MM_CID */
> -static inline void mm_init_cid(struct mm_struct *mm) { }
> -static inline int mm_alloc_cid(struct mm_struct *mm) { return 0; }
> +static inline void mm_init_cid(struct mm_struct *mm, struct task_struct *p) { }
> +static inline int mm_alloc_cid(struct mm_struct *mm, struct task_struct *p) { return 0; }
>  static inline void mm_destroy_cid(struct mm_struct *mm) { }
> +
>  static inline unsigned int mm_cid_size(void)
>  {
>  	return 0;
>  }
> +static inline void mm_set_cpus_allowed(struct mm_struct *mm, const struct cpumask *cpumask) { }
>  #endif /* CONFIG_SCHED_MM_CID */
>  
>  struct mmu_gather;

> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 43e453ab7e20..772a3daf784a 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2691,6 +2691,7 @@ __do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
>  		put_prev_task(rq, p);
>  
>  	p->sched_class->set_cpus_allowed(p, ctx);
> +	mm_set_cpus_allowed(p->mm, ctx->new_mask);

This here is with p->pi_lock and rq->lock held -- both are
raw_spinlock_t.

>  
>  	if (queued)
>  		enqueue_task(rq, p, ENQUEUE_RESTORE | ENQUEUE_NOCLOCK);
Re: [PATCH v1 1/2] sched: Improve cache locality of RSEQ concurrency IDs for intermittent workloads
Posted by Mathieu Desnoyers 1 month, 2 weeks ago
On 2024-10-09 11:07, Peter Zijlstra wrote:
> On Thu, Oct 03, 2024 at 08:44:38PM -0400, Mathieu Desnoyers wrote:
>> commit 223baf9d17f25 ("sched: Fix performance regression introduced by mm_cid")
>> introduced a per-mm/cpu current concurrency id (mm_cid), which keeps
>> a reference to the concurrency id allocated for each CPU. This reference
>> expires shortly after a 100ms delay.
>>
>> These per-CPU references keep the per-mm-cid data cache-local in
>> situations where threads are running at least once on each CPU within
>> each 100ms window, thus keeping the per-cpu reference alive.
>>
>> However, intermittent workloads behaving in bursts spaced by more than
>> 100ms on each CPU exhibit bad cache locality and degraded performance
>> compared to purely per-cpu data indexing, because concurrency IDs are
>> allocated over various CPUs and cores, therefore losing cache locality
>> of the associated data.
>>
>> Introduce the following changes to improve per-mm-cid cache locality:
>>
>> - Add a "recent_cid" field to the per-mm/cpu mm_cid structure to keep
>>    track of which mm_cid value was last used, and use it as a hint to
>>    attempt re-allocating the same concurrency ID the next time this
>>    mm/cpu needs to allocate a concurrency ID,
>>
>> - Add a per-mm CPUs allowed mask, which keeps track of the union of
>>    CPUs allowed for all threads belonging to this mm. This cpumask is
>>    only set during the lifetime of the mm, never cleared, so it
>>    represents the union of all the CPUs allowed since the beginning of
>>    the mm lifetime. (note that the mm_cpumask() is really arch-specific
>>    and tailored to the TLB flush needs, and is thus _not_ a viable
>>    approach for this)
> 
> Because my morning juice came with an excessive dose of pedantry --
> the previous and next item end with a comma due to this being
> an enumeration; but this one has a full stop, suggesting the iteration
> is at an end.

Lol, yes, I'll fix it, thanks.

> 
>> - Add a per-mm nr_cpus_allowed to keep track of the weight of the
>>    per-mm CPUs allowed mask (for fast access),
>>
>> - Add a per-mm nr_cids_used to keep track of the highest concurrency
>>    ID allocated for the mm. This is used for expanding the concurrency ID
>>    allocation within the upper bound defined by:
> 
> The description and naming disagree -- while from vague memories they
> end up being similar -- it is a stumbling block this morning. The
> description seems to suggest this should be called max_cid or somesuch.

I am tempted to go for max_nr_cid then, so we keep tracking a _number_ of
concurrency IDs, because its purpose is to be compared with
mm->nr_cpus_allowed and mm->mm_users, which are also counts.

So I should change the way it is used in __mm_cid_try_get() to:

         /*
          * Expand cid allocation if the maximum number of concurrency
          * IDs allocated (max_nr_cid) is below the number of cpus allowed
          * and the number of threads. Expanding cid allocation as much as
          * possible improves cache locality.
          */
         cid = atomic_read(&mm->max_nr_cid);
         while (cid < atomic_read(&mm->nr_cpus_allowed) && cid < atomic_read(&mm->mm_users)) {
                 if (!atomic_try_cmpxchg(&mm->max_nr_cid, &cid, cid + 1))
                         continue;
                 if (!cpumask_test_and_set_cpu(cid, cidmask))
                         return cid;
         }


> 
> Also, is it actually used for anything? I found the tracking code in
> __mm_cid_try_get(), but it's not actually doing anything?

It does something if there is no recent CID to reuse. Then we prefer
expanding with new CIDs (rather than finding the first available one)
as long as we are within the range of nr_cpus_allowed and mm_users.
Otherwise, we use the first available CID (favoring compactness).

> 
>>      min(mm->nr_cpus_allowed, mm->mm_users)
>>
>>    When the next unused CID value reaches this threshold, stop trying
>>    to expand the cid allocation and use the first available cid value
>>    instead.
>>
>> Spreading allocation to use all the cid values within the range
>>
>>    [ 0, min(mm->nr_cpus_allowed, mm->mm_users) - 1 ]
>>
>> improves cache locality while preserving mm_cid compactness within the
>> expected user limits.
> 
> This paragraph seems to rudely interrupt the iteration? Or has (Fred)
> Colon gone missing again to start a new iteration?
> 
> (Damn, and now I need me a Nobby reference somehow)

Lol, thanks for the good Discworld series memories :)

Fixed.

> 
> Anyway, I have vague memories I strongly suggested keeping the CID space
> dense at some point :-)

You did indeed, but the benchmarks disagreed with your suggestion up to a
certain point. So I've split this into two modes to accommodate each situation:

- When cid allocation is about to go beyond the number of threads in the
   mm, or beyond the number of allowed cpus, favor compactness and don't
   go beyond those limits. This provides clear upper bounds for
   userspace. This follows your earlier strong suggestion to the letter.

- When allocation is within the range limited by the number of threads in
   the mm and within the range of the number of allowed cpus, we are free
   to be compact or not: we've provided no guarantees to userspace there.
   If we happen to be more compact than those values, this is only due to
   pure chance and caused by the specific scheduling pattern, thus not
   something the user should rely on.

   And it turns out that for intermittent workloads, there are large gains
   by making sure we are spreading the cid allocation as much as possible
   up to that limit rather than keeping it compact.

   Intermittent workloads can happen if userspace is only scheduled once in
   a while based on timers, or if the system is overcommitted and thus threads
   are only scheduled once in a while (every 100+ ms).
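
For reference, the threshold below which cid allocation keeps expanding
is simply min(mm->nr_cpus_allowed, mm->mm_users); as a hypothetical
helper for illustration (not part of the patch; assumes
<linux/mm_types.h> and <linux/minmax.h>):

         static inline int mm_cid_expand_limit(struct mm_struct *mm)
         {
                 return min(atomic_read(&mm->nr_cpus_allowed),
                            atomic_read(&mm->mm_users));
         }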

> 
>> - In __mm_cid_try_get, only return cid values within the range
>>    [ 0, mm->nr_cpus_allowed ] rather than [ 0, nr_cpu_ids ]. This
>>    prevents allocating cids above the number of allowed cpus in
>>    rare scenarios where cid allocation races with a concurrent
>>    remote-clear of the per-mm/cpu cid. This improvement is made
>>    possible by the addition of the per-mm CPUs allowed mask.
> 
> and no comma to continue the iteration.

Fixed.

> 
>> - In sched_mm_cid_migrate_to, use mm->nr_cpus_allowed rather than
>>    t->nr_cpus_allowed. This criterion was really meant to compare
>>    the number of mm->mm_users to the number of CPUs allowed for the
>>    entire mm. Therefore, the prior comparison worked fine when all
>>    threads shared the same CPUs allowed mask, but not so much in
>>    scenarios where those threads have different masks (e.g. each
>>    thread pinned to a single CPU). This improvement is made
>>    possible by the addition of the per-mm CPUs allowed mask.
>>
> 
>> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
>> index 6e3bdf8e38bc..8b5a185b4d5a 100644
>> --- a/include/linux/mm_types.h
>> +++ b/include/linux/mm_types.h
>> @@ -782,6 +782,7 @@ struct vm_area_struct {
>>   struct mm_cid {
>>   	u64 time;
>>   	int cid;
>> +	int recent_cid;
>>   };
>>   #endif
>>   
>> @@ -852,6 +853,27 @@ struct mm_struct {
>>   		 * When the next mm_cid scan is due (in jiffies).
>>   		 */
>>   		unsigned long mm_cid_next_scan;
>> +		/**
>> +		 * @nr_cpus_allowed: Number of CPUs allowed for mm.
>> +		 *
>> +		 * Number of CPUs allowed in the union of all mm's
>> +		 * threads allowed CPUs.
>> +		 */
>> +		atomic_t nr_cpus_allowed;
>> +		/**
>> +		 * @nr_cids_used: Number of used concurrency IDs.
>> +		 *
>> +		 * Track the highest concurrency ID allocated for the
>> +		 * mm: nr_cids_used - 1.
>> +		 */
>> +		atomic_t nr_cids_used;
>> +		/**
>> +		 * @cpus_allowed_lock: Lock protecting mm cpus_allowed.
>> +		 *
>> +		 * Provide mutual exclusion for mm cpus_allowed and
>> +		 * mm nr_cpus_allowed updates.
> 
> If nr_cpus_allowed update is serialized by this here thing, why is it an
> atomic_t? A quick search seems to suggest you're only using atomic_set()
> / atomic_read() on it, which is a big fat clue it shouldn't be atomic_t.
> 
> Am I missing something?

Right, I'll use an unsigned int with a READ_ONCE/WRITE_ONCE, because it is
read without locking.
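
Something along these lines, as a rough sketch of the planned change
(untested):

         /* In struct mm_struct, replacing the atomic_t: */
         unsigned int nr_cpus_allowed;	/* updated under cpus_allowed_lock */

         /* Writer side, under mm->cpus_allowed_lock: */
         WRITE_ONCE(mm->nr_cpus_allowed, cpumask_weight(mm_allowed));

         /* Lockless reader side, e.g. in sched_mm_cid_migrate_to(): */
         if (dst_cid_is_set &&
             atomic_read(&mm->mm_users) >= READ_ONCE(mm->nr_cpus_allowed))
                 return;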

> 
>> +		 */
>> +		spinlock_t cpus_allowed_lock;
>>   #endif
>>   #ifdef CONFIG_MMU
>>   		atomic_long_t pgtables_bytes;	/* size of all page tables */
>> @@ -1170,18 +1192,30 @@ static inline int mm_cid_clear_lazy_put(int cid)
>>   	return cid & ~MM_CID_LAZY_PUT;
>>   }
>>   
>> +/*
>> + * mm_cpus_allowed: Union of all mm's threads allowed CPUs.
>> + */
>> +static inline cpumask_t *mm_cpus_allowed(struct mm_struct *mm)
>> +{
>> +	unsigned long bitmap = (unsigned long)mm;
>> +
>> +	bitmap += offsetof(struct mm_struct, cpu_bitmap);
>> +	/* Skip cpu_bitmap */
>> +	bitmap += cpumask_size();
>> +	return (struct cpumask *)bitmap;
>> +}
>> +
>>   /* Accessor for struct mm_struct's cidmask. */
>>   static inline cpumask_t *mm_cidmask(struct mm_struct *mm)
>>   {
>> -	unsigned long cid_bitmap = (unsigned long)mm;
>> +	unsigned long cid_bitmap = (unsigned long)mm_cpus_allowed(mm);
>>   
>> -	cid_bitmap += offsetof(struct mm_struct, cpu_bitmap);
>> -	/* Skip cpu_bitmap */
>> +	/* Skip mm_cpus_allowed */
>>   	cid_bitmap += cpumask_size();
>>   	return (struct cpumask *)cid_bitmap;
>>   }
>>   
>> -static inline void mm_init_cid(struct mm_struct *mm)
>> +static inline void mm_init_cid(struct mm_struct *mm, struct task_struct *p)
>>   {
>>   	int i;
>>   
>> @@ -1189,17 +1223,22 @@ static inline void mm_init_cid(struct mm_struct *mm)
>>   		struct mm_cid *pcpu_cid = per_cpu_ptr(mm->pcpu_cid, i);
>>   
>>   		pcpu_cid->cid = MM_CID_UNSET;
>> +		pcpu_cid->recent_cid = MM_CID_UNSET;
>>   		pcpu_cid->time = 0;
>>   	}
>> +	atomic_set(&mm->nr_cpus_allowed, p->nr_cpus_allowed);
>> +	atomic_set(&mm->nr_cids_used, 0);
>> +	spin_lock_init(&mm->cpus_allowed_lock);
>> +	cpumask_copy(mm_cpus_allowed(mm), p->cpus_ptr);
> 
> Should that not be using p->cpus_mask? I mean, it is unlikely this code
> is run during migrate_disable(), but just in case that ever does
> happen, we'll be getting a spurious single-CPU mask.

Good point! Will fix.

> 
>>   	cpumask_clear(mm_cidmask(mm));
>>   }
>>   
>> -static inline int mm_alloc_cid_noprof(struct mm_struct *mm)
>> +static inline int mm_alloc_cid_noprof(struct mm_struct *mm, struct task_struct *p)
>>   {
>>   	mm->pcpu_cid = alloc_percpu_noprof(struct mm_cid);
>>   	if (!mm->pcpu_cid)
>>   		return -ENOMEM;
>> -	mm_init_cid(mm);
>> +	mm_init_cid(mm, p);
>>   	return 0;
>>   }
>>   #define mm_alloc_cid(...)	alloc_hooks(mm_alloc_cid_noprof(__VA_ARGS__))
>> @@ -1212,16 +1251,31 @@ static inline void mm_destroy_cid(struct mm_struct *mm)
>>   
>>   static inline unsigned int mm_cid_size(void)
>>   {
>> -	return cpumask_size();
>> +	return 2 * cpumask_size();	/* mm_cpus_allowed(), mm_cidmask(). */
>> +}
>> +
>> +static inline void mm_set_cpus_allowed(struct mm_struct *mm, const struct cpumask *cpumask)
>> +{
>> +	struct cpumask *mm_allowed = mm_cpus_allowed(mm);
>> +
>> +	if (!mm)
>> +		return;
>> +	/* The mm_cpus_allowed is the union of each thread allowed CPUs masks. */
>> +	spin_lock(&mm->cpus_allowed_lock);
>> +	cpumask_or(mm_allowed, mm_allowed, cpumask);
>> +	atomic_set(&mm->nr_cpus_allowed, cpumask_weight(mm_allowed));
>> +	spin_unlock(&mm->cpus_allowed_lock);
> 
> We're having a problem here: you call this from __do_set_cpus_allowed(),
> which is holding rq->lock, which is a raw_spinlock_t.

Good point. It should have been:

                 raw_spinlock_t cpus_allowed_lock;

and using raw spin lock/unlock instead.
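
Roughly (sketch combining the raw_spinlock_t conversion with the
READ_ONCE/WRITE_ONCE change discussed above; untested):

         static inline void mm_set_cpus_allowed(struct mm_struct *mm,
                                                const struct cpumask *cpumask)
         {
                 struct cpumask *mm_allowed;

                 if (!mm)
                         return;
                 mm_allowed = mm_cpus_allowed(mm);
                 /* mm_cpus_allowed is the union of each thread's allowed CPUs mask. */
                 raw_spin_lock(&mm->cpus_allowed_lock);
                 cpumask_or(mm_allowed, mm_allowed, cpumask);
                 WRITE_ONCE(mm->nr_cpus_allowed, cpumask_weight(mm_allowed));
                 raw_spin_unlock(&mm->cpus_allowed_lock);
         }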

> 
>>   }
>>   #else /* CONFIG_SCHED_MM_CID */
>> -static inline void mm_init_cid(struct mm_struct *mm) { }
>> -static inline int mm_alloc_cid(struct mm_struct *mm) { return 0; }
>> +static inline void mm_init_cid(struct mm_struct *mm, struct task_struct *p) { }
>> +static inline int mm_alloc_cid(struct mm_struct *mm, struct task_struct *p) { return 0; }
>>   static inline void mm_destroy_cid(struct mm_struct *mm) { }
>> +
>>   static inline unsigned int mm_cid_size(void)
>>   {
>>   	return 0;
>>   }
>> +static inline void mm_set_cpus_allowed(struct mm_struct *mm, const struct cpumask *cpumask) { }
>>   #endif /* CONFIG_SCHED_MM_CID */
>>   
>>   struct mmu_gather;
> 
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 43e453ab7e20..772a3daf784a 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -2691,6 +2691,7 @@ __do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
>>   		put_prev_task(rq, p);
>>   
>>   	p->sched_class->set_cpus_allowed(p, ctx);
>> +	mm_set_cpus_allowed(p->mm, ctx->new_mask);
> 
> This here is with p->pi_lock and rq->lock held -- both are
> raw_spinlock_t.

Fixing by moving to a raw_spinlock_t.

Thanks!

Mathieu

> 
>>   
>>   	if (queued)
>>   		enqueue_task(rq, p, ENQUEUE_RESTORE | ENQUEUE_NOCLOCK);

-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com