My initial testing showed that

	perf bench futex hash

reported fewer operations/sec with the private hash. After using the
same number of buckets in the private hash as used by the global hash,
the operations/sec were about the same.
This changed once the private hash became resizable. This feature added
an RCU section and reference counting via atomic inc+dec operations to
the hot path.
The reference counting can be avoided if the private hash is made
immutable.
Extend PR_FUTEX_HASH_SET_SLOTS by a fourth argument which denotes
whether the private hash should be made immutable. Once set (to true),
a further resize is not allowed (the same applies after a switch to the
global hash).
Add PR_FUTEX_HASH_GET_IMMUTABLE which returns true if the hash cannot
be changed.
Update "perf bench" suite.
For comparison, results of "perf bench futex hash -s":
- Xeon CPU E5-2650, 2 NUMA nodes, total 32 CPUs:
- Before introducing the task-local hash
shared Averaged 1.487.148 operations/sec (+- 0,53%), total secs = 10
private Averaged 2.192.405 operations/sec (+- 0,07%), total secs = 10
- With the series
shared Averaged 1.326.342 operations/sec (+- 0,41%), total secs = 10
-b128 Averaged 141.394 operations/sec (+- 1,15%), total secs = 10
-Ib128 Averaged 851.490 operations/sec (+- 0,67%), total secs = 10
-b8192 Averaged 131.321 operations/sec (+- 2,13%), total secs = 10
-Ib8192 Averaged 1.923.077 operations/sec (+- 0,61%), total secs = 10
128 is the default allocation of hash buckets.
8192 was the previous amount of allocated hash buckets.
- Xeon(R) CPU E7-8890 v3, 4 NUMA nodes, total 144 CPUs:
- Before introducing the task-local hash
shared Averaged 1.810.936 operations/sec (+- 0,26%), total secs = 20
private Averaged 2.505.801 operations/sec (+- 0,05%), total secs = 20
- With the series
shared Averaged 1.589.002 operations/sec (+- 0,25%), total secs = 20
-b1024 Averaged 42.410 operations/sec (+- 0,20%), total secs = 20
-Ib1024 Averaged 740.638 operations/sec (+- 1,51%), total secs = 20
-b65536 Averaged 48.811 operations/sec (+- 1,35%), total secs = 20
-Ib65536 Averaged 1.963.165 operations/sec (+- 0,18%), total secs = 20
1024 is the default allocation of hash buckets.
65536 was the previous amount of allocated hash buckets.
Cc: "Liang, Kan" <kan.liang@linux.intel.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Ian Rogers <irogers@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: linux-perf-users@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/linux/futex.h | 2 +-
include/uapi/linux/prctl.h | 1 +
kernel/futex/core.c | 42 ++++++++++++++++++++++----
kernel/sys.c | 2 +-
tools/include/uapi/linux/prctl.h | 1 +
tools/perf/bench/futex-hash.c | 1 +
tools/perf/bench/futex-lock-pi.c | 1 +
tools/perf/bench/futex-requeue.c | 1 +
tools/perf/bench/futex-wake-parallel.c | 1 +
tools/perf/bench/futex-wake.c | 1 +
tools/perf/bench/futex.c | 8 +++--
tools/perf/bench/futex.h | 1 +
12 files changed, 51 insertions(+), 11 deletions(-)
diff --git a/include/linux/futex.h b/include/linux/futex.h
index ee48dcfbfe59d..96c7229856d97 100644
--- a/include/linux/futex.h
+++ b/include/linux/futex.h
@@ -80,7 +80,7 @@ void futex_exec_release(struct task_struct *tsk);
long do_futex(u32 __user *uaddr, int op, u32 val, ktime_t *timeout,
u32 __user *uaddr2, u32 val2, u32 val3);
-int futex_hash_prctl(unsigned long arg2, unsigned long arg3);
+int futex_hash_prctl(unsigned long arg2, unsigned long arg3, unsigned long arg4);
#ifdef CONFIG_FUTEX_PRIVATE_HASH
int futex_hash_allocate_default(void);
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 3b93fb906e3c5..21f30b3ded74b 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -368,5 +368,6 @@ struct prctl_mm_map {
#define PR_FUTEX_HASH 78
# define PR_FUTEX_HASH_SET_SLOTS 1
# define PR_FUTEX_HASH_GET_SLOTS 2
+# define PR_FUTEX_HASH_GET_IMMUTABLE 3
#endif /* _LINUX_PRCTL_H */
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 5b8609c8729e7..44bb9eeb0a9c1 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -70,6 +70,7 @@ struct futex_private_hash {
struct rcu_head rcu;
void *mm;
bool custom;
+ bool immutable;
struct futex_hash_bucket queues[];
};
@@ -139,12 +140,16 @@ static inline bool futex_key_is_private(union futex_key *key)
bool futex_private_hash_get(struct futex_private_hash *fph)
{
+ if (fph->immutable)
+ return true;
return rcuref_get(&fph->users);
}
void futex_private_hash_put(struct futex_private_hash *fph)
{
/* Ignore return value, last put is verified via rcuref_is_dead() */
+ if (fph->immutable)
+ return;
if (rcuref_put(&fph->users))
wake_up_var(fph->mm);
}
@@ -284,6 +289,8 @@ struct futex_private_hash *futex_private_hash(void)
if (!fph)
return NULL;
+ if (fph->immutable)
+ return fph;
if (rcuref_get(&fph->users))
return fph;
}
@@ -1558,7 +1565,7 @@ static bool futex_hash_less(struct futex_private_hash *a,
return false; /* equal */
}
-static int futex_hash_allocate(unsigned int hash_slots, bool custom)
+static int futex_hash_allocate(unsigned int hash_slots, unsigned int immutable, bool custom)
{
struct mm_struct *mm = current->mm;
struct futex_private_hash *fph;
@@ -1572,7 +1579,7 @@ static int futex_hash_allocate(unsigned int hash_slots, bool custom)
*/
scoped_guard(rcu) {
fph = rcu_dereference(mm->futex_phash);
- if (fph && !fph->hash_mask) {
+ if (fph && (!fph->hash_mask || fph->immutable)) {
if (custom)
return -EBUSY;
return 0;
@@ -1586,6 +1593,7 @@ static int futex_hash_allocate(unsigned int hash_slots, bool custom)
rcuref_init(&fph->users, 1);
fph->hash_mask = hash_slots ? hash_slots - 1 : 0;
fph->custom = custom;
+ fph->immutable = !!immutable;
fph->mm = mm;
for (i = 0; i < hash_slots; i++)
@@ -1678,7 +1686,7 @@ int futex_hash_allocate_default(void)
if (current_buckets >= buckets)
return 0;
- return futex_hash_allocate(buckets, false);
+ return futex_hash_allocate(buckets, 0, false);
}
static int futex_hash_get_slots(void)
@@ -1692,9 +1700,22 @@ static int futex_hash_get_slots(void)
return 0;
}
+static int futex_hash_get_immutable(void)
+{
+ struct futex_private_hash *fph;
+
+ guard(rcu)();
+ fph = rcu_dereference(current->mm->futex_phash);
+ if (fph && fph->immutable)
+ return 1;
+ if (fph && !fph->hash_mask)
+ return 1;
+ return 0;
+}
+
#else
-static int futex_hash_allocate(unsigned int hash_slots, bool custom)
+static int futex_hash_allocate(unsigned int hash_slots, unsigned int immutable, bool custom)
{
return -EINVAL;
}
@@ -1703,21 +1724,30 @@ static int futex_hash_get_slots(void)
{
return 0;
}
+
+static int futex_hash_get_immutable(void)
+{
+ return 0;
+}
#endif
-int futex_hash_prctl(unsigned long arg2, unsigned long arg3)
+int futex_hash_prctl(unsigned long arg2, unsigned long arg3, unsigned long arg4)
{
int ret;
switch (arg2) {
case PR_FUTEX_HASH_SET_SLOTS:
- ret = futex_hash_allocate(arg3, true);
+ ret = futex_hash_allocate(arg3, arg4, true);
break;
case PR_FUTEX_HASH_GET_SLOTS:
ret = futex_hash_get_slots();
break;
+ case PR_FUTEX_HASH_GET_IMMUTABLE:
+ ret = futex_hash_get_immutable();
+ break;
+
default:
ret = -EINVAL;
break;
diff --git a/kernel/sys.c b/kernel/sys.c
index d446d8ecb0b33..adc0de0aa364a 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -2822,7 +2822,7 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
error = posixtimer_create_prctl(arg2);
break;
case PR_FUTEX_HASH:
- error = futex_hash_prctl(arg2, arg3);
+ error = futex_hash_prctl(arg2, arg3, arg4);
break;
default:
trace_task_prctl_unknown(option, arg2, arg3, arg4, arg5);
diff --git a/tools/include/uapi/linux/prctl.h b/tools/include/uapi/linux/prctl.h
index 3b93fb906e3c5..21f30b3ded74b 100644
--- a/tools/include/uapi/linux/prctl.h
+++ b/tools/include/uapi/linux/prctl.h
@@ -368,5 +368,6 @@ struct prctl_mm_map {
#define PR_FUTEX_HASH 78
# define PR_FUTEX_HASH_SET_SLOTS 1
# define PR_FUTEX_HASH_GET_SLOTS 2
+# define PR_FUTEX_HASH_GET_IMMUTABLE 3
#endif /* _LINUX_PRCTL_H */
diff --git a/tools/perf/bench/futex-hash.c b/tools/perf/bench/futex-hash.c
index c843bd8543c74..fdf133c9520f7 100644
--- a/tools/perf/bench/futex-hash.c
+++ b/tools/perf/bench/futex-hash.c
@@ -57,6 +57,7 @@ static struct bench_futex_parameters params = {
static const struct option options[] = {
OPT_INTEGER( 'b', "buckets", &params.nbuckets, "Specify amount of hash buckets"),
+ OPT_BOOLEAN( 'I', "immutable", &params.buckets_immutable, "Make the hash buckets immutable"),
OPT_UINTEGER('t', "threads", &params.nthreads, "Specify amount of threads"),
OPT_UINTEGER('r', "runtime", &params.runtime, "Specify runtime (in seconds)"),
OPT_UINTEGER('f', "futexes", &params.nfutexes, "Specify amount of futexes per threads"),
diff --git a/tools/perf/bench/futex-lock-pi.c b/tools/perf/bench/futex-lock-pi.c
index 40640b6744279..5144a158512cc 100644
--- a/tools/perf/bench/futex-lock-pi.c
+++ b/tools/perf/bench/futex-lock-pi.c
@@ -47,6 +47,7 @@ static struct bench_futex_parameters params = {
static const struct option options[] = {
OPT_INTEGER( 'b', "buckets", &params.nbuckets, "Specify amount of hash buckets"),
+ OPT_BOOLEAN( 'I', "immutable", &params.buckets_immutable, "Make the hash buckets immutable"),
OPT_UINTEGER('t', "threads", &params.nthreads, "Specify amount of threads"),
OPT_UINTEGER('r', "runtime", &params.runtime, "Specify runtime (in seconds)"),
OPT_BOOLEAN( 'M', "multi", &params.multi, "Use multiple futexes"),
diff --git a/tools/perf/bench/futex-requeue.c b/tools/perf/bench/futex-requeue.c
index 0748b0fd689e8..a2f91ee1950b3 100644
--- a/tools/perf/bench/futex-requeue.c
+++ b/tools/perf/bench/futex-requeue.c
@@ -52,6 +52,7 @@ static struct bench_futex_parameters params = {
static const struct option options[] = {
OPT_INTEGER( 'b', "buckets", &params.nbuckets, "Specify amount of hash buckets"),
+ OPT_BOOLEAN( 'I', "immutable", &params.buckets_immutable, "Make the hash buckets immutable"),
OPT_UINTEGER('t', "threads", &params.nthreads, "Specify amount of threads"),
OPT_UINTEGER('q', "nrequeue", &params.nrequeue, "Specify amount of threads to requeue at once"),
OPT_BOOLEAN( 's', "silent", &params.silent, "Silent mode: do not display data/details"),
diff --git a/tools/perf/bench/futex-wake-parallel.c b/tools/perf/bench/futex-wake-parallel.c
index 6aede7c46b337..ee66482c29fd1 100644
--- a/tools/perf/bench/futex-wake-parallel.c
+++ b/tools/perf/bench/futex-wake-parallel.c
@@ -63,6 +63,7 @@ static struct bench_futex_parameters params = {
static const struct option options[] = {
OPT_INTEGER( 'b', "buckets", &params.nbuckets, "Specify amount of hash buckets"),
+ OPT_BOOLEAN( 'I', "immutable", &params.buckets_immutable, "Make the hash buckets immutable"),
OPT_UINTEGER('t', "threads", &params.nthreads, "Specify amount of threads"),
OPT_UINTEGER('w', "nwakers", &params.nwakes, "Specify amount of waking threads"),
OPT_BOOLEAN( 's', "silent", &params.silent, "Silent mode: do not display data/details"),
diff --git a/tools/perf/bench/futex-wake.c b/tools/perf/bench/futex-wake.c
index a31fc1563862e..8d6107f7cd941 100644
--- a/tools/perf/bench/futex-wake.c
+++ b/tools/perf/bench/futex-wake.c
@@ -52,6 +52,7 @@ static struct bench_futex_parameters params = {
static const struct option options[] = {
OPT_INTEGER( 'b', "buckets", &params.nbuckets, "Specify amount of hash buckets"),
+ OPT_BOOLEAN( 'I', "immutable", &params.buckets_immutable, "Make the hash buckets immutable"),
OPT_UINTEGER('t', "threads", &params.nthreads, "Specify amount of threads"),
OPT_UINTEGER('w', "nwakes", &params.nwakes, "Specify amount of threads to wake at once"),
OPT_BOOLEAN( 's', "silent", &params.silent, "Silent mode: do not display data/details"),
diff --git a/tools/perf/bench/futex.c b/tools/perf/bench/futex.c
index 8109d6bf3ede2..bed3b6e46d109 100644
--- a/tools/perf/bench/futex.c
+++ b/tools/perf/bench/futex.c
@@ -14,7 +14,7 @@ void futex_set_nbuckets_param(struct bench_futex_parameters *params)
if (params->nbuckets < 0)
return;
- ret = prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_SET_SLOTS, params->nbuckets);
+ ret = prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_SET_SLOTS, params->nbuckets, params->buckets_immutable);
if (ret) {
printf("Requesting %d hash buckets failed: %d/%m\n",
params->nbuckets, ret);
@@ -38,11 +38,13 @@ void futex_print_nbuckets(struct bench_futex_parameters *params)
printf("Requested: %d in usage: %d\n", params->nbuckets, ret);
err(EXIT_FAILURE, "prctl(PR_FUTEX_HASH)");
}
+ ret = prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_GET_IMMUTABLE);
if (params->nbuckets == 0)
ret = asprintf(&futex_hash_mode, "Futex hashing: global hash");
else
- ret = asprintf(&futex_hash_mode, "Futex hashing: %d hash buckets",
- params->nbuckets);
+ ret = asprintf(&futex_hash_mode, "Futex hashing: %d hash buckets %s",
+ params->nbuckets,
+ ret == 1 ? "(immutable)" : "");
} else {
if (ret <= 0) {
ret = asprintf(&futex_hash_mode, "Futex hashing: global hash");
diff --git a/tools/perf/bench/futex.h b/tools/perf/bench/futex.h
index dd295d27044ac..9c9a73f9d865e 100644
--- a/tools/perf/bench/futex.h
+++ b/tools/perf/bench/futex.h
@@ -26,6 +26,7 @@ struct bench_futex_parameters {
unsigned int nwakes;
unsigned int nrequeue;
int nbuckets;
+ bool buckets_immutable;
};
/**
--
2.49.0
Hi Sebastian.

On 4/7/25 21:27, Sebastian Andrzej Siewior wrote:
> My initial testing showed that
> 	perf bench futex hash
>
> reported less operations/sec with private hash. [...]
> Add PR_FUTEX_HASH_GET_IMMUTABLE which returns true if the hash can not
> be changed.
> Update "perf bench" suite.

It would be a good option for the application to decide if it needs
this. Using this option makes the perf regression go away when using
the previous number of buckets.
Acked-by: Shrikanth Hegde <sshegde@linux.ibm.com>

base:
./perf bench futex hash
Averaged 1556023 operations/sec (+- 0.08%), total secs = 10   <<-- 1.5M

with series:
./perf bench futex hash -b32768
Averaged 126499 operations/sec (+- 0.41%), total secs = 10    <<-- .12M

./perf bench futex hash -Ib32768
Averaged 1549339 operations/sec (+- 0.08%), total secs = 10   <<-- 1.5M

> For comparison, results of "perf bench futex hash -s":
> [...]
> 1024 is the default allocation of hash buckets.
> 65536 was the previous amount of allocated hash buckets.
> Cc: "Liang, Kan" <kan.liang@linux.intel.com>
> [...]
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
> [...]
> 12 files changed, 51 insertions(+), 11 deletions(-)

nit: Does it make sense to split this patch into futex and perf?
On 2025-04-10 20:22:08 [+0530], Shrikanth Hegde wrote:
> Hi Sebastian.
Hi Shrikanth,

> It would be good option for the application to decide if it needs this.
You mean to have it as I introduced it here, or something else?

> Using this option makes the perf regression goes away using previous number of buckets.
Okay, good to know. Did you test this on ppc64le?

> Acked-by: Shrikanth Hegde <sshegde@linux.ibm.com>
>
> base:
> ./perf bench futex hash
> Averaged 1556023 operations/sec (+- 0.08%), total secs = 10 <<-- 1.5M
>
> with series:
> ./perf bench futex hash -b32768
> Averaged 126499 operations/sec (+- 0.41%), total secs = 10 <<-- .12M
>
> ./perf bench futex hash -Ib32768
> Averaged 1549339 operations/sec (+- 0.08%), total secs = 10 <<-- 1.5M
Thank you for testing.

…
> nit: Does it makes sense to split this patch into futex and perf?
First I wanted to figure out if we really want to do this. I have no
idea whether this regression would show up in a real-world use case or
only here as part of the micro benchmark.
If we do this, it would probably make sense to have one perf patch
which introduces -b & -I, and then figure out whether the additional
option to prctl should be part of the resize patch or not. We should
probably enforce 0/1 for arg4 from the beginning, so maybe folding this
in makes sense.

Sebastian
On 4/10/25 20:58, Sebastian Andrzej Siewior wrote:
>> It would be good option for the application to decide if it needs this.
>
> You mean to have it as I introduced it here or something else?

As you have introduced it here.

>> Using this option makes the perf regression goes away using previous number of buckets.
>
> Okay, good to know. You test this on ppc64le?

Yes.

>> nit: Does it makes sense to split this patch into futex and perf?
>
> [...] Probably we should enforce 0/1 of arg4 from the beginning so
> maybe folding this in makes sense.

ok.

> Sebastian
On 2025-04-07 17:57:42 [+0200], To linux-kernel@vger.kernel.org wrote:
> - Xeon CPU E5-2650, 2 NUMA nodes, total 32 CPUs:
> [...]
> - Xeon(R) CPU E7-8890 v3, 4 NUMA nodes, total 144 CPUs:
> [...]
> 1024 is the default allocation of hash buckets.
> 65536 was the previous amount of allocated hash buckets.

On EPYC 7713, 2 NUMA nodes, 256 CPUs:

                        ops/sec              buckets
 -t 250                  25.393 (+- 1,51%)   1024
 -t 250 -b 1024 -I    1.645.327 (+- 0,34%)   1024
 -t 250 -b 65536         25.445 (+- 2,41%)   65536
 -t 250 -b 65536 -I   1.733.745 (+- 0,36%)   65536
 -t 250 -b 0          1.745.242 (+- 0,21%)   32768 * 2

Sebastian