From: Andrew Cooper
To: Xen-devel
CC: Andrew Cooper, Jan Beulich, Roger Pau Monné, Wei Liu, Juergen Gross, Dario Faggioli
Subject: [PATCH RFC] xen/sched: Optimise when only one scheduler is compiled in
Date: Thu, 3 Mar 2022 00:40:15 +0000
Message-ID: <20220303004015.17688-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0

When only one scheduler is compiled in, function pointers can be optimised to
direct calls, and the hooks hardened against control-flow hijacking.

RFC for several reasons:

1) There's an almost beautiful way of not introducing MAYBE_SCHED() and hiding
   the magic in REGISTER_SCHEDULER(), except it falls over
   https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91765 which has no comment or
   resolution at all.

2) A different alternative which almost works is to remove the indirection in
   .data.schedulers, but the singleton scheduler object can't be both there
   and in .init.rodata.cf_clobber.

3) I can't think of a build-time check to enforce that new schedulers get
   added to the preprocessor magic.

And the blocker:

4) This isn't compatible with how sched_idle_ops gets used for granularity > 1.

Suggestions very welcome.
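As a rough standalone illustration of the trick (plain C; CFG_BACKEND_*,
struct ops and ops_call() below are invented stand-ins for the Kconfig
options, struct scheduler and sched_call(), not Xen code): counting
defined() expressions lets the preprocessor spot the single-backend
configuration, at which point the call wrapper can name the one known ops
object directly rather than chasing a function pointer.  The real patch goes
further and routes the call through alternative_call()/alternative_vcall()
so it is patched into a direct call at boot.

    /* sketch.c - standalone illustration only, not part of the patch. */
    #include <stdio.h>

    #define CFG_BACKEND_A 1        /* pretend only backend A is configured */
    /* #define CFG_BACKEND_B 1 */

    struct ops {
        void (*do_work)(int arg);
    };

    static void a_do_work(int arg)
    {
        printf("backend A handling %d\n", arg);
    }

    static const struct ops a_ops = {
        .do_work = a_do_work,
    };

    #if 1 == defined(CFG_BACKEND_A) + defined(CFG_BACKEND_B)
    /*
     * Exactly one backend: the wrapper names the singleton object directly,
     * so the compiler can see which function is being called.
     */
    #define ops_call(o, fn, ...) a_ops.fn(__VA_ARGS__)
    #else
    /* Several backends: fall back to the usual indirect call. */
    #define ops_call(o, fn, ...) (o)->fn(__VA_ARGS__)
    #endif

    int main(void)
    {
        const struct ops *o = &a_ops;   /* normally chosen at runtime */

        ops_call(o, do_work, 42);
        (void)o;                        /* unused in the single-backend build */
        return 0;
    }

Defining a second CFG_BACKEND_* flips ops_call() back to the indirect form
without touching any call site, which is the same property sched_call() and
sched_vcall() give the wrappers in private.h.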
Signed-off-by: Andrew Cooper
---
CC: Jan Beulich
CC: Roger Pau Monné
CC: Wei Liu
CC: Juergen Gross
CC: Dario Faggioli
---
 xen/common/sched/arinc653.c |  2 +-
 xen/common/sched/core.c     |  4 +-
 xen/common/sched/credit.c   |  2 +-
 xen/common/sched/credit2.c  |  2 +-
 xen/common/sched/null.c     |  2 +-
 xen/common/sched/private.h  | 91 ++++++++++++++++++++++++++++++++-------------
 xen/common/sched/rt.c       |  2 +-
 7 files changed, 72 insertions(+), 33 deletions(-)

diff --git a/xen/common/sched/arinc653.c b/xen/common/sched/arinc653.c
index a82c0d7314a1..73738b007e7d 100644
--- a/xen/common/sched/arinc653.c
+++ b/xen/common/sched/arinc653.c
@@ -694,7 +694,7 @@ a653sched_adjust_global(const struct scheduler *ops,
  * callback functions.
  * The symbol must be visible to the rest of Xen at link time.
  */
-static const struct scheduler sched_arinc653_def = {
+const struct scheduler MAYBE_SCHED(sched_arinc653_def) = {
     .name           = "ARINC 653 Scheduler",
     .opt_name       = "arinc653",
     .sched_id       = XEN_SCHEDULER_ARINC653,
diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 19ab67818106..020a5741ca31 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -2263,7 +2263,7 @@ static struct sched_unit *do_schedule(struct sched_unit *prev, s_time_t now,
     struct sched_unit *next;
 
     /* get policy-specific decision on scheduling... */
-    sched->do_schedule(sched, prev, now, sched_tasklet_check(cpu));
+    sched_vcall(sched, do_schedule, sched, prev, now, sched_tasklet_check(cpu));
 
     next = prev->next_task;
 
@@ -2975,7 +2975,7 @@ void __init scheduler_init(void)
 
 #undef sched_test_func
 
-        if ( schedulers[i]->global_init && schedulers[i]->global_init() < 0 )
+        if ( sched_global_init(schedulers[i]) < 0 )
         {
             printk("scheduler %s failed initialization, dropped\n",
                    schedulers[i]->opt_name);
diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c
index 4d3bd8cba6fc..8b85e9617fc0 100644
--- a/xen/common/sched/credit.c
+++ b/xen/common/sched/credit.c
@@ -2230,7 +2230,7 @@ csched_deinit(struct scheduler *ops)
     }
 }
 
-static const struct scheduler sched_credit_def = {
+const struct scheduler MAYBE_SCHED(sched_credit_def) = {
     .name           = "SMP Credit Scheduler",
     .opt_name       = "credit",
     .sched_id       = XEN_SCHEDULER_CREDIT,
diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index 0e3f89e5378e..fda3812d7ac1 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -4199,7 +4199,7 @@ csched2_deinit(struct scheduler *ops)
     xfree(prv);
 }
 
-static const struct scheduler sched_credit2_def = {
+const struct scheduler MAYBE_SCHED(sched_credit2_def) = {
     .name           = "SMP Credit Scheduler rev2",
     .opt_name       = "credit2",
     .sched_id       = XEN_SCHEDULER_CREDIT2,
diff --git a/xen/common/sched/null.c b/xen/common/sched/null.c
index 65a0a6c5312d..907a8ae1ca50 100644
--- a/xen/common/sched/null.c
+++ b/xen/common/sched/null.c
@@ -1025,7 +1025,7 @@ static void cf_check null_dump(const struct scheduler *ops)
     spin_unlock_irqrestore(&prv->lock, flags);
 }
 
-static const struct scheduler sched_null_def = {
+const struct scheduler MAYBE_SCHED(sched_null_def) = {
     .name           = "null Scheduler",
     .opt_name       = "null",
     .sched_id       = XEN_SCHEDULER_NULL,
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index a870320146ef..f3ba0101ecc7 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -271,6 +271,33 @@ static inline spinlock_t *pcpu_schedule_trylock(unsigned int cpu)
     return NULL;
 }
 
+#if 1 ==                                                             \
+    defined(CONFIG_SCHED_CREDIT) + defined(CONFIG_SCHED_CREDIT2) +   \
+    defined(CONFIG_SCHED_RTDS) + defined(CONFIG_SCHED_ARINC653) +    \
+    defined(CONFIG_SCHED_NULL)
+
+extern const struct scheduler sched_ops;
+#define MAYBE_SCHED(x) __initdata_cf_clobber sched_ops
+#define REGISTER_SCHEDULER(x) static const struct scheduler *x##_entry \
+    __used_section(".data.schedulers") = &sched_ops;
+
+#define sched_call(s, fn, ...)                  \
+    alternative_call(sched_ops.fn, ##__VA_ARGS__)
+
+#define sched_vcall(s, fn, ...)                 \
+    alternative_vcall(sched_ops.fn, ##__VA_ARGS__)
+
+#else
+
+#define MAYBE_SCHED(x) static x
+#define REGISTER_SCHEDULER(x) static const struct scheduler *x##_entry \
+    __used_section(".data.schedulers") = &x;
+
+#define sched_call(s, fn, ...)  (s)->fn(__VA_ARGS__)
+#define sched_vcall(s, fn, ...) (s)->fn(__VA_ARGS__)
+
+#endif
+
 struct scheduler {
     const char *name;       /* full name for this scheduler */
     const char *opt_name;   /* option name for this scheduler */
@@ -333,39 +360,48 @@ struct scheduler {
     void         (*dump_cpu_state)    (const struct scheduler *, int);
 };
 
+static inline int sched_global_init(const struct scheduler *s)
+{
+    if ( s->global_init )
+        return sched_call(s, global_init);
+    return 0;
+}
+
 static inline int sched_init(struct scheduler *s)
 {
-    return s->init(s);
+    return sched_call(s, init, s);
 }
 
 static inline void sched_deinit(struct scheduler *s)
 {
-    s->deinit(s);
+    sched_vcall(s, deinit, s);
 }
 
 static inline spinlock_t *sched_switch_sched(struct scheduler *s,
                                              unsigned int cpu,
                                              void *pdata, void *vdata)
 {
-    return s->switch_sched(s, cpu, pdata, vdata);
+    return sched_call(s, switch_sched, s, cpu, pdata, vdata);
 }
 
 static inline void sched_dump_settings(const struct scheduler *s)
 {
     if ( s->dump_settings )
-        s->dump_settings(s);
+        sched_vcall(s, dump_settings, s);
 }
 
 static inline void sched_dump_cpu_state(const struct scheduler *s, int cpu)
 {
     if ( s->dump_cpu_state )
-        s->dump_cpu_state(s, cpu);
+        sched_vcall(s, dump_cpu_state, s, cpu);
 }
 
 static inline void *sched_alloc_domdata(const struct scheduler *s,
                                         struct domain *d)
 {
-    return s->alloc_domdata ? s->alloc_domdata(s, d) : NULL;
+    if ( s->alloc_domdata )
+        return sched_call(s, alloc_domdata, s, d);
+    return NULL;
 }
 
 static inline void sched_free_domdata(const struct scheduler *s,
@@ -373,12 +409,14 @@ static inline void sched_free_domdata(const struct scheduler *s,
 {
     ASSERT(s->free_domdata || !data);
     if ( s->free_domdata )
-        s->free_domdata(s, data);
+        sched_vcall(s, free_domdata, s, data);
 }
 
 static inline void *sched_alloc_pdata(const struct scheduler *s, int cpu)
 {
-    return s->alloc_pdata ? s->alloc_pdata(s, cpu) : NULL;
+    if ( s->alloc_pdata )
+        return sched_call(s, alloc_pdata, s, cpu);
+    return NULL;
 }
 
 static inline void sched_free_pdata(const struct scheduler *s, void *data,
@@ -386,74 +424,74 @@ static inline void sched_free_pdata(const struct scheduler *s, void *data,
 {
     ASSERT(s->free_pdata || !data);
     if ( s->free_pdata )
-        s->free_pdata(s, data, cpu);
+        sched_vcall(s, free_pdata, s, data, cpu);
 }
 
 static inline void sched_deinit_pdata(const struct scheduler *s, void *data,
                                       int cpu)
 {
     if ( s->deinit_pdata )
-        s->deinit_pdata(s, data, cpu);
+        sched_vcall(s, deinit_pdata, s, data, cpu);
 }
 
 static inline void *sched_alloc_udata(const struct scheduler *s,
                                       struct sched_unit *unit, void *dom_data)
 {
-    return s->alloc_udata(s, unit, dom_data);
+    return sched_call(s, alloc_udata, s, unit, dom_data);
 }
 
 static inline void sched_free_udata(const struct scheduler *s, void *data)
 {
-    s->free_udata(s, data);
+    sched_vcall(s, free_udata, s, data);
 }
 
 static inline void sched_insert_unit(const struct scheduler *s,
                                      struct sched_unit *unit)
 {
     if ( s->insert_unit )
-        s->insert_unit(s, unit);
+        sched_vcall(s, insert_unit, s, unit);
 }
 
 static inline void sched_remove_unit(const struct scheduler *s,
                                      struct sched_unit *unit)
 {
     if ( s->remove_unit )
-        s->remove_unit(s, unit);
+        sched_vcall(s, remove_unit, s, unit);
 }
 
 static inline void sched_sleep(const struct scheduler *s,
                                struct sched_unit *unit)
 {
     if ( s->sleep )
-        s->sleep(s, unit);
+        sched_vcall(s, sleep, s, unit);
 }
 
 static inline void sched_wake(const struct scheduler *s,
                               struct sched_unit *unit)
 {
     if ( s->wake )
-        s->wake(s, unit);
+        sched_vcall(s, wake, s, unit);
 }
 
 static inline void sched_yield(const struct scheduler *s,
                                struct sched_unit *unit)
 {
     if ( s->yield )
-        s->yield(s, unit);
+        sched_vcall(s, yield, s, unit);
 }
 
 static inline void sched_context_saved(const struct scheduler *s,
                                        struct sched_unit *unit)
 {
     if ( s->context_saved )
-        s->context_saved(s, unit);
+        sched_vcall(s, context_saved, s, unit);
 }
 
 static inline void sched_migrate(const struct scheduler *s,
                                  struct sched_unit *unit, unsigned int cpu)
 {
     if ( s->migrate )
-        s->migrate(s, unit, cpu);
+        sched_vcall(s, migrate, s, unit, cpu);
     else
         sched_set_res(unit, get_sched_res(cpu));
 }
@@ -461,7 +499,7 @@ static inline void sched_migrate(const struct scheduler *s,
 static inline struct sched_resource *sched_pick_resource(
     const struct scheduler *s, const struct sched_unit *unit)
 {
-    return s->pick_resource(s, unit);
+    return sched_call(s, pick_resource, s, unit);
 }
 
 static inline void sched_adjust_affinity(const struct scheduler *s,
@@ -470,19 +508,23 @@ static inline void sched_adjust_affinity(const struct scheduler *s,
                                          const cpumask_t *soft)
 {
     if ( s->adjust_affinity )
-        s->adjust_affinity(s, unit, hard, soft);
+        sched_vcall(s, adjust_affinity, s, unit, hard, soft);
 }
 
 static inline int sched_adjust_dom(const struct scheduler *s, struct domain *d,
                                    struct xen_domctl_scheduler_op *op)
 {
-    return s->adjust ? s->adjust(s, d, op) : 0;
+    if ( s->adjust )
+        return sched_call(s, adjust, s, d, op);
+    return 0;
 }
 
 static inline int sched_adjust_cpupool(const struct scheduler *s,
                                        struct xen_sysctl_scheduler_op *op)
 {
-    return s->adjust_global ? s->adjust_global(s, op) : 0;
+    if ( s->adjust_global )
+        return sched_call(s, adjust_global, s, op);
+    return 0;
 }
 
 static inline void sched_unit_pause_nosync(const struct sched_unit *unit)
@@ -501,9 +543,6 @@ static inline void sched_unit_unpause(const struct sched_unit *unit)
         vcpu_unpause(v);
 }
 
-#define REGISTER_SCHEDULER(x) static const struct scheduler *x##_entry \
-    __used_section(".data.schedulers") = &x;
-
 struct cpupool
 {
     unsigned int cpupool_id;
diff --git a/xen/common/sched/rt.c b/xen/common/sched/rt.c
index d6de25531b3c..9b42852b2de5 100644
--- a/xen/common/sched/rt.c
+++ b/xen/common/sched/rt.c
@@ -1529,7 +1529,7 @@ static void cf_check repl_timer_handler(void *data)
     spin_unlock_irq(&prv->lock);
 }
 
-static const struct scheduler sched_rtds_def = {
+const struct scheduler MAYBE_SCHED(sched_rtds_def) = {
     .name           = "SMP RTDS Scheduler",
     .opt_name       = "rtds",
     .sched_id       = XEN_SCHEDULER_RTDS,
-- 
2.11.0