From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH v4 1/5] xen/hypfs: support dynamic hypfs nodes
Date: Mon, 18 Jan 2021 12:55:12 +0100
Message-Id: <20210118115516.11001-2-jgross@suse.com>
In-Reply-To: <20210118115516.11001-1-jgross@suse.com>
References: <20210118115516.11001-1-jgross@suse.com>

Add a HYPFS_DIR_INIT_FUNC() macro for statically initializing a directory
with a custom function vector, taking a struct hypfs_funcs pointer as a
parameter in addition to those of HYPFS_DIR_INIT(). Modify
HYPFS_VARSIZE_INIT() to take the function vector pointer as an additional
parameter, as this will be needed for dynamic entries.

To let the generic hypfs code continue to work on normal struct
hypfs_entry entities even for dynamic nodes, add some infrastructure for
allocating a working area for the current hypfs request, used to store the
information needed for traversing the tree. This area is anchored in a
percpu pointer and can be retrieved at any level of the dynamic entries.
The normal way to handle allocation and freeing is to allocate the data in
the enter() callback of a node and to free it in the related exit()
callback.

Add a hypfs_add_dyndir() function for adding a dynamic directory template
to the tree, which is needed for having the correct reference to its
position in hypfs.
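The enter()/exit() allocation pattern described above can be sketched in plain user-space C. This is a hypothetical stand-in, not Xen code: the per-CPU anchor, xzalloc_bytes() and XFREE() are replaced here by a single static pointer, calloc() and free(), and `struct demo_dyndata` is an invented example type.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for the per-CPU anchor (DEFINE_PER_CPU in Xen). */
static void *dyndata_anchor;

/* enter() callback side: allocate the zero-filled working area. */
static void *dyndata_alloc(size_t size)
{
    assert(dyndata_anchor == NULL);   /* at most one allocation per request */
    dyndata_anchor = calloc(1, size); /* xzalloc_bytes() also zero-fills */
    return dyndata_anchor;
}

/* Any deeper traversal level can fetch the working area again. */
static void *dyndata_get(void)
{
    assert(dyndata_anchor != NULL);
    return dyndata_anchor;
}

/* exit() callback side: free and reset the anchor (mirrors XFREE()). */
static void dyndata_free(void)
{
    free(dyndata_anchor);
    dyndata_anchor = NULL;
}

/* Invented example payload stored for the duration of one request. */
struct demo_dyndata {
    unsigned int id;
};
```

The point of the pattern is that allocation and release are tied to the node's enter()/exit() pair, so intermediate levels only ever call the getter.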
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
---
V2:
- switch to xzalloc_bytes() in hypfs_alloc_dyndata() (Jan Beulich)
- carved out from previous patch
- use enter() and exit() callbacks for allocating and freeing dyndata memory
- add hypfs_add_dyndir()
V3:
- switch hypfs_alloc_dyndata() to be type safe (Jan Beulich)
- rename HYPFS_VARDIR_INIT() to HYPFS_DIR_INIT_FUNC() (Jan Beulich)
V4:
- use temporary variables for avoiding multiple per_cpu() uses (Jan Beulich)
- add comment (Jan Beulich)
- hide hypfs_alloc_dyndata() type unsafe backing function (Jan Beulich)
---
 xen/common/hypfs.c      | 43 +++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/hypfs.h | 29 +++++++++++++++++----------
 2 files changed, 62 insertions(+), 10 deletions(-)

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 73497ea1d7..6c0e59dedd 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -72,6 +72,7 @@ enum hypfs_lock_state {
     hypfs_write_locked
 };
 static DEFINE_PER_CPU(enum hypfs_lock_state, hypfs_locked);
+static DEFINE_PER_CPU(struct hypfs_dyndata *, hypfs_dyndata);
 
 static DEFINE_PER_CPU(const struct hypfs_entry *, hypfs_last_node_entered);
 
@@ -155,6 +156,36 @@ static void node_exit_all(void)
         node_exit(*last);
 }
 
+#undef hypfs_alloc_dyndata
+void *hypfs_alloc_dyndata(unsigned long size)
+{
+    unsigned int cpu = smp_processor_id();
+    struct hypfs_dyndata **dyndata = &per_cpu(hypfs_dyndata, cpu);
+
+    ASSERT(per_cpu(hypfs_locked, cpu) != hypfs_unlocked);
+    ASSERT(*dyndata == NULL);
+
+    *dyndata = xzalloc_bytes(size);
+
+    return *dyndata;
+}
+
+void *hypfs_get_dyndata(void)
+{
+    struct hypfs_dyndata *dyndata = this_cpu(hypfs_dyndata);
+
+    ASSERT(dyndata);
+
+    return dyndata;
+}
+
+void hypfs_free_dyndata(void)
+{
+    struct hypfs_dyndata **dyndata = &this_cpu(hypfs_dyndata);
+
+    XFREE(*dyndata);
+}
+
 static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new)
 {
     int ret = -ENOENT;
@@ -216,6 +247,18 @@ int hypfs_add_dir(struct hypfs_entry_dir *parent,
     return ret;
 }
 
+void hypfs_add_dyndir(struct hypfs_entry_dir *parent,
+                      struct hypfs_entry_dir *template)
+{
+    /*
+     * As the template is only a placeholder for possibly multiple dynamically
+     * generated directories, the link up to its parent can be static, while
+     * the "real" children of the parent are to be found via the parent's
+     * findentry function only.
+     */
+    template->e.parent = &parent->e;
+}
+
 int hypfs_add_leaf(struct hypfs_entry_dir *parent,
                    struct hypfs_entry_leaf *leaf, bool nofault)
 {
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index a6dfdb7d8e..d028c01283 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -76,7 +76,7 @@ struct hypfs_entry_dir {
     struct list_head dirlist;
 };
 
-#define HYPFS_DIR_INIT(var, nam)                     \
+#define HYPFS_DIR_INIT_FUNC(var, nam, fn)            \
     struct hypfs_entry_dir __read_mostly var = {     \
         .e.type = XEN_HYPFS_TYPE_DIR,                \
         .e.encoding = XEN_HYPFS_ENC_PLAIN,           \
@@ -84,22 +84,25 @@ struct hypfs_entry_dir {
         .e.size = 0,                                 \
         .e.max_size = 0,                             \
         .e.list = LIST_HEAD_INIT(var.e.list),        \
-        .e.funcs = &hypfs_dir_funcs,                 \
+        .e.funcs = (fn),                             \
         .dirlist = LIST_HEAD_INIT(var.dirlist),      \
     }
 
-#define HYPFS_VARSIZE_INIT(var, typ, nam, msz)       \
-    struct hypfs_entry_leaf __read_mostly var = {    \
-        .e.type = (typ),                             \
-        .e.encoding = XEN_HYPFS_ENC_PLAIN,           \
-        .e.name = (nam),                             \
-        .e.max_size = (msz),                         \
-        .e.funcs = &hypfs_leaf_ro_funcs,             \
+#define HYPFS_DIR_INIT(var, nam)                     \
+    HYPFS_DIR_INIT_FUNC(var, nam, &hypfs_dir_funcs)
+
+#define HYPFS_VARSIZE_INIT(var, typ, nam, msz, fn)   \
+    struct hypfs_entry_leaf __read_mostly var = {    \
+        .e.type = (typ),                             \
+        .e.encoding = XEN_HYPFS_ENC_PLAIN,           \
+        .e.name = (nam),                             \
+        .e.max_size = (msz),                         \
+        .e.funcs = (fn),                             \
     }
 
 /* Content and size need to be set via hypfs_string_set_reference(). */
 #define HYPFS_STRING_INIT(var, nam)                  \
-    HYPFS_VARSIZE_INIT(var, XEN_HYPFS_TYPE_STRING, nam, 0)
+    HYPFS_VARSIZE_INIT(var, XEN_HYPFS_TYPE_STRING, nam, 0, &hypfs_leaf_ro_funcs)
 
 /*
  * Set content and size of a XEN_HYPFS_TYPE_STRING node. The node will point
@@ -150,6 +153,8 @@ extern struct hypfs_entry_dir hypfs_root;
 
 int hypfs_add_dir(struct hypfs_entry_dir *parent,
                   struct hypfs_entry_dir *dir, bool nofault);
+void hypfs_add_dyndir(struct hypfs_entry_dir *parent,
+                      struct hypfs_entry_dir *template);
 int hypfs_add_leaf(struct hypfs_entry_dir *parent,
                    struct hypfs_entry_leaf *leaf, bool nofault);
 const struct hypfs_entry *hypfs_node_enter(const struct hypfs_entry *entry);
@@ -177,6 +182,10 @@ struct hypfs_entry *hypfs_leaf_findentry(const struct hypfs_entry_dir *dir,
 struct hypfs_entry *hypfs_dir_findentry(const struct hypfs_entry_dir *dir,
                                         const char *name,
                                         unsigned int name_len);
+void *hypfs_alloc_dyndata(unsigned long size);
+#define hypfs_alloc_dyndata(type) ((type *)hypfs_alloc_dyndata(sizeof(type)))
+void *hypfs_get_dyndata(void);
+void hypfs_free_dyndata(void);
 #endif
 
 #endif /* __XEN_HYPFS_H__ */
-- 
2.26.2

From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu
Subject: [PATCH v4 2/5] xen/hypfs: add support for id-based dynamic directories
Date: Mon, 18 Jan 2021 12:55:13 +0100
Message-Id: <20210118115516.11001-3-jgross@suse.com>
In-Reply-To: <20210118115516.11001-1-jgross@suse.com>
References: <20210118115516.11001-1-jgross@suse.com>

Add some helpers to hypfs.c to support dynamic directories with a
numerical id as name. The dynamic directory is based on a template
specified by the user, allowing specific access functions to be used and
providing a predefined set of entries in the directory.
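The id-based naming scheme works by using the template's name as a printf format string (e.g. "%u"): the same format is used once to generate an entry's name and once, with a NULL buffer, to compute the name length for sizing the directory listing. A minimal user-space sketch (hypothetical helper names; the real DIRENTRY_SIZE() additionally accounts for the fixed direntry header and alignment):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define DYNDIR_ID_NAMELEN 12 /* matches HYPFS_DYNDIR_ID_NAMELEN */

/* Generate the entry name for a given id from the template's format
 * string (the template's own name, e.g. "%u"). */
static int dynid_name(char *buf, size_t len, const char *tmpl_fmt,
                      unsigned int id)
{
    return snprintf(buf, len, tmpl_fmt, id);
}

/* Length of the generated name alone, computed without a buffer;
 * snprintf(NULL, 0, ...) returns the would-be string length. */
static unsigned int dynid_name_len(const char *tmpl_fmt, unsigned int id)
{
    return (unsigned int)snprintf(NULL, 0, tmpl_fmt, id);
}
```

Keeping both call sites on the same format string is what guarantees the generated name and the reported entry size stay in sync.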
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
---
V2:
- use macro for length of entry name (Jan Beulich)
- const attributes (Jan Beulich)
- use template name as format string (Jan Beulich)
- add hypfs_dynid_entry_size() helper (Jan Beulich)
- expect dyndir data having been allocated by enter() callback
V3:
- add a specific enter() callback returning the template pointer
- add data field to struct hypfs_dyndir_id
- rename hypfs_gen_dyndir_entry_id() (Jan Beulich)
- add comments regarding generated names to be kept in sync (Jan Beulich)
V4:
- correct comments (Jan Beulich)
---
 xen/common/hypfs.c      | 98 +++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/hypfs.h | 18 ++++++++
 2 files changed, 116 insertions(+)

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 6c0e59dedd..5468497404 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -365,6 +365,104 @@ unsigned int hypfs_getsize(const struct hypfs_entry *entry)
     return entry->size;
 }
 
+/*
+ * Fill the direntry for a dynamically generated entry. Especially the
+ * generated name needs to be kept in sync with hypfs_gen_dyndir_id_entry().
+ */
+int hypfs_read_dyndir_id_entry(const struct hypfs_entry_dir *template,
+                               unsigned int id, bool is_last,
+                               XEN_GUEST_HANDLE_PARAM(void) *uaddr)
+{
+    struct xen_hypfs_dirlistentry direntry;
+    char name[HYPFS_DYNDIR_ID_NAMELEN];
+    unsigned int e_namelen, e_len;
+
+    e_namelen = snprintf(name, sizeof(name), template->e.name, id);
+    e_len = DIRENTRY_SIZE(e_namelen);
+    direntry.e.pad = 0;
+    direntry.e.type = template->e.type;
+    direntry.e.encoding = template->e.encoding;
+    direntry.e.content_len = template->e.funcs->getsize(&template->e);
+    direntry.e.max_write_len = template->e.max_size;
+    direntry.off_next = is_last ? 0 : e_len;
+    if ( copy_to_guest(*uaddr, &direntry, 1) )
+        return -EFAULT;
+    if ( copy_to_guest_offset(*uaddr, DIRENTRY_NAME_OFF, name,
+                              e_namelen + 1) )
+        return -EFAULT;
+
+    guest_handle_add_offset(*uaddr, e_len);
+
+    return 0;
+}
+
+static const struct hypfs_entry *hypfs_dyndir_enter(
+    const struct hypfs_entry *entry)
+{
+    const struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
+
+    /* Use template with original enter function. */
+    return data->template->e.funcs->enter(&data->template->e);
+}
+
+static struct hypfs_entry *hypfs_dyndir_findentry(
+    const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len)
+{
+    const struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
+
+    /* Use template with original findentry function. */
+    return data->template->e.funcs->findentry(data->template, name, name_len);
+}
+
+static int hypfs_read_dyndir(const struct hypfs_entry *entry,
+                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    const struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
+
+    /* Use template with original read function. */
+    return data->template->e.funcs->read(&data->template->e, uaddr);
+}
+
+/*
+ * Fill dyndata with a dynamically generated entry based on a template
+ * and a numerical id.
+ * Needs to be kept in sync with hypfs_read_dyndir_id_entry() regarding the
+ * name generated.
+ */
+struct hypfs_entry *hypfs_gen_dyndir_id_entry(
+    const struct hypfs_entry_dir *template, unsigned int id, void *data)
+{
+    struct hypfs_dyndir_id *dyndata;
+
+    dyndata = hypfs_get_dyndata();
+
+    dyndata->template = template;
+    dyndata->id = id;
+    dyndata->data = data;
+    snprintf(dyndata->name, sizeof(dyndata->name), template->e.name, id);
+    dyndata->dir = *template;
+    dyndata->dir.e.name = dyndata->name;
+    dyndata->dir.e.funcs = &dyndata->funcs;
+    dyndata->funcs = *template->e.funcs;
+    dyndata->funcs.enter = hypfs_dyndir_enter;
+    dyndata->funcs.findentry = hypfs_dyndir_findentry;
+    dyndata->funcs.read = hypfs_read_dyndir;
+
+    return &dyndata->dir.e;
+}
+
+unsigned int hypfs_dynid_entry_size(const struct hypfs_entry *template,
+                                    unsigned int id)
+{
+    return DIRENTRY_SIZE(snprintf(NULL, 0, template->name, id));
+}
+
 int hypfs_read_dir(const struct hypfs_entry *entry,
                    XEN_GUEST_HANDLE_PARAM(void) uaddr)
 {
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index d028c01283..e9d4c2555b 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -76,6 +76,17 @@ struct hypfs_entry_dir {
     struct list_head dirlist;
 };
 
+struct hypfs_dyndir_id {
+    struct hypfs_entry_dir dir;             /* Modified copy of template. */
+    struct hypfs_funcs funcs;               /* Dynamic functions. */
+    const struct hypfs_entry_dir *template; /* Template used. */
+#define HYPFS_DYNDIR_ID_NAMELEN 12
+    char name[HYPFS_DYNDIR_ID_NAMELEN];     /* Name of hypfs entry. */
+
+    unsigned int id;                        /* Numerical id. */
+    void *data;                             /* Data associated with id. */
+};
+
 #define HYPFS_DIR_INIT_FUNC(var, nam, fn)            \
     struct hypfs_entry_dir __read_mostly var = {     \
         .e.type = XEN_HYPFS_TYPE_DIR,                \
@@ -186,6 +197,13 @@ void *hypfs_alloc_dyndata(unsigned long size);
 #define hypfs_alloc_dyndata(type) ((type *)hypfs_alloc_dyndata(sizeof(type)))
 void *hypfs_get_dyndata(void);
 void hypfs_free_dyndata(void);
+int hypfs_read_dyndir_id_entry(const struct hypfs_entry_dir *template,
+                               unsigned int id, bool is_last,
+                               XEN_GUEST_HANDLE_PARAM(void) *uaddr);
+struct hypfs_entry *hypfs_gen_dyndir_id_entry(
+    const struct hypfs_entry_dir *template, unsigned int id, void *data);
+unsigned int hypfs_dynid_entry_size(const struct hypfs_entry *template,
+                                    unsigned int id);
 #endif
 
 #endif /* __XEN_HYPFS_H__ */
-- 
2.26.2

From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu, Dario Faggioli
Subject: [PATCH v4 3/5] xen/cpupool: add cpupool directories
Date: Mon, 18 Jan 2021 12:55:14 +0100
Message-Id: <20210118115516.11001-4-jgross@suse.com>
In-Reply-To: <20210118115516.11001-1-jgross@suse.com>
References: <20210118115516.11001-1-jgross@suse.com>

Add /cpupool/ directories to hypfs. Those are completely dynamic, so the
related hypfs access functions need to be implemented.
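The lookup direction of such dynamic directories is name parsing: a directory-entry name of known length is converted back to a numeric id, rejecting trailing junk and out-of-range values. A small user-space sketch of those checks (hypothetical helper name; the Xen code does the same with simple_strtoul() in its findentry callback and returns -ENOENT on failure):

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>
#include <stdlib.h>

/* Parse a name of exactly name_len characters into an id.
 * Fails (-1) on trailing non-digit characters or ids above UINT_MAX,
 * mirroring the checks a findentry callback has to perform. */
static int parse_pool_id(const char *name, size_t name_len,
                         unsigned int *id)
{
    char *end;
    unsigned long v = strtoul(name, &end, 10);

    if ( end != name + name_len || v > UINT_MAX )
        return -1; /* would map to ERR_PTR(-ENOENT) in Xen */

    *id = (unsigned int)v;
    return 0;
}
```

Checking `end` against `name + name_len` matters because the name is not necessarily NUL-terminated at `name_len` in the hypfs path-walking code, and it also rejects names like "1x".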
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Reviewed-by: Dario Faggioli
---
V2:
- added const (Jan Beulich)
- call hypfs_add_dir() in helper (Dario Faggioli)
- switch locking to enter/exit callbacks
V3:
- use generic dyndirid enter function
- const for hypfs function vector (Jan Beulich)
- drop size calculation from cpupool_dir_read() (Jan Beulich)
- check cpupool id to not exceed UINT_MAX (Jan Beulich)
- coding style (#if/#else/#endif) (Jan Beulich)
---
 docs/misc/hypfs-paths.pandoc |   9 +++
 xen/common/sched/cpupool.c   | 104 +++++++++++++++++++++++++++++++++++
 2 files changed, 113 insertions(+)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index 6c7b2f7ee3..aaca1cdf92 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -175,6 +175,15 @@ The major version of Xen.
 
 The minor version of Xen.
 
+#### /cpupool/
+
+A directory of all current cpupools.
+
+#### /cpupool/*/
+
+The individual cpupools. Each entry is a directory with the name being the
+cpupool-id (e.g. /cpupool/0/).
+
 #### /params/
 
 A directory of runtime parameters.
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 0db7d77219..f293ba0cc4 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -13,6 +13,8 @@
 
 #include
 #include
+#include
+#include
 #include
 #include
 #include
@@ -33,6 +35,7 @@ static int cpupool_moving_cpu = -1;
 static struct cpupool *cpupool_cpu_moving = NULL;
 static cpumask_t cpupool_locked_cpus;
 
+/* This lock nests inside sysctl or hypfs lock. */
 static DEFINE_SPINLOCK(cpupool_lock);
 
 static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;
@@ -1003,12 +1006,113 @@ static struct notifier_block cpu_nfb = {
     .notifier_call = cpu_callback
 };
 
+#ifdef CONFIG_HYPFS
+
+static HYPFS_DIR_INIT(cpupool_pooldir, "%u");
+
+static int cpupool_dir_read(const struct hypfs_entry *entry,
+                            XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    int ret = 0;
+    const struct cpupool *c;
+
+    list_for_each_entry(c, &cpupool_list, list)
+    {
+        ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, c->cpupool_id,
+                                         list_is_last(&c->list, &cpupool_list),
+                                         &uaddr);
+        if ( ret )
+            break;
+    }
+
+    return ret;
+}
+
+static unsigned int cpupool_dir_getsize(const struct hypfs_entry *entry)
+{
+    const struct cpupool *c;
+    unsigned int size = 0;
+
+    list_for_each_entry(c, &cpupool_list, list)
+        size += hypfs_dynid_entry_size(entry, c->cpupool_id);
+
+    return size;
+}
+
+static const struct hypfs_entry *cpupool_dir_enter(
+    const struct hypfs_entry *entry)
+{
+    struct hypfs_dyndir_id *data;
+
+    data = hypfs_alloc_dyndata(struct hypfs_dyndir_id);
+    if ( !data )
+        return ERR_PTR(-ENOMEM);
+    data->id = CPUPOOLID_NONE;
+
+    spin_lock(&cpupool_lock);
+
+    return entry;
+}
+
+static void cpupool_dir_exit(const struct hypfs_entry *entry)
+{
+    spin_unlock(&cpupool_lock);
+
+    hypfs_free_dyndata();
+}
+
+static struct hypfs_entry *cpupool_dir_findentry(
+    const struct hypfs_entry_dir *dir, const char *name, unsigned int name_len)
+{
+    unsigned long id;
+    const char *end;
+    struct cpupool *cpupool;
+
+    id = simple_strtoul(name, &end, 10);
+    if ( end != name + name_len || id > UINT_MAX )
+        return ERR_PTR(-ENOENT);
+
+    cpupool = __cpupool_find_by_id(id, true);
+
+    if ( !cpupool )
+        return ERR_PTR(-ENOENT);
+
+    return hypfs_gen_dyndir_id_entry(&cpupool_pooldir, id, cpupool);
+}
+
+static const struct hypfs_funcs cpupool_dir_funcs = {
+    .enter = cpupool_dir_enter,
+    .exit = cpupool_dir_exit,
+    .read = cpupool_dir_read,
+    .write = hypfs_write_deny,
+    .getsize = cpupool_dir_getsize,
+    .findentry = cpupool_dir_findentry,
+};
+
+static HYPFS_DIR_INIT_FUNC(cpupool_dir, "cpupool", &cpupool_dir_funcs);
+
+static void cpupool_hypfs_init(void)
+{
+    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
+    hypfs_add_dyndir(&cpupool_dir, &cpupool_pooldir);
+}
+
+#else /* CONFIG_HYPFS */
+
+static void cpupool_hypfs_init(void)
+{
+}
+
+#endif /* CONFIG_HYPFS */
+
 static int __init cpupool_init(void)
 {
     unsigned int cpu;
 
     cpupool_gran_init();
 
+    cpupool_hypfs_init();
+
     cpupool0 = cpupool_create(0, 0);
     BUG_ON(IS_ERR(cpupool0));
     cpupool_put(cpupool0);
-- 
2.26.2

From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich, Julien Grall, Stefano Stabellini, Wei Liu, Dario Faggioli
Subject: [PATCH v4 4/5] xen/cpupool: add scheduling granularity entry to cpupool entries
Date: Mon, 18 Jan 2021 12:55:15 +0100
Message-Id: <20210118115516.11001-5-jgross@suse.com>
In-Reply-To: <20210118115516.11001-1-jgross@suse.com>
References: <20210118115516.11001-1-jgross@suse.com>

Add a "sched-gran" entry to the per-cpupool hypfs directories. For now
make this entry read-only and let it contain one of the strings "cpu",
"core" or "socket".

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
Reviewed-by: Jan Beulich
---
V2:
- added const (Jan Beulich)
- modify test in cpupool_gran_read() (Jan Beulich)
---
 docs/misc/hypfs-paths.pandoc |  4 ++
 xen/common/sched/cpupool.c   | 72 ++++++++++++++++++++++++++++++++++--
 2 files changed, 72 insertions(+), 4 deletions(-)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index aaca1cdf92..f1ce24d7fe 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -184,6 +184,10 @@ A directory of all current cpupools.
 
 The individual cpupools. Each entry is a directory with the name being the
 cpupool-id (e.g. /cpupool/0/).
 
+#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket")
+
+The scheduling granularity of a cpupool.
+
 #### /params/
 
 A directory of runtime parameters.
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index f293ba0cc4..e2011367bd 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -41,9 +41,10 @@ static DEFINE_SPINLOCK(cpupool_lock);
 static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;
 static unsigned int __read_mostly sched_granularity = 1;
 
+#define SCHED_GRAN_NAME_LEN 8
 struct sched_gran_name {
     enum sched_gran mode;
-    char name[8];
+    char name[SCHED_GRAN_NAME_LEN];
 };
 
 static const struct sched_gran_name sg_name[] = {
@@ -52,7 +53,7 @@ static const struct sched_gran_name sg_name[] = {
     {SCHED_GRAN_socket, "socket"},
 };
 
-static void sched_gran_print(enum sched_gran mode, unsigned int gran)
+static const char *sched_gran_get_name(enum sched_gran mode)
 {
     const char *name = "";
     unsigned int i;
@@ -66,8 +67,13 @@ static void sched_gran_print(enum sched_gran mode, unsigned int gran)
         }
     }
 
+    return name;
+}
+
+static void sched_gran_print(enum sched_gran mode, unsigned int gran)
+{
     printk("Scheduling granularity: %s, %u CPU%s per sched-resource\n",
-           name, gran, gran == 1 ? "" : "s");
+           sched_gran_get_name(mode), gran, gran == 1 ? "" : "s");
 }
 
 #ifdef CONFIG_HAS_SCHED_GRANULARITY
@@ -1014,10 +1020,16 @@ static int cpupool_dir_read(const struct hypfs_entry *entry,
                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
 {
     int ret = 0;
-    const struct cpupool *c;
+    struct cpupool *c;
+    struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
 
     list_for_each_entry(c, &cpupool_list, list)
     {
+        data->id = c->cpupool_id;
+        data->data = c;
+
         ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, c->cpupool_id,
                                          list_is_last(&c->list, &cpupool_list),
                                          &uaddr);
@@ -1080,6 +1092,56 @@ static struct hypfs_entry *cpupool_dir_findentry(
     return hypfs_gen_dyndir_id_entry(&cpupool_pooldir, id, cpupool);
 }
 
+static int cpupool_gran_read(const struct hypfs_entry *entry,
+                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    const struct hypfs_dyndir_id *data;
+    const struct cpupool *cpupool;
+    const char *gran;
+
+    data = hypfs_get_dyndata();
+    cpupool = data->data;
+    ASSERT(cpupool);
+
+    gran = sched_gran_get_name(cpupool->gran);
+
+    if ( !*gran )
+        return -ENOENT;
+
+    return copy_to_guest(uaddr, gran, strlen(gran) + 1) ? -EFAULT : 0;
+}
+
+static unsigned int hypfs_gran_getsize(const struct hypfs_entry *entry)
+{
+    const struct hypfs_dyndir_id *data;
+    const struct cpupool *cpupool;
+    const char *gran;
+
+    data = hypfs_get_dyndata();
+    cpupool = data->data;
+    ASSERT(cpupool);
+
+    gran = sched_gran_get_name(cpupool->gran);
+
+    return strlen(gran) + 1;
+}
+
+static const struct hypfs_funcs cpupool_gran_funcs = {
+    .enter = hypfs_node_enter,
+    .exit = hypfs_node_exit,
+    .read = cpupool_gran_read,
+    .write = hypfs_write_deny,
+    .getsize = hypfs_gran_getsize,
+    .findentry = hypfs_leaf_findentry,
+};
+
+static HYPFS_VARSIZE_INIT(cpupool_gran, XEN_HYPFS_TYPE_STRING, "sched-gran",
+                          0, &cpupool_gran_funcs);
+static char granstr[SCHED_GRAN_NAME_LEN] = {
+    [0 ... SCHED_GRAN_NAME_LEN - 2] = '?',
+    [SCHED_GRAN_NAME_LEN - 1] = 0
+};
+
 static const struct hypfs_funcs cpupool_dir_funcs = {
     .enter = cpupool_dir_enter,
     .exit = cpupool_dir_exit,
@@ -1095,6 +1157,8 @@ static void cpupool_hypfs_init(void)
 {
     hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
     hypfs_add_dyndir(&cpupool_dir, &cpupool_pooldir);
+    hypfs_string_set_reference(&cpupool_gran, granstr);
+    hypfs_add_leaf(&cpupool_pooldir, &cpupool_gran, true);
 }
 
 #else /* CONFIG_HYPFS */
-- 
2.26.2
From nobody Wed May 8 05:05:08 2024
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Ian Jackson, Jan Beulich,
 Julien Grall, Stefano Stabellini, Wei Liu, Dario Faggioli
Subject: [PATCH v4 5/5] xen/cpupool: make per-cpupool sched-gran hypfs node
 writable
Date: Mon, 18 Jan 2021 12:55:16 +0100
Message-Id: <20210118115516.11001-6-jgross@suse.com>
In-Reply-To: <20210118115516.11001-1-jgross@suse.com>
References: <20210118115516.11001-1-jgross@suse.com>

Make /cpupool/*/sched-gran in hypfs writable. This will enable per
cpupool selectable scheduling granularity.

Writing this node is allowed only with no cpu assigned to the cpupool.
Allowed are values "cpu", "core" and "socket".

Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
Reviewed-by: Jan Beulich
---
V2:
- test user parameters earlier (Jan Beulich)
V3:
- fix build without CONFIG_HYPFS on Arm (Andrew Cooper)
---
 docs/misc/hypfs-paths.pandoc |  5 ++-
 xen/common/sched/cpupool.c   | 70 ++++++++++++++++++++++++++++++------
 2 files changed, 63 insertions(+), 12 deletions(-)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index f1ce24d7fe..e86f7d0dbe 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -184,10 +184,13 @@ A directory of all current cpupools.
 
 The individual cpupools. Each entry is a directory with the name being the
 cpupool-id (e.g. /cpupool/0/).
 
-#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket")
+#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket") [w]
 
 The scheduling granularity of a cpupool.
 
+Writing a value is allowed only for cpupools with no cpu assigned and if the
+architecture is supporting different scheduling granularities.
+
 #### /params/
 
 A directory of runtime parameters.
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index e2011367bd..acd26f9449 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -77,7 +77,7 @@ static void sched_gran_print(enum sched_gran mode, unsigned int gran)
 }
 
 #ifdef CONFIG_HAS_SCHED_GRANULARITY
-static int __init sched_select_granularity(const char *str)
+static int sched_gran_get(const char *str, enum sched_gran *mode)
 {
     unsigned int i;
 
@@ -85,36 +85,43 @@ static int __init sched_select_granularity(const char *str)
     {
         if ( strcmp(sg_name[i].name, str) == 0 )
         {
-            opt_sched_granularity = sg_name[i].mode;
+            *mode = sg_name[i].mode;
             return 0;
         }
     }
 
     return -EINVAL;
 }
+
+static int __init sched_select_granularity(const char *str)
+{
+    return sched_gran_get(str, &opt_sched_granularity);
+}
 custom_param("sched-gran", sched_select_granularity);
+#elif CONFIG_HYPFS
+static int sched_gran_get(const char *str, enum sched_gran *mode)
+{
+    return -EINVAL;
+}
 #endif
 
-static unsigned int __init cpupool_check_granularity(void)
+static unsigned int cpupool_check_granularity(enum sched_gran mode)
 {
     unsigned int cpu;
     unsigned int siblings, gran = 0;
 
-    if ( opt_sched_granularity == SCHED_GRAN_cpu )
+    if ( mode == SCHED_GRAN_cpu )
         return 1;
 
     for_each_online_cpu ( cpu )
     {
-        siblings = cpumask_weight(sched_get_opt_cpumask(opt_sched_granularity,
-                                                        cpu));
+        siblings = cpumask_weight(sched_get_opt_cpumask(mode, cpu));
         if ( gran == 0 )
             gran = siblings;
         else if ( gran != siblings )
             return 0;
     }
 
-    sched_disable_smt_switching = true;
-
     return gran;
 }
 
@@ -126,7 +133,7 @@ static void __init cpupool_gran_init(void)
 
     while ( gran == 0 )
     {
-        gran = cpupool_check_granularity();
+        gran = cpupool_check_granularity(opt_sched_granularity);
 
         if ( gran == 0 )
         {
@@ -152,6 +159,9 @@ static void __init cpupool_gran_init(void)
     if ( fallback )
         warning_add(fallback);
 
+    if ( opt_sched_granularity != SCHED_GRAN_cpu )
+        sched_disable_smt_switching = true;
+
     sched_granularity = gran;
     sched_gran_print(opt_sched_granularity, sched_granularity);
 }
@@ -1126,17 +1136,55 @@ static unsigned int hypfs_gran_getsize(const struct hypfs_entry *entry)
     return strlen(gran) + 1;
 }
 
+static int cpupool_gran_write(struct hypfs_entry_leaf *leaf,
+                              XEN_GUEST_HANDLE_PARAM(const_void) uaddr,
+                              unsigned int ulen)
+{
+    const struct hypfs_dyndir_id *data;
+    struct cpupool *cpupool;
+    enum sched_gran gran;
+    unsigned int sched_gran = 0;
+    char name[SCHED_GRAN_NAME_LEN];
+    int ret = 0;
+
+    if ( ulen > SCHED_GRAN_NAME_LEN )
+        return -ENOSPC;
+
+    if ( copy_from_guest(name, uaddr, ulen) )
+        return -EFAULT;
+
+    if ( memchr(name, 0, ulen) == (name + ulen - 1) )
+        sched_gran = sched_gran_get(name, &gran) ?
+                     0 : cpupool_check_granularity(gran);
+    if ( sched_gran == 0 )
+        return -EINVAL;
+
+    data = hypfs_get_dyndata();
+    cpupool = data->data;
+    ASSERT(cpupool);
+
+    if ( !cpumask_empty(cpupool->cpu_valid) )
+        ret = -EBUSY;
+    else
+    {
+        cpupool->gran = gran;
+        cpupool->sched_gran = sched_gran;
+    }
+
+    return ret;
+}
+
 static const struct hypfs_funcs cpupool_gran_funcs = {
     .enter = hypfs_node_enter,
     .exit = hypfs_node_exit,
     .read = cpupool_gran_read,
-    .write = hypfs_write_deny,
+    .write = cpupool_gran_write,
     .getsize = hypfs_gran_getsize,
     .findentry = hypfs_leaf_findentry,
 };
 
 static HYPFS_VARSIZE_INIT(cpupool_gran, XEN_HYPFS_TYPE_STRING, "sched-gran",
-                          0, &cpupool_gran_funcs);
+                          SCHED_GRAN_NAME_LEN, &cpupool_gran_funcs);
 static char granstr[SCHED_GRAN_NAME_LEN] = {
     [0 ... SCHED_GRAN_NAME_LEN - 2] = '?',
     [SCHED_GRAN_NAME_LEN - 1] = 0
-- 
2.26.2