From nobody Fri May 17 11:58:58 2024
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Stefano Stabellini, Julien Grall, Bertrand Marquis,
 Volodymyr Babchuk, Andrew Cooper, George Dunlap, Jan Beulich, Wei Liu,
 Roger Pau Monné, Tamas K Lengyel, Lukasz Hawrylko, "Daniel P. Smith",
 Mateusz Mówka, Paul Durrant
Subject: [RFC PATCH 1/3] xen/spinlock: add explicit non-recursive locking functions
Date: Sat, 10 Sep 2022 17:49:57 +0200
Message-Id: <20220910154959.15971-2-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220910154959.15971-1-jgross@suse.com>
References: <20220910154959.15971-1-jgross@suse.com>

In order to prepare a type-safe recursive spinlock structure, add
explicit non-recursive locking functions to be used for the
non-recursive locking of spinlocks which are used recursively, too.

Signed-off-by: Juergen Gross
---
 xen/arch/arm/mm.c             |  4 ++--
 xen/arch/x86/domain.c         | 12 ++++++------
 xen/arch/x86/mm.c             | 12 ++++++------
 xen/arch/x86/mm/mem_sharing.c |  8 ++++----
 xen/arch/x86/mm/p2m-pod.c     |  4 ++--
 xen/arch/x86/mm/p2m.c         |  4 ++--
 xen/arch/x86/numa.c           |  4 ++--
 xen/arch/x86/tboot.c          |  4 ++--
 xen/common/domain.c           |  4 ++--
 xen/common/domctl.c           |  4 ++--
 xen/common/grant_table.c      | 10 +++++-----
 xen/common/ioreq.c            |  2 +-
 xen/common/memory.c           |  4 ++--
 xen/common/page_alloc.c       | 18 +++++++++---------
 xen/drivers/char/console.c    | 24 ++++++++++++------------
 xen/drivers/passthrough/pci.c |  4 ++--
 xen/include/xen/spinlock.h    | 17 ++++++++++++++++-
 17 files changed, 77 insertions(+), 62 deletions(-)
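As a usage sketch of the calling convention this establishes (illustrative
only, not part of the patch -- both helper functions and their bodies are
invented for the example): a lock such as d->page_alloc_lock is taken
recursively on some paths and non-recursively on others, and the two kinds
of critical regions are now spelled differently:

    /* A path which never re-enters: use the explicit non-recursive form. */
    static void example_account_page(struct domain *d)
    {
        spin_lock_nonrecursive(&d->page_alloc_lock);
        /* ... adjust d->page_list / d->tot_pages ... */
        spin_unlock_nonrecursive(&d->page_alloc_lock);
    }

    /* A path which may be re-entered on the same CPU while holding the lock. */
    static void example_reentrant_op(struct domain *d)
    {
        spin_lock_recursive(&d->page_alloc_lock);
        /* ... may call back into example_reentrant_op(d) ... */
        spin_unlock_recursive(&d->page_alloc_lock);
    }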
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 11ee49598b..bf88d2cab8 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1284,7 +1284,7 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
     if ( page_get_owner(page) == d )
         return;
 
-    spin_lock(&d->page_alloc_lock);
+    spin_lock_nonrecursive(&d->page_alloc_lock);
 
     /*
      * The incremented type count pins as writable or read-only.
@@ -1315,7 +1315,7 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
         page_list_add_tail(page, &d->xenpage_list);
     }
 
-    spin_unlock(&d->page_alloc_lock);
+    spin_unlock_nonrecursive(&d->page_alloc_lock);
 }
 
 int xenmem_add_to_physmap_one(
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 41e1e3f272..a66846a6d1 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -213,7 +213,7 @@ void dump_pageframe_info(struct domain *d)
     {
         unsigned long total[MASK_EXTR(PGT_type_mask, PGT_type_mask) + 1] = {};
 
-        spin_lock(&d->page_alloc_lock);
+        spin_lock_nonrecursive(&d->page_alloc_lock);
         page_list_for_each ( page, &d->page_list )
         {
             unsigned int index = MASK_EXTR(page->u.inuse.type_info,
@@ -232,13 +232,13 @@ void dump_pageframe_info(struct domain *d)
                    _p(mfn_x(page_to_mfn(page))),
                    page->count_info, page->u.inuse.type_info);
         }
-        spin_unlock(&d->page_alloc_lock);
+        spin_unlock_nonrecursive(&d->page_alloc_lock);
     }
 
     if ( is_hvm_domain(d) )
         p2m_pod_dump_data(d);
 
-    spin_lock(&d->page_alloc_lock);
+    spin_lock_nonrecursive(&d->page_alloc_lock);
 
     page_list_for_each ( page, &d->xenpage_list )
     {
@@ -254,7 +254,7 @@ void dump_pageframe_info(struct domain *d)
                page->count_info, page->u.inuse.type_info);
     }
 
-    spin_unlock(&d->page_alloc_lock);
+    spin_unlock_nonrecursive(&d->page_alloc_lock);
 }
 
 void update_guest_memory_policy(struct vcpu *v,
@@ -2456,10 +2456,10 @@ int domain_relinquish_resources(struct domain *d)
     }
 #endif
 
-    spin_lock(&d->page_alloc_lock);
+    spin_lock_nonrecursive(&d->page_alloc_lock);
     page_list_splice(&d->arch.relmem_list, &d->page_list);
     INIT_PAGE_LIST_HEAD(&d->arch.relmem_list);
-    spin_unlock(&d->page_alloc_lock);
+    spin_unlock_nonrecursive(&d->page_alloc_lock);
 
     PROGRESS(xen):
 
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index db1817b691..e084ba04ad 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -499,7 +499,7 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
 
     set_gpfn_from_mfn(mfn_x(page_to_mfn(page)), INVALID_M2P_ENTRY);
 
-    spin_lock(&d->page_alloc_lock);
+    spin_lock_nonrecursive(&d->page_alloc_lock);
 
     /* The incremented type count pins as writable or read-only. */
     page->u.inuse.type_info =
@@ -519,7 +519,7 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
         page_list_add_tail(page, &d->xenpage_list);
     }
 
-    spin_unlock(&d->page_alloc_lock);
+    spin_unlock_nonrecursive(&d->page_alloc_lock);
 }
 
 void make_cr3(struct vcpu *v, mfn_t mfn)
@@ -3586,11 +3586,11 @@ long do_mmuext_op(
             {
                 bool drop_ref;
 
-                spin_lock(&pg_owner->page_alloc_lock);
+                spin_lock_nonrecursive(&pg_owner->page_alloc_lock);
                 drop_ref = (pg_owner->is_dying &&
                             test_and_clear_bit(_PGT_pinned,
                                                &page->u.inuse.type_info));
-                spin_unlock(&pg_owner->page_alloc_lock);
+                spin_unlock_nonrecursive(&pg_owner->page_alloc_lock);
                 if ( drop_ref )
                 {
         pin_drop:
@@ -4413,7 +4413,7 @@ int steal_page(
      * that it might be upon return from alloc_domheap_pages with
      * MEMF_no_owner set.
      */
-    spin_lock(&d->page_alloc_lock);
+    spin_lock_nonrecursive(&d->page_alloc_lock);
 
     BUG_ON(page->u.inuse.type_info & (PGT_count_mask | PGT_locked |
                                       PGT_pinned));
@@ -4425,7 +4425,7 @@ int steal_page(
     if ( !(memflags & MEMF_no_refcount) && !domain_adjust_tot_pages(d, -1) )
         drop_dom_ref = true;
 
-    spin_unlock(&d->page_alloc_lock);
+    spin_unlock_nonrecursive(&d->page_alloc_lock);
 
     if ( unlikely(drop_dom_ref) )
         put_domain(d);
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 649d93dc54..89817dc427 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -758,11 +758,11 @@ static int page_make_private(struct domain *d, struct page_info *page)
     if ( !get_page(page, dom_cow) )
         return -EINVAL;
 
-    spin_lock(&d->page_alloc_lock);
+    spin_lock_nonrecursive(&d->page_alloc_lock);
 
     if ( d->is_dying )
     {
-        spin_unlock(&d->page_alloc_lock);
+        spin_unlock_nonrecursive(&d->page_alloc_lock);
         put_page(page);
         return -EBUSY;
     }
@@ -770,7 +770,7 @@ static int page_make_private(struct domain *d, struct page_info *page)
     expected_type = (PGT_shared_page | PGT_validated | PGT_locked | 2);
     if ( page->u.inuse.type_info != expected_type )
     {
-        spin_unlock(&d->page_alloc_lock);
+        spin_unlock_nonrecursive(&d->page_alloc_lock);
         put_page(page);
         return -EEXIST;
     }
@@ -787,7 +787,7 @@ static int page_make_private(struct domain *d, struct page_info *page)
     if ( domain_adjust_tot_pages(d, 1) == 1 )
         get_knownalive_domain(d);
     page_list_add_tail(page, &d->page_list);
-    spin_unlock(&d->page_alloc_lock);
+    spin_unlock_nonrecursive(&d->page_alloc_lock);
 
     put_page(page);
 
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index fc110506dc..deab55648c 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -39,7 +39,7 @@ static inline void lock_page_alloc(struct p2m_domain *p2m)
 {
     page_alloc_mm_pre_lock(p2m->domain);
-    spin_lock(&(p2m->domain->page_alloc_lock));
+    spin_lock_nonrecursive(&(p2m->domain->page_alloc_lock));
     page_alloc_mm_post_lock(p2m->domain,
                             p2m->domain->arch.page_alloc_unlock_level);
 }
@@ -47,7 +47,7 @@ static inline void unlock_page_alloc(struct p2m_domain *p2m)
 {
     page_alloc_mm_unlock(p2m->domain->arch.page_alloc_unlock_level);
-    spin_unlock(&(p2m->domain->page_alloc_lock));
+    spin_unlock_nonrecursive(&(p2m->domain->page_alloc_lock));
 }
 
 /*
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index a405ee5fde..30bc248f72 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2245,7 +2245,7 @@ void audit_p2m(struct domain *d,
 
     /* Audit part two: walk the domain's page allocation list, checking
      * the m2p entries.
      */
-    spin_lock(&d->page_alloc_lock);
+    spin_lock_nonrecursive(&d->page_alloc_lock);
     page_list_for_each ( page, &d->page_list )
     {
         mfn = mfn_x(page_to_mfn(page));
@@ -2297,7 +2297,7 @@ void audit_p2m(struct domain *d,
         P2M_PRINTK("OK: mfn=%#lx, gfn=%#lx, p2mfn=%#lx\n",
                    mfn, gfn, mfn_x(p2mfn));
     }
-    spin_unlock(&d->page_alloc_lock);
+    spin_unlock_nonrecursive(&d->page_alloc_lock);
 
     pod_unlock(p2m);
     p2m_unlock(p2m);
diff --git a/xen/arch/x86/numa.c b/xen/arch/x86/numa.c
index 627ae8aa95..90fbfdcb31 100644
--- a/xen/arch/x86/numa.c
+++ b/xen/arch/x86/numa.c
@@ -425,13 +425,13 @@ static void cf_check dump_numa(unsigned char key)
         for_each_online_node ( i )
             page_num_node[i] = 0;
 
-        spin_lock(&d->page_alloc_lock);
+        spin_lock_nonrecursive(&d->page_alloc_lock);
         page_list_for_each(page, &d->page_list)
         {
             i = phys_to_nid(page_to_maddr(page));
             page_num_node[i]++;
         }
-        spin_unlock(&d->page_alloc_lock);
+        spin_unlock_nonrecursive(&d->page_alloc_lock);
 
         for_each_online_node ( i )
             printk("    Node %u: %u\n", i, page_num_node[i]);
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index fe1abfdf08..93e8e3e90f 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -215,14 +215,14 @@ static void tboot_gen_domain_integrity(const uint8_t key[TB_KEY_SIZE],
             continue;
         printk("MACing Domain %u\n", d->domain_id);
 
-        spin_lock(&d->page_alloc_lock);
+        spin_lock_nonrecursive(&d->page_alloc_lock);
         page_list_for_each(page, &d->page_list)
         {
             void *pg = __map_domain_page(page);
             vmac_update(pg, PAGE_SIZE, &ctx);
             unmap_domain_page(pg);
         }
-        spin_unlock(&d->page_alloc_lock);
+        spin_unlock_nonrecursive(&d->page_alloc_lock);
 
         if ( !is_idle_domain(d) )
         {
diff --git a/xen/common/domain.c b/xen/common/domain.c
index c23f449451..51160a4b5c 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -598,8 +598,8 @@ struct domain *domain_create(domid_t domid,
 
     atomic_set(&d->refcnt, 1);
     RCU_READ_LOCK_INIT(&d->rcu_lock);
-    spin_lock_init_prof(d, domain_lock);
-    spin_lock_init_prof(d, page_alloc_lock);
+    spin_lock_recursive_init_prof(d, domain_lock);
+    spin_lock_recursive_init_prof(d, page_alloc_lock);
     spin_lock_init(&d->hypercall_deadlock_mutex);
     INIT_PAGE_LIST_HEAD(&d->page_list);
     INIT_PAGE_LIST_HEAD(&d->extra_page_list);
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 452266710a..09870c87e0 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -651,14 +651,14 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
     {
         uint64_t new_max = op->u.max_mem.max_memkb >> (PAGE_SHIFT - 10);
 
-        spin_lock(&d->page_alloc_lock);
+        spin_lock_nonrecursive(&d->page_alloc_lock);
         /*
          * NB. We removed a check that new_max >= current tot_pages; this means
          * that the domain will now be allowed to "ratchet" down to new_max. In
          * the meantime, while tot > max, all new allocations are disallowed.
         */
        d->max_pages = min(new_max, (uint64_t)(typeof(d->max_pages))-1);
-        spin_unlock(&d->page_alloc_lock);
+        spin_unlock_nonrecursive(&d->page_alloc_lock);
         break;
     }
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index ad773a6996..7acf8a9f6c 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2349,7 +2349,7 @@ gnttab_transfer(
             mfn = page_to_mfn(page);
         }
 
-        spin_lock(&e->page_alloc_lock);
+        spin_lock_nonrecursive(&e->page_alloc_lock);
 
         /*
         * Check that 'e' will accept the page and has reservation
@@ -2360,7 +2360,7 @@ gnttab_transfer(
              unlikely(domain_tot_pages(e) >= e->max_pages) ||
              unlikely(!(e->tot_pages + 1)) )
         {
-            spin_unlock(&e->page_alloc_lock);
+            spin_unlock_nonrecursive(&e->page_alloc_lock);
 
             if ( e->is_dying )
                 gdprintk(XENLOG_INFO, "Transferee d%d is dying\n",
@@ -2384,7 +2384,7 @@ gnttab_transfer(
          * safely drop the lock and re-aquire it later to add page to the
          * pagelist.
          */
-        spin_unlock(&e->page_alloc_lock);
+        spin_unlock_nonrecursive(&e->page_alloc_lock);
         okay = gnttab_prepare_for_transfer(e, d, gop.ref);
 
         /*
@@ -2400,9 +2400,9 @@ gnttab_transfer(
          * Need to grab this again to safely free our "reserved"
          * page in the page total
          */
-        spin_lock(&e->page_alloc_lock);
+        spin_lock_nonrecursive(&e->page_alloc_lock);
         drop_dom_ref = !domain_adjust_tot_pages(e, -1);
-        spin_unlock(&e->page_alloc_lock);
+        spin_unlock_nonrecursive(&e->page_alloc_lock);
 
         if ( okay /* i.e. e->is_dying due to the surrounding if() */ )
             gdprintk(XENLOG_INFO, "Transferee d%d is now dying\n",
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 4617aef29b..c46a5d70e6 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -1329,7 +1329,7 @@ unsigned int ioreq_broadcast(ioreq_t *p, bool buffered)
 
 void ioreq_domain_init(struct domain *d)
 {
-    spin_lock_init(&d->ioreq_server.lock);
+    spin_lock_recursive_init(&d->ioreq_server.lock);
 
     arch_ioreq_domain_init(d);
 }
diff --git a/xen/common/memory.c b/xen/common/memory.c
index ae8163a738..0b4313832e 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -769,10 +769,10 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
                              (1UL << in_chunk_order)) -
                             (j * (1UL << exch.out.extent_order)));
 
-            spin_lock(&d->page_alloc_lock);
+            spin_lock_nonrecursive(&d->page_alloc_lock);
             drop_dom_ref = (dec_count &&
                             !domain_adjust_tot_pages(d, -dec_count));
-            spin_unlock(&d->page_alloc_lock);
+            spin_unlock_nonrecursive(&d->page_alloc_lock);
 
             if ( drop_dom_ref )
                 put_domain(d);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 62afb07bc6..35e6015ce2 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -469,7 +469,7 @@ unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
 {
     long dom_before, dom_after, dom_claimed, sys_before, sys_after;
 
-    ASSERT(spin_is_locked(&d->page_alloc_lock));
+    ASSERT(spin_recursive_is_locked(&d->page_alloc_lock));
     d->tot_pages += pages;
 
     /*
@@ -508,7 +508,7 @@ int domain_set_outstanding_pages(struct domain *d, unsigned long pages)
      * must always take the global heap_lock rather than only in the much
      * rarer case that d->outstanding_pages is non-zero
      */
-    spin_lock(&d->page_alloc_lock);
+    spin_lock_nonrecursive(&d->page_alloc_lock);
     spin_lock(&heap_lock);
 
     /* pages==0 means "unset" the claim. */
@@ -554,7 +554,7 @@ int domain_set_outstanding_pages(struct domain *d, unsigned long pages)
 
 out:
     spin_unlock(&heap_lock);
-    spin_unlock(&d->page_alloc_lock);
+    spin_unlock_nonrecursive(&d->page_alloc_lock);
     return ret;
 }
 
@@ -2328,7 +2328,7 @@ int assign_pages(
     int rc = 0;
     unsigned int i;
 
-    spin_lock(&d->page_alloc_lock);
+    spin_lock_nonrecursive(&d->page_alloc_lock);
 
     if ( unlikely(d->is_dying) )
     {
@@ -2410,7 +2410,7 @@ int assign_pages(
     }
 
 out:
-    spin_unlock(&d->page_alloc_lock);
+    spin_unlock_nonrecursive(&d->page_alloc_lock);
     return rc;
 }
 
@@ -2891,9 +2891,9 @@ mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags)
     ASSERT_ALLOC_CONTEXT();
 
     /* Acquire a page from reserved page list(resv_page_list). */
-    spin_lock(&d->page_alloc_lock);
+    spin_lock_nonrecursive(&d->page_alloc_lock);
     page = page_list_remove_head(&d->resv_page_list);
-    spin_unlock(&d->page_alloc_lock);
+    spin_unlock_nonrecursive(&d->page_alloc_lock);
     if ( unlikely(!page) )
         return INVALID_MFN;
 
@@ -2912,9 +2912,9 @@ mfn_t acquire_reserved_page(struct domain *d, unsigned int memflags)
      */
     unprepare_staticmem_pages(page, 1, false);
 fail:
-    spin_lock(&d->page_alloc_lock);
+    spin_lock_nonrecursive(&d->page_alloc_lock);
     page_list_add_tail(page, &d->resv_page_list);
-    spin_unlock(&d->page_alloc_lock);
+    spin_unlock_nonrecursive(&d->page_alloc_lock);
     return INVALID_MFN;
 }
 #endif
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index e8468c121a..2e861ad9d6 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -120,7 +120,7 @@ static int __read_mostly sercon_handle = -1;
 int8_t __read_mostly opt_console_xen; /* console=xen */
 #endif
 
-static DEFINE_SPINLOCK(console_lock);
+static DEFINE_SPINLOCK_RECURSIVE(console_lock);
 
 /*
  * To control the amount of printing, thresholds are added.
@@ -328,7 +328,7 @@ static void cf_check do_dec_thresh(unsigned char key, struct cpu_user_regs *regs
 
 static void conring_puts(const char *str, size_t len)
 {
-    ASSERT(spin_is_locked(&console_lock));
+    ASSERT(spin_recursive_is_locked(&console_lock));
 
     while ( len-- )
         conring[CONRING_IDX_MASK(conringp++)] = *str++;
@@ -369,9 +369,9 @@ long read_console_ring(struct xen_sysctl_readconsole *op)
 
     if ( op->clear )
     {
-        spin_lock_irq(&console_lock);
+        spin_lock_nonrecursive_irq(&console_lock);
         conringc = p - c > conring_size ? p - conring_size : c;
-        spin_unlock_irq(&console_lock);
+        spin_unlock_nonrecursive_irq(&console_lock);
     }
 
     op->count = sofar;
@@ -612,7 +612,7 @@ static long guest_console_write(XEN_GUEST_HANDLE_PARAM(char) buffer,
     if ( is_hardware_domain(cd) )
     {
         /* Use direct console output as it could be interactive */
-        spin_lock_irq(&console_lock);
+        spin_lock_nonrecursive_irq(&console_lock);
 
         console_serial_puts(kbuf, kcount);
         video_puts(kbuf, kcount);
@@ -633,7 +633,7 @@ static long guest_console_write(XEN_GUEST_HANDLE_PARAM(char) buffer,
             tasklet_schedule(&notify_dom0_con_ring_tasklet);
         }
 
-        spin_unlock_irq(&console_lock);
+        spin_unlock_nonrecursive_irq(&console_lock);
     }
     else
     {
@@ -739,7 +739,7 @@ static void __putstr(const char *str)
 {
     size_t len = strlen(str);
 
-    ASSERT(spin_is_locked(&console_lock));
+    ASSERT(spin_recursive_is_locked(&console_lock));
 
     console_serial_puts(str, len);
     video_puts(str, len);
@@ -1000,9 +1000,9 @@ void __init console_init_preirq(void)
         pv_console_set_rx_handler(serial_rx);
 
     /* HELLO WORLD --- start-of-day banner text. */
-    spin_lock(&console_lock);
+    spin_lock_nonrecursive(&console_lock);
     __putstr(xen_banner());
-    spin_unlock(&console_lock);
+    spin_unlock_nonrecursive(&console_lock);
     printk("Xen version %d.%d%s (%s@%s) (%s) %s %s\n",
            xen_major_version(), xen_minor_version(), xen_extra_version(),
            xen_compile_by(), xen_compile_domain(), xen_compiler(),
@@ -1039,13 +1039,13 @@ void __init console_init_ring(void)
     }
     opt_conring_size = PAGE_SIZE << order;
 
-    spin_lock_irqsave(&console_lock, flags);
+    spin_lock_nonrecursive_irqsave(&console_lock, flags);
     for ( i = conringc ; i != conringp; i++ )
         ring[i & (opt_conring_size - 1)] = conring[i & (conring_size - 1)];
     conring = ring;
     smp_wmb(); /* Allow users of console_force_unlock() to see larger buffer. */
     conring_size = opt_conring_size;
-    spin_unlock_irqrestore(&console_lock, flags);
+    spin_unlock_nonrecursive_irqrestore(&console_lock, flags);
 
     printk("Allocated console ring of %u KiB.\n", opt_conring_size >> 10);
 }
@@ -1151,7 +1151,7 @@ void console_force_unlock(void)
 {
     watchdog_disable();
     spin_debug_disable();
-    spin_lock_init(&console_lock);
+    spin_lock_recursive_init(&console_lock);
     serial_force_unlock(sercon_handle);
     console_locks_busted = 1;
     console_start_sync();
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index cdaf5c247f..c86b11be10 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -50,7 +50,7 @@ struct pci_seg {
     } bus2bridge[MAX_BUSES];
 };
 
-static spinlock_t _pcidevs_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK_RECURSIVE(_pcidevs_lock);
 
 void pcidevs_lock(void)
 {
@@ -64,7 +64,7 @@ void pcidevs_unlock(void)
 
 bool_t pcidevs_locked(void)
 {
-    return !!spin_is_locked(&_pcidevs_lock);
+    return !!spin_recursive_is_locked(&_pcidevs_lock);
 }
 
 static struct radix_tree_root pci_segments;
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index 961891bea4..20f64102c9 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -40,7 +40,7 @@ union lock_debug { };
   lock profiling on:
 
   Global locks which should be subject to profiling must be declared via
-  DEFINE_SPINLOCK.
+  DEFINE_SPINLOCK[_RECURSIVE].
 
   For locks in structures further measures are necessary:
   - the structure definition must include a profile_head with exactly this
@@ -146,6 +146,8 @@ struct lock_profile_qhead { };
 
 #endif
 
+#define DEFINE_SPINLOCK_RECURSIVE(l) DEFINE_SPINLOCK(l)
+
 typedef union {
     u32 head_tail;
     struct {
@@ -171,6 +173,8 @@ typedef struct spinlock {
 
 
 #define spin_lock_init(l) (*(l) = (spinlock_t)SPIN_LOCK_UNLOCKED)
+#define spin_lock_recursive_init(l) (*(l) = (spinlock_t)SPIN_LOCK_UNLOCKED)
+#define spin_lock_recursive_init_prof(s, l) spin_lock_init_prof(s, l)
 
 void _spin_lock(spinlock_t *lock);
 void _spin_lock_cb(spinlock_t *lock, void (*cond)(void *), void *data);
@@ -223,9 +227,20 @@ void _spin_unlock_recursive(spinlock_t *lock);
  * part of a recursively-nested set must be protected by these forms. If there
  * are any critical regions that cannot form part of such a set, they can use
  * standard spin_[un]lock().
+ * The related spin_[un]lock_nonrecursive() variants should be used when no
+ * recursion of locking is needed for locks, which might be taken recursively.
  */
 #define spin_trylock_recursive(l)     _spin_trylock_recursive(l)
 #define spin_lock_recursive(l)        _spin_lock_recursive(l)
 #define spin_unlock_recursive(l)      _spin_unlock_recursive(l)
+#define spin_recursive_is_locked(l)   spin_is_locked(l)
+
+#define spin_trylock_nonrecursive(l)  spin_trylock(l)
+#define spin_lock_nonrecursive(l)     spin_lock(l)
+#define spin_unlock_nonrecursive(l)   spin_unlock(l)
+#define spin_lock_nonrecursive_irq(l) spin_lock_irq(l)
+#define spin_unlock_nonrecursive_irq(l) spin_unlock_irq(l)
+#define spin_lock_nonrecursive_irqsave(l, f) spin_lock_irqsave(l, f)
+#define spin_unlock_nonrecursive_irqrestore(l, f) spin_unlock_irqrestore(l, f)
 
 #endif /* __SPINLOCK_H__ */
-- 
2.35.3

From nobody Fri May 17 11:58:58 2024
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Jan Beulich, Andrew Cooper, Roger Pau Monné, Wei Liu,
 George Dunlap, Julien Grall, Stefano Stabellini
Subject: [RFC PATCH 2/3] xen/spinlock: split recursive spinlocks from normal ones
Date: Sat, 10 Sep 2022 17:49:58 +0200
Message-Id: <20220910154959.15971-3-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220910154959.15971-1-jgross@suse.com>
References: <20220910154959.15971-1-jgross@suse.com>

Recursive and normal spinlocks share the same data structure for
representing the lock. This has two major disadvantages:

- it is not clear from the definition of a lock whether it is intended
  to be used recursively or not, while a mixture of both usage variants
  needs to be supported

- in production builds (builds without CONFIG_DEBUG_LOCKS) the needed
  data size of an ordinary spinlock is 8 bytes instead of 4, due to the
  additional recursion data (and with that, the rwlock data uses 12
  instead of only 8 bytes)

Fix that by introducing a struct spinlock_recursive for recursive
spinlocks only, and switch the recursive spinlock functions to require
pointers to this new struct. This allows correct usage to be checked at
build time.

The sizes per lock will change as follows:

  lock type            debug build       non-debug build
                        old    new        old    new
  spinlock                8      8          8      4
  recursive spinlock      8     12          8      8
  rwlock                 12     12         12      8

So the only downside is an increase for recursive spinlocks in debug
builds, while non-debug builds see normal spinlocks and rwlocks
consuming less memory.

Signed-off-by: Juergen Gross
---
 xen/arch/x86/include/asm/mm.h |  2 +-
 xen/arch/x86/mm/mm-locks.h    |  2 +-
 xen/arch/x86/mm/p2m-pod.c     |  2 +-
 xen/common/domain.c           |  2 +-
 xen/common/spinlock.c         | 21 ++++++-----
 xen/include/xen/sched.h       |  6 ++--
 xen/include/xen/spinlock.h    | 65 +++++++++++++++++++++--------------
 7 files changed, 60 insertions(+), 40 deletions(-)
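As a sketch of the type safety this buys (illustrative only, not part of
the patch; both locks and the function are invented for the example),
handing the wrong lock type to a locking primitive is now rejected by the
compiler as an incompatible pointer type instead of silently misusing the
lock:

    static DEFINE_SPINLOCK(plain_lock);          /* spinlock_t                */
    static DEFINE_SPINLOCK_RECURSIVE(rec_lock);  /* struct spinlock_recursive */

    static void example(void)
    {
        spin_lock(&plain_lock);            /* ok: takes spinlock_t *        */
        spin_lock_recursive(&rec_lock);    /* ok: takes spinlock_recursive * */

        /*
         * spin_lock_recursive(&plain_lock) and spin_lock(&rec_lock) would
         * both fail the build now, rather than compile and misbehave.
         */

        spin_unlock_recursive(&rec_lock);
        spin_unlock(&plain_lock);
    }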
diff --git a/xen/arch/x86/include/asm/mm.h b/xen/arch/x86/include/asm/mm.h
index 0fc826de46..8cf86b4796 100644
--- a/xen/arch/x86/include/asm/mm.h
+++ b/xen/arch/x86/include/asm/mm.h
@@ -610,7 +610,7 @@ unsigned long domain_get_maximum_gpfn(struct domain *d);
 
 /* Definition of an mm lock: spinlock with extra fields for debugging */
 typedef struct mm_lock {
-    spinlock_t         lock;
+    struct spinlock_recursive lock;
     int                unlock_level;
     int                locker;          /* processor which holds the lock */
     const char        *locker_function; /* func that took it */
diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
index c1523aeccf..7b54e6914b 100644
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -32,7 +32,7 @@ DECLARE_PERCPU_RWLOCK_GLOBAL(p2m_percpu_rwlock);
 
 static inline void mm_lock_init(mm_lock_t *l)
 {
-    spin_lock_init(&l->lock);
+    spin_lock_recursive_init(&l->lock);
     l->locker = -1;
     l->locker_function = "nobody";
     l->unlock_level = 0;
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index deab55648c..02c149f839 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -397,7 +397,7 @@ int p2m_pod_empty_cache(struct domain *d)
 
     /* After this barrier no new PoD activities can happen. */
     BUG_ON(!d->is_dying);
-    spin_barrier(&p2m->pod.lock.lock);
+    spin_barrier(&p2m->pod.lock.lock.lock);
 
     lock_page_alloc(p2m);
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 51160a4b5c..5e5ac4e74b 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -929,7 +929,7 @@ int domain_kill(struct domain *d)
     case DOMDYING_alive:
         domain_pause(d);
         d->is_dying = DOMDYING_dying;
-        spin_barrier(&d->domain_lock);
+        spin_barrier(&d->domain_lock.lock);
         argo_destroy(d);
         vnuma_destroy(d->vnuma);
         domain_set_outstanding_pages(d, 0);
diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index 62c83aaa6a..a48ed17ac6 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -224,6 +224,11 @@ void _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
 }
 
 int _spin_is_locked(spinlock_t *lock)
+{
+    return lock->tickets.head != lock->tickets.tail;
+}
+
+int _spin_recursive_is_locked(struct spinlock_recursive *lock)
 {
     /*
      * Recursive locks may be locked by another CPU, yet we return
@@ -231,7 +236,7 @@ int _spin_is_locked(spinlock_t *lock)
      * ASSERT()s and alike.
      */
     return lock->recurse_cpu == SPINLOCK_NO_CPU
-           ? lock->tickets.head != lock->tickets.tail
+           ? _spin_is_locked(&lock->lock)
            : lock->recurse_cpu == smp_processor_id();
 }
 
@@ -292,7 +297,7 @@ void _spin_barrier(spinlock_t *lock)
     smp_mb();
 }
 
-int _spin_trylock_recursive(spinlock_t *lock)
+int _spin_trylock_recursive(struct spinlock_recursive *lock)
 {
     unsigned int cpu = smp_processor_id();
 
@@ -300,11 +305,11 @@ int _spin_trylock_recursive(spinlock_t *lock)
     BUILD_BUG_ON(NR_CPUS > SPINLOCK_NO_CPU);
     BUILD_BUG_ON(SPINLOCK_RECURSE_BITS < 3);
 
-    check_lock(&lock->debug, true);
+    check_lock(&lock->lock.debug, true);
 
     if ( likely(lock->recurse_cpu != cpu) )
     {
-        if ( !spin_trylock(lock) )
+        if ( !spin_trylock(&lock->lock) )
             return 0;
         lock->recurse_cpu = cpu;
     }
@@ -316,13 +321,13 @@ int _spin_trylock_recursive(spinlock_t *lock)
     return 1;
 }
 
-void _spin_lock_recursive(spinlock_t *lock)
+void _spin_lock_recursive(struct spinlock_recursive *lock)
 {
     unsigned int cpu = smp_processor_id();
 
     if ( likely(lock->recurse_cpu != cpu) )
     {
-        _spin_lock(lock);
+        _spin_lock(&lock->lock);
         lock->recurse_cpu = cpu;
     }
 
@@ -331,12 +336,12 @@ void _spin_lock_recursive(spinlock_t *lock)
     lock->recurse_cnt++;
 }
 
-void _spin_unlock_recursive(spinlock_t *lock)
+void _spin_unlock_recursive(struct spinlock_recursive *lock)
 {
     if ( likely(--lock->recurse_cnt == 0) )
     {
         lock->recurse_cpu = SPINLOCK_NO_CPU;
-        spin_unlock(lock);
+        spin_unlock(&lock->lock);
     }
 }
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 557b3229f6..8d45f522d5 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -375,9 +375,9 @@ struct domain
 
     rcu_read_lock_t  rcu_lock;
 
-    spinlock_t       domain_lock;
+    struct spinlock_recursive domain_lock;
 
-    spinlock_t       page_alloc_lock; /* protects all the following fields */
+    struct spinlock_recursive page_alloc_lock; /* protects following fields */
     struct page_list_head page_list;   /* linked list */
     struct page_list_head extra_page_list; /* linked list (size extra_pages) */
     struct page_list_head xenpage_list; /* linked list (size xenheap_pages) */
@@ -595,7 +595,7 @@ struct domain
 #ifdef CONFIG_IOREQ_SERVER
     /* Lock protects all other values in the sub-struct */
     struct {
-        spinlock_t    lock;
+        struct spinlock_recursive lock;
         struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
     } ioreq_server;
 #endif
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index 20f64102c9..d0cfb4c524 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -89,16 +89,21 @@ struct lock_profile_qhead {
     int32_t                   idx;     /* index for printout */
 };
 
-#define _LOCK_PROFILE(name) { 0, #name, &name, 0, 0, 0, 0, 0 }
+#define _LOCK_PROFILE(name, var) { 0, #name, &var, 0, 0, 0, 0, 0 }
 #define _LOCK_PROFILE_PTR(name)                                             \
     static struct lock_profile * const __lock_profile_##name                \
     __used_section(".lockprofile.data") =                                   \
     &__lock_profile_data_##name
-#define _SPIN_LOCK_UNLOCKED(x) { { 0 }, SPINLOCK_NO_CPU, 0, _LOCK_DEBUG, x }
+#define _SPIN_LOCK_UNLOCKED(x) { { 0 }, _LOCK_DEBUG, x }
 #define SPIN_LOCK_UNLOCKED _SPIN_LOCK_UNLOCKED(NULL)
 #define DEFINE_SPINLOCK(l)                                                  \
     spinlock_t l = _SPIN_LOCK_UNLOCKED(NULL);                               \
-    static struct lock_profile __lock_profile_data_##l = _LOCK_PROFILE(l);  \
+    static struct lock_profile __lock_profile_data_##l = _LOCK_PROFILE(l, l); \
+    _LOCK_PROFILE_PTR(l)
+#define DEFINE_SPINLOCK_RECURSIVE(l)                                        \
+    struct spinlock_recursive l = { .lock = _SPIN_LOCK_UNLOCKED(NULL) };    \
+    static struct lock_profile __lock_profile_data_##l =                    \
+        _LOCK_PROFILE(l, l.lock);                                           \
     _LOCK_PROFILE_PTR(l)
 
 #define spin_lock_init_prof(s, l)                                           \
@@ -136,8 +141,10 @@ extern void cf_check spinlock_profile_reset(unsigned char key);
 
 struct lock_profile_qhead { };
 
-#define SPIN_LOCK_UNLOCKED { { 0 }, SPINLOCK_NO_CPU, 0, _LOCK_DEBUG }
+#define SPIN_LOCK_UNLOCKED { { 0 }, _LOCK_DEBUG }
 #define DEFINE_SPINLOCK(l) spinlock_t l = SPIN_LOCK_UNLOCKED
+#define DEFINE_SPINLOCK_RECURSIVE(l) \
+    struct spinlock_recursive l = { .lock = SPIN_LOCK_UNLOCKED }
 
 #define spin_lock_init_prof(s, l) spin_lock_init(&((s)->l))
 #define lock_profile_register_struct(type, ptr, idx)
@@ -146,8 +153,6 @@ struct lock_profile_qhead { };
 
 #endif
 
-#define DEFINE_SPINLOCK_RECURSIVE(l) DEFINE_SPINLOCK(l)
-
 typedef union {
     u32 head_tail;
     struct {
@@ -160,21 +165,30 @@ typedef union {
 
 typedef struct spinlock {
     spinlock_tickets_t tickets;
-    u16 recurse_cpu:SPINLOCK_CPU_BITS;
-#define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
-#define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
-    u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
-#define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
     union lock_debug debug;
 #ifdef CONFIG_DEBUG_LOCK_PROFILE
     struct lock_profile *profile;
 #endif
 } spinlock_t;
 
+struct spinlock_recursive {
+    struct spinlock lock;
+    u16 recurse_cpu:SPINLOCK_CPU_BITS;
+#define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
+#define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
+    u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
+#define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
+};
 
 #define spin_lock_init(l) (*(l) = (spinlock_t)SPIN_LOCK_UNLOCKED)
-#define spin_lock_recursive_init(l) (*(l) = (spinlock_t)SPIN_LOCK_UNLOCKED)
-#define spin_lock_recursive_init_prof(s, l) spin_lock_init_prof(s, l)
+#define spin_lock_recursive_init(l) (*(l) = (struct spinlock_recursive){   \
+    .lock = (spinlock_t)SPIN_LOCK_UNLOCKED,                                \
+    .recurse_cpu = SPINLOCK_NO_CPU })
+#define spin_lock_recursive_init_prof(s, l) do {  \
+        spin_lock_init_prof(s, l.lock);           \
+        (s)->l.recurse_cpu = SPINLOCK_NO_CPU;     \
+        (s)->l.recurse_cnt = 0;                   \
+    } while (0)
 
 void _spin_lock(spinlock_t *lock);
 void _spin_lock_cb(spinlock_t *lock, void (*cond)(void *), void *data);
@@ -189,9 +203,10 @@ int _spin_is_locked(spinlock_t *lock);
 int _spin_trylock(spinlock_t *lock);
 void _spin_barrier(spinlock_t *lock);
 
-int _spin_trylock_recursive(spinlock_t *lock);
-void _spin_lock_recursive(spinlock_t *lock);
-void _spin_unlock_recursive(spinlock_t *lock);
+int _spin_recursive_is_locked(struct spinlock_recursive *lock);
+int _spin_trylock_recursive(struct spinlock_recursive *lock);
+void _spin_lock_recursive(struct spinlock_recursive *lock);
+void _spin_unlock_recursive(struct spinlock_recursive *lock);
 
 #define spin_lock(l)                  _spin_lock(l)
 #define spin_lock_cb(l, c, d)         _spin_lock_cb(l, c, d)
@@ -233,14 +248,14 @@ void _spin_unlock_recursive(struct spinlock_recursive *lock);
 #define spin_trylock_recursive(l)     _spin_trylock_recursive(l)
 #define spin_lock_recursive(l)        _spin_lock_recursive(l)
 #define spin_unlock_recursive(l)      _spin_unlock_recursive(l)
-#define spin_recursive_is_locked(l)   spin_is_locked(l)
-
-#define spin_trylock_nonrecursive(l)  spin_trylock(l)
-#define spin_lock_nonrecursive(l)     spin_lock(l)
-#define spin_unlock_nonrecursive(l)   spin_unlock(l)
-#define spin_lock_nonrecursive_irq(l) spin_lock_irq(l)
-#define spin_unlock_nonrecursive_irq(l) spin_unlock_irq(l)
-#define spin_lock_nonrecursive_irqsave(l, f) spin_lock_irqsave(l, f)
-#define spin_unlock_nonrecursive_irqrestore(l, f) spin_unlock_irqrestore(l, f)
+#define spin_recursive_is_locked(l)   _spin_recursive_is_locked(l)
+
+#define spin_trylock_nonrecursive(l)  spin_trylock(&(l)->lock)
+#define spin_lock_nonrecursive(l)     spin_lock(&(l)->lock)
+#define spin_unlock_nonrecursive(l)   spin_unlock(&(l)->lock)
+#define spin_lock_nonrecursive_irq(l) spin_lock_irq(&(l)->lock)
+#define spin_unlock_nonrecursive_irq(l) spin_unlock_irq(&(l)->lock)
+#define spin_lock_nonrecursive_irqsave(l, f) spin_lock_irqsave(&(l)->lock, f)
+#define spin_unlock_nonrecursive_irqrestore(l, f) spin_unlock_irqrestore(&(l)->lock, f)
 
 #endif /* __SPINLOCK_H__ */
-- 
2.35.3

From nobody Fri May 17 11:58:58 2024
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Jan Beulich,
 Julien Grall, Stefano Stabellini, Wei Liu
Subject: [RFC PATCH 3/3] xen/spinlock: support higher number of cpus
Date: Sat, 10 Sep 2022 17:49:59 +0200
Message-Id: <20220910154959.15971-4-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220910154959.15971-1-jgross@suse.com>
References: <20220910154959.15971-1-jgross@suse.com>

There is no real reason why the CPU fields of struct spinlock should be
limited to 12 bits, now that there is a 2-byte padding hole after those
fields.

Make the related structures a little bit larger, allowing 16 bits per
CPU number, which is the limit imposed by spinlock_tickets_t.

Signed-off-by: Juergen Gross
---
 xen/common/spinlock.c      |  1 +
 xen/include/xen/spinlock.h | 18 +++++++++---------
 2 files changed, 10 insertions(+), 9 deletions(-)
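To see the resulting limits concretely, here is a standalone sketch (plain
C11, not Xen code -- the struct merely mirrors the non-debug layout this
patch produces, under the usual alignment assumptions):

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    struct tickets { uint16_t head, tail; };  /* like spinlock_tickets_t   */

    struct rec_lock_layout {                  /* non-debug recursive lock  */
        struct tickets tickets;               /* 4 bytes                   */
        uint16_t recurse_cpu;                 /* CPU owning the lock       */
        uint8_t  recurse_cnt;                 /* recursion depth           */
    };                                        /* 7 bytes + 1 byte padding  */

    int main(void)
    {
        enum { SPINLOCK_NO_CPU = 0xffff,      /* "unowned" sentinel value  */
               SPINLOCK_MAX_RECURSE = 0xff };

        static_assert(sizeof(struct rec_lock_layout) == 8,
                      "recursive lock stays at 8 bytes in non-debug builds");
        printf("CPU ids up to %u, recursion depth up to %u\n",
               SPINLOCK_NO_CPU - 1, SPINLOCK_MAX_RECURSE);
        return 0;
    }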
diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index a48ed17ac6..5509e4b79a 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -303,6 +303,7 @@ int _spin_trylock_recursive(struct spinlock_recursive *lock)
 
     /* Don't allow overflow of recurse_cpu field. */
     BUILD_BUG_ON(NR_CPUS > SPINLOCK_NO_CPU);
+    BUILD_BUG_ON(SPINLOCK_CPU_BITS > sizeof(lock->recurse_cpu) * 8);
     BUILD_BUG_ON(SPINLOCK_RECURSE_BITS < 3);
 
     check_lock(&lock->lock.debug, true);
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index d0cfb4c524..e157b12f6e 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -6,16 +6,16 @@
 #include
 #include
 
-#define SPINLOCK_CPU_BITS  12
+#define SPINLOCK_CPU_BITS  16
 
 #ifdef CONFIG_DEBUG_LOCKS
 union lock_debug {
-    uint16_t val;
-#define LOCK_DEBUG_INITVAL 0xffff
+    uint32_t val;
+#define LOCK_DEBUG_INITVAL 0xffffffff
     struct {
-        uint16_t cpu:SPINLOCK_CPU_BITS;
-#define LOCK_DEBUG_PAD_BITS (14 - SPINLOCK_CPU_BITS)
-        uint16_t :LOCK_DEBUG_PAD_BITS;
+        uint32_t cpu:SPINLOCK_CPU_BITS;
+#define LOCK_DEBUG_PAD_BITS (30 - SPINLOCK_CPU_BITS)
+        uint32_t :LOCK_DEBUG_PAD_BITS;
         bool irq_safe:1;
         bool unseen:1;
     };
@@ -173,10 +173,10 @@ typedef struct spinlock {
 
 struct spinlock_recursive {
     struct spinlock lock;
-    u16 recurse_cpu:SPINLOCK_CPU_BITS;
+    uint16_t recurse_cpu;
 #define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)
-#define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)
-    u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
+#define SPINLOCK_RECURSE_BITS  8
+    uint8_t recurse_cnt;
 #define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)
 };
 
-- 
2.35.3