From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: javi.merino@cloud.com, Juergen Gross, Stefano Stabellini, Julien Grall,
    Bertrand Marquis, Michal Orzel, Volodymyr Babchuk, Andrew Cooper,
    George Dunlap, Jan Beulich, Wei Liu, Roger Pau Monné, Tamas K Lengyel,
    Paul Durrant
Subject: [PATCH v3 05/13] xen/spinlock: rename recursive lock functions
Date: Mon, 20 Nov 2023 12:38:34 +0100
Message-Id: <20231120113842.5897-6-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20231120113842.5897-1-jgross@suse.com>
References: <20231120113842.5897-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Rename the recursive spin_lock() functions by replacing the trailing
"_recursive" with a leading "r". Switch the parameter to be a pointer
to rspinlock_t.

Remove the indirection through a macro, as it only adds complexity
without any gain.
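For illustration only (this snippet is not part of the diff below), a
typical caller of the renamed API changes as follows; the lock shown is
the existing d->page_alloc_lock used throughout the patch:

    /* Before this patch: recursive wrappers operating on spinlock_t. */
    spin_lock_recursive(&d->page_alloc_lock);
    /* ... critical region that may re-enter on the same CPU ... */
    spin_unlock_recursive(&d->page_alloc_lock);

    /* After this patch: dedicated functions operating on rspinlock_t. */
    rspin_lock(&d->page_alloc_lock);
    /* ... critical region that may re-enter on the same CPU ... */
    rspin_unlock(&d->page_alloc_lock);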
Suggested-by: Jan Beulich
Signed-off-by: Juergen Gross
---
V2:
- new patch
---
 xen/arch/arm/domain.c         |  4 +--
 xen/arch/x86/domain.c         |  8 +++---
 xen/arch/x86/mm/mem_sharing.c |  8 +++---
 xen/arch/x86/mm/mm-locks.h    |  4 +--
 xen/common/ioreq.c            | 52 +++++++++++++++++------------------
 xen/common/page_alloc.c       | 12 ++++----
 xen/common/spinlock.c         |  6 ++--
 xen/drivers/char/console.c    | 12 ++++----
 xen/drivers/passthrough/pci.c |  4 +--
 xen/include/xen/sched.h       |  4 +--
 xen/include/xen/spinlock.h    | 24 +++++++---------
 11 files changed, 67 insertions(+), 71 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 28e3aaa5e4..603a5f7c81 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -996,7 +996,7 @@ static int relinquish_memory(struct domain *d, struct page_list_head *list)
     int ret = 0;
 
     /* Use a recursive lock, as we may enter 'free_domheap_page'. */
-    spin_lock_recursive(&d->page_alloc_lock);
+    rspin_lock(&d->page_alloc_lock);
 
     page_list_for_each_safe( page, tmp, list )
     {
@@ -1023,7 +1023,7 @@ static int relinquish_memory(struct domain *d, struct page_list_head *list)
     }
 
  out:
-    spin_unlock_recursive(&d->page_alloc_lock);
+    rspin_unlock(&d->page_alloc_lock);
     return ret;
 }
 
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 3712e36df9..69ce1fd5cf 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1321,7 +1321,7 @@ int arch_set_info_guest(
     {
         bool done = false;
 
-        spin_lock_recursive(&d->page_alloc_lock);
+        rspin_lock(&d->page_alloc_lock);
 
         for ( i = 0; ; )
         {
@@ -1342,7 +1342,7 @@ int arch_set_info_guest(
                 break;
         }
 
-        spin_unlock_recursive(&d->page_alloc_lock);
+        rspin_unlock(&d->page_alloc_lock);
 
         if ( !done )
             return -ERESTART;
@@ -2181,7 +2181,7 @@ static int relinquish_memory(
     int ret = 0;
 
     /* Use a recursive lock, as we may enter 'free_domheap_page'. */
-    spin_lock_recursive(&d->page_alloc_lock);
+    rspin_lock(&d->page_alloc_lock);
 
     while ( (page = page_list_remove_head(list)) )
     {
@@ -2322,7 +2322,7 @@ static int relinquish_memory(
     page_list_move(list, &d->arch.relmem_list);
 
  out:
-    spin_unlock_recursive(&d->page_alloc_lock);
+    rspin_unlock(&d->page_alloc_lock);
     return ret;
 }
 
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 142258f16a..9585406095 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -688,7 +688,7 @@ static int page_make_sharable(struct domain *d,
     int rc = 0;
     bool drop_dom_ref = false;
 
-    spin_lock_recursive(&d->page_alloc_lock);
+    rspin_lock(&d->page_alloc_lock);
 
     if ( d->is_dying )
     {
@@ -731,7 +731,7 @@ static int page_make_sharable(struct domain *d,
     }
 
 out:
-    spin_unlock_recursive(&d->page_alloc_lock);
+    rspin_unlock(&d->page_alloc_lock);
 
     if ( drop_dom_ref )
         put_domain(d);
@@ -1942,7 +1942,7 @@ int mem_sharing_fork_reset(struct domain *d, bool reset_state,
         goto state;
 
     /* need recursive lock because we will free pages */
-    spin_lock_recursive(&d->page_alloc_lock);
+    rspin_lock(&d->page_alloc_lock);
     page_list_for_each_safe(page, tmp, &d->page_list)
     {
         shr_handle_t sh;
@@ -1971,7 +1971,7 @@ int mem_sharing_fork_reset(struct domain *d, bool reset_state,
         put_page_alloc_ref(page);
         put_page_and_type(page);
     }
-    spin_unlock_recursive(&d->page_alloc_lock);
+    rspin_unlock(&d->page_alloc_lock);
 
  state:
     if ( reset_state )
diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
index b05cad1752..c867ad7d53 100644
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -79,7 +79,7 @@ static inline void _mm_lock(const struct domain *d, mm_lock_t *l,
 {
     if ( !((mm_locked_by_me(l)) && rec) )
         _check_lock_level(d, level);
-    spin_lock_recursive(&l->lock);
+    rspin_lock(&l->lock);
     if ( l->lock.recurse_cnt == 1 )
     {
         l->locker_function = func;
@@ -200,7 +200,7 @@ static inline void mm_unlock(mm_lock_t *l)
         l->locker_function = "nobody";
         _set_lock_level(l->unlock_level);
     }
-    spin_unlock_recursive(&l->lock);
+    rspin_unlock(&l->lock);
 }
 
 static inline void mm_enforce_order_unlock(int unlock_level,
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 652c18a9b5..1257a3d972 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -329,7 +329,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
     unsigned int id;
     bool found = false;
 
-    spin_lock_recursive(&d->ioreq_server.lock);
+    rspin_lock(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
@@ -340,7 +340,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
         }
     }
 
-    spin_unlock_recursive(&d->ioreq_server.lock);
+    rspin_unlock(&d->ioreq_server.lock);
 
     return found;
 }
@@ -658,7 +658,7 @@ static int ioreq_server_create(struct domain *d, int bufioreq_handling,
         return -ENOMEM;
 
     domain_pause(d);
-    spin_lock_recursive(&d->ioreq_server.lock);
+    rspin_lock(&d->ioreq_server.lock);
 
     for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
     {
@@ -686,13 +686,13 @@ static int ioreq_server_create(struct domain *d, int bufioreq_handling,
     if ( id )
         *id = i;
 
-    spin_unlock_recursive(&d->ioreq_server.lock);
+    rspin_unlock(&d->ioreq_server.lock);
     domain_unpause(d);
 
     return 0;
 
  fail:
-    spin_unlock_recursive(&d->ioreq_server.lock);
+    rspin_unlock(&d->ioreq_server.lock);
     domain_unpause(d);
 
     xfree(s);
@@ -704,7 +704,7 @@ static int ioreq_server_destroy(struct domain *d, ioservid_t id)
     struct ioreq_server *s;
     int rc;
 
-    spin_lock_recursive(&d->ioreq_server.lock);
+    rspin_lock(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -736,7 +736,7 @@ static int ioreq_server_destroy(struct domain *d, ioservid_t id)
     rc = 0;
 
  out:
-    spin_unlock_recursive(&d->ioreq_server.lock);
+    rspin_unlock(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -749,7 +749,7 @@ static int ioreq_server_get_info(struct domain *d, ioservid_t id,
     struct ioreq_server *s;
     int rc;
 
-    spin_lock_recursive(&d->ioreq_server.lock);
+    rspin_lock(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -783,7 +783,7 @@ static int ioreq_server_get_info(struct domain *d, ioservid_t id,
     rc = 0;
 
  out:
-    spin_unlock_recursive(&d->ioreq_server.lock);
+    rspin_unlock(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -796,7 +796,7 @@ int ioreq_server_get_frame(struct domain *d, ioservid_t id,
 
     ASSERT(is_hvm_domain(d));
 
-    spin_lock_recursive(&d->ioreq_server.lock);
+    rspin_lock(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -834,7 +834,7 @@ int ioreq_server_get_frame(struct domain *d, ioservid_t id,
     }
 
  out:
-    spin_unlock_recursive(&d->ioreq_server.lock);
+    rspin_unlock(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -850,7 +850,7 @@ static int ioreq_server_map_io_range(struct domain *d, ioservid_t id,
     if ( start > end )
         return -EINVAL;
 
-    spin_lock_recursive(&d->ioreq_server.lock);
+    rspin_lock(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -886,7 +886,7 @@ static int ioreq_server_map_io_range(struct domain *d, ioservid_t id,
     rc = rangeset_add_range(r, start, end);
 
  out:
-    spin_unlock_recursive(&d->ioreq_server.lock);
+    rspin_unlock(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -902,7 +902,7 @@ static int ioreq_server_unmap_io_range(struct domain *d, ioservid_t id,
     if ( start > end )
         return -EINVAL;
 
-    spin_lock_recursive(&d->ioreq_server.lock);
+    rspin_lock(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -938,7 +938,7 @@ static int ioreq_server_unmap_io_range(struct domain *d, ioservid_t id,
     rc = rangeset_remove_range(r, start, end);
 
  out:
-    spin_unlock_recursive(&d->ioreq_server.lock);
+    rspin_unlock(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -963,7 +963,7 @@ int ioreq_server_map_mem_type(struct domain *d, ioservid_t id,
     if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
         return -EINVAL;
 
-    spin_lock_recursive(&d->ioreq_server.lock);
+    rspin_lock(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -978,7 +978,7 @@ int ioreq_server_map_mem_type(struct domain *d, ioservid_t id,
     rc = arch_ioreq_server_map_mem_type(d, s, flags);
 
  out:
-    spin_unlock_recursive(&d->ioreq_server.lock);
+    rspin_unlock(&d->ioreq_server.lock);
 
     if ( rc == 0 )
         arch_ioreq_server_map_mem_type_completed(d, s, flags);
@@ -992,7 +992,7 @@ static int ioreq_server_set_state(struct domain *d, ioservid_t id,
     struct ioreq_server *s;
     int rc;
 
-    spin_lock_recursive(&d->ioreq_server.lock);
+    rspin_lock(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -1016,7 +1016,7 @@ static int ioreq_server_set_state(struct domain *d, ioservid_t id,
     rc = 0;
 
  out:
-    spin_unlock_recursive(&d->ioreq_server.lock);
+    rspin_unlock(&d->ioreq_server.lock);
     return rc;
 }
 
@@ -1026,7 +1026,7 @@ int ioreq_server_add_vcpu_all(struct domain *d, struct vcpu *v)
     unsigned int id;
     int rc;
 
-    spin_lock_recursive(&d->ioreq_server.lock);
+    rspin_lock(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
@@ -1035,7 +1035,7 @@ int ioreq_server_add_vcpu_all(struct domain *d, struct vcpu *v)
             goto fail;
     }
 
-    spin_unlock_recursive(&d->ioreq_server.lock);
+    rspin_unlock(&d->ioreq_server.lock);
 
     return 0;
 
@@ -1050,7 +1050,7 @@ int ioreq_server_add_vcpu_all(struct domain *d, struct vcpu *v)
         ioreq_server_remove_vcpu(s, v);
     }
 
-    spin_unlock_recursive(&d->ioreq_server.lock);
+    rspin_unlock(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -1060,12 +1060,12 @@ void ioreq_server_remove_vcpu_all(struct domain *d, struct vcpu *v)
     struct ioreq_server *s;
     unsigned int id;
 
-    spin_lock_recursive(&d->ioreq_server.lock);
+    rspin_lock(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
         ioreq_server_remove_vcpu(s, v);
 
-    spin_unlock_recursive(&d->ioreq_server.lock);
+    rspin_unlock(&d->ioreq_server.lock);
 }
 
 void ioreq_server_destroy_all(struct domain *d)
@@ -1076,7 +1076,7 @@ void ioreq_server_destroy_all(struct domain *d)
     if ( !arch_ioreq_server_destroy_all(d) )
         return;
 
-    spin_lock_recursive(&d->ioreq_server.lock);
+    rspin_lock(&d->ioreq_server.lock);
 
     /* No need to domain_pause() as the domain is being torn down */
 
@@ -1094,7 +1094,7 @@ void ioreq_server_destroy_all(struct domain *d)
         xfree(s);
     }
 
-    spin_unlock_recursive(&d->ioreq_server.lock);
+    rspin_unlock(&d->ioreq_server.lock);
 }
 
 struct ioreq_server *ioreq_server_select(struct domain *d,
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 9b5df74fdd..8c6a3d9274 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2497,7 +2497,7 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
     if ( unlikely(is_xen_heap_page(pg)) )
     {
         /* NB. May recursively lock from relinquish_memory(). */
-        spin_lock_recursive(&d->page_alloc_lock);
+        rspin_lock(&d->page_alloc_lock);
 
         for ( i = 0; i < (1 << order); i++ )
             arch_free_heap_page(d, &pg[i]);
@@ -2505,7 +2505,7 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
         d->xenheap_pages -= 1 << order;
         drop_dom_ref = (d->xenheap_pages == 0);
 
-        spin_unlock_recursive(&d->page_alloc_lock);
+        rspin_unlock(&d->page_alloc_lock);
     }
     else
     {
@@ -2514,7 +2514,7 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
         if ( likely(d) && likely(d != dom_cow) )
         {
             /* NB. May recursively lock from relinquish_memory(). */
-            spin_lock_recursive(&d->page_alloc_lock);
+            rspin_lock(&d->page_alloc_lock);
 
             for ( i = 0; i < (1 << order); i++ )
             {
@@ -2537,7 +2537,7 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
 
             drop_dom_ref = !domain_adjust_tot_pages(d, -(1 << order));
 
-            spin_unlock_recursive(&d->page_alloc_lock);
+            rspin_unlock(&d->page_alloc_lock);
 
             /*
              * Normally we expect a domain to clear pages before freeing them,
@@ -2753,7 +2753,7 @@ void free_domstatic_page(struct page_info *page)
     ASSERT_ALLOC_CONTEXT();
 
     /* NB. May recursively lock from relinquish_memory(). */
-    spin_lock_recursive(&d->page_alloc_lock);
+    rspin_lock(&d->page_alloc_lock);
 
     arch_free_heap_page(d, page);
 
@@ -2764,7 +2764,7 @@ void free_domstatic_page(struct page_info *page)
     /* Add page on the resv_page_list *after* it has been freed. */
     page_list_add_tail(page, &d->resv_page_list);
 
-    spin_unlock_recursive(&d->page_alloc_lock);
+    rspin_unlock(&d->page_alloc_lock);
 
     if ( drop_dom_ref )
         put_domain(d);
diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index ce18fbdd8a..26c667d3cc 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -436,7 +436,7 @@ void _spin_barrier(spinlock_t *lock)
     smp_mb();
 }
 
-int _spin_trylock_recursive(spinlock_t *lock)
+int rspin_trylock(rspinlock_t *lock)
 {
     unsigned int cpu = smp_processor_id();
 
@@ -460,7 +460,7 @@ int _spin_trylock_recursive(spinlock_t *lock)
     return 1;
 }
 
-void _spin_lock_recursive(spinlock_t *lock)
+void rspin_lock(rspinlock_t *lock)
 {
     unsigned int cpu = smp_processor_id();
 
@@ -475,7 +475,7 @@ void _spin_lock_recursive(spinlock_t *lock)
     lock->recurse_cnt++;
 }
 
-void _spin_unlock_recursive(spinlock_t *lock)
+void rspin_unlock(rspinlock_t *lock)
 {
     if ( likely(--lock->recurse_cnt == 0) )
     {
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 8b161488f6..369b2f9077 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -920,7 +920,7 @@ static void vprintk_common(const char *prefix, const char *fmt, va_list args)
 
     /* console_lock can be acquired recursively from __printk_ratelimit(). */
     local_irq_save(flags);
-    spin_lock_recursive(&console_lock);
+    rspin_lock(&console_lock);
     state = &this_cpu(state);
 
     (void)vsnprintf(buf, sizeof(buf), fmt, args);
@@ -956,7 +956,7 @@ static void vprintk_common(const char *prefix, const char *fmt, va_list args)
         state->continued = 1;
     }
 
-    spin_unlock_recursive(&console_lock);
+    rspin_unlock(&console_lock);
     local_irq_restore(flags);
 }
 
@@ -1163,14 +1163,14 @@ unsigned long console_lock_recursive_irqsave(void)
     unsigned long flags;
 
     local_irq_save(flags);
-    spin_lock_recursive(&console_lock);
+    rspin_lock(&console_lock);
 
     return flags;
 }
 
 void console_unlock_recursive_irqrestore(unsigned long flags)
 {
-    spin_unlock_recursive(&console_lock);
+    rspin_unlock(&console_lock);
     local_irq_restore(flags);
 }
 
@@ -1231,12 +1231,12 @@ int __printk_ratelimit(int ratelimit_ms, int ratelimit_burst)
             char lost_str[8];
             snprintf(lost_str, sizeof(lost_str), "%d", lost);
             /* console_lock may already be acquired by printk(). */
-            spin_lock_recursive(&console_lock);
+            rspin_lock(&console_lock);
             printk_start_of_line("(XEN) ");
             __putstr("printk: ");
             __putstr(lost_str);
             __putstr(" messages suppressed.\n");
-            spin_unlock_recursive(&console_lock);
+            rspin_unlock(&console_lock);
         }
         local_irq_restore(flags);
         return 1;
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 61be34e75f..22342f07ac 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -54,12 +54,12 @@ static DEFINE_RSPINLOCK(_pcidevs_lock);
 
 void pcidevs_lock(void)
 {
-    spin_lock_recursive(&_pcidevs_lock);
+    rspin_lock(&_pcidevs_lock);
 }
 
 void pcidevs_unlock(void)
 {
-    spin_unlock_recursive(&_pcidevs_lock);
+    rspin_unlock(&_pcidevs_lock);
 }
 
 bool pcidevs_locked(void)
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index c6604aef78..8cf751ad0c 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -358,8 +358,8 @@ struct sched_unit {
          (v) = (v)->next_in_list )
 
 /* Per-domain lock can be recursively acquired in fault handlers. */
-#define domain_lock(d) spin_lock_recursive(&(d)->domain_lock)
-#define domain_unlock(d) spin_unlock_recursive(&(d)->domain_lock)
+#define domain_lock(d) rspin_lock(&(d)->domain_lock)
+#define domain_unlock(d) rspin_unlock(&(d)->domain_lock)
 
 struct evtchn_port_ops;
 
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index 19561d5e61..c99ee52458 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -209,9 +209,16 @@ int _spin_is_locked(const spinlock_t *lock);
 int _spin_trylock(spinlock_t *lock);
 void _spin_barrier(spinlock_t *lock);
 
-int _spin_trylock_recursive(spinlock_t *lock);
-void _spin_lock_recursive(spinlock_t *lock);
-void _spin_unlock_recursive(spinlock_t *lock);
+/*
+ * rspin_[un]lock(): Use these forms when the lock can (safely!) be
+ * reentered recursively on the same CPU. All critical regions that may form
+ * part of a recursively-nested set must be protected by these forms. If there
+ * are any critical regions that cannot form part of such a set, they can use
+ * standard spin_[un]lock().
+ */
+int rspin_trylock(rspinlock_t *lock);
+void rspin_lock(rspinlock_t *lock);
+void rspin_unlock(rspinlock_t *lock);
 
 #define spin_lock(l)                  _spin_lock(l)
 #define spin_lock_cb(l, c, d)         _spin_lock_cb(l, c, d)
@@ -241,15 +248,4 @@ void _spin_unlock_recursive(spinlock_t *lock);
 /* Ensure a lock is quiescent between two critical operations. */
 #define spin_barrier(l)               _spin_barrier(l)
 
-/*
- * spin_[un]lock_recursive(): Use these forms when the lock can (safely!) be
- * reentered recursively on the same CPU. All critical regions that may form
- * part of a recursively-nested set must be protected by these forms. If there
- * are any critical regions that cannot form part of such a set, they can use
- * standard spin_[un]lock().
- */
-#define spin_trylock_recursive(l) _spin_trylock_recursive(l)
-#define spin_lock_recursive(l)    _spin_lock_recursive(l)
-#define spin_unlock_recursive(l)  _spin_unlock_recursive(l)
-
 #endif /* __SPINLOCK_H__ */
-- 
2.35.3
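
Usage note (not part of the patch): a minimal sketch of declaring and
taking a recursive lock with the renamed functions, modelled on the
_pcidevs_lock handling in pci.c above; the lock and function names here
are purely illustrative.

    #include <xen/spinlock.h>

    static DEFINE_RSPINLOCK(example_lock);   /* illustrative name only */

    void example_enter(void)
    {
        /* Safe to take again on the same CPU while already held. */
        rspin_lock(&example_lock);

        /* ... critical region that may recurse back into example_enter() ... */

        rspin_unlock(&example_lock);
    }

    int example_try_enter(void)
    {
        /* rspin_trylock() returns non-zero only if the lock was acquired. */
        if ( !rspin_trylock(&example_lock) )
            return 0;

        rspin_unlock(&example_lock);
        return 1;
    }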