From nobody Sat Dec 13 07:21:45 2025
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: jens.wiklander@linaro.org, Volodymyr Babchuk, Stefano Stabellini,
 Julien Grall, Michal Orzel
Subject: [PATCH v1 05/12] xen/arm: ffa: rework SPMC RX/TX buffer management
Date: Fri, 5 Dec 2025 11:36:38 +0100
Message-ID: <491f62ede43a7a135327fa68afe9a648fde1dcba.1764930353.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.51.2
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Rework how Xen accesses the RX/TX buffers shared with the SPMC so that
ownership and locking are handled centrally.

Move the SPMC RX/TX buffer bases into ffa_rxtx.c as ffa_spmc_rx/ffa_spmc_tx,
protect them with dedicated ffa_spmc_{rx,tx}_lock spinlocks and expose
ffa_rxtx_spmc_{rx,tx}_{acquire,release}() helpers instead of the global
ffa_rx/ffa_tx pointers and ffa_{rx,tx}_buffer_lock.

The RX helpers now always issue FFA_RX_RELEASE when we are done consuming
data from the SPMC, so the partition-info enumeration and shared-memory
paths release the RX buffer on all exit paths.

The RX/TX mapping code is updated to use the descriptor offsets
(rx_region_offs and tx_region_offs) rather than the hard-coded structure
layout, and to use the TX acquire/release helpers instead of touching the
TX buffer directly.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in v1:
- modify share_shm function to use a goto and have one place to release
  the spmc tx buffer instead of doing it directly in the if error
  condition.
- fix rx_acquire and tx_acquire to not release the spinlock as this is
  expected to be done only in release to ensure no parallel usage.
---
 xen/arch/arm/tee/ffa.c          |  22 +----
 xen/arch/arm/tee/ffa_partinfo.c |  40 +++++-----
 xen/arch/arm/tee/ffa_private.h  |  18 ++---
 xen/arch/arm/tee/ffa_rxtx.c     | 130 +++++++++++++++++++++++++-------
 xen/arch/arm/tee/ffa_shm.c      |  29 ++++---
 5 files changed, 153 insertions(+), 86 deletions(-)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index 497ada8264e0..43af49d1c011 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -48,8 +48,8 @@
  * notification for secure partitions
  * - doesn't support notifications for Xen itself
  *
- * There are some large locked sections with ffa_tx_buffer_lock and
- * ffa_rx_buffer_lock. Especially the ffa_tx_buffer_lock spinlock used
+ * There are some large locked sections with ffa_spmc_tx_lock and
+ * ffa_spmc_rx_lock. Especially the ffa_spmc_tx_lock spinlock used
  * around share_shm() is a very large locked section which can let one VM
  * affect another VM.
  */
@@ -108,20 +108,6 @@ static const struct ffa_fw_abi ffa_fw_abi_needed[] = {
     FW_ABI(FFA_RUN),
 };
 
-/*
- * Our rx/tx buffers shared with the SPMC. FFA_RXTX_PAGE_COUNT is the
- * number of pages used in each of these buffers.
- *
- * The RX buffer is protected from concurrent usage with ffa_rx_buffer_lock.
- * Note that the SPMC is also tracking the ownership of our RX buffer so
- * for calls which uses our RX buffer to deliver a result we must call
- * ffa_rx_release() to let the SPMC know that we're done with the buffer.
- */
-void *ffa_rx __read_mostly;
-void *ffa_tx __read_mostly;
-DEFINE_SPINLOCK(ffa_rx_buffer_lock);
-DEFINE_SPINLOCK(ffa_tx_buffer_lock);
-
 LIST_HEAD(ffa_ctx_head);
 /* RW Lock to protect addition/removal and reading in ffa_ctx_head */
 DEFINE_RWLOCK(ffa_ctx_list_rwlock);
@@ -617,7 +603,7 @@ static bool ffa_probe_fw(void)
                ffa_fw_abi_needed[i].name);
     }
 
-    if ( !ffa_rxtx_init() )
+    if ( !ffa_rxtx_spmc_init() )
     {
         printk(XENLOG_ERR "ffa: Error during RXTX buffer init\n");
         goto err_no_fw;
@@ -631,7 +617,7 @@ static bool ffa_probe_fw(void)
     return true;
 
 err_rxtx_destroy:
-    ffa_rxtx_destroy();
+    ffa_rxtx_spmc_destroy();
 err_no_fw:
     ffa_fw_version = 0;
     bitmap_zero(ffa_fw_abi_supported, FFA_ABI_BITMAP_SIZE);
diff --git a/xen/arch/arm/tee/ffa_partinfo.c b/xen/arch/arm/tee/ffa_partinfo.c
index ec5a53ed1cab..145b869957b0 100644
--- a/xen/arch/arm/tee/ffa_partinfo.c
+++ b/xen/arch/arm/tee/ffa_partinfo.c
@@ -77,28 +77,24 @@ static int32_t ffa_get_sp_partinfo(uint32_t *uuid, uint32_t *sp_count,
 {
     int32_t ret;
     uint32_t src_size, real_sp_count;
-    void *src_buf = ffa_rx;
+    void *src_buf;
     uint32_t count = 0;
 
-    /* Do we have a RX buffer with the SPMC */
-    if ( !ffa_rx )
-        return FFA_RET_DENIED;
-
     /* We need to use the RX buffer to receive the list */
-    spin_lock(&ffa_rx_buffer_lock);
+    src_buf = ffa_rxtx_spmc_rx_acquire();
+    if ( !src_buf )
+        return FFA_RET_DENIED;
 
     ret = ffa_partition_info_get(uuid, 0, &real_sp_count, &src_size);
     if ( ret )
         goto out;
 
-    /* We now own the RX buffer */
-
     /* Validate the src_size we got */
     if ( src_size < sizeof(struct ffa_partition_info_1_0) ||
          src_size >= FFA_PAGE_SIZE )
     {
         ret = FFA_RET_NOT_SUPPORTED;
-        goto out_release;
+        goto out;
     }
 
     /*
@@ -114,7 +110,7 @@ static int32_t ffa_get_sp_partinfo(uint32_t *uuid, uint32_t *sp_count,
     if ( real_sp_count > (FFA_RXTX_PAGE_COUNT * FFA_PAGE_SIZE) / src_size )
     {
         ret = FFA_RET_NOT_SUPPORTED;
-        goto out_release;
+        goto out;
     }
 
     for ( uint32_t sp_num = 0; sp_num < real_sp_count; sp_num++ )
@@ -127,7 +123,7 @@ static int32_t ffa_get_sp_partinfo(uint32_t *uuid, uint32_t *sp_count,
         if ( dst_buf > (end_buf - dst_size) )
         {
             ret = FFA_RET_NO_MEMORY;
-            goto out_release;
+            goto out;
         }
 
         memcpy(dst_buf, src_buf, MIN(src_size, dst_size));
@@ -143,10 +139,8 @@ static int32_t ffa_get_sp_partinfo(uint32_t *uuid, uint32_t *sp_count,
 
     *sp_count = count;
 
-out_release:
-    ffa_hyp_rx_release();
 out:
-    spin_unlock(&ffa_rx_buffer_lock);
+    ffa_rxtx_spmc_rx_release();
     return ret;
 }
 
@@ -378,7 +372,7 @@ static void uninit_subscribers(void)
         XFREE(subscr_vm_destroyed);
 }
 
-static bool init_subscribers(uint16_t count, uint32_t fpi_size)
+static bool init_subscribers(void *buf, uint16_t count, uint32_t fpi_size)
 {
     uint16_t n;
     uint16_t c_pos;
@@ -395,7 +389,7 @@ static bool init_subscribers(uint16_t count, uint32_t fpi_size)
     subscr_vm_destroyed_count = 0;
     for ( n = 0; n < count; n++ )
     {
-        fpi = ffa_rx + n * fpi_size;
+        fpi = buf + n * fpi_size;
 
         /*
          * We need to have secure partitions using bit 15 set convention for
@@ -433,7 +427,7 @@ static bool init_subscribers(uint16_t count, uint32_t fpi_size)
 
     for ( c_pos = 0, d_pos = 0, n = 0; n < count; n++ )
     {
-        fpi = ffa_rx + n * fpi_size;
+        fpi = buf + n * fpi_size;
 
         if ( FFA_ID_IS_SECURE(fpi->id) )
         {
@@ -455,10 +449,14 @@ bool ffa_partinfo_init(void)
     uint32_t fpi_size;
     uint32_t count;
     int e;
+    void *spmc_rx;
 
     if ( !ffa_fw_supports_fid(FFA_PARTITION_INFO_GET) ||
-         !ffa_fw_supports_fid(FFA_MSG_SEND_DIRECT_REQ_32) ||
-         !ffa_rx || !ffa_tx )
+         !ffa_fw_supports_fid(FFA_MSG_SEND_DIRECT_REQ_32) )
+        return false;
+
+    spmc_rx = ffa_rxtx_spmc_rx_acquire();
+    if ( !spmc_rx )
         return false;
 
     e = ffa_partition_info_get(NULL, 0, &count, &fpi_size);
@@ -475,10 +473,10 @@ bool ffa_partinfo_init(void)
         goto out;
     }
 
-    ret = init_subscribers(count, fpi_size);
+    ret = init_subscribers(spmc_rx, count, fpi_size);
 
 out:
-    ffa_hyp_rx_release();
+    ffa_rxtx_spmc_rx_release();
     return ret;
 }
 
diff --git a/xen/arch/arm/tee/ffa_private.h b/xen/arch/arm/tee/ffa_private.h
index d6400efd50bb..8797a62abd01 100644
--- a/xen/arch/arm/tee/ffa_private.h
+++ b/xen/arch/arm/tee/ffa_private.h
@@ -415,10 +415,6 @@ struct ffa_ctx {
     unsigned long *vm_destroy_bitmap;
 };
 
-extern void *ffa_rx;
-extern void *ffa_tx;
-extern spinlock_t ffa_rx_buffer_lock;
-extern spinlock_t ffa_tx_buffer_lock;
 extern DECLARE_BITMAP(ffa_fw_abi_supported, FFA_ABI_BITMAP_SIZE);
 
 extern struct list_head ffa_ctx_head;
@@ -436,8 +432,13 @@ int ffa_partinfo_domain_init(struct domain *d);
 bool ffa_partinfo_domain_destroy(struct domain *d);
 void ffa_handle_partition_info_get(struct cpu_user_regs *regs);
 
-bool ffa_rxtx_init(void);
-void ffa_rxtx_destroy(void);
+bool ffa_rxtx_spmc_init(void);
+void ffa_rxtx_spmc_destroy(void);
+void *ffa_rxtx_spmc_rx_acquire(void);
+void ffa_rxtx_spmc_rx_release(void);
+void *ffa_rxtx_spmc_tx_acquire(void);
+void ffa_rxtx_spmc_tx_release(void);
+
 int32_t ffa_rxtx_domain_init(struct domain *d);
 void ffa_rxtx_domain_destroy(struct domain *d);
 int32_t ffa_handle_rxtx_map(uint32_t fid, register_t tx_addr,
@@ -567,11 +568,6 @@ static inline int32_t ffa_simple_call(uint32_t fid, register_t a1,
     return ffa_get_ret_code(&resp);
 }
 
-static inline int32_t ffa_hyp_rx_release(void)
-{
-    return ffa_simple_call(FFA_RX_RELEASE, 0, 0, 0, 0);
-}
-
 static inline bool ffa_fw_supports_fid(uint32_t fid)
 {
     BUILD_BUG_ON(FFA_FNUM_MIN_VALUE > FFA_FNUM_MAX_VALUE);
diff --git a/xen/arch/arm/tee/ffa_rxtx.c b/xen/arch/arm/tee/ffa_rxtx.c
index 5776693bb3f0..e325eae07bda 100644
--- a/xen/arch/arm/tee/ffa_rxtx.c
+++ b/xen/arch/arm/tee/ffa_rxtx.c
@@ -30,6 +30,20 @@ struct ffa_endpoint_rxtx_descriptor_1_1 {
     uint32_t tx_region_offs;
 };
 
+/*
+ * Our rx/tx buffers shared with the SPMC. FFA_RXTX_PAGE_COUNT is the
+ * number of pages used in each of these buffers.
+ * Each buffer has its own lock to protect it from concurrent usage.
+ *
+ * Note that the SPMC is also tracking the ownership of our RX buffer so
+ * for calls which use our RX buffer to deliver a result we must do an
+ * FFA_RX_RELEASE to let the SPMC know that we're done with the buffer.
+ */
+static void *ffa_spmc_rx __read_mostly;
+static void *ffa_spmc_tx __read_mostly;
+static DEFINE_SPINLOCK(ffa_spmc_rx_lock);
+static DEFINE_SPINLOCK(ffa_spmc_tx_lock);
+
 static int32_t ffa_rxtx_map(paddr_t tx_addr, paddr_t rx_addr,
                             uint32_t page_count)
 {
@@ -126,8 +140,9 @@ int32_t ffa_handle_rxtx_map(uint32_t fid, register_t tx_addr,
                  sizeof(struct ffa_address_range) * 2 >
                  FFA_MAX_RXTX_PAGE_COUNT * FFA_PAGE_SIZE);
 
-    spin_lock(&ffa_tx_buffer_lock);
-    rxtx_desc = ffa_tx;
+    rxtx_desc = ffa_rxtx_spmc_tx_acquire();
+    if ( !rxtx_desc )
+        goto err_unmap_rx;
 
     /*
      * We have only one page for each so we pack everything:
@@ -144,7 +159,7 @@ int32_t ffa_handle_rxtx_map(uint32_t fid, register_t tx_addr,
                                       address_range_array[1]);
 
     /* rx buffer */
-    mem_reg = ffa_tx + sizeof(*rxtx_desc);
+    mem_reg = (void *)rxtx_desc + rxtx_desc->rx_region_offs;
     mem_reg->total_page_count = 1;
     mem_reg->address_range_count = 1;
     mem_reg->reserved = 0;
@@ -154,7 +169,7 @@ int32_t ffa_handle_rxtx_map(uint32_t fid, register_t tx_addr,
     mem_reg->address_range_array[0].reserved = 0;
 
     /* tx buffer */
-    mem_reg = ffa_tx + rxtx_desc->tx_region_offs;
+    mem_reg = (void *)rxtx_desc + rxtx_desc->tx_region_offs;
     mem_reg->total_page_count = 1;
     mem_reg->address_range_count = 1;
     mem_reg->reserved = 0;
@@ -165,7 +180,7 @@ int32_t ffa_handle_rxtx_map(uint32_t fid, register_t tx_addr,
 
     ret = ffa_rxtx_map(0, 0, 0);
 
-    spin_unlock(&ffa_tx_buffer_lock);
+    ffa_rxtx_spmc_tx_release();
 
     if ( ret != FFA_RET_OK )
         goto err_unmap_rx;
@@ -319,49 +334,112 @@ void ffa_rxtx_domain_destroy(struct domain *d)
     rxtx_unmap(d);
 }
 
-void ffa_rxtx_destroy(void)
+void *ffa_rxtx_spmc_rx_acquire(void)
+{
+    spin_lock(&ffa_spmc_rx_lock);
+
+    if ( ffa_spmc_rx )
+        return ffa_spmc_rx;
+
+    /* No buffer handed out: drop the lock as the caller won't release */
+    spin_unlock(&ffa_spmc_rx_lock);
+
+    return NULL;
+}
+
+void ffa_rxtx_spmc_rx_release(void)
+{
+    int32_t ret;
+
+    ASSERT(spin_is_locked(&ffa_spmc_rx_lock));
+
+    /* Inform the SPMC that we are done with our RX buffer */
+    ret = ffa_simple_call(FFA_RX_RELEASE, 0, 0, 0, 0);
+    if ( ret != FFA_RET_OK )
+        printk(XENLOG_DEBUG "Error releasing SPMC RX buffer: %d\n", ret);
+
+    spin_unlock(&ffa_spmc_rx_lock);
+}
+
+void *ffa_rxtx_spmc_tx_acquire(void)
 {
-    bool need_unmap = ffa_tx && ffa_rx;
+    spin_lock(&ffa_spmc_tx_lock);
 
-    if ( ffa_tx )
+    if ( ffa_spmc_tx )
+        return ffa_spmc_tx;
+
+    /* No buffer handed out: drop the lock as the caller won't release */
+    spin_unlock(&ffa_spmc_tx_lock);
+
+    return NULL;
+}
+
+void ffa_rxtx_spmc_tx_release(void)
+{
+    ASSERT(spin_is_locked(&ffa_spmc_tx_lock));
+
+    spin_unlock(&ffa_spmc_tx_lock);
+}
+
+void ffa_rxtx_spmc_destroy(void)
+{
+    bool need_unmap;
+
+    spin_lock(&ffa_spmc_rx_lock);
+    spin_lock(&ffa_spmc_tx_lock);
+    need_unmap = ffa_spmc_tx && ffa_spmc_rx;
+
+    if ( ffa_spmc_tx )
     {
-        free_xenheap_pages(ffa_tx, 0);
-        ffa_tx = NULL;
+        free_xenheap_pages(ffa_spmc_tx, 0);
+        ffa_spmc_tx = NULL;
     }
-    if ( ffa_rx )
+    if ( ffa_spmc_rx )
     {
-        free_xenheap_pages(ffa_rx, 0);
-        ffa_rx = NULL;
+        free_xenheap_pages(ffa_spmc_rx, 0);
+        ffa_spmc_rx = NULL;
     }
 
     if ( need_unmap )
         ffa_rxtx_unmap(0);
+
+    spin_unlock(&ffa_spmc_tx_lock);
+    spin_unlock(&ffa_spmc_rx_lock);
 }
 
-bool ffa_rxtx_init(void)
+bool ffa_rxtx_spmc_init(void)
 {
     int32_t e;
+    bool ret = false;
 
     /* Firmware not there or not supporting */
    if ( !ffa_fw_supports_fid(FFA_RXTX_MAP_64) )
         return false;
 
-    ffa_rx = alloc_xenheap_pages(get_order_from_pages(FFA_RXTX_PAGE_COUNT), 0);
-    if ( !ffa_rx )
-        return false;
+    spin_lock(&ffa_spmc_rx_lock);
+    spin_lock(&ffa_spmc_tx_lock);
+
+    ffa_spmc_rx = alloc_xenheap_pages(
+        get_order_from_pages(FFA_RXTX_PAGE_COUNT), 0);
+    if ( !ffa_spmc_rx )
+        goto exit;
 
-    ffa_tx = alloc_xenheap_pages(get_order_from_pages(FFA_RXTX_PAGE_COUNT), 0);
-    if ( !ffa_tx )
-        goto err;
+    ffa_spmc_tx = alloc_xenheap_pages(
+        get_order_from_pages(FFA_RXTX_PAGE_COUNT), 0);
+    if ( !ffa_spmc_tx )
+        goto exit;
 
-    e = ffa_rxtx_map(__pa(ffa_tx), __pa(ffa_rx), FFA_RXTX_PAGE_COUNT);
+    e = ffa_rxtx_map(__pa(ffa_spmc_tx), __pa(ffa_spmc_rx),
+                     FFA_RXTX_PAGE_COUNT);
     if ( e )
-        goto err;
+        goto exit;
 
-    return true;
+    ret = true;
 
-err:
-    ffa_rxtx_destroy();
+exit:
+    spin_unlock(&ffa_spmc_tx_lock);
+    spin_unlock(&ffa_spmc_rx_lock);
 
-    return false;
+    if ( !ret )
+        ffa_rxtx_spmc_destroy();
+
+    return ret;
 }
diff --git a/xen/arch/arm/tee/ffa_shm.c b/xen/arch/arm/tee/ffa_shm.c
index dad3da192247..e275d3769d9b 100644
--- a/xen/arch/arm/tee/ffa_shm.c
+++ b/xen/arch/arm/tee/ffa_shm.c
@@ -286,9 +286,8 @@ static void init_range(struct ffa_address_range *addr_range,
 }
 
 /*
- * This function uses the ffa_tx buffer to transmit the memory transaction
- * descriptor. The function depends ffa_tx_buffer_lock to be used to guard
- * the buffer from concurrent use.
+ * This function uses the ffa_spmc tx buffer to transmit the memory transaction
+ * descriptor.
  */
static int share_shm(struct ffa_shm_mem *shm)
 {
@@ -298,17 +297,22 @@ static int share_shm(struct ffa_shm_mem *shm)
     struct ffa_address_range *addr_range;
     struct ffa_mem_region *region_descr;
     const unsigned int region_count = 1;
-    void *buf = ffa_tx;
     uint32_t frag_len;
     uint32_t tot_len;
     paddr_t last_pa;
     unsigned int n;
     paddr_t pa;
+    int32_t ret;
+    void *buf;
 
-    ASSERT(spin_is_locked(&ffa_tx_buffer_lock));
     ASSERT(shm->page_count);
 
+    buf = ffa_rxtx_spmc_tx_acquire();
+    if ( !buf )
+        return FFA_RET_NOT_SUPPORTED;
+
     descr = buf;
     memset(descr, 0, sizeof(*descr));
     descr->sender_id = shm->sender_id;
     descr->handle = shm->handle;
@@ -340,7 +344,10 @@ static int share_shm(struct ffa_shm_mem *shm)
     tot_len = ADDR_RANGE_OFFSET(descr->mem_access_count, region_count,
                                 region_descr->address_range_count);
     if ( tot_len > max_frag_len )
-        return FFA_RET_NOT_SUPPORTED;
+    {
+        ret = FFA_RET_NOT_SUPPORTED;
+        goto out;
+    }
 
     addr_range = region_descr->address_range_array;
     frag_len = ADDR_RANGE_OFFSET(descr->mem_access_count, region_count, 1);
@@ -360,7 +367,12 @@ static int share_shm(struct ffa_shm_mem *shm)
         init_range(addr_range, pa);
     }
 
-    return ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle);
+    ret = ffa_mem_share(tot_len, frag_len, 0, 0, &shm->handle);
+
+out:
+    ffa_rxtx_spmc_tx_release();
+
+    return ret;
 }
 
 static int read_mem_transaction(uint32_t ffa_vers, const void *buf, size_t blen,
@@ -579,10 +591,7 @@ void ffa_handle_mem_share(struct cpu_user_regs *regs)
     if ( ret )
         goto out;
 
-    /* Note that share_shm() uses our tx buffer */
-    spin_lock(&ffa_tx_buffer_lock);
     ret = share_shm(shm);
-    spin_unlock(&ffa_tx_buffer_lock);
     if ( ret )
         goto out;
 
-- 
2.51.2