From nobody Thu Nov 21 21:47:16 2024
From: Bertrand Marquis
To: xen-devel@lists.xenproject.org
Cc: jens.wiklander@linaro.org, Volodymyr Babchuk, Stefano Stabellini,
    Julien Grall, Michal Orzel
Subject: [RFC PATCH 1/4] xen/arm: ffa: Introduce VM to VM support
Date: Wed, 16 Oct 2024 11:21:55 +0200
Message-ID: <0475e48ace0acd862224e7ff628d11db94392871.1729069025.git.bertrand.marquis@arm.com>

Create a CONFIG_FFA_VM_TO_VM parameter to activate FF-A communication
between VMs.

When it is activated, list the VMs in the system that have FF-A support
in part_info_get.

WARNING: There is no filtering for now, so all VMs are listed!
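
Note, for illustration only (not part of this patch): a minimal guest-side
sketch of how a caller could walk the entries returned by
FFA_PARTITION_INFO_GET once VMs are listed. The descriptor layout follows
the v1.0 structure used below; the bit-15 test for secure endpoint IDs and
the helper names are assumptions.

#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

/* v1.0 partition info descriptor, as filled in by this patch for VMs */
struct ffa_partition_info_1_0 {
    uint16_t id;                   /* endpoint ID (VM or SP) */
    uint16_t execution_context;    /* number of vCPUs / execution contexts */
    uint32_t partition_properties; /* direct/indirect messaging, notifications */
};

/* Assumption: secure endpoint IDs have bit 15 set (cf. FFA_ID_IS_SECURE) */
static int id_is_secure(uint16_t id)
{
    return (id & 0x8000U) != 0;
}

/*
 * Walk 'count' entries copied into the RX buffer; entry_size may be larger
 * than the v1.0 descriptor for v1.1 callers (UUID appended at the end).
 */
static void dump_partitions(const void *rx, uint32_t count, size_t entry_size)
{
    for ( uint32_t i = 0; i < count; i++ )
    {
        const struct ffa_partition_info_1_0 *pi =
            (const void *)((const char *)rx + i * entry_size);

        printf("%s id=%#x vcpus=%u props=%#x\n",
               id_is_secure(pi->id) ? "SP" : "VM",
               (unsigned int)pi->id,
               (unsigned int)pi->execution_context,
               (unsigned int)pi->partition_properties);
    }
}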
Signed-off-by: Bertrand Marquis
---
 xen/arch/arm/tee/Kconfig        |  11 +++
 xen/arch/arm/tee/ffa_partinfo.c | 135 ++++++++++++++++++++++++++------
 xen/arch/arm/tee/ffa_private.h  |  12 +++
 3 files changed, 135 insertions(+), 23 deletions(-)

diff --git a/xen/arch/arm/tee/Kconfig b/xen/arch/arm/tee/Kconfig
index c5b0f88d7522..88a4c4c99154 100644
--- a/xen/arch/arm/tee/Kconfig
+++ b/xen/arch/arm/tee/Kconfig
@@ -28,5 +28,16 @@ config FFA
 
       [1] https://developer.arm.com/documentation/den0077/latest
 
+config FFA_VM_TO_VM
+    bool "Enable FF-A between VMs (UNSUPPORTED)" if UNSUPPORTED
+    default n
+    depends on FFA
+    help
+      This option enables the use of FF-A between VMs.
+      This is experimental and there is no access control, so any
+      guest can communicate with any other guest.
+
+      If unsure, say N.
+
 endmenu
 
diff --git a/xen/arch/arm/tee/ffa_partinfo.c b/xen/arch/arm/tee/ffa_partinfo.c
index fde187dba4e5..d699a267cc76 100644
--- a/xen/arch/arm/tee/ffa_partinfo.c
+++ b/xen/arch/arm/tee/ffa_partinfo.c
@@ -77,7 +77,21 @@ void ffa_handle_partition_info_get(struct cpu_user_regs *regs)
     };
     uint32_t src_size, dst_size;
     void *dst_buf;
-    uint32_t ffa_sp_count = 0;
+    uint32_t ffa_vm_count = 0, ffa_sp_count = 0;
+#ifdef CONFIG_FFA_VM_TO_VM
+    struct domain *dom;
+
+    /* Count the number of VMs with FF-A support */
+    rcu_read_lock(&domlist_read_lock);
+    for_each_domain( dom )
+    {
+        struct ffa_ctx *vm = dom->arch.tee;
+
+        if ( dom != d && vm != NULL && vm->guest_vers != 0 )
+            ffa_vm_count++;
+    }
+    rcu_read_unlock(&domlist_read_lock);
+#endif
 
     /*
      * If the guest is v1.0, he does not get back the entry size so we must
@@ -127,33 +141,38 @@ void ffa_handle_partition_info_get(struct cpu_user_regs *regs)
 
     dst_buf = ctx->rx;
 
-    if ( !ffa_rx )
+    /* If not supported by the firmware, ffa_sp_count stays 0 */
+    if ( ffa_fw_supports_fid(FFA_PARTITION_INFO_GET) )
     {
-        ret = FFA_RET_DENIED;
-        goto out_rx_release;
-    }
+        if ( !ffa_rx )
+        {
+            ret = FFA_RET_DENIED;
+            goto out_rx_release;
+        }
 
-    spin_lock(&ffa_rx_buffer_lock);
+        spin_lock(&ffa_rx_buffer_lock);
 
-    ret = ffa_partition_info_get(uuid, 0, &ffa_sp_count, &src_size);
+        ret = ffa_partition_info_get(uuid, 0, &ffa_sp_count, &src_size);
 
-    if ( ret )
-        goto out_rx_hyp_unlock;
+        if ( ret )
+            goto out_rx_hyp_unlock;
 
-    /*
-     * ffa_partition_info_get() succeeded so we now own the RX buffer we
-     * share with the SPMC. We must give it back using ffa_hyp_rx_release()
-     * once we've copied the content.
-     */
+        /*
+         * ffa_partition_info_get() succeeded so we now own the RX buffer we
+         * share with the SPMC. We must give it back using ffa_hyp_rx_release()
+         * once we've copied the content.
+         */
 
-    /* we cannot have a size smaller than 1.0 structure */
-    if ( src_size < sizeof(struct ffa_partition_info_1_0) )
-    {
-        ret = FFA_RET_NOT_SUPPORTED;
-        goto out_rx_hyp_release;
+        /* we cannot have a size smaller than 1.0 structure */
+        if ( src_size < sizeof(struct ffa_partition_info_1_0) )
+        {
+            ret = FFA_RET_NOT_SUPPORTED;
+            goto out_rx_hyp_release;
+        }
     }
 
-    if ( ctx->page_count * FFA_PAGE_SIZE < ffa_sp_count * dst_size )
+    if ( ctx->page_count * FFA_PAGE_SIZE <
+         (ffa_sp_count + ffa_vm_count) * dst_size )
     {
         ret = FFA_RET_NO_MEMORY;
         goto out_rx_hyp_release;
@@ -185,18 +204,88 @@ void ffa_handle_partition_info_get(struct cpu_user_regs *regs)
         }
     }
 
+    if ( ffa_fw_supports_fid(FFA_PARTITION_INFO_GET) )
+    {
+        ffa_hyp_rx_release();
+        spin_unlock(&ffa_rx_buffer_lock);
+    }
+
+#ifdef CONFIG_FFA_VM_TO_VM
+    if ( ffa_vm_count )
+    {
+        uint32_t curr = 0;
+        /* Add the VM information, if any */
+        rcu_read_lock(&domlist_read_lock);
+        for_each_domain( dom )
+        {
+            struct ffa_ctx *vm = dom->arch.tee;
+
+            /*
+             * Do not add the calling VM to the list; only add VMs with
+             * FF-A support.
+             */
+            if ( dom != d && vm != NULL && vm->guest_vers != 0 )
+            {
+                /*
+                 * We do not have UUID info for VMs so use
+                 * the 1.0 structure and set the UUIDs to
+                 * zero using memset.
+                 */
+                struct ffa_partition_info_1_0 srcvm;
+
+                if ( curr == ffa_vm_count )
+                {
+                    /*
+                     * The number of domains changed since we counted them. We
+                     * can add new ones if there is enough space in the
+                     * destination buffer, so check it or bail out with an error.
+                     */
+                    ffa_vm_count++;
+                    if ( ctx->page_count * FFA_PAGE_SIZE <
+                         (ffa_sp_count + ffa_vm_count) * dst_size )
+                    {
+                        ret = FFA_RET_NO_MEMORY;
+                        rcu_read_unlock(&domlist_read_lock);
+                        goto out_rx_release;
+                    }
+                }
+
+                srcvm.id = ffa_get_vm_id(dom);
+                srcvm.execution_context = dom->max_vcpus;
+                srcvm.partition_properties = FFA_PART_VM_PROP;
+                if ( is_64bit_domain(dom) )
+                    srcvm.partition_properties |= FFA_PART_PROP_AARCH64_STATE;
+
+                memcpy(dst_buf, &srcvm, MIN(sizeof(srcvm), dst_size));
+
+                if ( dst_size > sizeof(srcvm) )
+                    memset(dst_buf + sizeof(srcvm), 0,
+                           dst_size - sizeof(srcvm));
+
+                dst_buf += dst_size;
+                curr++;
+            }
+        }
+        rcu_read_unlock(&domlist_read_lock);
+
+        /* The number of domains could have shrunk since the initial count */
+        ffa_vm_count = curr;
+    }
+#endif
+
+    goto out;
+
 out_rx_hyp_release:
     ffa_hyp_rx_release();
 out_rx_hyp_unlock:
     spin_unlock(&ffa_rx_buffer_lock);
 out_rx_release:
-    if ( ret != FFA_RET_OK )
-        ffa_rx_release(d);
+    ffa_rx_release(d);
 out:
     if ( ret )
         ffa_set_regs_error(regs, ret);
     else
-        ffa_set_regs_success(regs, ffa_sp_count, dst_size);
+        ffa_set_regs_success(regs, ffa_sp_count + ffa_vm_count, dst_size);
 }
 
 static int32_t ffa_direct_req_send_vm(uint16_t sp_id, uint16_t vm_id,
diff --git a/xen/arch/arm/tee/ffa_private.h b/xen/arch/arm/tee/ffa_private.h
index d441c0ca5598..47dd6b5fadea 100644
--- a/xen/arch/arm/tee/ffa_private.h
+++ b/xen/arch/arm/tee/ffa_private.h
@@ -187,6 +187,18 @@
  */
 #define FFA_PARTITION_INFO_GET_COUNT_FLAG BIT(0, U)
 
+/*
+ * Partition properties we give for a normal world VM:
+ * - can send direct messages but not receive them
+ * - can handle indirect messages
+ * - can receive notifications
+ * The 32/64 bit flag is set depending on the VM.
+ */
+#define FFA_PART_VM_PROP (FFA_PART_PROP_DIRECT_REQ_SEND | \
+                          FFA_PART_PROP_INDIRECT_MSGS | \
+                          FFA_PART_PROP_RECV_NOTIF | \
+                          FFA_PART_PROP_IS_PE_ID)
+
 /* Flags used in calls to FFA_NOTIFICATION_GET interface */
 #define FFA_NOTIF_FLAG_BITMAP_SP       BIT(0, U)
 #define FFA_NOTIF_FLAG_BITMAP_VM       BIT(1, U)
-- 
2.47.0

From nobody Thu Nov 21 21:47:16 2024
From: Bertrand Marquis
To: xen-devel@lists.xenproject.org
Cc: jens.wiklander@linaro.org, Volodymyr Babchuk, Stefano Stabellini,
    Julien Grall, Michal Orzel
Subject: [RFC PATCH 2/4] xen/arm: ffa: Add buffer full notification support
Date: Wed, 16 Oct 2024 11:21:56 +0200
Message-ID: <70a1fd32542901791fef0d528b0fb0fa94f8e814.1729069025.git.bertrand.marquis@arm.com>

Add support for raising an RX buffer full notification to a VM.

This function will be used for indirect message support between VMs and
is only activated if CONFIG_FFA_VM_TO_VM is selected.

Even if 32 framework notifications are possible, only one is defined
right now, so the implementation is simplified to handle only the buffer
full notification using a boolean. If other framework notifications have
to be supported one day, the design will have to be modified to handle
them properly.
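
Note, as an illustration only and not part of this patch: a sketch of how
a receiver might consume the new framework notification. The
FFA_NOTIF_RX_BUFFER_FULL value and the use of w7 mirror what this patch
implements on the Xen side; the flag bit requesting the hypervisor
framework bitmap and the driver wrappers are assumptions.

#include <stdint.h>

/* Bit 0 of the hypervisor framework bitmap, as defined by this patch */
#define FFA_NOTIF_RX_BUFFER_FULL   (1U << 0)
/* Assumption: flag asking FFA_NOTIFICATION_GET for the hyp framework bitmap */
#define FFA_NOTIF_FLAG_BITMAP_HYP  (1U << 3)

struct ffa_ret {
    uint64_t a0, a1, a2, a3, a4, a5, a6, a7;
};

/* Assumed wrappers provided by the guest's FF-A driver */
struct ffa_ret ffa_notification_get(uint16_t vm_id, uint32_t flags);
void ffa_read_rx_message(void);
void ffa_rx_release(void);

/* Called from the guest's notification interrupt handler */
void handle_ffa_notification(uint16_t own_id)
{
    struct ffa_ret r = ffa_notification_get(own_id, FFA_NOTIF_FLAG_BITMAP_HYP);

    /* w7 carries the hypervisor framework notifications for this VM */
    if ( r.a7 & FFA_NOTIF_RX_BUFFER_FULL )
    {
        ffa_read_rx_message();  /* copy the indirect message out of RX */
        ffa_rx_release();       /* allow the next sender to reuse the buffer */
    }
}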
Signed-off-by: Bertrand Marquis
---
 xen/arch/arm/tee/ffa_notif.c   | 26 +++++++++++++++++++++-----
 xen/arch/arm/tee/ffa_private.h | 13 +++++++++++++
 2 files changed, 34 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/tee/ffa_notif.c b/xen/arch/arm/tee/ffa_notif.c
index 3c6418e62e2b..052b3e364a70 100644
--- a/xen/arch/arm/tee/ffa_notif.c
+++ b/xen/arch/arm/tee/ffa_notif.c
@@ -93,6 +93,7 @@ void ffa_handle_notification_info_get(struct cpu_user_regs *regs)
 void ffa_handle_notification_get(struct cpu_user_regs *regs)
 {
     struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.tee;
     uint32_t recv = get_user_reg(regs, 1);
     uint32_t flags = get_user_reg(regs, 2);
     uint32_t w2 = 0;
@@ -132,11 +133,7 @@ void ffa_handle_notification_get(struct cpu_user_regs *regs)
      */
     if ( ( flags & FFA_NOTIF_FLAG_BITMAP_SP ) &&
          ( flags & FFA_NOTIF_FLAG_BITMAP_SPM ) )
-    {
-        struct ffa_ctx *ctx = d->arch.tee;
-
         ACCESS_ONCE(ctx->notif.secure_pending) = false;
-    }
 
     arm_smccc_1_2_smc(&arg, &resp);
     e = ffa_get_ret_code(&resp);
@@ -156,6 +153,12 @@ void ffa_handle_notification_get(struct cpu_user_regs *regs)
         w6 = resp.a6;
     }
 
+#ifdef CONFIG_FFA_VM_TO_VM
+    if ( flags & FFA_NOTIF_FLAG_BITMAP_HYP &&
+         test_and_clear_bool(ctx->notif.buff_full_pending) )
+        w7 = FFA_NOTIF_RX_BUFFER_FULL;
+#endif
+
     ffa_set_regs(regs, FFA_SUCCESS_32, 0, w2, w3, w4, w5, w6, w7);
 }
 
@@ -178,6 +181,19 @@ int ffa_handle_notification_set(struct cpu_user_regs *regs)
                            bitmap_hi);
 }
 
+#ifdef CONFIG_FFA_VM_TO_VM
+void ffa_raise_rx_buffer_full(struct domain *d)
+{
+    struct ffa_ctx *ctx = d->arch.tee;
+
+    if ( !ctx )
+        return;
+
+    if ( !test_and_set_bool(ctx->notif.buff_full_pending) )
+        vgic_inject_irq(d, d->vcpu[0], notif_sri_irq, true);
+}
+#endif
+
 /*
  * Extract a 16-bit ID (index n) from the successful return value from
  * FFA_NOTIFICATION_INFO_GET_64 or FFA_NOTIFICATION_INFO_GET_32. IDs are
diff --git a/xen/arch/arm/tee/ffa_private.h b/xen/arch/arm/tee/ffa_private.h
index 47dd6b5fadea..ad1dd04aeb7c 100644
--- a/xen/arch/arm/tee/ffa_private.h
+++ b/xen/arch/arm/tee/ffa_private.h
@@ -210,6 +210,8 @@
 #define FFA_NOTIF_INFO_GET_ID_COUNT_SHIFT   7
 #define FFA_NOTIF_INFO_GET_ID_COUNT_MASK    0x1F
 
+#define FFA_NOTIF_RX_BUFFER_FULL BIT(0, U)
+
 /* Feature IDs used with FFA_FEATURES */
 #define FFA_FEATURE_NOTIF_PEND_INTR         0x1U
 #define FFA_FEATURE_SCHEDULE_RECV_INTR      0x2U
@@ -298,6 +300,13 @@ struct ffa_ctx_notif {
      * pending global notifications.
      */
     bool secure_pending;
+
+#ifdef CONFIG_FFA_VM_TO_VM
+    /*
+     * Pending Hypervisor framework notifications
+     */
+    bool buff_full_pending;
+#endif
 };
 
 struct ffa_ctx {
@@ -370,6 +379,10 @@ void ffa_handle_notification_info_get(struct cpu_user_regs *regs);
 void ffa_handle_notification_get(struct cpu_user_regs *regs);
 int ffa_handle_notification_set(struct cpu_user_regs *regs);
 
+#ifdef CONFIG_FFA_VM_TO_VM
+void ffa_raise_rx_buffer_full(struct domain *d);
+#endif
+
 void ffa_handle_msg_send_direct_req(struct cpu_user_regs *regs, uint32_t fid);
 int32_t ffa_handle_msg_send2(struct cpu_user_regs *regs);
 
-- 
2.47.0

From nobody Thu Nov 21 21:47:16 2024
From: Bertrand Marquis
To: xen-devel@lists.xenproject.org
Cc: jens.wiklander@linaro.org, Volodymyr Babchuk, Stefano Stabellini,
    Julien Grall, Michal Orzel
Subject: [RFC PATCH 3/4] xen/arm: ffa: Add indirect messages between VMs
Date: Wed, 16 Oct 2024 11:21:57 +0200
Message-ID: <52d9809a114965832ee632756152d9125e93d4ea.1729069025.git.bertrand.marquis@arm.com>

Add support for indirect messages between VMs. This is only enabled if
CONFIG_FFA_VM_TO_VM is selected.
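
Note, for illustration only (not part of this patch): a sender-side sketch
of the TX buffer layout consumed by ffa_handle_msg_send2() below. The
header fields and the sender/receiver packing in send_recv_id match what
the code checks; the exact field order in the structure and the SMC
wrapper are assumptions.

#include <stdint.h>
#include <string.h>

/*
 * Partition message header placed at the start of the TX buffer
 * (field order assumed, field set as handled by this patch).
 */
struct ffa_part_msg_rxtx {
    uint32_t flags;
    uint32_t reserved;
    uint32_t msg_offset;    /* offset of the payload from the buffer start */
    uint32_t send_recv_id;  /* sender ID in bits [31:16], receiver in [15:0] */
    uint32_t msg_size;      /* payload size in bytes */
};

/* Assumed wrapper around the FFA_MSG_SEND2 SMC */
int32_t ffa_msg_send2(uint16_t sender_id, uint32_t flags);

int32_t send_indirect(void *tx_buf, uint16_t own_id, uint16_t dst_id,
                      const void *payload, uint32_t len)
{
    struct ffa_part_msg_rxtx *hdr = tx_buf;

    hdr->flags = 0;
    hdr->reserved = 0;
    hdr->msg_offset = sizeof(*hdr);     /* payload follows the header */
    hdr->send_recv_id = ((uint32_t)own_id << 16) | dst_id;
    hdr->msg_size = len;
    memcpy((char *)tx_buf + hdr->msg_offset, payload, len);

    /*
     * Xen copies TX to the receiver's RX buffer and raises the buffer full
     * notification; the receiver releases its RX buffer afterwards.
     */
    return ffa_msg_send2(own_id, 0);
}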
Signed-off-by: Bertrand Marquis
---
 xen/arch/arm/tee/ffa_msg.c | 96 ++++++++++++++++++++++++++++++++++----
 1 file changed, 88 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/tee/ffa_msg.c b/xen/arch/arm/tee/ffa_msg.c
index 335f246ba657..25f184a06546 100644
--- a/xen/arch/arm/tee/ffa_msg.c
+++ b/xen/arch/arm/tee/ffa_msg.c
@@ -95,9 +95,12 @@ int32_t ffa_handle_msg_send2(struct cpu_user_regs *regs)
     const struct ffa_part_msg_rxtx *src_msg;
     uint16_t dst_id, src_id;
     int32_t ret;
-
-    if ( !ffa_fw_supports_fid(FFA_MSG_SEND2) )
-        return FFA_RET_NOT_SUPPORTED;
+#ifdef CONFIG_FFA_VM_TO_VM
+    struct domain *dst_d;
+    struct ffa_ctx *dst_ctx;
+    struct ffa_part_msg_rxtx *dst_msg;
+    int err;
+#endif
 
     if ( !spin_trylock(&src_ctx->tx_lock) )
         return FFA_RET_BUSY;
@@ -106,10 +109,10 @@ int32_t ffa_handle_msg_send2(struct cpu_user_regs *regs)
     src_id = src_msg->send_recv_id >> 16;
     dst_id = src_msg->send_recv_id & GENMASK(15,0);
 
-    if ( src_id != ffa_get_vm_id(src_d) || !FFA_ID_IS_SECURE(dst_id) )
+    if ( src_id != ffa_get_vm_id(src_d) )
     {
         ret = FFA_RET_INVALID_PARAMETERS;
-        goto out_unlock_tx;
+        goto out;
     }
 
     /* check source message fits in buffer */
@@ -118,12 +121,89 @@ int32_t ffa_handle_msg_send2(struct cpu_user_regs *regs)
          src_msg->msg_offset < sizeof(struct ffa_part_msg_rxtx) )
     {
         ret = FFA_RET_INVALID_PARAMETERS;
-        goto out_unlock_tx;
+        goto out;
     }
 
-    ret = ffa_simple_call(FFA_MSG_SEND2, ((uint32_t)src_id) << 16, 0, 0, 0);
+    if ( FFA_ID_IS_SECURE(dst_id) )
+    {
+        /* Message for a secure partition */
+        if ( !ffa_fw_supports_fid(FFA_MSG_SEND2) )
+        {
+            ret = FFA_RET_NOT_SUPPORTED;
+            goto out;
+        }
+
+        ret = ffa_simple_call(FFA_MSG_SEND2, ((uint32_t)src_id) << 16, 0, 0,
+                              0);
+        goto out;
+    }
 
-out_unlock_tx:
+#ifndef CONFIG_FFA_VM_TO_VM
+    ret = FFA_RET_INVALID_PARAMETERS;
+#else
+    /* Message for a VM */
+    if ( dst_id == 0 )
+    {
+        /* FF-A ID 0 is the hypervisor, which is not a valid destination */
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+
+    /* This also checks that the destination is not the source */
+    err = rcu_lock_live_remote_domain_by_id(dst_id - 1, &dst_d);
+    if ( err )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out;
+    }
+
+    if ( dst_d->arch.tee == NULL )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out_unlock;
+    }
+
+    dst_ctx = dst_d->arch.tee;
+    if ( !dst_ctx->guest_vers )
+    {
+        ret = FFA_RET_INVALID_PARAMETERS;
+        goto out_unlock;
+    }
+
+    /* This also checks that the destination has set up an RX buffer */
+    ret = ffa_rx_acquire(dst_d);
+    if ( ret )
+        goto out_unlock;
+
+    /* We need enough space in the destination buffer */
+    if ( dst_ctx->page_count * FFA_PAGE_SIZE <
+         (sizeof(struct ffa_part_msg_rxtx) + src_msg->msg_size) )
+    {
+        ret = FFA_RET_NO_MEMORY;
+        ffa_rx_release(dst_d);
+        goto out_unlock;
+    }
+
+    dst_msg = dst_ctx->rx;
+
+    /* Prepare the destination header */
+    dst_msg->flags = 0;
+    dst_msg->reserved = 0;
+    dst_msg->msg_offset = sizeof(struct ffa_part_msg_rxtx);
+    dst_msg->send_recv_id = src_msg->send_recv_id;
+    dst_msg->msg_size = src_msg->msg_size;
+
+    memcpy(dst_ctx->rx + sizeof(struct ffa_part_msg_rxtx),
+           src_ctx->tx + src_msg->msg_offset, src_msg->msg_size);
+
+    /* The receiver releases its RX buffer once it has read the message */
+
+out_unlock:
+    rcu_unlock_domain(dst_d);
+    if ( !ret )
+        ffa_raise_rx_buffer_full(dst_d);
+#endif
+out:
     spin_unlock(&src_ctx->tx_lock);
     return ret;
 }
-- 
2.47.0

From nobody Thu Nov 21 21:47:16 2024
From: Bertrand Marquis
To: xen-devel@lists.xenproject.org
Cc: jens.wiklander@linaro.org, Volodymyr Babchuk, Stefano Stabellini,
    Julien Grall, Michal Orzel
Subject: [RFC PATCH 4/4] xen/arm: ffa: Enable VM to VM without firmware
Date: Wed, 16 Oct 2024 11:21:58 +0200
Message-ID: <57c59cae4141dd9601d7b4e9260030a16809b764.1729069025.git.bertrand.marquis@arm.com>

When VM to VM support is activated and there is no suitable FF-A support
in the firmware, enable FF-A support for VMs so that it can be used for
VM to VM communication.

If OP-TEE is running in the secure world using the non-FF-A communication
interface, CONFIG_FFA_VM_TO_VM can end up non-functional (if OP-TEE is
probed first) or OP-TEE can end up non-functional (if FF-A is probed
first), so activating this configuration option is not recommended on
such systems.

To make the buffer full notification work between VMs when there is no
firmware, rework the notification handling and turn the global flag into
a check for firmware notification support only.

Modify part_info_get to return the list of VMs when there is no firmware
support.
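
Note, for illustration only (not part of this patch): the probe-time
decision this change introduces, summarised as a small sketch. The names
are illustrative; the real logic lives in ffa_probe()/ffa_domain_init().

#include <stdbool.h>

struct ffa_availability {
    bool fw_version_ok;     /* a compatible FF-A firmware was probed */
    bool vm_to_vm_enabled;  /* CONFIG_FFA_VM_TO_VM selected */
};

/* Should the FF-A mediator be offered to guests at all? */
static bool ffa_available_for_guests(const struct ffa_availability *a)
{
    /* Firmware present: full mediation (secure partitions, and VMs if enabled) */
    if ( a->fw_version_ok )
        return true;

    /* No firmware: only the VM-to-VM subset can be offered */
    return a->vm_to_vm_enabled;
}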
Signed-off-by: Bertrand Marquis
---
 xen/arch/arm/tee/ffa.c          |  11 +++
 xen/arch/arm/tee/ffa_notif.c    | 118 ++++++++++++++++----------------
 xen/arch/arm/tee/ffa_partinfo.c |   2 +
 3 files changed, 73 insertions(+), 58 deletions(-)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index 21d41b452dc9..6d427864f3da 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -324,8 +324,11 @@ static int ffa_domain_init(struct domain *d)
     struct ffa_ctx *ctx;
     int ret;
 
+#ifndef CONFIG_FFA_VM_TO_VM
     if ( !ffa_fw_version )
         return -ENODEV;
+#endif
+
     /*
      * We are using the domain_id + 1 as the FF-A ID for VMs as FF-A ID 0 is
      * reserved for the hypervisor and we only support secure endpoints using
@@ -549,7 +552,15 @@ err_no_fw:
     bitmap_zero(ffa_fw_abi_supported, FFA_ABI_BITMAP_SIZE);
     printk(XENLOG_WARNING "ARM FF-A No firmware support\n");
 
+#ifdef CONFIG_FFA_VM_TO_VM
+    INIT_LIST_HEAD(&ffa_teardown_head);
+    init_timer(&ffa_teardown_timer, ffa_teardown_timer_callback, NULL, 0);
+
+    printk(XENLOG_INFO "ARM FF-A only available between VMs\n");
+    return true;
+#else
     return false;
+#endif
 }
 
 static const struct tee_mediator_ops ffa_ops =
diff --git a/xen/arch/arm/tee/ffa_notif.c b/xen/arch/arm/tee/ffa_notif.c
index 052b3e364a70..f2c87d1320de 100644
--- a/xen/arch/arm/tee/ffa_notif.c
+++ b/xen/arch/arm/tee/ffa_notif.c
@@ -16,7 +16,7 @@
 
 #include "ffa_private.h"
 
-static bool __ro_after_init notif_enabled;
+static bool __ro_after_init fw_notif_enabled;
 static unsigned int __ro_after_init notif_sri_irq;
 
 int ffa_handle_notification_bind(struct cpu_user_regs *regs)
@@ -27,21 +27,17 @@ int ffa_handle_notification_bind(struct cpu_user_regs *regs)
     uint32_t bitmap_lo = get_user_reg(regs, 3);
     uint32_t bitmap_hi = get_user_reg(regs, 4);
 
-    if ( !notif_enabled )
-        return FFA_RET_NOT_SUPPORTED;
-
     if ( (src_dst & 0xFFFFU) != ffa_get_vm_id(d) )
         return FFA_RET_INVALID_PARAMETERS;
 
     if ( flags )    /* Only global notifications are supported */
         return FFA_RET_DENIED;
 
-    /*
-     * We only support notifications from SP so no need to check the sender
-     * endpoint ID, the SPMC will take care of that for us.
-     */
-    return ffa_simple_call(FFA_NOTIFICATION_BIND, src_dst, flags, bitmap_hi,
-                           bitmap_lo);
+    if ( FFA_ID_IS_SECURE(src_dst >> 16) && fw_notif_enabled )
+        return ffa_simple_call(FFA_NOTIFICATION_BIND, src_dst, flags,
+                               bitmap_hi, bitmap_lo);
+
+    return FFA_RET_NOT_SUPPORTED;
 }
 
 int ffa_handle_notification_unbind(struct cpu_user_regs *regs)
@@ -51,32 +47,36 @@ int ffa_handle_notification_unbind(struct cpu_user_regs *regs)
     uint32_t bitmap_lo = get_user_reg(regs, 3);
     uint32_t bitmap_hi = get_user_reg(regs, 4);
 
-    if ( !notif_enabled )
-        return FFA_RET_NOT_SUPPORTED;
-
     if ( (src_dst & 0xFFFFU) != ffa_get_vm_id(d) )
         return FFA_RET_INVALID_PARAMETERS;
 
-    /*
-     * We only support notifications from SP so no need to check the
-     * destination endpoint ID, the SPMC will take care of that for us.
-     */
-    return ffa_simple_call(FFA_NOTIFICATION_UNBIND, src_dst, 0, bitmap_hi,
-                           bitmap_lo);
+    if ( FFA_ID_IS_SECURE(src_dst >> 16) && fw_notif_enabled )
+        return ffa_simple_call(FFA_NOTIFICATION_UNBIND, src_dst, 0, bitmap_hi,
+                               bitmap_lo);
+
+    return FFA_RET_NOT_SUPPORTED;
 }
 
 void ffa_handle_notification_info_get(struct cpu_user_regs *regs)
 {
     struct domain *d = current->domain;
     struct ffa_ctx *ctx = d->arch.tee;
+    bool notif_pending = false;
 
-    if ( !notif_enabled )
+#ifndef CONFIG_FFA_VM_TO_VM
+    if ( !fw_notif_enabled )
     {
         ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
         return;
     }
+#endif
 
-    if ( test_and_clear_bool(ctx->notif.secure_pending) )
+    notif_pending = ctx->notif.secure_pending;
+#ifdef CONFIG_FFA_VM_TO_VM
+    notif_pending |= ctx->notif.buff_full_pending;
+#endif
+
+    if ( notif_pending )
     {
         /* A pending global notification for the guest */
         ffa_set_regs(regs, FFA_SUCCESS_64, 0,
@@ -103,11 +103,13 @@ void ffa_handle_notification_get(struct cpu_user_regs *regs)
     uint32_t w6 = 0;
     uint32_t w7 = 0;
 
-    if ( !notif_enabled )
+#ifndef CONFIG_FFA_VM_TO_VM
+    if ( !fw_notif_enabled )
     {
         ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
         return;
     }
+#endif
 
     if ( (recv & 0xFFFFU) != ffa_get_vm_id(d) )
     {
@@ -115,7 +117,8 @@ void ffa_handle_notification_get(struct cpu_user_regs *regs)
         return;
     }
 
-    if ( flags & ( FFA_NOTIF_FLAG_BITMAP_SP | FFA_NOTIF_FLAG_BITMAP_SPM ) )
+    if ( fw_notif_enabled && (flags & ( FFA_NOTIF_FLAG_BITMAP_SP
+                                        | FFA_NOTIF_FLAG_BITMAP_SPM )) )
     {
         struct arm_smccc_1_2_regs arg = {
             .a0 = FFA_NOTIFICATION_GET,
@@ -170,15 +173,14 @@ int ffa_handle_notification_set(struct cpu_user_regs *regs)
     uint32_t bitmap_lo = get_user_reg(regs, 3);
     uint32_t bitmap_hi = get_user_reg(regs, 4);
 
-    if ( !notif_enabled )
-        return FFA_RET_NOT_SUPPORTED;
-
     if ( (src_dst >> 16) != ffa_get_vm_id(d) )
         return FFA_RET_INVALID_PARAMETERS;
 
-    /* Let the SPMC check the destination of the notification */
-    return ffa_simple_call(FFA_NOTIFICATION_SET, src_dst, flags, bitmap_lo,
-                           bitmap_hi);
+    if ( FFA_ID_IS_SECURE(src_dst >> 16) && fw_notif_enabled )
+        return ffa_simple_call(FFA_NOTIFICATION_SET, src_dst, flags, bitmap_lo,
+                               bitmap_hi);
+
+    return FFA_RET_NOT_SUPPORTED;
 }
 
 #ifdef CONFIG_FFA_VM_TO_VM
@@ -190,7 +192,7 @@ void ffa_raise_rx_buffer_full(struct domain *d)
         return;
 
     if ( !test_and_set_bool(ctx->notif.buff_full_pending) )
-        vgic_inject_irq(d, d->vcpu[0], notif_sri_irq, true);
+        vgic_inject_irq(d, d->vcpu[0], GUEST_FFA_NOTIF_PEND_INTR_ID, true);
 }
 #endif
 
@@ -363,7 +365,7 @@ void ffa_notif_init_interrupt(void)
 {
     int ret;
 
-    if ( notif_enabled && notif_sri_irq < NR_GIC_SGI )
+    if ( fw_notif_enabled && notif_sri_irq < NR_GIC_SGI )
     {
         /*
          * An error here is unlikely since the primary CPU has already
@@ -394,47 +396,47 @@ void ffa_notif_init(void)
     int ret;
 
     /* Only enable fw notification if all ABIs we need are supported */
-    if ( !(ffa_fw_supports_fid(FFA_NOTIFICATION_BITMAP_CREATE) &&
-           ffa_fw_supports_fid(FFA_NOTIFICATION_BITMAP_DESTROY) &&
-           ffa_fw_supports_fid(FFA_NOTIFICATION_GET) &&
-           ffa_fw_supports_fid(FFA_NOTIFICATION_INFO_GET_64)) )
-        return;
-
-    arm_smccc_1_2_smc(&arg, &resp);
-    if ( resp.a0 != FFA_SUCCESS_32 )
-        return;
-
-    irq = resp.a2;
-    notif_sri_irq = irq;
-    if ( irq >= NR_GIC_SGI )
-        irq_set_type(irq, IRQ_TYPE_EDGE_RISING);
-    ret = request_irq(irq, 0, notif_irq_handler, "FF-A notif", NULL);
-    if ( ret )
+    if ( ffa_fw_supports_fid(FFA_NOTIFICATION_BITMAP_CREATE) &&
+         ffa_fw_supports_fid(FFA_NOTIFICATION_BITMAP_DESTROY) &&
+         ffa_fw_supports_fid(FFA_NOTIFICATION_GET) &&
+         ffa_fw_supports_fid(FFA_NOTIFICATION_INFO_GET_64) )
     {
-        printk(XENLOG_ERR "ffa: request_irq irq %u failed: error %d\n",
-               irq, ret);
-        return;
-    }
+        arm_smccc_1_2_smc(&arg, &resp);
+        if ( resp.a0 != FFA_SUCCESS_32 )
+            return;
 
-    notif_enabled = true;
+        irq = resp.a2;
+        notif_sri_irq = irq;
+        if ( irq >= NR_GIC_SGI )
+            irq_set_type(irq, IRQ_TYPE_EDGE_RISING);
+        ret = request_irq(irq, 0, notif_irq_handler, "FF-A notif", NULL);
+        if ( ret )
+        {
+            printk(XENLOG_ERR "ffa: request_irq irq %u failed: error %d\n",
+                   irq, ret);
+            return;
+        }
+        fw_notif_enabled = true;
+    }
 }
 
 int ffa_notif_domain_init(struct domain *d)
{
     int32_t res;
 
-    if ( !notif_enabled )
-        return 0;
+    if ( fw_notif_enabled )
+    {
 
-    res = ffa_notification_bitmap_create(ffa_get_vm_id(d), d->max_vcpus);
-    if ( res )
-        return -ENOMEM;
+        res = ffa_notification_bitmap_create(ffa_get_vm_id(d), d->max_vcpus);
+        if ( res )
+            return -ENOMEM;
+    }
 
     return 0;
 }
 
 void ffa_notif_domain_destroy(struct domain *d)
 {
-    if ( notif_enabled )
+    if ( fw_notif_enabled )
         ffa_notification_bitmap_destroy(ffa_get_vm_id(d));
 }
diff --git a/xen/arch/arm/tee/ffa_partinfo.c b/xen/arch/arm/tee/ffa_partinfo.c
index d699a267cc76..2e09440fe6c2 100644
--- a/xen/arch/arm/tee/ffa_partinfo.c
+++ b/xen/arch/arm/tee/ffa_partinfo.c
@@ -128,12 +128,14 @@ void ffa_handle_partition_info_get(struct cpu_user_regs *regs)
         goto out;
     }
 
+#ifndef CONFIG_FFA_VM_TO_VM
     if ( !ffa_fw_supports_fid(FFA_PARTITION_INFO_GET) )
     {
         /* Just give an empty partition list to the caller */
         ret = FFA_RET_OK;
         goto out;
     }
+#endif
 
     ret = ffa_rx_acquire(d);
     if ( ret != FFA_RET_OK )
-- 
2.47.0