From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Stefano Stabellini, Julien Grall, Bertrand Marquis, Michal Orzel, Volodymyr Babchuk
Subject: [PATCH v2 1/4] xen/arm: optimize the size of struct vcpu
Date: Thu, 18 Dec 2025 18:28:06 +0100
Message-ID: <946a1c2cfaf4157074470a653bba5baa8561ebbf.1766053253.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.52.0

When CONFIG_NEW_VGIC=y and CONFIG_ARM_64=y, the size of struct vcpu
exceeds one page, which requires allocating two pages and led to the
introduction of MAX_PAGES_PER_VCPU.

To remove the need for MAX_PAGES_PER_VCPU in a follow-up patch, the
vgic member of struct arch_vcpu is changed to a pointer to
struct vgic_cpu.
As a result, the size of struct vcpu for Arm64 is reduced to 2048 bytes, compared to 3840 bytes (without these changes and with CONFIG_ARM_64=3Dy) and 4736 bytes (without these changes and with both CONFIG_ARM_64=3Dy and CONFIG_NEW_VGIC=3Dy). Since the vgic member is now a pointer, vcpu_vgic_init() and vcpu_vgic_free() are updated to allocate and free the struct vgic_cpu instance dynamically. Signed-off-by: Oleksii Kurochko --- Changes in v2: - New patch. --- xen/arch/arm/gic-vgic.c | 48 ++++++++++----------- xen/arch/arm/include/asm/domain.h | 2 +- xen/arch/arm/vgic-v3.c | 34 +++++++-------- xen/arch/arm/vgic.c | 72 +++++++++++++++++-------------- xen/arch/arm/vgic/vgic-init.c | 10 ++++- xen/arch/arm/vgic/vgic-v2.c | 4 +- xen/arch/arm/vgic/vgic.c | 50 ++++++++++----------- 7 files changed, 116 insertions(+), 104 deletions(-) diff --git a/xen/arch/arm/gic-vgic.c b/xen/arch/arm/gic-vgic.c index ea48c5375a..482b77c986 100644 --- a/xen/arch/arm/gic-vgic.c +++ b/xen/arch/arm/gic-vgic.c @@ -42,12 +42,12 @@ static inline void gic_add_to_lr_pending(struct vcpu *v= , struct pending_irq *n) { struct pending_irq *iter; =20 - ASSERT(spin_is_locked(&v->arch.vgic.lock)); + ASSERT(spin_is_locked(&v->arch.vgic->lock)); =20 if ( !list_empty(&n->lr_queue) ) return; =20 - list_for_each_entry ( iter, &v->arch.vgic.lr_pending, lr_queue ) + list_for_each_entry ( iter, &v->arch.vgic->lr_pending, lr_queue ) { if ( iter->priority > n->priority ) { @@ -55,12 +55,12 @@ static inline void gic_add_to_lr_pending(struct vcpu *v= , struct pending_irq *n) return; } } - list_add_tail(&n->lr_queue, &v->arch.vgic.lr_pending); + list_add_tail(&n->lr_queue, &v->arch.vgic->lr_pending); } =20 void gic_remove_from_lr_pending(struct vcpu *v, struct pending_irq *p) { - ASSERT(spin_is_locked(&v->arch.vgic.lock)); + ASSERT(spin_is_locked(&v->arch.vgic->lock)); =20 list_del_init(&p->lr_queue); } @@ -73,7 +73,7 @@ void gic_raise_inflight_irq(struct vcpu *v, unsigned int = virtual_irq) if ( unlikely(!n) ) 
return; =20 - ASSERT(spin_is_locked(&v->arch.vgic.lock)); + ASSERT(spin_is_locked(&v->arch.vgic->lock)); =20 /* Don't try to update the LR if the interrupt is disabled */ if ( !test_bit(GIC_IRQ_GUEST_ENABLED, &n->status) ) @@ -104,7 +104,7 @@ static unsigned int gic_find_unused_lr(struct vcpu *v, { uint64_t *lr_mask =3D &this_cpu(lr_mask); =20 - ASSERT(spin_is_locked(&v->arch.vgic.lock)); + ASSERT(spin_is_locked(&v->arch.vgic->lock)); =20 if ( unlikely(test_bit(GIC_IRQ_GUEST_PRISTINE_LPI, &p->status)) ) { @@ -130,13 +130,13 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int= virtual_irq, unsigned int nr_lrs =3D gic_get_nr_lrs(); struct pending_irq *p =3D irq_to_pending(v, virtual_irq); =20 - ASSERT(spin_is_locked(&v->arch.vgic.lock)); + ASSERT(spin_is_locked(&v->arch.vgic->lock)); =20 if ( unlikely(!p) ) /* An unmapped LPI does not need to be raised. */ return; =20 - if ( v =3D=3D current && list_empty(&v->arch.vgic.lr_pending) ) + if ( v =3D=3D current && list_empty(&v->arch.vgic->lr_pending) ) { i =3D gic_find_unused_lr(v, p, 0); =20 @@ -156,7 +156,7 @@ static void gic_update_one_lr(struct vcpu *v, int i) int irq; struct gic_lr lr_val; =20 - ASSERT(spin_is_locked(&v->arch.vgic.lock)); + ASSERT(spin_is_locked(&v->arch.vgic->lock)); ASSERT(!local_irq_is_enabled()); =20 gic_hw_ops->read_lr(i, &lr_val); @@ -253,7 +253,7 @@ void vgic_sync_from_lrs(struct vcpu *v) =20 gic_hw_ops->update_hcr_status(GICH_HCR_UIE, false); =20 - spin_lock_irqsave(&v->arch.vgic.lock, flags); + spin_lock_irqsave(&v->arch.vgic->lock, flags); =20 while ((i =3D find_next_bit((const unsigned long *) &this_cpu(lr_mask), nr_lrs, i)) < nr_lrs ) { @@ -261,7 +261,7 @@ void vgic_sync_from_lrs(struct vcpu *v) i++; } =20 - spin_unlock_irqrestore(&v->arch.vgic.lock, flags); + spin_unlock_irqrestore(&v->arch.vgic->lock, flags); } =20 static void gic_restore_pending_irqs(struct vcpu *v) @@ -274,13 +274,13 @@ static void gic_restore_pending_irqs(struct vcpu *v) =20 ASSERT(!local_irq_is_enabled()); =20 
- spin_lock(&v->arch.vgic.lock); + spin_lock(&v->arch.vgic->lock); =20 - if ( list_empty(&v->arch.vgic.lr_pending) ) + if ( list_empty(&v->arch.vgic->lr_pending) ) goto out; =20 - inflight_r =3D &v->arch.vgic.inflight_irqs; - list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue ) + inflight_r =3D &v->arch.vgic->inflight_irqs; + list_for_each_entry_safe ( p, t, &v->arch.vgic->lr_pending, lr_queue ) { lr =3D gic_find_unused_lr(v, p, lr); if ( lr >=3D nr_lrs ) @@ -318,17 +318,17 @@ found: } =20 out: - spin_unlock(&v->arch.vgic.lock); + spin_unlock(&v->arch.vgic->lock); } =20 void gic_clear_pending_irqs(struct vcpu *v) { struct pending_irq *p, *t; =20 - ASSERT(spin_is_locked(&v->arch.vgic.lock)); + ASSERT(spin_is_locked(&v->arch.vgic->lock)); =20 v->arch.lr_mask =3D 0; - list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue ) + list_for_each_entry_safe ( p, t, &v->arch.vgic->lr_pending, lr_queue ) gic_remove_from_lr_pending(v, p); } =20 @@ -357,14 +357,14 @@ int vgic_vcpu_pending_irq(struct vcpu *v) mask_priority =3D gic_hw_ops->read_vmcr_priority(); active_priority =3D find_first_bit(&apr, 32); =20 - spin_lock_irqsave(&v->arch.vgic.lock, flags); + spin_lock_irqsave(&v->arch.vgic->lock, flags); =20 /* TODO: We order the guest irqs by priority, but we don't change * the priority of host irqs. 
*/ =20 /* find the first enabled non-active irq, the queue is already * ordered by priority */ - list_for_each_entry( p, &v->arch.vgic.inflight_irqs, inflight ) + list_for_each_entry( p, &v->arch.vgic->inflight_irqs, inflight ) { if ( GIC_PRI_TO_GUEST(p->priority) >=3D mask_priority ) goto out; @@ -378,7 +378,7 @@ int vgic_vcpu_pending_irq(struct vcpu *v) } =20 out: - spin_unlock_irqrestore(&v->arch.vgic.lock, flags); + spin_unlock_irqrestore(&v->arch.vgic->lock, flags); return rc; } =20 @@ -388,7 +388,7 @@ void vgic_sync_to_lrs(void) =20 gic_restore_pending_irqs(current); =20 - if ( !list_empty(¤t->arch.vgic.lr_pending) && lr_all_full() ) + if ( !list_empty(¤t->arch.vgic->lr_pending) && lr_all_full() ) gic_hw_ops->update_hcr_status(GICH_HCR_UIE, true); } =20 @@ -396,10 +396,10 @@ void gic_dump_vgic_info(struct vcpu *v) { struct pending_irq *p; =20 - list_for_each_entry ( p, &v->arch.vgic.inflight_irqs, inflight ) + list_for_each_entry ( p, &v->arch.vgic->inflight_irqs, inflight ) printk("Inflight irq=3D%u lr=3D%u\n", p->irq, p->lr); =20 - list_for_each_entry( p, &v->arch.vgic.lr_pending, lr_queue ) + list_for_each_entry( p, &v->arch.vgic->lr_pending, lr_queue ) printk("Pending irq=3D%d\n", p->irq); } =20 diff --git a/xen/arch/arm/include/asm/domain.h b/xen/arch/arm/include/asm/d= omain.h index 758ad807e4..6cfa793828 100644 --- a/xen/arch/arm/include/asm/domain.h +++ b/xen/arch/arm/include/asm/domain.h @@ -230,7 +230,7 @@ struct arch_vcpu union gic_state_data gic; uint64_t lr_mask; =20 - struct vgic_cpu vgic; + struct vgic_cpu *vgic; =20 /* Timer registers */ register_t cntkctl; diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c index 77aab5c3c2..a9bb7e8906 100644 --- a/xen/arch/arm/vgic-v3.c +++ b/xen/arch/arm/vgic-v3.c @@ -162,10 +162,10 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu = *v, mmio_info_t *info, goto read_as_zero_32; if ( dabt.size !=3D DABT_WORD ) goto bad_width; =20 - spin_lock_irqsave(&v->arch.vgic.lock, flags); - *r =3D 
vreg_reg32_extract(!!(v->arch.vgic.flags & VGIC_V3_LPIS_ENA= BLED), + spin_lock_irqsave(&v->arch.vgic->lock, flags); + *r =3D vreg_reg32_extract(!!(v->arch.vgic->flags & VGIC_V3_LPIS_EN= ABLED), info); - spin_unlock_irqrestore(&v->arch.vgic.lock, flags); + spin_unlock_irqrestore(&v->arch.vgic->lock, flags); return 1; } =20 @@ -195,7 +195,7 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v= , mmio_info_t *info, /* We use the VCPU ID as the redistributor ID in bits[23:8] */ typer |=3D v->vcpu_id << GICR_TYPER_PROC_NUM_SHIFT; =20 - if ( v->arch.vgic.flags & VGIC_V3_RDIST_LAST ) + if ( v->arch.vgic->flags & VGIC_V3_RDIST_LAST ) typer |=3D GICR_TYPER_LAST; =20 if ( v->domain->arch.vgic.has_its ) @@ -249,7 +249,7 @@ static int __vgic_v3_rdistr_rd_mmio_read(struct vcpu *v= , mmio_info_t *info, goto read_as_zero_64; if ( !vgic_reg64_check_access(dabt) ) goto bad_width; =20 - val =3D read_atomic(&v->arch.vgic.rdist_pendbase); + val =3D read_atomic(&v->arch.vgic->rdist_pendbase); val &=3D ~GICR_PENDBASER_PTZ; /* WO, reads as 0 */ *r =3D vreg_reg64_extract(val, info); return 1; @@ -467,7 +467,7 @@ static void vgic_vcpu_enable_lpis(struct vcpu *v) smp_mb(); } =20 - v->arch.vgic.flags |=3D VGIC_V3_LPIS_ENABLED; + v->arch.vgic->flags |=3D VGIC_V3_LPIS_ENABLED; } =20 static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu *v, mmio_info_t *inf= o, @@ -488,14 +488,14 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu= *v, mmio_info_t *info, if ( dabt.size !=3D DABT_WORD ) goto bad_width; =20 vgic_lock(v); /* protects rdists_enabled */ - spin_lock_irqsave(&v->arch.vgic.lock, flags); + spin_lock_irqsave(&v->arch.vgic->lock, flags); =20 /* LPIs can only be enabled once, but never disabled again. 
*/ if ( (r & GICR_CTLR_ENABLE_LPIS) && - !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) ) + !(v->arch.vgic->flags & VGIC_V3_LPIS_ENABLED) ) vgic_vcpu_enable_lpis(v); =20 - spin_unlock_irqrestore(&v->arch.vgic.lock, flags); + spin_unlock_irqrestore(&v->arch.vgic->lock, flags); vgic_unlock(v); =20 return 1; @@ -565,18 +565,18 @@ static int __vgic_v3_rdistr_rd_mmio_write(struct vcpu= *v, mmio_info_t *info, goto write_ignore_64; if ( !vgic_reg64_check_access(dabt) ) goto bad_width; =20 - spin_lock_irqsave(&v->arch.vgic.lock, flags); + spin_lock_irqsave(&v->arch.vgic->lock, flags); =20 /* Writing PENDBASER with LPIs enabled is UNPREDICTABLE. */ - if ( !(v->arch.vgic.flags & VGIC_V3_LPIS_ENABLED) ) + if ( !(v->arch.vgic->flags & VGIC_V3_LPIS_ENABLED) ) { - reg =3D read_atomic(&v->arch.vgic.rdist_pendbase); + reg =3D read_atomic(&v->arch.vgic->rdist_pendbase); vreg_reg64_update(®, r, info); reg =3D sanitize_pendbaser(reg); - write_atomic(&v->arch.vgic.rdist_pendbase, reg); + write_atomic(&v->arch.vgic->rdist_pendbase, reg); } =20 - spin_unlock_irqrestore(&v->arch.vgic.lock, flags); + spin_unlock_irqrestore(&v->arch.vgic->lock, flags); =20 return 1; } @@ -1115,7 +1115,7 @@ static struct vcpu *get_vcpu_from_rdist(struct domain= *d, =20 v =3D d->vcpu[vcpu_id]; =20 - *offset =3D gpa - v->arch.vgic.rdist_base; + *offset =3D gpa - v->arch.vgic->rdist_base; =20 return v; } @@ -1745,7 +1745,7 @@ static int vgic_v3_vcpu_init(struct vcpu *v) return -EINVAL; } =20 - v->arch.vgic.rdist_base =3D rdist_base; + v->arch.vgic->rdist_base =3D rdist_base; =20 /* * If the redistributor is the last one of the @@ -1756,7 +1756,7 @@ static int vgic_v3_vcpu_init(struct vcpu *v) last_cpu =3D (region->size / GICV3_GICR_SIZE) + region->first_cpu - 1; =20 if ( v->vcpu_id =3D=3D last_cpu || (v->vcpu_id =3D=3D (d->max_vcpus - = 1)) ) - v->arch.vgic.flags |=3D VGIC_V3_RDIST_LAST; + v->arch.vgic->flags |=3D VGIC_V3_RDIST_LAST; =20 return 0; } diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c index 
3ebdf9953f..8b17871b86 100644 --- a/xen/arch/arm/vgic.c +++ b/xen/arch/arm/vgic.c @@ -84,7 +84,7 @@ static inline struct vgic_irq_rank *vgic_get_rank(struct = vcpu *v, unsigned int rank) { if ( rank =3D=3D 0 ) - return v->arch.vgic.private_irqs; + return v->arch.vgic->private_irqs; else if ( rank <=3D DOMAIN_NR_RANKS(v->domain) ) return &v->domain->arch.vgic.shared_irqs[rank - 1]; else if ( is_valid_espi_rank(v->domain, rank) ) @@ -370,29 +370,35 @@ int vcpu_vgic_init(struct vcpu *v) { int i; =20 - v->arch.vgic.private_irqs =3D xzalloc(struct vgic_irq_rank); - if ( v->arch.vgic.private_irqs =3D=3D NULL ) + v->arch.vgic =3D xzalloc(struct vgic_cpu); + if ( v->arch.vgic =3D=3D NULL ) + return -ENOMEM; + + v->arch.vgic->private_irqs =3D xzalloc(struct vgic_irq_rank); + if ( v->arch.vgic->private_irqs =3D=3D NULL ) return -ENOMEM; =20 /* SGIs/PPIs are always routed to this VCPU */ - vgic_rank_init(v->arch.vgic.private_irqs, 0, v->vcpu_id); + vgic_rank_init(v->arch.vgic->private_irqs, 0, v->vcpu_id); =20 v->domain->arch.vgic.handler->vcpu_init(v); =20 - memset(&v->arch.vgic.pending_irqs, 0, sizeof(v->arch.vgic.pending_irqs= )); + memset(&v->arch.vgic->pending_irqs, 0, sizeof(v->arch.vgic->pending_ir= qs)); for (i =3D 0; i < 32; i++) - vgic_init_pending_irq(&v->arch.vgic.pending_irqs[i], i); + vgic_init_pending_irq(&v->arch.vgic->pending_irqs[i], i); =20 - INIT_LIST_HEAD(&v->arch.vgic.inflight_irqs); - INIT_LIST_HEAD(&v->arch.vgic.lr_pending); - spin_lock_init(&v->arch.vgic.lock); + INIT_LIST_HEAD(&v->arch.vgic->inflight_irqs); + INIT_LIST_HEAD(&v->arch.vgic->lr_pending); + spin_lock_init(&v->arch.vgic->lock); =20 return 0; } =20 int vcpu_vgic_free(struct vcpu *v) { - xfree(v->arch.vgic.private_irqs); + xfree(v->arch.vgic->private_irqs); + xfree(v->arch.vgic); + return 0; } =20 @@ -423,14 +429,14 @@ bool vgic_migrate_irq(struct vcpu *old, struct vcpu *= new, unsigned int irq) /* This will never be called for an LPI, as we don't migrate them. 
*/ ASSERT(!is_lpi(irq)); =20 - spin_lock_irqsave(&old->arch.vgic.lock, flags); + spin_lock_irqsave(&old->arch.vgic->lock, flags); =20 p =3D irq_to_pending(old, irq); =20 /* nothing to do for virtual interrupts */ if ( p->desc =3D=3D NULL ) { - spin_unlock_irqrestore(&old->arch.vgic.lock, flags); + spin_unlock_irqrestore(&old->arch.vgic->lock, flags); return true; } =20 @@ -438,7 +444,7 @@ bool vgic_migrate_irq(struct vcpu *old, struct vcpu *ne= w, unsigned int irq) if ( test_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) ) { gprintk(XENLOG_WARNING, "irq %u migration failed: requested while = in progress\n", irq); - spin_unlock_irqrestore(&old->arch.vgic.lock, flags); + spin_unlock_irqrestore(&old->arch.vgic->lock, flags); return false; } =20 @@ -447,7 +453,7 @@ bool vgic_migrate_irq(struct vcpu *old, struct vcpu *ne= w, unsigned int irq) if ( list_empty(&p->inflight) ) { irq_set_affinity(p->desc, cpumask_of(new->processor)); - spin_unlock_irqrestore(&old->arch.vgic.lock, flags); + spin_unlock_irqrestore(&old->arch.vgic->lock, flags); return true; } /* If the IRQ is still lr_pending, re-inject it to the new vcpu */ @@ -455,7 +461,7 @@ bool vgic_migrate_irq(struct vcpu *old, struct vcpu *ne= w, unsigned int irq) { vgic_remove_irq_from_queues(old, p); irq_set_affinity(p->desc, cpumask_of(new->processor)); - spin_unlock_irqrestore(&old->arch.vgic.lock, flags); + spin_unlock_irqrestore(&old->arch.vgic->lock, flags); vgic_inject_irq(new->domain, new, irq, true); return true; } @@ -464,7 +470,7 @@ bool vgic_migrate_irq(struct vcpu *old, struct vcpu *ne= w, unsigned int irq) if ( !list_empty(&p->inflight) ) set_bit(GIC_IRQ_GUEST_MIGRATING, &p->status); =20 - spin_unlock_irqrestore(&old->arch.vgic.lock, flags); + spin_unlock_irqrestore(&old->arch.vgic->lock, flags); return true; } =20 @@ -516,12 +522,12 @@ void vgic_disable_irqs(struct vcpu *v, uint32_t r, un= signed int n) irq =3D i + (32 * n); v_target =3D vgic_get_target_vcpu(v, irq); =20 - 
spin_lock_irqsave(&v_target->arch.vgic.lock, flags); + spin_lock_irqsave(&v_target->arch.vgic->lock, flags); p =3D irq_to_pending(v_target, irq); clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status); gic_remove_from_lr_pending(v_target, p); desc =3D p->desc; - spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags); + spin_unlock_irqrestore(&v_target->arch.vgic->lock, flags); =20 if ( desc !=3D NULL ) { @@ -567,12 +573,12 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, uns= igned int n) while ( (i =3D find_next_bit(&mask, 32, i)) < 32 ) { irq =3D i + (32 * n); v_target =3D vgic_get_target_vcpu(v, irq); - spin_lock_irqsave(&v_target->arch.vgic.lock, flags); + spin_lock_irqsave(&v_target->arch.vgic->lock, flags); p =3D irq_to_pending(v_target, irq); set_bit(GIC_IRQ_GUEST_ENABLED, &p->status); if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE,= &p->status) ) gic_raise_guest_irq(v_target, irq, p->priority); - spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags); + spin_unlock_irqrestore(&v_target->arch.vgic->lock, flags); if ( p->desc !=3D NULL ) { irq_set_affinity(p->desc, cpumask_of(v_target->processor)); @@ -701,7 +707,7 @@ struct pending_irq *irq_to_pending(struct vcpu *v, unsi= gned int irq) /* Pending irqs allocation strategy: the first vgic.nr_spis irqs * are used for SPIs; the rests are used for per cpu irqs */ if ( irq < 32 ) - n =3D &v->arch.vgic.pending_irqs[irq]; + n =3D &v->arch.vgic->pending_irqs[irq]; else if ( is_lpi(irq) ) n =3D v->domain->arch.vgic.handler->lpi_to_pending(v->domain, irq); else @@ -734,16 +740,16 @@ void vgic_clear_pending_irqs(struct vcpu *v) struct pending_irq *p, *t; unsigned long flags; =20 - spin_lock_irqsave(&v->arch.vgic.lock, flags); - list_for_each_entry_safe ( p, t, &v->arch.vgic.inflight_irqs, inflight= ) + spin_lock_irqsave(&v->arch.vgic->lock, flags); + list_for_each_entry_safe ( p, t, &v->arch.vgic->inflight_irqs, infligh= t ) list_del_init(&p->inflight); gic_clear_pending_irqs(v); - 
spin_unlock_irqrestore(&v->arch.vgic.lock, flags); + spin_unlock_irqrestore(&v->arch.vgic->lock, flags); } =20 void vgic_remove_irq_from_queues(struct vcpu *v, struct pending_irq *p) { - ASSERT(spin_is_locked(&v->arch.vgic.lock)); + ASSERT(spin_is_locked(&v->arch.vgic->lock)); =20 clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status); list_del_init(&p->inflight); @@ -772,20 +778,20 @@ void vgic_inject_irq(struct domain *d, struct vcpu *v= , unsigned int virq, v =3D vgic_get_target_vcpu(d->vcpu[0], virq); }; =20 - spin_lock_irqsave(&v->arch.vgic.lock, flags); + spin_lock_irqsave(&v->arch.vgic->lock, flags); =20 n =3D irq_to_pending(v, virq); /* If an LPI has been removed, there is nothing to inject here. */ if ( unlikely(!n) ) { - spin_unlock_irqrestore(&v->arch.vgic.lock, flags); + spin_unlock_irqrestore(&v->arch.vgic->lock, flags); return; } =20 /* vcpu offline */ if ( test_bit(_VPF_down, &v->pause_flags) ) { - spin_unlock_irqrestore(&v->arch.vgic.lock, flags); + spin_unlock_irqrestore(&v->arch.vgic->lock, flags); return; } =20 @@ -804,7 +810,7 @@ void vgic_inject_irq(struct domain *d, struct vcpu *v, = unsigned int virq, if ( test_bit(GIC_IRQ_GUEST_ENABLED, &n->status) ) gic_raise_guest_irq(v, virq, priority); =20 - list_for_each_entry ( iter, &v->arch.vgic.inflight_irqs, inflight ) + list_for_each_entry ( iter, &v->arch.vgic->inflight_irqs, inflight ) { if ( iter->priority > priority ) { @@ -812,9 +818,9 @@ void vgic_inject_irq(struct domain *d, struct vcpu *v, = unsigned int virq, goto out; } } - list_add_tail(&n->inflight, &v->arch.vgic.inflight_irqs); + list_add_tail(&n->inflight, &v->arch.vgic->inflight_irqs); out: - spin_unlock_irqrestore(&v->arch.vgic.lock, flags); + spin_unlock_irqrestore(&v->arch.vgic->lock, flags); =20 /* we have a new higher priority irq, inject it into the guest */ vcpu_kick(v); @@ -924,7 +930,7 @@ void vgic_check_inflight_irqs_pending(struct vcpu *v, u= nsigned int rank, uint32_ =20 v_target =3D vgic_get_target_vcpu(v, irq); =20 - 
spin_lock_irqsave(&v_target->arch.vgic.lock, flags); + spin_lock_irqsave(&v_target->arch.vgic->lock, flags); =20 p =3D irq_to_pending(v_target, irq); =20 @@ -933,7 +939,7 @@ void vgic_check_inflight_irqs_pending(struct vcpu *v, u= nsigned int rank, uint32_ "%pv trying to clear pending interrupt %u.\n", v, irq); =20 - spin_unlock_irqrestore(&v_target->arch.vgic.lock, flags); + spin_unlock_irqrestore(&v_target->arch.vgic->lock, flags); } } =20 diff --git a/xen/arch/arm/vgic/vgic-init.c b/xen/arch/arm/vgic/vgic-init.c index f8d7d3a226..67f297797f 100644 --- a/xen/arch/arm/vgic/vgic-init.c +++ b/xen/arch/arm/vgic/vgic-init.c @@ -57,7 +57,7 @@ */ static void vgic_vcpu_early_init(struct vcpu *vcpu) { - struct vgic_cpu *vgic_cpu =3D &vcpu->arch.vgic; + struct vgic_cpu *vgic_cpu =3D vcpu->arch.vgic; unsigned int i; =20 INIT_LIST_HEAD(&vgic_cpu->ap_list_head); @@ -202,6 +202,10 @@ int vcpu_vgic_init(struct vcpu *v) { int ret =3D 0; =20 + v->arch.vgic =3D xzalloc(struct vgic_cpu); + if ( v->arch.vgic =3D=3D NULL ) + return -ENOMEM; + vgic_vcpu_early_init(v); =20 if ( gic_hw_version() =3D=3D GIC_V2 ) @@ -241,10 +245,12 @@ void domain_vgic_free(struct domain *d) =20 int vcpu_vgic_free(struct vcpu *v) { - struct vgic_cpu *vgic_cpu =3D &v->arch.vgic; + struct vgic_cpu *vgic_cpu =3D v->arch.vgic; =20 INIT_LIST_HEAD(&vgic_cpu->ap_list_head); =20 + xfree(vgic_cpu); + return 0; } =20 diff --git a/xen/arch/arm/vgic/vgic-v2.c b/xen/arch/arm/vgic/vgic-v2.c index 6a558089c5..e64d681dd2 100644 --- a/xen/arch/arm/vgic/vgic-v2.c +++ b/xen/arch/arm/vgic/vgic-v2.c @@ -56,8 +56,8 @@ void vgic_v2_setup_hw(paddr_t dbase, paddr_t cbase, paddr= _t csize, */ void vgic_v2_fold_lr_state(struct vcpu *vcpu) { - struct vgic_cpu *vgic_cpu =3D &vcpu->arch.vgic; - unsigned int used_lrs =3D vcpu->arch.vgic.used_lrs; + struct vgic_cpu *vgic_cpu =3D vcpu->arch.vgic; + unsigned int used_lrs =3D vcpu->arch.vgic->used_lrs; unsigned long flags; unsigned int lr; =20 diff --git a/xen/arch/arm/vgic/vgic.c 
b/xen/arch/arm/vgic/vgic.c index b2c0e1873a..146bd124c3 100644 --- a/xen/arch/arm/vgic/vgic.c +++ b/xen/arch/arm/vgic/vgic.c @@ -40,8 +40,8 @@ * When taking more than one ap_list_lock at the same time, always take the * lowest numbered VCPU's ap_list_lock first, so: * vcpuX->vcpu_id < vcpuY->vcpu_id: - * spin_lock(vcpuX->arch.vgic.ap_list_lock); - * spin_lock(vcpuY->arch.vgic.ap_list_lock); + * spin_lock(vcpuX->arch.vgic->ap_list_lock); + * spin_lock(vcpuY->arch.vgic->ap_list_lock); * * Since the VGIC must support injecting virtual interrupts from ISRs, we = have * to use the spin_lock_irqsave/spin_unlock_irqrestore versions of outer @@ -102,7 +102,7 @@ struct vgic_irq *vgic_get_irq(struct domain *d, struct = vcpu *vcpu, { /* SGIs and PPIs */ if ( intid <=3D VGIC_MAX_PRIVATE ) - return &vcpu->arch.vgic.private_irqs[intid]; + return &vcpu->arch.vgic->private_irqs[intid]; =20 /* SPIs */ if ( intid <=3D VGIC_MAX_SPI ) @@ -245,7 +245,7 @@ out: /* Must be called with the ap_list_lock held */ static void vgic_sort_ap_list(struct vcpu *vcpu) { - struct vgic_cpu *vgic_cpu =3D &vcpu->arch.vgic; + struct vgic_cpu *vgic_cpu =3D vcpu->arch.vgic; =20 ASSERT(spin_is_locked(&vgic_cpu->ap_list_lock)); =20 @@ -323,7 +323,7 @@ retry: =20 /* someone can do stuff here, which we re-check below */ =20 - spin_lock_irqsave(&vcpu->arch.vgic.ap_list_lock, flags); + spin_lock_irqsave(&vcpu->arch.vgic->ap_list_lock, flags); spin_lock(&irq->irq_lock); =20 /* @@ -341,7 +341,7 @@ retry: if ( unlikely(irq->vcpu || vcpu !=3D vgic_target_oracle(irq)) ) { spin_unlock(&irq->irq_lock); - spin_unlock_irqrestore(&vcpu->arch.vgic.ap_list_lock, flags); + spin_unlock_irqrestore(&vcpu->arch.vgic->ap_list_lock, flags); =20 spin_lock_irqsave(&irq->irq_lock, flags); goto retry; @@ -352,11 +352,11 @@ retry: * now in the ap_list. 
*/ vgic_get_irq_kref(irq); - list_add_tail(&irq->ap_list, &vcpu->arch.vgic.ap_list_head); + list_add_tail(&irq->ap_list, &vcpu->arch.vgic->ap_list_head); irq->vcpu =3D vcpu; =20 spin_unlock(&irq->irq_lock); - spin_unlock_irqrestore(&vcpu->arch.vgic.ap_list_lock, flags); + spin_unlock_irqrestore(&vcpu->arch.vgic->ap_list_lock, flags); =20 vcpu_kick(vcpu); =20 @@ -422,7 +422,7 @@ void vgic_inject_irq(struct domain *d, struct vcpu *vcp= u, unsigned int intid, */ static void vgic_prune_ap_list(struct vcpu *vcpu) { - struct vgic_cpu *vgic_cpu =3D &vcpu->arch.vgic; + struct vgic_cpu *vgic_cpu =3D vcpu->arch.vgic; struct vgic_irq *irq, *tmp; unsigned long flags; =20 @@ -487,8 +487,8 @@ retry: vcpuB =3D vcpu; } =20 - spin_lock_irqsave(&vcpuA->arch.vgic.ap_list_lock, flags); - spin_lock(&vcpuB->arch.vgic.ap_list_lock); + spin_lock_irqsave(&vcpuA->arch.vgic->ap_list_lock, flags); + spin_lock(&vcpuB->arch.vgic->ap_list_lock); spin_lock(&irq->irq_lock); =20 /* @@ -502,7 +502,7 @@ retry: */ if ( target_vcpu =3D=3D vgic_target_oracle(irq) ) { - struct vgic_cpu *new_cpu =3D &target_vcpu->arch.vgic; + struct vgic_cpu *new_cpu =3D target_vcpu->arch.vgic; =20 list_del(&irq->ap_list); irq->vcpu =3D target_vcpu; @@ -510,8 +510,8 @@ retry: } =20 spin_unlock(&irq->irq_lock); - spin_unlock(&vcpuB->arch.vgic.ap_list_lock); - spin_unlock_irqrestore(&vcpuA->arch.vgic.ap_list_lock, flags); + spin_unlock(&vcpuB->arch.vgic->ap_list_lock); + spin_unlock_irqrestore(&vcpuA->arch.vgic->ap_list_lock, flags); goto retry; } =20 @@ -542,7 +542,7 @@ static void vgic_set_underflow(struct vcpu *vcpu) /* Requires the ap_list_lock to be held. */ static int compute_ap_list_depth(struct vcpu *vcpu) { - struct vgic_cpu *vgic_cpu =3D &vcpu->arch.vgic; + struct vgic_cpu *vgic_cpu =3D vcpu->arch.vgic; struct vgic_irq *irq; int count =3D 0; =20 @@ -557,7 +557,7 @@ static int compute_ap_list_depth(struct vcpu *vcpu) /* Requires the VCPU's ap_list_lock to be held. 
*/ static void vgic_flush_lr_state(struct vcpu *vcpu) { - struct vgic_cpu *vgic_cpu =3D &vcpu->arch.vgic; + struct vgic_cpu *vgic_cpu =3D vcpu->arch.vgic; struct vgic_irq *irq; int count =3D 0; =20 @@ -583,7 +583,7 @@ static void vgic_flush_lr_state(struct vcpu *vcpu) } } =20 - vcpu->arch.vgic.used_lrs =3D count; + vcpu->arch.vgic->used_lrs =3D count; } =20 /** @@ -600,7 +600,7 @@ static void vgic_flush_lr_state(struct vcpu *vcpu) void vgic_sync_from_lrs(struct vcpu *vcpu) { /* An empty ap_list_head implies used_lrs =3D=3D 0 */ - if ( list_empty(&vcpu->arch.vgic.ap_list_head) ) + if ( list_empty(&vcpu->arch.vgic->ap_list_head) ) return; =20 vgic_fold_lr_state(vcpu); @@ -628,14 +628,14 @@ void vgic_sync_to_lrs(void) * and introducing additional synchronization mechanism doesn't change * this. */ - if ( list_empty(¤t->arch.vgic.ap_list_head) ) + if ( list_empty(¤t->arch.vgic->ap_list_head) ) return; =20 ASSERT(!local_irq_is_enabled()); =20 - spin_lock(¤t->arch.vgic.ap_list_lock); + spin_lock(¤t->arch.vgic->ap_list_lock); vgic_flush_lr_state(current); - spin_unlock(¤t->arch.vgic.ap_list_lock); + spin_unlock(¤t->arch.vgic->ap_list_lock); =20 gic_hw_ops->update_hcr_status(GICH_HCR_EN, 1); } @@ -652,7 +652,7 @@ void vgic_sync_to_lrs(void) */ int vgic_vcpu_pending_irq(struct vcpu *vcpu) { - struct vgic_cpu *vgic_cpu =3D &vcpu->arch.vgic; + struct vgic_cpu *vgic_cpu =3D vcpu->arch.vgic; struct vgic_irq *irq; unsigned long flags; int ret =3D 0; @@ -761,11 +761,11 @@ void vgic_free_virq(struct domain *d, unsigned int vi= rq) =20 void gic_dump_vgic_info(struct vcpu *v) { - struct vgic_cpu *vgic_cpu =3D &v->arch.vgic; + struct vgic_cpu *vgic_cpu =3D v->arch.vgic; struct vgic_irq *irq; unsigned long flags; =20 - spin_lock_irqsave(&v->arch.vgic.ap_list_lock, flags); + spin_lock_irqsave(&v->arch.vgic->ap_list_lock, flags); =20 if ( !list_empty(&vgic_cpu->ap_list_head) ) printk(" active or pending interrupts queued:\n"); @@ -781,7 +781,7 @@ void gic_dump_vgic_info(struct vcpu *v) 
         spin_unlock(&irq->irq_lock);
     }
 
-    spin_unlock_irqrestore(&v->arch.vgic.ap_list_lock, flags);
+    spin_unlock_irqrestore(&v->arch.vgic->ap_list_lock, flags);
 }
 
 void vgic_clear_pending_irqs(struct vcpu *v)
-- 
2.52.0

From nobody Thu Jan 8 12:48:21 2026
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Stefano Stabellini, Julien Grall, Bertrand Marquis,
    Michal Orzel, Volodymyr Babchuk
Subject: [PATCH v2 2/4] xen/arm: drop MAX_PAGES_PER_VCPU
Date: Thu, 18 Dec 2025 18:28:07 +0100
Message-ID: <74f1594aad235765002b59f2baa975cc8fe72f06.1766053253.git.oleksii.kurochko@gmail.com>

Now that the size of struct vcpu is smaller than PAGE_SIZE,
MAX_PAGES_PER_VCPU is no longer needed. Drop it and simplify
{alloc,free}_vcpu_struct() to allocate, clear and free a single page.

Signed-off-by: Oleksii Kurochko
---
Changes in v2:
 - New patch.
---
 xen/arch/arm/domain.c | 25 +++++--------------------
 1 file changed, 5 insertions(+), 20 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 47973f99d9..e566023340 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -473,36 +473,21 @@ void dump_pageframe_info(struct domain *d)
 
 }
 
-/*
- * The new VGIC has a bigger per-IRQ structure, so we need more than one
- * page on ARM64. Cowardly increase the limit in this case.
- */
-#if defined(CONFIG_NEW_VGIC) && defined(CONFIG_ARM_64)
-#define MAX_PAGES_PER_VCPU 2
-#else
-#define MAX_PAGES_PER_VCPU 1
-#endif
-
 struct vcpu *alloc_vcpu_struct(const struct domain *d)
 {
     struct vcpu *v;
 
-    BUILD_BUG_ON(sizeof(*v) > MAX_PAGES_PER_VCPU * PAGE_SIZE);
-    v = alloc_xenheap_pages(get_order_from_bytes(sizeof(*v)), 0);
-    if ( v != NULL )
-    {
-        unsigned int i;
-
-        for ( i = 0; i < DIV_ROUND_UP(sizeof(*v), PAGE_SIZE); i++ )
-            clear_page((void *)v + i * PAGE_SIZE);
-    }
+    BUILD_BUG_ON(sizeof(*v) > PAGE_SIZE);
+    v = alloc_xenheap_pages(0, 0);
+    if ( v )
+        clear_page(v);
 
     return v;
 }
 
 void free_vcpu_struct(struct vcpu *v)
 {
-    free_xenheap_pages(v, get_order_from_bytes(sizeof(*v)));
+    free_xenheap_page(v);
 }
 
 int arch_vcpu_create(struct vcpu *v)
-- 
2.52.0

From nobody Thu Jan 8 12:48:21 2026
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Stefano Stabellini, Julien Grall, Bertrand Marquis,
    Michal Orzel, Volodymyr Babchuk, Andrew Cooper, Anthony PERARD,
    Jan Beulich, Roger Pau Monné, Timothy Pearson, Alistair Francis,
    Bob Eshleman, Connor Davis
Subject: [PATCH v2 3/4] xen: move alloc/free_vcpu_struct() to common code
Date: Thu, 18 Dec 2025 18:28:08 +0100
Message-ID: <099753603c18bbba0db702746d394c2e77e15a4d.1766053253.git.oleksii.kurochko@gmail.com>
alloc_vcpu_struct() and free_vcpu_struct() contain little
architecture-specific logic and are suitable for sharing across
architectures, so move both helpers to common code.

To support the remaining architectural differences, introduce
arch_vcpu_struct_memflags(), allowing an architecture to override the
memory flags passed to alloc_xenheap_pages(). This is currently needed
only by x86, which may require MEMF_bits(32) for HVM guests using
shadow paging.

The Arm implementation of alloc/free_vcpu_struct() is removed and
replaced by the common version; the stub implementations are likewise
dropped from PPC and RISC-V.

Finally, make alloc_vcpu_struct() and free_vcpu_struct() static in
common/domain.c, as they are no longer used outside common code.

No functional changes.

Signed-off-by: Oleksii Kurochko
---
Changes in v2:
 - Rework alloc/free_vcpu_struct() to work with only one page.
 - Restore the comment about the MEMF_bits(32) restriction inside x86's
   arch_vcpu_struct_memflags().
 - Drop MAX_PAGES_PER_VCPU.
---
 xen/arch/arm/domain.c             | 17 -----------------
 xen/arch/ppc/stubs.c              | 10 ----------
 xen/arch/riscv/stubs.c            | 10 ----------
 xen/arch/x86/domain.c             | 17 ++---------------
 xen/arch/x86/include/asm/domain.h |  3 +++
 xen/common/domain.c               | 20 ++++++++++++++++++++
 xen/include/xen/domain.h          |  4 ----
 7 files changed, 25 insertions(+), 56 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index e566023340..507df807ed 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -473,23 +473,6 @@ void dump_pageframe_info(struct domain *d)
 
 }
 
-struct vcpu *alloc_vcpu_struct(const struct domain *d)
-{
-    struct vcpu *v;
-
-    BUILD_BUG_ON(sizeof(*v) > PAGE_SIZE);
-    v = alloc_xenheap_pages(0, 0);
-    if ( v )
-        clear_page(v);
-
-    return v;
-}
-
-void free_vcpu_struct(struct vcpu *v)
-{
-    free_xenheap_page(v);
-}
-
 int arch_vcpu_create(struct vcpu *v)
 {
     int rc = 0;
diff --git a/xen/arch/ppc/stubs.c b/xen/arch/ppc/stubs.c
index 9953ea1c6c..f7f6e7ed97 100644
--- a/xen/arch/ppc/stubs.c
+++ b/xen/arch/ppc/stubs.c
@@ -152,11 +152,6 @@ void dump_pageframe_info(struct domain *d)
     BUG_ON("unimplemented");
 }
 
-void free_vcpu_struct(struct vcpu *v)
-{
-    BUG_ON("unimplemented");
-}
-
 int arch_vcpu_create(struct vcpu *v)
 {
     BUG_ON("unimplemented");
@@ -264,11 +259,6 @@ void vcpu_kick(struct vcpu *v)
     BUG_ON("unimplemented");
 }
 
-struct vcpu *alloc_vcpu_struct(const struct domain *d)
-{
-    BUG_ON("unimplemented");
-}
-
 unsigned long hypercall_create_continuation(unsigned int op,
                                             const char *format, ...)
 {
diff --git a/xen/arch/riscv/stubs.c b/xen/arch/riscv/stubs.c
index fe7d85ee1d..579c4215c8 100644
--- a/xen/arch/riscv/stubs.c
+++ b/xen/arch/riscv/stubs.c
@@ -126,11 +126,6 @@ void dump_pageframe_info(struct domain *d)
     BUG_ON("unimplemented");
 }
 
-void free_vcpu_struct(struct vcpu *v)
-{
-    BUG_ON("unimplemented");
-}
-
 int arch_vcpu_create(struct vcpu *v)
 {
     BUG_ON("unimplemented");
@@ -238,11 +233,6 @@ void vcpu_kick(struct vcpu *v)
     BUG_ON("unimplemented");
 }
 
-struct vcpu *alloc_vcpu_struct(const struct domain *d)
-{
-    BUG_ON("unimplemented");
-}
-
 unsigned long hypercall_create_continuation(unsigned int op,
                                             const char *format, ...)
 {
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 7632d5e2d6..68c7503eda 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -493,28 +493,15 @@ unsigned int arch_domain_struct_memflags(void)
     return MEMF_bits(bits);
 }
 
-struct vcpu *alloc_vcpu_struct(const struct domain *d)
+unsigned int arch_vcpu_struct_memflags(const struct domain *d)
 {
-    struct vcpu *v;
     /*
      * This structure contains embedded PAE PDPTEs, used when an HVM guest
      * runs on shadow pagetables outside of 64-bit mode. In this case the CPU
      * may require that the shadow CR3 points below 4GB, and hence the whole
      * structure must satisfy this restriction. Thus we specify MEMF_bits(32).
      */
-    unsigned int memflags =
-        (is_hvm_domain(d) && paging_mode_shadow(d)) ? MEMF_bits(32) : 0;
-
-    BUILD_BUG_ON(sizeof(*v) > PAGE_SIZE);
-    v = alloc_xenheap_pages(0, memflags);
-    if ( v != NULL )
-        clear_page(v);
-    return v;
-}
-
-void free_vcpu_struct(struct vcpu *v)
-{
-    free_xenheap_page(v);
+    return (is_hvm_domain(d) && paging_mode_shadow(d)) ? MEMF_bits(32) : 0;
 }
 
 /* Initialise various registers to their architectural INIT/RESET state. */
diff --git a/xen/arch/x86/include/asm/domain.h b/xen/arch/x86/include/asm/domain.h
index 7e5cbd11a4..576f9202b4 100644
--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -15,6 +15,9 @@
 unsigned int arch_domain_struct_memflags(void);
 #define arch_domain_struct_memflags arch_domain_struct_memflags
 
+unsigned int arch_vcpu_struct_memflags(const struct domain *d);
+#define arch_vcpu_struct_memflags arch_vcpu_struct_memflags
+
 #define has_32bit_shinfo(d)    ((d)->arch.has_32bit_shinfo)
 
 /*
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 93c71bc766..92fc0684fc 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -392,6 +392,26 @@ static int vcpu_teardown(struct vcpu *v)
     return 0;
 }
 
+static struct vcpu *alloc_vcpu_struct(const struct domain *d)
+{
+#ifndef arch_vcpu_struct_memflags
+# define arch_vcpu_struct_memflags(d) 0
+#endif
+    struct vcpu *v;
+
+    BUILD_BUG_ON(sizeof(*v) > PAGE_SIZE);
+    v = alloc_xenheap_pages(0, arch_vcpu_struct_memflags(d));
+    if ( v )
+        clear_page(v);
+
+    return v;
+}
+
+static void free_vcpu_struct(struct vcpu *v)
+{
+    free_xenheap_page(v);
+}
+
 /*
  * Destoy a vcpu once all references to it have been dropped.  Used either
  * from domain_destroy()'s RCU path, or from the vcpu_create() error path
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 8aab05ae93..644f5ac3f2 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -70,10 +70,6 @@ void domid_free(domid_t domid);
 struct domain *alloc_domain_struct(void);
 void free_domain_struct(struct domain *d);
 
-/* Allocate/free a VCPU structure. */
-struct vcpu *alloc_vcpu_struct(const struct domain *d);
-void free_vcpu_struct(struct vcpu *v);
-
 /* Allocate/free a PIRQ structure. */
 #ifndef alloc_pirq_struct
 struct pirq *alloc_pirq_struct(struct domain *d);
-- 
2.52.0

From nobody Thu Jan 8 12:48:21 2026
From: Oleksii Kurochko
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko, Andrew Cooper, Anthony PERARD, Michal Orzel,
    Jan Beulich, Julien Grall, Roger Pau Monné, Stefano Stabellini
Subject: [PATCH v2 4/4] xen/common: make {alloc,free}_domain_struct() static
Date: Thu, 18 Dec 2025 18:28:09 +0100
Message-ID: <439f6e9dc1f35736024023d70ed7e1daf1ec294b.1766053253.git.oleksii.kurochko@gmail.com>

As {alloc,free}_domain_struct() are used only within common/domain.c,
declare them static and drop their declarations from xen/domain.h.

Signed-off-by: Oleksii Kurochko
Acked-by: Andrew Cooper
---
Changes in v2:
 - New patch.
---
 xen/common/domain.c      | 6 ++++--
 xen/include/xen/domain.h | 4 ----
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 92fc0684fc..7509dafd6f 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -690,6 +690,8 @@ static int domain_teardown(struct domain *d)
     return 0;
 }
 
+static void free_domain_struct(struct domain *d);
+
 /*
  * Destroy a domain once all references to it have been dropped.  Used either
  * from the RCU path, or from the domain_create() error path before the domain
@@ -819,7 +821,7 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
     return arch_sanitise_domain_config(config);
 }
 
-struct domain *alloc_domain_struct(void)
+static struct domain *alloc_domain_struct(void)
 {
 #ifndef arch_domain_struct_memflags
 # define arch_domain_struct_memflags() 0
@@ -835,7 +837,7 @@ struct domain *alloc_domain_struct(void)
     return d;
 }
 
-void free_domain_struct(struct domain *d)
+static void free_domain_struct(struct domain *d)
 {
     free_xenheap_page(d);
 }
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 644f5ac3f2..273717c31b 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -66,10 +66,6 @@ void domid_free(domid_t domid);
  * Arch-specifics.
  */
 
-/* Allocate/free a domain structure. */
-struct domain *alloc_domain_struct(void);
-void free_domain_struct(struct domain *d);
-
 /* Allocate/free a PIRQ structure. */
 #ifndef alloc_pirq_struct
 struct pirq *alloc_pirq_struct(struct domain *d);
-- 
2.52.0