From: Stewart Hildebrand
To: xen-devel@lists.xenproject.org
Cc: Volodymyr Babchuk, Stefano Stabellini, Julien Grall, Ian Campbell
Date: Fri, 15 Nov 2019 15:14:06 -0500
Message-ID: <20191115201407.45042-1-stewart.hildebrand@dornerworks.com>
In-Reply-To: <20191115200115.44890-1-stewart.hildebrand@dornerworks.com>
References: <20191115200115.44890-1-stewart.hildebrand@dornerworks.com>
X-Mailer: git-send-email 2.24.0
Subject: [Xen-devel] [RFC XEN PATCH v3 10/11] xen: arm: context switch vtimer PPI state.

From: Ian Campbell

... instead of artificially masking the timer interrupt in the timer
state and relying on the guest to unmask (which it isn't required to
do per the h/w spec, although Linux does).
By using the newly added hwppi save/restore functionality we make use
of the GICD I[SC]ACTIVER registers to save and restore the active
state of the interrupt, which prevents the nested invocations that
the current masking works around.

Signed-off-by: Ian Campbell
Signed-off-by: Stewart Hildebrand
---
v2: Rebased, in particular over Julien's passthrough series, which
    reworked a bunch of IRQ-related code. Also largely rewritten, since
    the precursor patches now lay very different groundwork.

v3: Address feedback from v2 [1]:
    * Remove the virt_timer_irqs performance counter since it is now
      unused.
    * Add a caveat to the comment about not using the I*ACTIVER
      register.
    * HACK: don't initialize pending_irq->irq in vtimer for the new
      vGIC, to allow us to build with CONFIG_NEW_VGIC=y.

[1] https://lists.xenproject.org/archives/html/xen-devel/2015-11/msg01065.html
---
Note: Regarding Stefano's comment in [2], I did test it with the call
to gic_hwppi_set_pending removed, and I was able to boot dom0.

[2] https://lists.xenproject.org/archives/html/xen-devel/2015-12/msg02683.html
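
Note for reviewers: struct hwppi_state and the gic_*_hwppi helpers used
in this patch (gic_hwppi_state_init, gic_hwppi_set_pending,
gic_save_and_mask_hwppi, gic_restore_hwppi, route_hwppi_to_current_vcpu)
are introduced by earlier patches in this series and are not shown here.
The underlying mechanism is the GICD active-state register pair: reading
GICD_ISACTIVER returns one active bit per interrupt, writing 1 to an
ISACTIVER bit sets the active state, and writing 1 to the corresponding
GICD_ICACTIVER bit clears it. The sketch below is illustrative only
(it assumes <stdint.h>/<stdbool.h>; the example_* names and the
gicd_readl/gicd_writel accessors are hypothetical, not the interface the
precursor patches add):

    /*
     * Illustrative sketch only -- not the real Xen interface. Each
     * GICD_I[SC]ACTIVER register covers 32 interrupts, one bit per
     * interrupt; PPIs (IRQs 16-31) all fall in register 0, which is
     * banked per CPU.
     */
    struct example_ppi_state {
        bool active;   /* active when the vCPU was switched out? */
    };

    static void example_save_and_clear_active(unsigned int irq,
                                              struct example_ppi_state *s)
    {
        uint32_t mask = 1u << (irq % 32);

        /* Hypothetical MMIO accessors taking a distributor offset. */
        s->active = gicd_readl(GICD_ISACTIVER + (irq / 32) * 4) & mask;
        if ( s->active )
            gicd_writel(mask, GICD_ICACTIVER + (irq / 32) * 4); /* deactivate */
    }

    static void example_restore_active(unsigned int irq,
                                       const struct example_ppi_state *s)
    {
        if ( s->active )
            /* Re-assert the active state for the incoming vCPU. */
            gicd_writel(1u << (irq % 32), GICD_ISACTIVER + (irq / 32) * 4);
    }

Clearing the active state when a vCPU is switched out, and re-asserting
it when the vCPU is switched back in, is what keeps the PPI logically
active across the time another vCPU runs, so the timer interrupt cannot
fire again (nested) before the guest has deactivated the first one.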
---
 xen/arch/arm/time.c              | 26 ++----------------
 xen/arch/arm/vtimer.c            | 45 +++++++++++++++++++++++++++++---
 xen/include/asm-arm/domain.h     |  1 +
 xen/include/asm-arm/perfc_defn.h |  1 -
 4 files changed, 44 insertions(+), 29 deletions(-)

diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
index 739bcf186c..e3a23b8e16 100644
--- a/xen/arch/arm/time.c
+++ b/xen/arch/arm/time.c
@@ -243,28 +243,6 @@ static void timer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
     }
 }
 
-static void vtimer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
-{
-    /*
-     * Edge-triggered interrupts can be used for the virtual timer. Even
-     * if the timer output signal is masked in the context switch, the
-     * GIC will keep track that of any interrupts raised while IRQS are
-     * disabled. As soon as IRQs are re-enabled, the virtual interrupt
-     * will be injected to Xen.
-     *
-     * If an IDLE vCPU was scheduled next then we should ignore the
-     * interrupt.
-     */
-    if ( unlikely(is_idle_vcpu(current)) )
-        return;
-
-    perfc_incr(virt_timer_irqs);
-
-    current->arch.virt_timer.ctl = READ_SYSREG32(CNTV_CTL_EL0);
-    WRITE_SYSREG32(current->arch.virt_timer.ctl | CNTx_CTL_MASK, CNTV_CTL_EL0);
-    vgic_inject_irq(current->domain, current, current->arch.virt_timer.irq, true);
-}
-
 /*
  * Arch timer interrupt really ought to be level triggered, since the
  * design of the timer/comparator mechanism is based around that
@@ -304,8 +282,8 @@ void init_timer_interrupt(void)
 
     request_irq(timer_irq[TIMER_HYP_PPI], 0, timer_interrupt,
                 "hyptimer", NULL);
-    request_irq(timer_irq[TIMER_VIRT_PPI], 0, vtimer_interrupt,
-                "virtimer", NULL);
+    route_hwppi_to_current_vcpu(timer_irq[TIMER_VIRT_PPI], "virtimer");
+
     request_irq(timer_irq[TIMER_PHYS_NONSECURE_PPI], 0, timer_interrupt,
                 "phytimer", NULL);
 
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index e6aebdac9e..6e3498952d 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -55,9 +55,19 @@ static void phys_timer_expired(void *data)
 static void virt_timer_expired(void *data)
 {
     struct vtimer *t = data;
-    t->ctl |= CNTx_CTL_MASK;
-    vgic_inject_irq(t->v->domain, t->v, t->irq, true);
-    perfc_incr(vtimer_virt_inject);
+    t->ctl |= CNTx_CTL_PENDING;
+    if ( !(t->ctl & CNTx_CTL_MASK) )
+    {
+        /*
+         * An edge triggered interrupt should now be pending. Since
+         * this timer can never expire while the domain is scheduled
+         * we know that the gic_restore_hwppi in virt_timer_restore
+         * will cause the real hwppi to occur and be routed.
+         */
+        gic_hwppi_set_pending(&t->ppi_state);
+        vcpu_unblock(t->v);
+        perfc_incr(vtimer_virt_inject);
+    }
 }
 
 int domain_vtimer_init(struct domain *d, struct xen_arch_domainconfig *config)
@@ -98,9 +108,14 @@ int domain_vtimer_init(struct domain *d, struct xen_arch_domainconfig *config)
 
 int vcpu_vtimer_init(struct vcpu *v)
 {
+#ifndef CONFIG_NEW_VGIC
+    struct pending_irq *p;
+#endif
     struct vtimer *t = &v->arch.phys_timer;
     bool d0 = is_hardware_domain(v->domain);
 
+    const unsigned host_vtimer_irq_ppi = timer_get_irq(TIMER_VIRT_PPI);
+
     /*
      * Hardware domain uses the hardware interrupts, guests get the virtual
      * platform.
@@ -118,10 +133,18 @@ int vcpu_vtimer_init(struct vcpu *v)
     init_timer(&t->timer, virt_timer_expired, t, v->processor);
     t->ctl = 0;
     t->irq = d0
-        ? timer_get_irq(TIMER_VIRT_PPI)
+        ? host_vtimer_irq_ppi
         : GUEST_TIMER_VIRT_PPI;
     t->v = v;
 
+#ifndef CONFIG_NEW_VGIC
+    p = irq_to_pending(v, t->irq);
+    p->irq = t->irq;
+#endif
+
+    gic_hwppi_state_init(&v->arch.virt_timer.ppi_state,
+                         host_vtimer_irq_ppi);
+
     v->arch.vtimer_initialized = 1;
 
     return 0;
@@ -149,6 +172,16 @@ void virt_timer_save(struct vcpu *v)
         set_timer(&v->arch.virt_timer.timer, ticks_to_ns(v->arch.virt_timer.cval +
                   v->domain->arch.virt_timer_base.offset - boot_count));
     }
+
+    /*
+     * Since the vtimer irq is a PPI we don't need to worry about
+     * racing against it becoming active while we are saving the
+     * state, since that requires the guest to be reading the IAR,
+     * as long as the guest is not using I*ACTIVER register which we
+     * don't yet implement.
+     */
+    gic_save_and_mask_hwppi(v, v->arch.virt_timer.irq,
+                            &v->arch.virt_timer.ppi_state);
 }
 
 void virt_timer_restore(struct vcpu *v)
@@ -162,6 +195,10 @@ void virt_timer_restore(struct vcpu *v)
     WRITE_SYSREG64(v->domain->arch.virt_timer_base.offset, CNTVOFF_EL2);
     WRITE_SYSREG64(v->arch.virt_timer.cval, CNTV_CVAL_EL0);
     WRITE_SYSREG32(v->arch.virt_timer.ctl, CNTV_CTL_EL0);
+
+    gic_restore_hwppi(v,
+                      v->arch.virt_timer.irq,
+                      &v->arch.virt_timer.ppi_state);
 }
 
 static bool vtimer_cntp_ctl(struct cpu_user_regs *regs, uint32_t *r, bool read)
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index c3f4cd5069..b8fe142960 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -51,6 +51,7 @@ struct vtimer {
     struct timer timer;
     uint32_t ctl;
     uint64_t cval;
+    struct hwppi_state ppi_state;
 };
 
 struct arch_domain
diff --git a/xen/include/asm-arm/perfc_defn.h b/xen/include/asm-arm/perfc_defn.h
index 6a83185163..198dd4eadb 100644
--- a/xen/include/asm-arm/perfc_defn.h
+++ b/xen/include/asm-arm/perfc_defn.h
@@ -70,7 +70,6 @@ PERFCOUNTER(guest_irqs, "#GUEST-IRQS")
 
 PERFCOUNTER(hyp_timer_irqs, "Hypervisor timer interrupts")
 PERFCOUNTER(phys_timer_irqs, "Physical timer interrupts")
-PERFCOUNTER(virt_timer_irqs, "Virtual timer interrupts")
 PERFCOUNTER(maintenance_irqs, "Maintenance interrupts")
 
 PERFCOUNTER(atomics_guest, "atomics: guest access")
-- 
2.24.0