From: Mohamed Mediouni <mohamed@unpredictable.fr>
To: qemu-devel@nongnu.org
Cc: Peter Maydell, Alexander Graf, Paolo Bonzini, qemu-arm@nongnu.org,
    Roman Bolshakov, Mads Ynddal, Phil Dennis-Jordan, Cameron Esfahani,
    Mohamed Mediouni
Subject: [PATCH v9 01/12] hw/intc: Add hvf vGIC interrupt controller support
Date: Tue, 27 Jan 2026 19:45:12 +0100
Message-ID: <20260127184523.5357-2-mohamed@unpredictable.fr>
In-Reply-To: <20260127184523.5357-1-mohamed@unpredictable.fr>
References: <20260127184523.5357-1-mohamed@unpredictable.fr>
X-Mailer: git-send-email 2.50.1

This opens up the door to nested virtualisation support.
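
As a rough usage sketch (not part of this patch: it assumes the rest of
this series wires the HVF in-kernel vGICv3 into the virt machine, and the
exact options shown are illustrative), an EL2-capable guest on an Apple
Silicon host could then be launched along these lines:

    qemu-system-aarch64 -accel hvf -cpu host \
        -M virt,gic-version=3,virtualization=on \
        -kernel Image -append "console=ttyAMA0" -nographic

Here -accel hvf selects the Hypervisor.framework accelerator and
virtualization=on asks the virt machine to expose EL2 to the guest, which
is what the in-kernel vGIC added below is ultimately needed for.
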
Signed-off-by: Mohamed Mediouni <mohamed@unpredictable.fr>
---
 hw/intc/arm_gicv3_hvf.c            | 735 +++++++++++++++++++++++++++++
 hw/intc/meson.build                |   1 +
 include/hw/intc/arm_gicv3_common.h |   1 +
 3 files changed, 737 insertions(+)
 create mode 100644 hw/intc/arm_gicv3_hvf.c

diff --git a/hw/intc/arm_gicv3_hvf.c b/hw/intc/arm_gicv3_hvf.c
new file mode 100644
index 0000000000..d6a46b7d53
--- /dev/null
+++ b/hw/intc/arm_gicv3_hvf.c
@@ -0,0 +1,735 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * ARM Generic Interrupt Controller using HVF platform support
+ *
+ * Copyright (c) 2025 Mohamed Mediouni
+ * Based on vGICv3 KVM code by Pavel Fedin
+ *
+ */
+
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "hw/intc/arm_gicv3_common.h"
+#include "qemu/error-report.h"
+#include "qemu/module.h"
+#include "system/runstate.h"
+#include "system/hvf.h"
+#include "system/hvf_int.h"
+#include "hvf_arm.h"
+#include "gicv3_internal.h"
+#include "vgic_common.h"
+#include "qom/object.h"
+#include "target/arm/cpregs.h"
+#include <Hypervisor/Hypervisor.h>
+
+struct HVFARMGICv3Class {
+    ARMGICv3CommonClass parent_class;
+    DeviceRealize parent_realize;
+    ResettablePhases parent_phases;
+};
+
+typedef struct HVFARMGICv3Class HVFARMGICv3Class;
+
+/* This is reusing the GICv3State typedef from ARM_GICV3_ITS_COMMON */
+DECLARE_OBJ_CHECKERS(GICv3State, HVFARMGICv3Class,
+                     HVF_GICV3, TYPE_HVF_GICV3);
+
+/*
+ * Loop through each distributor IRQ related register; since bits
+ * corresponding to SPIs and PPIs are RAZ/WI when affinity routing
+ * is enabled, we skip those.
+ */
+#define for_each_dist_irq_reg(_irq, _max, _field_width) \
+    for (_irq = GIC_INTERNAL; _irq < _max; _irq += (32 / _field_width))
+
+/*
+ * Wrap calls to the vGIC APIs to assert_hvf_ok()
+ * as a macro to keep the code clean.
+ */
+#define hv_gic_get_distributor_reg(offset, reg) \
+    assert_hvf_ok(hv_gic_get_distributor_reg(offset, reg))
+
+#define hv_gic_set_distributor_reg(offset, reg) \
+    assert_hvf_ok(hv_gic_set_distributor_reg(offset, reg))
+
+#define hv_gic_get_redistributor_reg(vcpu, reg, value) \
+    assert_hvf_ok(hv_gic_get_redistributor_reg(vcpu, reg, value))
+
+#define hv_gic_set_redistributor_reg(vcpu, reg, value) \
+    assert_hvf_ok(hv_gic_set_redistributor_reg(vcpu, reg, value))
+
+#define hv_gic_get_icc_reg(vcpu, reg, value) \
+    assert_hvf_ok(hv_gic_get_icc_reg(vcpu, reg, value))
+
+#define hv_gic_set_icc_reg(vcpu, reg, value) \
+    assert_hvf_ok(hv_gic_set_icc_reg(vcpu, reg, value))
+
+#define hv_gic_get_ich_reg(vcpu, reg, value) \
+    assert_hvf_ok(hv_gic_get_ich_reg(vcpu, reg, value))
+
+#define hv_gic_set_ich_reg(vcpu, reg, value) \
+    assert_hvf_ok(hv_gic_set_ich_reg(vcpu, reg, value))
+
+static void hvf_dist_get_priority(GICv3State *s, hv_gic_distributor_reg_t offset
+                                  , uint8_t *bmp)
+{
+    uint64_t reg;
+    uint32_t *field;
+    int irq;
+    field = (uint32_t *)(bmp);
+
+    for_each_dist_irq_reg(irq, s->num_irq, 8) {
+        hv_gic_get_distributor_reg(offset, &reg);
+        *field = reg;
+        offset += 4;
+        field++;
+    }
+}
+
+static void hvf_dist_put_priority(GICv3State *s, hv_gic_distributor_reg_t offset
+                                  , uint8_t *bmp)
+{
+    uint32_t reg, *field;
+    int irq;
+    field = (uint32_t *)(bmp);
+
+    for_each_dist_irq_reg(irq, s->num_irq, 8) {
+        reg = *field;
+        hv_gic_set_distributor_reg(offset, reg);
+        offset += 4;
+        field++;
+    }
+}
+
+static void hvf_dist_get_edge_trigger(GICv3State *s, hv_gic_distributor_reg_t offset,
+                                      uint32_t *bmp)
+{
+    uint64_t reg;
+    int irq;
+
+    for_each_dist_irq_reg(irq, s->num_irq, 2) {
+        hv_gic_get_distributor_reg(offset, &reg);
+        reg = half_unshuffle32(reg >> 1);
+        if (irq % 32 != 0) {
+            reg = (reg << 16);
+        }
+        *gic_bmp_ptr32(bmp, irq) |= reg;
+        offset += 4;
+    }
+}
+
+static void hvf_dist_put_edge_trigger(GICv3State *s, hv_gic_distributor_reg_t offset,
+                                      uint32_t *bmp)
+{
+    uint32_t reg;
+    int irq;
+
+    for_each_dist_irq_reg(irq, s->num_irq, 2) {
+        reg = *gic_bmp_ptr32(bmp, irq);
+        if (irq % 32 != 0) {
+            reg = (reg & 0xffff0000) >> 16;
+        } else {
+            reg = reg & 0xffff;
+        }
+        reg = half_shuffle32(reg) << 1;
+        hv_gic_set_distributor_reg(offset, reg);
+        offset += 4;
+    }
+}
+
+/* Read a bitmap register group from the HVF vGIC. */
+static void hvf_dist_getbmp(GICv3State *s, hv_gic_distributor_reg_t offset, uint32_t *bmp)
+{
+    uint64_t reg;
+    int irq;
+
+    for_each_dist_irq_reg(irq, s->num_irq, 1) {
+
+        hv_gic_get_distributor_reg(offset, &reg);
+        *gic_bmp_ptr32(bmp, irq) = reg;
+        offset += 4;
+    }
+}
+
+static void hvf_dist_putbmp(GICv3State *s, hv_gic_distributor_reg_t offset,
+                            hv_gic_distributor_reg_t clroffset, uint32_t *bmp)
+{
+    uint32_t reg;
+    int irq;
+
+    for_each_dist_irq_reg(irq, s->num_irq, 1) {
+        /*
+         * If this bitmap is a set/clear register pair, first write to the
+         * clear-reg to clear all bits before using the set-reg to write
+         * the 1 bits.
+         */
+        if (clroffset != 0) {
+            reg = 0;
+            hv_gic_set_distributor_reg(clroffset, reg);
+            clroffset += 4;
+        }
+        reg = *gic_bmp_ptr32(bmp, irq);
+        hv_gic_set_distributor_reg(offset, reg);
+        offset += 4;
+    }
+}
+
+static void hvf_gicv3_check(GICv3State *s)
+{
+    uint64_t reg;
+    uint32_t num_irq;
+
+    /* Sanity checking s->num_irq */
+    hv_gic_get_distributor_reg(HV_GIC_DISTRIBUTOR_REG_GICD_TYPER, &reg);
+    num_irq = ((reg & 0x1f) + 1) * 32;
+
+    if (num_irq < s->num_irq) {
+        error_report("Model requests %u IRQs, but HVF supports max %u",
+                     s->num_irq, num_irq);
+        abort();
+    }
+}
+
+static void hvf_gicv3_put_cpu_el2(CPUState *cpu_state, run_on_cpu_data arg)
+{
+    int num_pri_bits;
+
+    /* Redistributor state */
+    GICv3CPUState *c = arg.host_ptr;
+    hv_vcpu_t vcpu = c->cpu->accel->fd;
+
+    hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_VMCR_EL2, c->ich_vmcr_el2);
+    hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_HCR_EL2, c->ich_hcr_el2);
+
+    for (int i = 0; i < GICV3_LR_MAX; i++) {
+        hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_LR0_EL2 + i, c->ich_lr_el2[i]);
+    }
+
+    num_pri_bits = c->vpribits;
+
+    switch (num_pri_bits) {
+    case 7:
+        hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 3,
+                           c->ich_apr[GICV3_G0][3]);
+        hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 2,
+                           c->ich_apr[GICV3_G0][2]);
+        /* fall through */
+    case 6:
+        hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 1,
+                           c->ich_apr[GICV3_G0][1]);
+        /* fall through */
+    default:
+        hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2,
+                           c->ich_apr[GICV3_G0][0]);
+    }
+
+    switch (num_pri_bits) {
+    case 7:
+        hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 3,
+                           c->ich_apr[GICV3_G1NS][3]);
+        hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 2,
+                           c->ich_apr[GICV3_G1NS][2]);
+        /* fall through */
+    case 6:
+        hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 1,
+                           c->ich_apr[GICV3_G1NS][1]);
+        /* fall through */
+    default:
+        hv_gic_set_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2,
+                           c->ich_apr[GICV3_G1NS][0]);
+    }
+}
+
+static void hvf_gicv3_put_cpu(CPUState *cpu_state, run_on_cpu_data arg)
+{
+    uint32_t reg;
+    uint64_t reg64;
+    int i, num_pri_bits;
+
+    /* Redistributor state */
+    GICv3CPUState *c = arg.host_ptr;
+    hv_vcpu_t vcpu = c->cpu->accel->fd;
+
+    reg = c->gicr_waker;
+    hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_IGROUPR0, reg);
+
+    reg = c->gicr_igroupr0;
+    hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_IGROUPR0, reg);
+
+    reg = ~0;
+    hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ICENABLER0, reg);
+    reg = c->gicr_ienabler0;
+    hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISENABLER0, reg);
+
+    /* Restore config before pending so we treat level/edge correctly */
+    reg = half_shuffle32(c->edge_trigger >> 16) << 1;
+    hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ICFGR1, reg);
+
+    reg = ~0;
+    hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ICPENDR0, reg);
+    reg = c->gicr_ipendr0;
+    hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISPENDR0, reg);
+
+    reg = ~0;
+    hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ICACTIVER0, reg);
+    reg = c->gicr_iactiver0;
+    hv_gic_set_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISACTIVER0, reg);
+
+    for (i = 0; i < GIC_INTERNAL; i += 4) {
+        reg = c->gicr_ipriorityr[i] |
+              (c->gicr_ipriorityr[i + 1] << 8) |
+              (c->gicr_ipriorityr[i + 2] << 16) |
+              (c->gicr_ipriorityr[i + 3] << 24);
+        hv_gic_set_redistributor_reg(vcpu,
+            HV_GIC_REDISTRIBUTOR_REG_GICR_IPRIORITYR0 + i, reg);
+    }
+
+    /* CPU interface state */
+    hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_SRE_EL1, c->icc_sre_el1);
+
+    hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_CTLR_EL1,
+                       c->icc_ctlr_el1[GICV3_NS]);
+    hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_IGRPEN0_EL1,
+                       c->icc_igrpen[GICV3_G0]);
+    hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_IGRPEN1_EL1,
+                       c->icc_igrpen[GICV3_G1NS]);
+    hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_PMR_EL1, c->icc_pmr_el1);
+    hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_BPR0_EL1, c->icc_bpr[GICV3_G0]);
+    hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_BPR1_EL1, c->icc_bpr[GICV3_G1NS]);
+
+    num_pri_bits = ((c->icc_ctlr_el1[GICV3_NS] &
+                     ICC_CTLR_EL1_PRIBITS_MASK) >>
+                    ICC_CTLR_EL1_PRIBITS_SHIFT) + 1;
+
+    switch (num_pri_bits) {
+    case 7:
+        reg64 = c->icc_apr[GICV3_G0][3];
+        hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 3, reg64);
+        reg64 = c->icc_apr[GICV3_G0][2];
+        hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 2, reg64);
+        /* fall through */
+    case 6:
+        reg64 = c->icc_apr[GICV3_G0][1];
+        hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 1, reg64);
+        /* fall through */
+    default:
+        reg64 = c->icc_apr[GICV3_G0][0];
+        hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1, reg64);
+    }
+
+    switch (num_pri_bits) {
+    case 7:
+        reg64 = c->icc_apr[GICV3_G1NS][3];
+        hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 3, reg64);
+        reg64 = c->icc_apr[GICV3_G1NS][2];
+        hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 2, reg64);
+        /* fall through */
+    case 6:
+        reg64 = c->icc_apr[GICV3_G1NS][1];
+        hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 1, reg64);
+        /* fall through */
+    default:
+        reg64 = c->icc_apr[GICV3_G1NS][0];
+        hv_gic_set_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1, reg64);
+    }
+
+    /* Registers beyond this point are only used with nested virt */
+    if (c->gic->maint_irq) {
+        hvf_gicv3_put_cpu_el2(cpu_state, arg);
+    }
+}
+
+static void hvf_gicv3_put(GICv3State *s)
+{
+    uint32_t reg;
+    int ncpu, i;
+
+    hvf_gicv3_check(s);
+
+    reg = s->gicd_ctlr;
+    hv_gic_set_distributor_reg(HV_GIC_DISTRIBUTOR_REG_GICD_CTLR, reg);
+
+    /* per-CPU state */
+
+    for (ncpu = 0; ncpu < s->num_cpu; ncpu++) {
+        run_on_cpu_data data;
+        data.host_ptr = &s->cpu[ncpu];
+        run_on_cpu(s->cpu[ncpu].cpu, hvf_gicv3_put_cpu, data);
+    }
+
+    /* s->enable bitmap -> GICD_ISENABLERn */
+    hvf_dist_putbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISENABLER0
+                    , HV_GIC_DISTRIBUTOR_REG_GICD_ICENABLER0, s->enabled);
+
+    /* s->group bitmap -> GICD_IGROUPRn */
+    hvf_dist_putbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_IGROUPR0
+                    , 0, s->group);
+
+    /* Restore targets before pending to ensure the pending state is set on
+     * the appropriate CPU interfaces in the kernel
+     */
+
+    /* s->gicd_irouter[irq] -> GICD_IROUTERn */
+    for (i = GIC_INTERNAL; i < s->num_irq; i++) {
+        uint32_t offset = HV_GIC_DISTRIBUTOR_REG_GICD_IROUTER32 + (8 * i)
+                          - (8 * GIC_INTERNAL);
+        hv_gic_set_distributor_reg(offset, s->gicd_irouter[i]);
+    }
+
+    /*
+     * s->trigger bitmap -> GICD_ICFGRn
+     * (restore configuration registers before pending IRQs so we treat
+     * level/edge correctly)
+     */
+    hvf_dist_put_edge_trigger(s, HV_GIC_DISTRIBUTOR_REG_GICD_ICFGR0, s->edge_trigger);
+
+    /* s->pending bitmap -> GICD_ISPENDRn */
+    hvf_dist_putbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISPENDR0,
+                    HV_GIC_DISTRIBUTOR_REG_GICD_ICPENDR0, s->pending);
+
+    /* s->active bitmap -> GICD_ISACTIVERn */
+    hvf_dist_putbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISACTIVER0,
+                    HV_GIC_DISTRIBUTOR_REG_GICD_ICACTIVER0, s->active);
+
+    /* s->gicd_ipriority[] -> GICD_IPRIORITYRn */
+    hvf_dist_put_priority(s, HV_GIC_DISTRIBUTOR_REG_GICD_IPRIORITYR0, s->gicd_ipriority);
+}
+
+static void hvf_gicv3_get_cpu_el2(CPUState *cpu_state, run_on_cpu_data arg)
+{
+    int num_pri_bits;
+
+    /* Redistributor state */
+    GICv3CPUState *c = arg.host_ptr;
+    hv_vcpu_t vcpu = c->cpu->accel->fd;
+
+    hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_VMCR_EL2, &c->ich_vmcr_el2);
+    hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_HCR_EL2, &c->ich_hcr_el2);
+
+    for (int i = 0; i < GICV3_LR_MAX; i++) {
+        hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_LR0_EL2 + i, &c->ich_lr_el2[i]);
+    }
+
+    num_pri_bits = c->vpribits;
+
+    switch (num_pri_bits) {
+    case 7:
+        hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 3,
+                           &c->ich_apr[GICV3_G0][3]);
+        hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 2,
+                           &c->ich_apr[GICV3_G0][2]);
+        /* fall through */
+    case 6:
+        hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2 + 1,
+                           &c->ich_apr[GICV3_G0][1]);
+        /* fall through */
+    default:
+        hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP0R0_EL2,
+                           &c->ich_apr[GICV3_G0][0]);
+    }
+
+    switch (num_pri_bits) {
+    case 7:
+        hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 3,
+                           &c->ich_apr[GICV3_G1NS][3]);
+        hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 2,
+                           &c->ich_apr[GICV3_G1NS][2]);
+        /* fall through */
+    case 6:
+        hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2 + 1,
+                           &c->ich_apr[GICV3_G1NS][1]);
+        /* fall through */
+    default:
+        hv_gic_get_ich_reg(vcpu, HV_GIC_ICH_REG_AP1R0_EL2,
+                           &c->ich_apr[GICV3_G1NS][0]);
+    }
+}
+
+static void hvf_gicv3_get_cpu(CPUState *cpu_state, run_on_cpu_data arg)
+{
+    uint64_t reg;
+    int i, num_pri_bits;
+
+    /* Redistributor state */
+    GICv3CPUState *c = arg.host_ptr;
+    hv_vcpu_t vcpu = c->cpu->accel->fd;
+
+    hv_gic_get_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_IGROUPR0,
+                                 &reg);
+    c->gicr_igroupr0 = reg;
+    hv_gic_get_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISENABLER0,
+                                 &reg);
+    c->gicr_ienabler0 = reg;
+    hv_gic_get_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ICFGR1,
+                                 &reg);
+    c->edge_trigger = half_unshuffle32(reg >> 1) << 16;
+    hv_gic_get_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISPENDR0,
+                                 &reg);
+    c->gicr_ipendr0 = reg;
+    hv_gic_get_redistributor_reg(vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_ISACTIVER0,
+                                 &reg);
+    c->gicr_iactiver0 = reg;
+
+    for (i = 0; i < GIC_INTERNAL; i += 4) {
+        hv_gic_get_redistributor_reg(
+            vcpu, HV_GIC_REDISTRIBUTOR_REG_GICR_IPRIORITYR0 + i, &reg);
+        c->gicr_ipriorityr[i] = extract32(reg, 0, 8);
+        c->gicr_ipriorityr[i + 1] = extract32(reg, 8, 8);
+        c->gicr_ipriorityr[i + 2] = extract32(reg, 16, 8);
+        c->gicr_ipriorityr[i + 3] = extract32(reg, 24, 8);
+    }
+
+    /* CPU interface */
+    hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_SRE_EL1, &c->icc_sre_el1);
+
+    hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_CTLR_EL1,
+                       &c->icc_ctlr_el1[GICV3_NS]);
+    hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_IGRPEN0_EL1,
+                       &c->icc_igrpen[GICV3_G0]);
+    hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_IGRPEN1_EL1,
+                       &c->icc_igrpen[GICV3_G1NS]);
+    hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_PMR_EL1, &c->icc_pmr_el1);
+    hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_BPR0_EL1, &c->icc_bpr[GICV3_G0]);
+    hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_BPR1_EL1, &c->icc_bpr[GICV3_G1NS]);
+    num_pri_bits = ((c->icc_ctlr_el1[GICV3_NS] & ICC_CTLR_EL1_PRIBITS_MASK) >>
+                    ICC_CTLR_EL1_PRIBITS_SHIFT) +
+                   1;
+
+    switch (num_pri_bits) {
+    case 7:
+        hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 3,
+                           &c->icc_apr[GICV3_G0][3]);
+        hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 2,
+                           &c->icc_apr[GICV3_G0][2]);
+        /* fall through */
+    case 6:
+        hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1 + 1,
+                           &c->icc_apr[GICV3_G0][1]);
+        /* fall through */
+    default:
+        hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP0R0_EL1,
+                           &c->icc_apr[GICV3_G0][0]);
+    }
+
+    switch (num_pri_bits) {
+    case 7:
+        hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 3,
+                           &c->icc_apr[GICV3_G1NS][3]);
+        hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 2,
+                           &c->icc_apr[GICV3_G1NS][2]);
+        /* fall through */
+    case 6:
+        hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1 + 1,
+                           &c->icc_apr[GICV3_G1NS][1]);
+        /* fall through */
+    default:
+        hv_gic_get_icc_reg(vcpu, HV_GIC_ICC_REG_AP1R0_EL1,
+                           &c->icc_apr[GICV3_G1NS][0]);
+    }
+
+    /* Registers beyond this point are only used with nested virt */
+    if (c->gic->maint_irq) {
+        hvf_gicv3_get_cpu_el2(cpu_state, arg);
+    }
+}
+
+static void hvf_gicv3_get(GICv3State *s)
+{
+    uint64_t reg;
+    int ncpu, i;
+
+    hvf_gicv3_check(s);
+
+    hv_gic_get_distributor_reg(HV_GIC_DISTRIBUTOR_REG_GICD_CTLR, &reg);
+    s->gicd_ctlr = reg;
+
+    /* Redistributor state (one per CPU) */
+
+    for (ncpu = 0; ncpu < s->num_cpu; ncpu++) {
+        run_on_cpu_data data;
+        data.host_ptr = &s->cpu[ncpu];
+        run_on_cpu(s->cpu[ncpu].cpu, hvf_gicv3_get_cpu, data);
+    }
+
+    /* GICD_IGROUPRn -> s->group bitmap */
+    hvf_dist_getbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_IGROUPR0, s->group);
+
+    /* GICD_ISENABLERn -> s->enabled bitmap */
+    hvf_dist_getbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISENABLER0, s->enabled);
+
+    /* GICD_ISPENDRn -> s->pending bitmap */
+    hvf_dist_getbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISPENDR0, s->pending);
+
+    /* GICD_ISACTIVERn -> s->active bitmap */
+    hvf_dist_getbmp(s, HV_GIC_DISTRIBUTOR_REG_GICD_ISACTIVER0, s->active);
+
+    /* GICD_ICFGRn -> s->trigger bitmap */
+    hvf_dist_get_edge_trigger(s, HV_GIC_DISTRIBUTOR_REG_GICD_ICFGR0
+                              , s->edge_trigger);
+
+    /* GICD_IPRIORITYRn -> s->gicd_ipriority[] */
+    hvf_dist_get_priority(s, HV_GIC_DISTRIBUTOR_REG_GICD_IPRIORITYR0
+                          , s->gicd_ipriority);
+
+    /* GICD_IROUTERn -> s->gicd_irouter[irq] */
+    for (i = GIC_INTERNAL; i < s->num_irq; i++) {
+        uint32_t offset = HV_GIC_DISTRIBUTOR_REG_GICD_IROUTER32
+                          + (8 * i) - (8 * GIC_INTERNAL);
+        hv_gic_get_distributor_reg(offset, &s->gicd_irouter[i]);
+    }
+}
+
+static void hvf_gicv3_set_irq(void *opaque, int irq, int level)
+{
+    GICv3State *s = opaque;
+    if (irq > s->num_irq) {
+        return;
+    }
+    hv_gic_set_spi(GIC_INTERNAL + irq, !!level);
+}
+
+static void hvf_gicv3_icc_reset(CPUARMState *env, const ARMCPRegInfo *ri)
+{
+    GICv3CPUState *c;
+
+    c = env->gicv3state;
+    c->icc_pmr_el1 = 0;
+    /*
+     * Architecturally the reset value of the ICC_BPR registers
+     * is UNKNOWN. We set them all to 0 here; when the kernel
+     * uses these values to program the ICH_VMCR_EL2 fields that
+     * determine the guest-visible ICC_BPR register values, the
+     * hardware's "writing a value less than the minimum sets
+     * the field to the minimum value" behaviour will result in
+     * them effectively resetting to the correct minimum value
+     * for the host GIC.
+     */
+    c->icc_bpr[GICV3_G0] = 0;
+    c->icc_bpr[GICV3_G1] = 0;
+    c->icc_bpr[GICV3_G1NS] = 0;
+
+    c->icc_sre_el1 = 0x7;
+    memset(c->icc_apr, 0, sizeof(c->icc_apr));
+    memset(c->icc_igrpen, 0, sizeof(c->icc_igrpen));
+}
+
+static void hvf_gicv3_reset_hold(Object *obj, ResetType type)
+{
+    GICv3State *s = ARM_GICV3_COMMON(obj);
+    HVFARMGICv3Class *kgc = HVF_GICV3_GET_CLASS(s);
+
+    if (kgc->parent_phases.hold) {
+        kgc->parent_phases.hold(obj, type);
+    }
+
+    hvf_gicv3_put(s);
+}
+
+
+/*
+ * The GIC CPU interface registers need to be reset on CPU reset.
+ * To have hvf_gicv3_icc_reset() called on CPU reset, we register the
+ * ARMCPRegInfo below. As we reset the whole CPU interface under a single
+ * register reset, we define only one CPU interface register instead
+ * of defining all of them.
+ */
+static const ARMCPRegInfo gicv3_cpuif_reginfo[] = {
+    { .name = "ICC_CTLR_EL1", .state = ARM_CP_STATE_BOTH,
+      .opc0 = 3, .opc1 = 0, .crn = 12, .crm = 12, .opc2 = 4,
+      /*
+       * If ARM_CP_NOP is used, resetfn is not called,
+       * so ARM_CP_NO_RAW is the appropriate type.
+       */
+      .type = ARM_CP_NO_RAW,
+      .access = PL1_RW,
+      .readfn = arm_cp_read_zero,
+      .writefn = arm_cp_write_ignore,
+      /*
+       * We hang the whole cpu interface reset routine off here
+       * rather than parcelling it out into one little function
+       * per register
+       */
+      .resetfn = hvf_gicv3_icc_reset,
+    },
+};
+
+static void hvf_gicv3_realize(DeviceState *dev, Error **errp)
+{
+    ERRP_GUARD();
+    GICv3State *s = HVF_GICV3(dev);
+    HVFARMGICv3Class *kgc = HVF_GICV3_GET_CLASS(s);
+    int i;
+
+    kgc->parent_realize(dev, errp);
+    if (*errp) {
+        return;
+    }
+
+    if (s->revision != 3) {
+        error_setg(errp, "unsupported GIC revision %d for platform GIC",
+                   s->revision);
+    }
+
+    if (s->security_extn) {
+        error_setg(errp, "the platform vGICv3 does not implement the "
+                   "security extensions");
+        return;
+    }
+
+    if (s->nmi_support) {
+        error_setg(errp, "NMI is not supported with the platform GIC");
+        return;
+    }
+
+    if (s->nb_redist_regions > 1) {
+        error_setg(errp, "Multiple VGICv3 redistributor regions are not "
+                   "supported by HVF");
+        error_append_hint(errp, "A maximum of %d VCPUs can be used",
+                          s->redist_region_count[0]);
+        return;
+    }
+
+    gicv3_init_irqs_and_mmio(s, hvf_gicv3_set_irq, NULL);
+
+    for (i = 0; i < s->num_cpu; i++) {
+        ARMCPU *cpu = ARM_CPU(qemu_get_cpu(i));
+
+        define_arm_cp_regs(cpu, gicv3_cpuif_reginfo);
+    }
+
+    if (s->maint_irq && s->maint_irq != HV_GIC_INT_MAINTENANCE) {
+        error_setg(errp, "vGIC maintenance IRQ mismatch with the hardcoded one in HVF.");
+        return;
+    }
+}
+
+static void hvf_gicv3_class_init(ObjectClass *klass, const void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+    ResettableClass *rc = RESETTABLE_CLASS(klass);
+    ARMGICv3CommonClass *agcc = ARM_GICV3_COMMON_CLASS(klass);
+    HVFARMGICv3Class *kgc = HVF_GICV3_CLASS(klass);
+
+    agcc->pre_save = hvf_gicv3_get;
+    agcc->post_load = hvf_gicv3_put;
+
+    device_class_set_parent_realize(dc, hvf_gicv3_realize,
+                                    &kgc->parent_realize);
+    resettable_class_set_parent_phases(rc, NULL, hvf_gicv3_reset_hold, NULL,
+                                       &kgc->parent_phases);
+}
+
+static const TypeInfo hvf_arm_gicv3_info = {
+    .name = TYPE_HVF_GICV3,
+    .parent = TYPE_ARM_GICV3_COMMON,
+    .instance_size = sizeof(GICv3State),
+    .class_init = hvf_gicv3_class_init,
+    .class_size = sizeof(HVFARMGICv3Class),
+};
+
+static void hvf_gicv3_register_types(void)
+{
+    type_register_static(&hvf_arm_gicv3_info);
+}
+
+type_init(hvf_gicv3_register_types)
diff --git a/hw/intc/meson.build b/hw/intc/meson.build
index 96742df090..b7baf8a0f6 100644
--- a/hw/intc/meson.build
+++ b/hw/intc/meson.build
@@ -42,6 +42,7 @@ arm_common_ss.add(when: 'CONFIG_ARM_GIC', if_true: files('arm_gicv3_cpuif_common
 arm_common_ss.add(when: 'CONFIG_ARM_GICV3', if_true: files('arm_gicv3_cpuif.c'))
 specific_ss.add(when: 'CONFIG_ARM_GIC_KVM', if_true: files('arm_gic_kvm.c'))
 specific_ss.add(when: ['CONFIG_WHPX', 'TARGET_AARCH64'], if_true: files('arm_gicv3_whpx.c'))
+specific_ss.add(when: ['CONFIG_HVF', 'CONFIG_ARM_GICV3'], if_true: files('arm_gicv3_hvf.c'))
 specific_ss.add(when: ['CONFIG_ARM_GIC_KVM', 'TARGET_AARCH64'], if_true: files('arm_gicv3_kvm.c', 'arm_gicv3_its_kvm.c'))
 arm_common_ss.add(when: 'CONFIG_ARM_V7M', if_true: files('armv7m_nvic.c'))
 specific_ss.add(when: 'CONFIG_GRLIB', if_true: files('grlib_irqmp.c'))
diff --git a/include/hw/intc/arm_gicv3_common.h b/include/hw/intc/arm_gicv3_common.h
index c55cf18120..9adcab0a0c 100644
--- a/include/hw/intc/arm_gicv3_common.h
+++ b/include/hw/intc/arm_gicv3_common.h
@@ -315,6 +315,7 @@ DECLARE_OBJ_CHECKERS(GICv3State, ARMGICv3CommonClass,
 
 /* Types for GICv3 kernel-irqchip */
 #define TYPE_WHPX_GICV3 "whpx-arm-gicv3"
+#define TYPE_HVF_GICV3 "hvf-arm-gicv3"
 
 struct ARMGICv3CommonClass {
     /*< private >*/
-- 
2.50.1 (Apple Git-155)