From: Atish Patra
To: qemu-devel@nongnu.org
Cc: Atish Patra, Alistair Francis, Bin Meng, Daniel Henrique Barboza, Liu Zhiwei, Palmer Dabbelt, qemu-riscv@nongnu.org, Weiwei Li, kaiwenxue1@gmail.com
Subject: [PATCH RFC 8/8] target/riscv: Add counter delegation/configuration support
Date: Fri, 16 Feb 2024 16:01:34 -0800
Message-Id: <20240217000134.3634191-9-atishp@rivosinc.com>
In-Reply-To: <20240217000134.3634191-1-atishp@rivosinc.com>
References: <20240217000134.3634191-1-atishp@rivosinc.com>

From: Kaiwen Xue

Smcdeleg/Ssccfg add support for counter delegation via S*indcsr and Ssccfg.
This also adds a new shadow CSR, scountinhibit, and an menvcfg enable bit
(CDE) to enable this extension and scountovf virtualization.
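For review context only (not part of the patch): a minimal, host-runnable C
sketch of the siselect/sireg* routing that rmw_xireg_cd() implements below.
The names CD_FIRST, HPM_CFG_SLOTS and the SIREG* enum are illustrative
assumptions standing in for ISELECT_CD_FIRST and the CSR_SIREG* numbers used
in the real code; only the mapping logic mirrors the patch.

/*
 * Sketch of the counter-delegation indirect register decode.
 * Assumptions: CD_FIRST stands in for ISELECT_CD_FIRST, and the first
 * HPM_CFG_SLOTS indices of sireg2/sireg5 map to cycle/instret config.
 */
#include <stdio.h>

enum { SIREG, SIREG2, SIREG4, SIREG5 };   /* indirect access registers */
#define CD_FIRST      0x40                /* assumed first CD siselect value */
#define HPM_CFG_SLOTS 3                   /* cyclecfg, <reserved>, instretcfg */

static const char *cd_target(int sireg, unsigned isel)
{
    unsigned idx = isel - CD_FIRST;

    if (idx == 1) {
        return "reserved";                /* counter 1 (time) is not delegated */
    }
    switch (sireg) {
    case SIREG:  return "mhpmcounter (low half)";
    case SIREG4: return "mhpmcounter (high half, RV32 only)";
    case SIREG2: return idx < HPM_CFG_SLOTS ? "mcyclecfg/minstretcfg"
                                            : "mhpmevent";
    case SIREG5: return idx < HPM_CFG_SLOTS ? "mcyclecfgh/minstretcfgh (RV32)"
                                            : "mhpmeventh (RV32)";
    default:     return "unmapped";
    }
}

int main(void)
{
    /* e.g. siselect = CD_FIRST + 3 selects hpmcounter3/hpmevent3 state */
    printf("%s\n", cd_target(SIREG2, CD_FIRST + 3));   /* -> mhpmevent */
    return 0;
}

The same decode appears in the switch statement of rmw_xireg_cd() in the
diff, gated there by menvcfg.CDE and mcounteren.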
Signed-off-by: Kaiwen Xue
Co-developed-by: Atish Patra
Signed-off-by: Atish Patra
---
 target/riscv/csr.c | 307 +++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 294 insertions(+), 13 deletions(-)

diff --git a/target/riscv/csr.c b/target/riscv/csr.c
index d5218a47ffbf..3542c522ba07 100644
--- a/target/riscv/csr.c
+++ b/target/riscv/csr.c
@@ -366,6 +366,21 @@ static int aia_smode32(CPURISCVState *env, int csrno)
     return smode32(env, csrno);
 }
 
+static RISCVException scountinhibit_pred(CPURISCVState *env, int csrno)
+{
+    RISCVCPU *cpu = env_archcpu(env);
+
+    if (!cpu->cfg.ext_ssccfg || !cpu->cfg.ext_smcdeleg) {
+        return RISCV_EXCP_ILLEGAL_INST;
+    }
+
+    if (env->virt_enabled) {
+        return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
+    }
+
+    return smode(env, csrno);
+}
+
 static RISCVException sxcsrind_smode(CPURISCVState *env, int csrno)
 {
     RISCVCPU *cpu = env_archcpu(env);
@@ -1089,9 +1104,9 @@ done:
     return result;
 }
 
-static int write_mhpmcounter(CPURISCVState *env, int csrno, target_ulong val)
+static RISCVException riscv_pmu_write_ctr(CPURISCVState *env, target_ulong val,
+                                          uint32_t ctr_idx)
 {
-    int ctr_idx = csrno - CSR_MCYCLE;
     PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
     uint64_t mhpmctr_val = val;
 
@@ -1115,9 +1130,9 @@ static int write_mhpmcounter(CPURISCVState *env, int csrno, target_ulong val)
     return RISCV_EXCP_NONE;
 }
 
-static int write_mhpmcounterh(CPURISCVState *env, int csrno, target_ulong val)
+static RISCVException riscv_pmu_write_ctrh(CPURISCVState *env, target_ulong val,
+                                           uint32_t ctr_idx)
 {
-    int ctr_idx = csrno - CSR_MCYCLEH;
     PMUCTRState *counter = &env->pmu_ctrs[ctr_idx];
     uint64_t mhpmctr_val = counter->mhpmcounter_val;
     uint64_t mhpmctrh_val = val;
@@ -1138,6 +1153,20 @@ static int write_mhpmcounterh(CPURISCVState *env, int csrno, target_ulong val)
     return RISCV_EXCP_NONE;
 }
 
+static int write_mhpmcounter(CPURISCVState *env, int csrno, target_ulong val)
+{
+    int ctr_idx = csrno - CSR_MCYCLE;
+
+    return riscv_pmu_write_ctr(env, val, ctr_idx);
+}
+
+static int write_mhpmcounterh(CPURISCVState *env, int csrno, target_ulong val)
+{
+    int ctr_idx = csrno - CSR_MCYCLEH;
+
+    return riscv_pmu_write_ctrh(env, val, ctr_idx);
+}
+
 static RISCVException riscv_pmu_read_ctr(CPURISCVState *env, target_ulong *val,
                                          bool upper_half, uint32_t ctr_idx)
 {
@@ -1207,6 +1236,167 @@ static int read_hpmcounterh(CPURISCVState *env, int csrno, target_ulong *val)
     return riscv_pmu_read_ctr(env, val, true, ctr_index);
 }
 
+static int rmw_cd_mhpmcounter(CPURISCVState *env, int ctr_idx,
+                              target_ulong *val, target_ulong new_val,
+                              target_ulong wr_mask)
+{
+    if (wr_mask != 0 && wr_mask != -1) {
+        return -EINVAL;
+    }
+
+    if (!wr_mask && val) {
+        riscv_pmu_read_ctr(env, val, false, ctr_idx);
+    } else if (wr_mask) {
+        riscv_pmu_write_ctr(env, new_val, ctr_idx);
+    } else {
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+static int rmw_cd_mhpmcounterh(CPURISCVState *env, int ctr_idx,
+                               target_ulong *val, target_ulong new_val,
+                               target_ulong wr_mask)
+{
+    if (wr_mask != 0 && wr_mask != -1) {
+        return -EINVAL;
+    }
+
+    if (!wr_mask && val) {
+        riscv_pmu_read_ctr(env, val, true, ctr_idx);
+    } else if (wr_mask) {
+        riscv_pmu_write_ctrh(env, new_val, ctr_idx);
+    } else {
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+static int rmw_cd_mhpmevent(CPURISCVState *env, int evt_index,
+                            target_ulong *val, target_ulong new_val,
+                            target_ulong wr_mask)
+{
+    uint64_t mhpmevt_val = new_val;
+
+    if (wr_mask != 0 && wr_mask != -1) {
+        return -EINVAL;
+    }
+
+    if (!wr_mask && val) {
+        *val = env->mhpmevent_val[evt_index];
+        if (riscv_cpu_cfg(env)->ext_sscofpmf) {
+            *val &= ~MHPMEVENT_BIT_MINH;
+        }
+    } else if (wr_mask) {
+        wr_mask &= ~MHPMEVENT_BIT_MINH;
+        mhpmevt_val = (new_val & wr_mask) |
+                      (env->mhpmevent_val[evt_index] & ~wr_mask);
+        if (riscv_cpu_mxl(env) == MXL_RV32) {
+            mhpmevt_val = mhpmevt_val |
+                          ((uint64_t)env->mhpmeventh_val[evt_index] << 32);
+        }
+        env->mhpmevent_val[evt_index] = mhpmevt_val;
+        riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
+    } else {
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+static int rmw_cd_mhpmeventh(CPURISCVState *env, int evt_index,
+                             target_ulong *val, target_ulong new_val,
+                             target_ulong wr_mask)
+{
+    uint64_t mhpmevth_val;
+    uint64_t mhpmevt_val = env->mhpmevent_val[evt_index];
+
+    if (wr_mask != 0 && wr_mask != -1) {
+        return -EINVAL;
+    }
+
+    if (!wr_mask && val) {
+        *val = env->mhpmeventh_val[evt_index];
+        if (riscv_cpu_cfg(env)->ext_sscofpmf) {
+            *val &= ~MHPMEVENTH_BIT_MINH;
+        }
+    } else if (wr_mask) {
+        wr_mask &= ~MHPMEVENTH_BIT_MINH;
+        env->mhpmeventh_val[evt_index] =
+            (new_val & wr_mask) | (env->mhpmeventh_val[evt_index] & ~wr_mask);
+        mhpmevth_val = env->mhpmeventh_val[evt_index];
+        mhpmevt_val = mhpmevt_val | (mhpmevth_val << 32);
+        riscv_pmu_update_event_map(env, mhpmevt_val, evt_index);
+    } else {
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+static int rmw_cd_ctr_cfg(CPURISCVState *env, int cfg_index, target_ulong *val,
+                          target_ulong new_val, target_ulong wr_mask)
+{
+    switch (cfg_index) {
+    case 0:         /* CYCLECFG */
+        if (wr_mask) {
+            wr_mask &= ~MCYCLECFG_BIT_MINH;
+            env->mcyclecfg = (new_val & wr_mask) | (env->mcyclecfg & ~wr_mask);
+        } else {
+            *val = env->mcyclecfg & ~MCYCLECFG_BIT_MINH;
+        }
+        break;
+    case 2:         /* INSTRETCFG */
+        if (wr_mask) {
+            wr_mask &= ~MINSTRETCFG_BIT_MINH;
+            env->minstretcfg = (new_val & wr_mask) |
+                               (env->minstretcfg & ~wr_mask);
+        } else {
+            *val = env->minstretcfg & ~MINSTRETCFG_BIT_MINH;
+        }
+        break;
+    default:
+        return -EINVAL;
+    }
+    return 0;
+}
+
+static int rmw_cd_ctr_cfgh(CPURISCVState *env, int cfg_index, target_ulong *val,
+                           target_ulong new_val, target_ulong wr_mask)
+{
+
+    if (riscv_cpu_mxl(env) != MXL_RV32) {
+        return RISCV_EXCP_ILLEGAL_INST;
+    }
+
+    switch (cfg_index) {
+    case 0:         /* CYCLECFGH */
+        if (wr_mask) {
+            wr_mask &= ~MCYCLECFGH_BIT_MINH;
+            env->mcyclecfgh = (new_val & wr_mask) |
+                              (env->mcyclecfgh & ~wr_mask);
+        } else {
+            *val = env->mcyclecfgh;
+        }
+        break;
+    case 2:         /* INSTRETCFGH */
+        if (wr_mask) {
+            wr_mask &= ~MINSTRETCFGH_BIT_MINH;
+            env->minstretcfgh = (new_val & wr_mask) |
+                                (env->minstretcfgh & ~wr_mask);
+        } else {
+            *val = env->minstretcfgh;
+        }
+        break;
+    default:
+        return -EINVAL;
+    }
+    return 0;
+}
+
+
 static int read_scountovf(CPURISCVState *env, int csrno, target_ulong *val)
 {
     int mhpmevt_start = CSR_MHPMEVENT3 - CSR_MCOUNTINHIBIT;
@@ -1215,6 +1405,14 @@ static int read_scountovf(CPURISCVState *env, int csrno, target_ulong *val)
     target_ulong *mhpm_evt_val;
     uint64_t of_bit_mask;
 
+    /* Virtualize scountovf for counter delegation */
+    if (riscv_cpu_cfg(env)->ext_sscofpmf &&
+        riscv_cpu_cfg(env)->ext_ssccfg &&
+        get_field(env->menvcfg, MENVCFG_CDE) &&
+        env->virt_enabled) {
+        return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
+    }
+
     if (riscv_cpu_mxl(env) == MXL_RV32) {
         mhpm_evt_val = env->mhpmeventh_val;
         of_bit_mask = MHPMEVENTH_BIT_OF;
@@ -2112,11 +2310,70 @@ static int rmw_xireg_cd(CPURISCVState *env, int csrno, target_ulong isel,
                         target_ulong *val, target_ulong new_val,
                         target_ulong wr_mask)
 {
-    if (!riscv_cpu_cfg(env)->ext_smcdeleg) {
+    int ret = -EINVAL;
+    int ctr_index = isel - ISELECT_CD_FIRST;
+    int isel_hpm_start = ISELECT_CD_FIRST + 3;
+
+    if (!riscv_cpu_cfg(env)->ext_smcdeleg || !riscv_cpu_cfg(env)->ext_ssccfg) {
         return RISCV_EXCP_ILLEGAL_INST;
     }
-    /* TODO: Implement the functionality later */
-    return RISCV_EXCP_NONE;
+
+    /* Invalid siselect value for reserved */
+    if (ctr_index == 1) {
+        goto done;
+    }
+
+    /* sireg4 and sireg5 provide access to RV32-only CSRs */
+    if (((csrno == CSR_SIREG5) || (csrno == CSR_SIREG4)) &&
+        (riscv_cpu_mxl(env) != MXL_RV32)) {
+        return RISCV_EXCP_ILLEGAL_INST;
+    }
+
+    /* Check Sscofpmf dependency */
+    if (!riscv_cpu_cfg(env)->ext_sscofpmf && csrno == CSR_SIREG5 &&
+        (isel_hpm_start <= isel && isel <= ISELECT_CD_LAST)) {
+        goto done;
+    }
+
+    /* Check Smcntrpmf dependency */
+    if (!riscv_cpu_cfg(env)->ext_smcntrpmf &&
+        (csrno == CSR_SIREG2 || csrno == CSR_SIREG5) &&
+        (ISELECT_CD_FIRST <= isel && isel < isel_hpm_start)) {
+        goto done;
+    }
+
+    if (!get_field(env->mcounteren, BIT(ctr_index)) ||
+        !get_field(env->menvcfg, MENVCFG_CDE)) {
+        goto done;
+    }
+
+    switch (csrno) {
+    case CSR_SIREG:
+        ret = rmw_cd_mhpmcounter(env, ctr_index, val, new_val, wr_mask);
+        break;
+    case CSR_SIREG4:
+        ret = rmw_cd_mhpmcounterh(env, ctr_index, val, new_val, wr_mask);
+        break;
+    case CSR_SIREG2:
+        if (ctr_index <= 2) {
+            ret = rmw_cd_ctr_cfg(env, ctr_index, val, new_val, wr_mask);
+        } else {
+            ret = rmw_cd_mhpmevent(env, ctr_index, val, new_val, wr_mask);
+        }
+        break;
+    case CSR_SIREG5:
+        if (ctr_index <= 2) {
+            ret = rmw_cd_ctr_cfgh(env, ctr_index, val, new_val, wr_mask);
+        } else {
+            ret = rmw_cd_mhpmeventh(env, ctr_index, val, new_val, wr_mask);
+        }
+        break;
+    default:
+        goto done;
+    }
+
+done:
+    return ret;
 }
 
 /*
@@ -2335,14 +2592,15 @@ static RISCVException write_mcountinhibit(CPURISCVState *env, int csrno,
     int cidx;
     PMUCTRState *counter;
     RISCVCPU *cpu = env_archcpu(env);
+    uint32_t present_ctrs = cpu->pmu_avail_ctrs | COUNTEREN_CY | COUNTEREN_IR;
 
     /* WARL register - disable unavailable counters; TM bit is always 0 */
-    env->mcountinhibit =
-        val & (cpu->pmu_avail_ctrs | COUNTEREN_CY | COUNTEREN_IR);
+    env->mcountinhibit = val & present_ctrs;
 
     /* Check if any other counter is also monitoring cycles/instructions */
     for (cidx = 0; cidx < RV_MAX_MHPMCOUNTERS; cidx++) {
-        if (!get_field(env->mcountinhibit, BIT(cidx))) {
+        if ((BIT(cidx) & present_ctrs) &&
+            (!get_field(env->mcountinhibit, BIT(cidx)))) {
             counter = &env->pmu_ctrs[cidx];
             counter->started = true;
         }
@@ -2351,6 +2609,21 @@ static RISCVException write_mcountinhibit(CPURISCVState *env, int csrno,
     return RISCV_EXCP_NONE;
 }
 
+static RISCVException read_scountinhibit(CPURISCVState *env, int csrno,
+                                         target_ulong *val)
+{
+    /* S-mode can only access the bits delegated by M-mode */
+    *val = env->mcountinhibit & env->mcounteren;
+    return RISCV_EXCP_NONE;
+}
+
+static RISCVException write_scountinhibit(CPURISCVState *env, int csrno,
+                                          target_ulong val)
+{
+    write_mcountinhibit(env, csrno, val & env->mcounteren);
+    return RISCV_EXCP_NONE;
+}
+
 static RISCVException read_mcounteren(CPURISCVState *env, int csrno,
                                       target_ulong *val)
 {
@@ -2453,12 +2726,14 @@ static RISCVException write_menvcfg(CPURISCVState *env, int csrno,
                                     target_ulong val)
 {
     const RISCVCPUConfig *cfg = riscv_cpu_cfg(env);
-    uint64_t mask = MENVCFG_FIOM | MENVCFG_CBIE | MENVCFG_CBCFE | MENVCFG_CBZE;
+    uint64_t mask = MENVCFG_FIOM | MENVCFG_CBIE | MENVCFG_CBCFE |
+                    MENVCFG_CBZE | MENVCFG_CDE;
 
     if (riscv_cpu_mxl(env) == MXL_RV64) {
         mask |= (cfg->ext_svpbmt ? MENVCFG_PBMTE : 0) |
                 (cfg->ext_sstc ? MENVCFG_STCE : 0) |
-                (cfg->ext_svadu ? MENVCFG_ADUE : 0);
+                (cfg->ext_svadu ? MENVCFG_ADUE : 0) |
+                (cfg->ext_smcdeleg ? MENVCFG_CDE : 0);
     }
     env->menvcfg = (env->menvcfg & ~mask) | (val & mask);
 
@@ -2478,7 +2753,8 @@ static RISCVException write_menvcfgh(CPURISCVState *env, int csrno,
     const RISCVCPUConfig *cfg = riscv_cpu_cfg(env);
     uint64_t mask = (cfg->ext_svpbmt ? MENVCFG_PBMTE : 0) |
                     (cfg->ext_sstc ? MENVCFG_STCE : 0) |
-                    (cfg->ext_svadu ? MENVCFG_ADUE : 0);
+                    (cfg->ext_svadu ? MENVCFG_ADUE : 0) |
+                    (cfg->ext_smcdeleg ? MENVCFG_CDE : 0);
     uint64_t valh = (uint64_t)val << 32;
 
     env->menvcfg = (env->menvcfg & ~mask) | (valh & mask);
@@ -5102,6 +5378,11 @@ riscv_csr_operations csr_ops[CSR_TABLE_SIZE] = {
                                          write_sstateen_1_3,
                                          .min_priv_ver = PRIV_VERSION_1_12_0 },
 
+    /* Supervisor Counter Delegation */
+    [CSR_SCOUNTINHIBIT] = {"scountinhibit", scountinhibit_pred,
+                           read_scountinhibit, write_scountinhibit,
+                           .min_priv_ver = PRIV_VERSION_1_12_0 },
+
     /* Supervisor Trap Setup */
     [CSR_SSTATUS]    = { "sstatus",   smode, read_sstatus, write_sstatus,
                          NULL,                read_sstatus_i128           },
-- 
2.34.1