From nobody Sun Apr 28 05:39:16 2024
Subject: [PATCH v3 1/3] mem: move nvdimm_device_list to utilities
From: Shivaprasad G Bhat
To: imammedo@redhat.com, david@gibson.dropbear.id.au, qemu-ppc@nongnu.org,
    xiaoguangrong.eric@gmail.com, mst@redhat.com
Cc: sbhat@linux.vnet.ibm.com, qemu-devel@nongnu.org
Date: Mon, 14 Oct 2019 13:37:37 -0500
Message-Id: <157107825148.27733.10924648339824665145.stgit@lep8c.aus.stglabs.ibm.com>
In-Reply-To: <157107820388.27733.3565652855304038259.stgit@lep8c.aus.stglabs.ibm.com>
References: <157107820388.27733.3565652855304038259.stgit@lep8c.aus.stglabs.ibm.com>

nvdimm_device_list is required for parsing the list of devices in
subsequent patches. Move it to the common utility area.

Signed-off-by: Shivaprasad G Bhat
Reviewed-by: Igor Mammedov
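A minimal sketch of the intended call pattern follows. It is illustrative
only (example_walk_nvdimms() is not part of the patch; patch 2/3 uses the
same walk in spapr_dt_persistent_memory()); the caller owns the returned
list and must free it:

    #include "qemu/osdep.h"
    #include "qemu/nvdimm-utils.h"
    #include "hw/mem/nvdimm.h"

    static void example_walk_nvdimms(void)
    {
        GSList *nvdimms = nvdimm_get_device_list(); /* caller owns the list */
        GSList *iter;

        for (iter = nvdimms; iter; iter = iter->next) {
            NVDIMMDevice *nvdimm = iter->data;
            /* inspect or emit per-device state here */
        }
        g_slist_free(nvdimms); /* frees the list only, not the devices */
    }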
---
 hw/acpi/nvdimm.c            |   28 +---------------------------
 include/qemu/nvdimm-utils.h |    7 +++++++
 util/Makefile.objs          |    1 +
 util/nvdimm-utils.c         |   29 +++++++++++++++++++++++++++++
 4 files changed, 38 insertions(+), 27 deletions(-)
 create mode 100644 include/qemu/nvdimm-utils.h
 create mode 100644 util/nvdimm-utils.c

diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index 9fdad6dc3f..5219dd0e2e 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -32,33 +32,7 @@
 #include "hw/acpi/bios-linker-loader.h"
 #include "hw/nvram/fw_cfg.h"
 #include "hw/mem/nvdimm.h"
-
-static int nvdimm_device_list(Object *obj, void *opaque)
-{
-    GSList **list = opaque;
-
-    if (object_dynamic_cast(obj, TYPE_NVDIMM)) {
-        *list = g_slist_append(*list, DEVICE(obj));
-    }
-
-    object_child_foreach(obj, nvdimm_device_list, opaque);
-    return 0;
-}
-
-/*
- * inquire NVDIMM devices and link them into the list which is
- * returned to the caller.
- *
- * Note: it is the caller's responsibility to free the list to avoid
- * memory leak.
- */
-static GSList *nvdimm_get_device_list(void)
-{
-    GSList *list = NULL;
-
-    object_child_foreach(qdev_get_machine(), nvdimm_device_list, &list);
-    return list;
-}
+#include "qemu/nvdimm-utils.h"
 
 #define NVDIMM_UUID_LE(a, b, c, d0, d1, d2, d3, d4, d5, d6, d7)             \
    { (a) & 0xff, ((a) >> 8) & 0xff, ((a) >> 16) & 0xff, ((a) >> 24) & 0xff, \
diff --git a/include/qemu/nvdimm-utils.h b/include/qemu/nvdimm-utils.h
new file mode 100644
index 0000000000..4b8b198ba7
--- /dev/null
+++ b/include/qemu/nvdimm-utils.h
@@ -0,0 +1,7 @@
+#ifndef NVDIMM_UTILS_H
+#define NVDIMM_UTILS_H
+
+#include "qemu/osdep.h"
+
+GSList *nvdimm_get_device_list(void);
+#endif
diff --git a/util/Makefile.objs b/util/Makefile.objs
index 41bf59d127..a0f40d26e3 100644
--- a/util/Makefile.objs
+++ b/util/Makefile.objs
@@ -20,6 +20,7 @@ util-obj-y += envlist.o path.o module.o
 util-obj-y += host-utils.o
 util-obj-y += bitmap.o bitops.o hbitmap.o
 util-obj-y += fifo8.o
+util-obj-y += nvdimm-utils.o
 util-obj-y += cacheinfo.o
 util-obj-y += error.o qemu-error.o
 util-obj-y += qemu-print.o
diff --git a/util/nvdimm-utils.c b/util/nvdimm-utils.c
new file mode 100644
index 0000000000..5cc768ca47
--- /dev/null
+++ b/util/nvdimm-utils.c
@@ -0,0 +1,29 @@
+#include "qemu/nvdimm-utils.h"
+#include "hw/mem/nvdimm.h"
+
+static int nvdimm_device_list(Object *obj, void *opaque)
+{
+    GSList **list = opaque;
+
+    if (object_dynamic_cast(obj, TYPE_NVDIMM)) {
+        *list = g_slist_append(*list, DEVICE(obj));
+    }
+
+    object_child_foreach(obj, nvdimm_device_list, opaque);
+    return 0;
+}
+
+/*
+ * inquire NVDIMM devices and link them into the list which is
+ * returned to the caller.
+ *
+ * Note: it is the caller's responsibility to free the list to avoid
+ * memory leak.
+ */
+GSList *nvdimm_get_device_list(void)
+{
+    GSList *list = NULL;
+
+    object_child_foreach(qdev_get_machine(), nvdimm_device_list, &list);
+    return list;
+}

From nobody Sun Apr 28 05:39:16 2024
Subject: [PATCH v3 2/3] spapr: Add NVDIMM device support
From: Shivaprasad G Bhat
To: imammedo@redhat.com, david@gibson.dropbear.id.au, qemu-ppc@nongnu.org,
    xiaoguangrong.eric@gmail.com, mst@redhat.com
Cc: sbhat@linux.vnet.ibm.com, qemu-devel@nongnu.org
Date: Mon, 14 Oct 2019 13:37:50 -0500
Message-Id: <157107826404.27733.10134514695430511105.stgit@lep8c.aus.stglabs.ibm.com>
In-Reply-To: <157107820388.27733.3565652855304038259.stgit@lep8c.aus.stglabs.ibm.com>
References: <157107820388.27733.3565652855304038259.stgit@lep8c.aus.stglabs.ibm.com>

Add support for NVDIMM devices for sPAPR. Piggyback on the existing
nvdimm device interface in QEMU to support virtual NVDIMM devices for
Power. Create the required DT entries for the device (some entries have
dummy values right now).

The patch creates the required DT node and sends a hotplug interrupt to
the guest. The guest is expected to undertake the normal DR resource add
path in response and start issuing PAPR SCM hcalls. Unlike on x86, the
device support is gated on the machine version.

This is how it can be used:

For coldplug, add the device on the qemu command line as shown below:

 -object memory-backend-file,id=memnvdimm0,prealloc=yes,mem-path=/tmp/nvdimm0,share=yes,size=1073872896
 -device nvdimm,label-size=128k,uuid=75a3cdd7-6a2f-4791-8d15-fe0a920e8e9e,memdev=memnvdimm0,id=nvdimm0,slot=0

For hotplug, add the device from the monitor as below:

 object_add memory-backend-file,id=memnvdimm0,prealloc=yes,mem-path=/tmp/nvdimm0,share=yes,size=1073872896
 device_add nvdimm,label-size=128k,uuid=75a3cdd7-6a2f-4791-8d15-fe0a920e8e9e,memdev=memnvdimm0,id=nvdimm0,slot=0

(The backend size 1073872896 is 1 GiB plus the 128 KiB label area: the
pre-plug check below requires the size excluding the label area to be a
multiple of the 256 MiB SCM block size.)
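For orientation, below is a sketch of the device tree this patch builds
for the coldplug example above (1 GiB device, NUMA node 0, slot 0). The
node and property names come from spapr_dt_nvdimm() and
spapr_dt_persistent_memory() in the diff; the cell values and the
0x90000000 unit address/DRC index are illustrative assumptions (they
depend on QEMU's logical DRC index encoding for the PMEM connector type
and on the device's size property), not verbatim output:

    persistent-memory {
        #address-cells = <0x1>;
        #size-cells = <0x0>;
        device_type = "ibm,persistent-memory";

        ibm,pmemory@90000000 {
            reg = <0x90000000>;
            compatible = "ibm,pmemory";
            device_type = "ibm,pmemory";
            ibm,associativity = <0x4 0x0 0x0 0x0 0x0>;
            ibm,unit-guid = "75a3cdd7-6a2f-4791-8d15-fe0a920e8e9e";
            ibm,my-drc-index = <0x90000000>;
            ibm,block-size = <0x0 0x10000000>;   /* 256 MiB SCM block */
            ibm,number-of-blocks = <0x0 0x4>;    /* 4 blocks = 1 GiB */
            ibm,metadata-size = <0x20000>;       /* 128 KiB label */
            ibm,pmem-application = "operating-system";
            ibm,cache-flush-required;
        };
    };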
Signed-off-by: Shivaprasad G Bhat
Signed-off-by: Bharata B Rao [Early implementation]
---
 default-configs/ppc64-softmmu.mak |    1 
 hw/mem/Kconfig                    |    2 
 hw/mem/nvdimm.c                   |   40 +++++++
 hw/ppc/spapr.c                    |  218 +++++++++++++++++++++++++++++++++---
 hw/ppc/spapr_drc.c                |   18 +++
 hw/ppc/spapr_events.c             |    4 +
 include/hw/mem/nvdimm.h           |    7 +
 include/hw/ppc/spapr.h            |   11 ++
 include/hw/ppc/spapr_drc.h        |    9 ++
 9 files changed, 293 insertions(+), 17 deletions(-)

diff --git a/default-configs/ppc64-softmmu.mak b/default-configs/ppc64-softmmu.mak
index cca52665d9..ae0841fa3a 100644
--- a/default-configs/ppc64-softmmu.mak
+++ b/default-configs/ppc64-softmmu.mak
@@ -8,3 +8,4 @@ CONFIG_POWERNV=y
 
 # For pSeries
 CONFIG_PSERIES=y
+CONFIG_NVDIMM=y
diff --git a/hw/mem/Kconfig b/hw/mem/Kconfig
index 620fd4cb59..2ad052a536 100644
--- a/hw/mem/Kconfig
+++ b/hw/mem/Kconfig
@@ -8,4 +8,4 @@ config MEM_DEVICE
 config NVDIMM
     bool
     default y
-    depends on PC
+    depends on (PC || PSERIES)
diff --git a/hw/mem/nvdimm.c b/hw/mem/nvdimm.c
index 375f9a588a..e1238b5bed 100644
--- a/hw/mem/nvdimm.c
+++ b/hw/mem/nvdimm.c
@@ -69,11 +69,51 @@ out:
     error_propagate(errp, local_err);
 }
 
+static void nvdimm_get_uuid(Object *obj, Visitor *v, const char *name,
+                            void *opaque, Error **errp)
+{
+    NVDIMMDevice *nvdimm = NVDIMM(obj);
+    char *value = NULL;
+
+    value = qemu_uuid_unparse_strdup(&nvdimm->uuid);
+
+    visit_type_str(v, name, &value, errp);
+    g_free(value);
+}
+
+
+static void nvdimm_set_uuid(Object *obj, Visitor *v, const char *name,
+                            void *opaque, Error **errp)
+{
+    NVDIMMDevice *nvdimm = NVDIMM(obj);
+    Error *local_err = NULL;
+    char *value;
+
+    visit_type_str(v, name, &value, &local_err);
+    if (local_err) {
+        goto out;
+    }
+
+    if (qemu_uuid_parse(value, &nvdimm->uuid) != 0) {
+        error_setg(errp, "Property '%s.%s' has invalid value",
+                   object_get_typename(obj), name);
+        goto out;
+    }
+    g_free(value);
+
+out:
+    error_propagate(errp, local_err);
+}
+
+
 static void nvdimm_init(Object *obj)
 {
     object_property_add(obj, NVDIMM_LABEL_SIZE_PROP, "int",
                         nvdimm_get_label_size, nvdimm_set_label_size,
                         NULL, NULL, NULL);
+
+    object_property_add(obj, NVDIMM_UUID_PROP, "QemuUUID", nvdimm_get_uuid,
+                        nvdimm_set_uuid, NULL, NULL, NULL);
 }
 
 static void nvdimm_finalize(Object *obj)
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 08a2a5a770..eb5c205078 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -80,6 +80,8 @@
 #include "hw/ppc/spapr_cpu_core.h"
 #include "hw/mem/memory-device.h"
 #include "hw/ppc/spapr_tpm_proxy.h"
+#include "hw/mem/nvdimm.h"
+#include "qemu/nvdimm-utils.h"
 
 #include <libfdt.h>
 
@@ -716,7 +718,8 @@ static int spapr_populate_drmem_v2(SpaprMachineState *spapr, void *fdt,
     uint8_t *int_buf, *cur_index;
     int ret;
     uint64_t lmb_size = SPAPR_MEMORY_BLOCK_SIZE;
-    uint64_t addr, cur_addr, size;
+    uint64_t addr, cur_addr, size, slot;
+    uint64_t scm_block_size = SPAPR_MINIMUM_SCM_BLOCK_SIZE;
     uint32_t nr_boot_lmbs = (machine->device_memory->base / lmb_size);
     uint64_t mem_end = machine->device_memory->base +
                        memory_region_size(&machine->device_memory->mr);
@@ -741,6 +744,7 @@ static int spapr_populate_drmem_v2(SpaprMachineState *spapr, void *fdt,
         addr = di->addr;
         size = di->size;
         node = di->node;
+        slot = di->slot;
 
         /* Entry for hot-pluggable area */
         if (cur_addr < addr) {
@@ -752,12 +756,20 @@ static int spapr_populate_drmem_v2(SpaprMachineState *spapr, void *fdt,
             nr_entries++;
         }
 
-        /* Entry for DIMM */
-        drc = spapr_drc_by_id(TYPE_SPAPR_DRC_LMB, addr / lmb_size);
-        g_assert(drc);
-        elem = spapr_get_drconf_cell(size / lmb_size, addr,
-                                     spapr_drc_index(drc), node,
-                                     SPAPR_LMB_FLAGS_ASSIGNED);
+        if (info->value->type == MEMORY_DEVICE_INFO_KIND_DIMM) {
+            /* Entry for DIMM */
+            drc = spapr_drc_by_id(TYPE_SPAPR_DRC_LMB, addr / lmb_size);
+            g_assert(drc);
+            elem = spapr_get_drconf_cell(size / lmb_size, addr,
+                                         spapr_drc_index(drc), node,
+                                         SPAPR_LMB_FLAGS_ASSIGNED);
+        } else if (info->value->type == MEMORY_DEVICE_INFO_KIND_NVDIMM) {
+            /* Entry for NVDIMM */
+            drc = spapr_drc_by_id(TYPE_SPAPR_DRC_PMEM, slot);
+            g_assert(drc);
+            elem = spapr_get_drconf_cell(size / scm_block_size, addr,
+                                         spapr_drc_index(drc), -1, 0);
+        }
         QSIMPLEQ_INSERT_TAIL(&drconf_queue, elem, entry);
         nr_entries++;
         cur_addr = addr + size;
@@ -1261,6 +1273,85 @@ static void spapr_dt_hypervisor(SpaprMachineState *spapr, void *fdt)
     }
 }
 
+static int spapr_dt_nvdimm(void *fdt, int parent_offset,
+                           NVDIMMDevice *nvdimm)
+{
+    int child_offset;
+    char buf[40];
+    SpaprDrc *drc;
+    uint32_t drc_idx;
+    uint32_t node = object_property_get_uint(OBJECT(nvdimm), PC_DIMM_NODE_PROP,
+                                             &error_abort);
+    uint64_t slot = object_property_get_uint(OBJECT(nvdimm), PC_DIMM_SLOT_PROP,
+                                             &error_abort);
+    uint32_t associativity[] = {
+        cpu_to_be32(0x4), /* length */
+        cpu_to_be32(0x0), cpu_to_be32(0x0),
+        cpu_to_be32(0x0), cpu_to_be32(node)
+    };
+    uint64_t lsize = nvdimm->label_size;
+    uint64_t size = object_property_get_int(OBJECT(nvdimm), PC_DIMM_SIZE_PROP,
+                                            NULL);
+
+    drc = spapr_drc_by_id(TYPE_SPAPR_DRC_PMEM, slot);
+    g_assert(drc);
+
+    drc_idx = spapr_drc_index(drc);
+
+    sprintf(buf, "ibm,pmemory@%x", drc_idx);
+    child_offset = fdt_add_subnode(fdt, parent_offset, buf);
+    _FDT(child_offset);
+
+    _FDT((fdt_setprop_cell(fdt, child_offset, "reg", drc_idx)));
+    _FDT((fdt_setprop_string(fdt, child_offset, "compatible", "ibm,pmemory")));
+    _FDT((fdt_setprop_string(fdt, child_offset, "device_type", "ibm,pmemory")));
+
+    _FDT((fdt_setprop(fdt, child_offset, "ibm,associativity", associativity,
+                      sizeof(associativity))));
+
+    qemu_uuid_unparse(&nvdimm->uuid, buf);
+    _FDT((fdt_setprop_string(fdt, child_offset, "ibm,unit-guid", buf)));
+
+    _FDT((fdt_setprop_cell(fdt, child_offset, "ibm,my-drc-index", drc_idx)));
+
+    _FDT((fdt_setprop_u64(fdt, child_offset, "ibm,block-size",
+                          SPAPR_MINIMUM_SCM_BLOCK_SIZE)));
+    _FDT((fdt_setprop_u64(fdt, child_offset, "ibm,number-of-blocks",
+                          size / SPAPR_MINIMUM_SCM_BLOCK_SIZE)));
+    _FDT((fdt_setprop_cell(fdt, child_offset, "ibm,metadata-size", lsize)));
+
+    _FDT((fdt_setprop_string(fdt, child_offset, "ibm,pmem-application",
+                             "operating-system")));
+    _FDT(fdt_setprop(fdt, child_offset, "ibm,cache-flush-required", NULL, 0));
+
+    return child_offset;
+}
+
+static void spapr_dt_persistent_memory(void *fdt)
+{
+    int offset = fdt_subnode_offset(fdt, 0, "persistent-memory");
+    GSList *iter, *nvdimms = nvdimm_get_device_list();
+
+    if (offset < 0) {
+        offset = fdt_add_subnode(fdt, 0, "persistent-memory");
+        _FDT(offset);
+        _FDT((fdt_setprop_cell(fdt, offset, "#address-cells", 0x1)));
+        _FDT((fdt_setprop_cell(fdt, offset, "#size-cells", 0x0)));
+        _FDT((fdt_setprop_string(fdt, offset, "device_type",
+                                 "ibm,persistent-memory")));
+    }
+
+    /* Create DT entries for cold plugged NVDIMM devices */
+    for (iter = nvdimms; iter; iter = iter->next) {
+        NVDIMMDevice *nvdimm = iter->data;
+
+        spapr_dt_nvdimm(fdt, offset, nvdimm);
+    }
+    g_slist_free(nvdimms);
+
+    return;
+}
+
 static void *spapr_build_fdt(SpaprMachineState *spapr)
 {
     MachineState *machine = MACHINE(spapr);
@@ -1392,6 +1483,11 @@ static void *spapr_build_fdt(SpaprMachineState *spapr)
         }
     }
 
+    /* NVDIMM devices */
+    if (mc->nvdimm_supported) {
+        spapr_dt_persistent_memory(fdt);
+    }
+
     return fdt;
 }
 
@@ -2521,6 +2617,16 @@ static void spapr_create_lmb_dr_connectors(SpaprMachineState *spapr)
     }
 }
 
+static void spapr_create_nvdimm_dr_connectors(SpaprMachineState *spapr)
+{
+    MachineState *machine = MACHINE(spapr);
+    int i;
+
+    for (i = 0; i < machine->ram_slots; i++) {
+        spapr_dr_connector_new(OBJECT(spapr), TYPE_SPAPR_DRC_PMEM, i);
+    }
+}
+
 /*
  * If RAM size, maxmem size and individual node mem sizes aren't aligned
  * to SPAPR_MEMORY_BLOCK_SIZE(256MB), then refuse to start the guest
@@ -2734,6 +2840,7 @@ static void spapr_machine_init(MachineState *machine)
 {
     SpaprMachineState *spapr = SPAPR_MACHINE(machine);
     SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(machine);
+    MachineClass *mc = MACHINE_GET_CLASS(machine);
     const char *kernel_filename = machine->kernel_filename;
     const char *initrd_filename = machine->initrd_filename;
     PCIHostState *phb;
@@ -2915,6 +3022,10 @@ static void spapr_machine_init(MachineState *machine)
         spapr_create_lmb_dr_connectors(spapr);
     }
 
+    if (mc->nvdimm_supported) {
+        spapr_create_nvdimm_dr_connectors(spapr);
+    }
+
     filename = qemu_find_file(QEMU_FILE_TYPE_BIOS, "spapr-rtas.bin");
     if (!filename) {
         error_report("Could not find LPAR rtas '%s'", "spapr-rtas.bin");
@@ -3436,6 +3547,16 @@ static void spapr_nmi(NMIState *n, int cpu_index, Error **errp)
     }
 }
 
+int spapr_pmem_dt_populate(SpaprDrc *drc, SpaprMachineState *spapr,
+                           void *fdt, int *fdt_start_offset, Error **errp)
+{
+    NVDIMMDevice *nvdimm = NVDIMM(drc->dev);
+
+    *fdt_start_offset = spapr_dt_nvdimm(fdt, 0, nvdimm);
+
+    return 0;
+}
+
 int spapr_lmb_dt_populate(SpaprDrc *drc, SpaprMachineState *spapr,
                           void *fdt, int *fdt_start_offset, Error **errp)
 {
@@ -3498,13 +3619,34 @@ static void spapr_add_lmbs(DeviceState *dev, uint64_t addr_start, uint64_t size,
     }
 }
 
+static void spapr_add_nvdimm(DeviceState *dev, uint64_t slot, Error **errp)
+{
+    SpaprDrc *drc;
+    bool hotplugged = spapr_drc_hotplugged(dev);
+    Error *local_err = NULL;
+
+    drc = spapr_drc_by_id(TYPE_SPAPR_DRC_PMEM, slot);
+    g_assert(drc);
+
+    spapr_drc_attach(drc, dev, &local_err);
+    if (local_err) {
+        error_propagate(errp, local_err);
+        return;
+    }
+
+    if (hotplugged) {
+        spapr_hotplug_req_add_by_index(drc);
+    }
+}
+
 static void spapr_memory_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
                               Error **errp)
 {
     Error *local_err = NULL;
     SpaprMachineState *ms = SPAPR_MACHINE(hotplug_dev);
     PCDIMMDevice *dimm = PC_DIMM(dev);
-    uint64_t size, addr;
+    uint64_t size, addr, slot;
+    bool is_nvdimm = object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM);
 
     size = memory_device_get_region_size(MEMORY_DEVICE(dev), &error_abort);
 
@@ -3513,14 +3655,24 @@ static void spapr_memory_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
         goto out;
     }
 
-    addr = object_property_get_uint(OBJECT(dimm),
-                                    PC_DIMM_ADDR_PROP, &local_err);
-    if (local_err) {
-        goto out_unplug;
+    if (!is_nvdimm) {
+        addr = object_property_get_uint(OBJECT(dimm),
+                                        PC_DIMM_ADDR_PROP, &local_err);
+        if (local_err) {
+            goto out_unplug;
+        }
+        spapr_add_lmbs(dev, addr, size,
+                       spapr_ovec_test(ms->ov5_cas, OV5_HP_EVT),
+                       &local_err);
+    } else {
+        slot = object_property_get_uint(OBJECT(dimm),
+                                        PC_DIMM_SLOT_PROP, &local_err);
+        if (local_err) {
+            goto out_unplug;
+        }
+        spapr_add_nvdimm(dev, slot, &local_err);
     }
 
-    spapr_add_lmbs(dev, addr, size, spapr_ovec_test(ms->ov5_cas, OV5_HP_EVT),
-                   &local_err);
     if (local_err) {
         goto out_unplug;
     }
@@ -3538,6 +3690,8 @@ static void spapr_memory_pre_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
 {
     const SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(hotplug_dev);
     SpaprMachineState *spapr = SPAPR_MACHINE(hotplug_dev);
+    const MachineClass *mc = MACHINE_CLASS(smc);
+    bool is_nvdimm = object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM);
     PCDIMMDevice *dimm = PC_DIMM(dev);
     Error *local_err = NULL;
     uint64_t size;
@@ -3549,16 +3703,40 @@ static void spapr_memory_pre_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
         return;
     }
 
+    if (is_nvdimm && !mc->nvdimm_supported) {
+        error_setg(errp, "NVDIMM hotplug not supported for this machine");
+        return;
+    }
+
     size = memory_device_get_region_size(MEMORY_DEVICE(dimm), &local_err);
     if (local_err) {
        error_propagate(errp, local_err);
        return;
    }
 
-    if (size % SPAPR_MEMORY_BLOCK_SIZE) {
+    if (!is_nvdimm && size % SPAPR_MEMORY_BLOCK_SIZE) {
         error_setg(errp, "Hotplugged memory size must be a multiple of "
-                   "%" PRIu64 " MB", SPAPR_MEMORY_BLOCK_SIZE / MiB);
+                   "%" PRIu64 " MB", SPAPR_MEMORY_BLOCK_SIZE / MiB);
         return;
+    } else if (is_nvdimm) {
+        char *uuidstr = NULL;
+        QemuUUID uuid;
+
+        if (size % SPAPR_MINIMUM_SCM_BLOCK_SIZE) {
+            error_setg(errp, "NVDIMM memory size excluding the label area"
+                       " must be a multiple of %" PRIu64 "MB",
+                       SPAPR_MINIMUM_SCM_BLOCK_SIZE / MiB);
+            return;
+        }
+
+        uuidstr = object_property_get_str(OBJECT(dimm), NVDIMM_UUID_PROP, NULL);
+        qemu_uuid_parse(uuidstr, &uuid);
+        g_free(uuidstr);
+
+        if (qemu_uuid_is_null(&uuid)) {
+            error_setg(errp, "NVDIMM device requires the uuid to be set");
+            return;
+        }
     }
 
     memdev = object_property_get_link(OBJECT(dimm), PC_DIMM_MEMDEV_PROP,
@@ -3698,6 +3876,12 @@ static void spapr_memory_unplug_request(HotplugHandler *hotplug_dev,
     int i;
     SpaprDrc *drc;
 
+    if (object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM)) {
+        error_setg(&local_err,
+                   "nvdimm device hot unplug is not supported yet.");
+        goto out;
+    }
+
     size = memory_device_get_region_size(MEMORY_DEVICE(dimm), &error_abort);
     nr_lmbs = size / SPAPR_MEMORY_BLOCK_SIZE;
 
@@ -4453,6 +4637,7 @@ static void spapr_machine_class_init(ObjectClass *oc, void *data)
     smc->update_dt_enabled = true;
    mc->default_cpu_type = POWERPC_CPU_TYPE_NAME("power9_v2.0");
    mc->has_hotpluggable_cpus = true;
+    mc->nvdimm_supported = true;
     smc->resize_hpt_default = SPAPR_RESIZE_HPT_ENABLED;
     fwc->get_dev_path = spapr_get_fw_dev_path;
     nc->nmi_monitor_handler = spapr_nmi;
@@ -4558,6 +4743,7 @@ static void spapr_machine_4_1_class_options(MachineClass *mc)
     };
 
     spapr_machine_4_2_class_options(mc);
+    mc->nvdimm_supported = false;
     smc->linux_pci_probe = false;
     compat_props_add(mc->compat_props, hw_compat_4_1, hw_compat_4_1_len);
     compat_props_add(mc->compat_props, compat, G_N_ELEMENTS(compat));
diff --git a/hw/ppc/spapr_drc.c b/hw/ppc/spapr_drc.c
index 62f1a42592..815167e42f 100644
--- a/hw/ppc/spapr_drc.c
+++ b/hw/ppc/spapr_drc.c
@@ -708,6 +708,17 @@ static void spapr_drc_phb_class_init(ObjectClass *k, void *data)
     drck->dt_populate = spapr_phb_dt_populate;
 }
 
+static void spapr_drc_pmem_class_init(ObjectClass *k, void *data)
+{
+    SpaprDrcClass *drck = SPAPR_DR_CONNECTOR_CLASS(k);
+
+    drck->typeshift = SPAPR_DR_CONNECTOR_TYPE_SHIFT_PMEM;
+    drck->typename = "MEM";
+    drck->drc_name_prefix = "PMEM ";
+    drck->release = NULL;
+    drck->dt_populate = spapr_pmem_dt_populate;
+}
+
 static const TypeInfo spapr_dr_connector_info = {
     .name      = TYPE_SPAPR_DR_CONNECTOR,
     .parent    = TYPE_DEVICE,
@@ -758,6 +769,12 @@ static const TypeInfo spapr_drc_phb_info = {
     .class_init = spapr_drc_phb_class_init,
 };
 
+static const TypeInfo spapr_drc_pmem_info = {
+    .name      = TYPE_SPAPR_DRC_PMEM,
+    .parent    = TYPE_SPAPR_DRC_LOGICAL,
+    .class_init = spapr_drc_pmem_class_init,
+};
+
 /* helper functions for external users */
 
 SpaprDrc *spapr_drc_by_index(uint32_t index)
@@ -1229,6 +1246,7 @@ static void spapr_drc_register_types(void)
     type_register_static(&spapr_drc_pci_info);
     type_register_static(&spapr_drc_lmb_info);
     type_register_static(&spapr_drc_phb_info);
+    type_register_static(&spapr_drc_pmem_info);
 
     spapr_rtas_register(RTAS_SET_INDICATOR, "set-indicator",
                         rtas_set_indicator);
diff --git a/hw/ppc/spapr_events.c b/hw/ppc/spapr_events.c
index 0e4c19523a..b9a4d1607c 100644
--- a/hw/ppc/spapr_events.c
+++ b/hw/ppc/spapr_events.c
@@ -194,6 +194,7 @@ struct rtas_event_log_v6_hp {
 #define RTAS_LOG_V6_HP_TYPE_SLOT         3
 #define RTAS_LOG_V6_HP_TYPE_PHB          4
 #define RTAS_LOG_V6_HP_TYPE_PCI          5
+#define RTAS_LOG_V6_HP_TYPE_PMEM         6
     uint8_t hotplug_action;
 #define RTAS_LOG_V6_HP_ACTION_ADD        1
 #define RTAS_LOG_V6_HP_ACTION_REMOVE     2
@@ -530,6 +531,9 @@ static void spapr_hotplug_req_event(uint8_t hp_id, uint8_t hp_action,
     case SPAPR_DR_CONNECTOR_TYPE_PHB:
         hp->hotplug_type = RTAS_LOG_V6_HP_TYPE_PHB;
         break;
+    case SPAPR_DR_CONNECTOR_TYPE_PMEM:
+        hp->hotplug_type = RTAS_LOG_V6_HP_TYPE_PMEM;
+        break;
     default:
         /* we shouldn't be signaling hotplug events for resources
          * that don't support them
diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
index 523a9b3d4a..4807ca615b 100644
--- a/include/hw/mem/nvdimm.h
+++ b/include/hw/mem/nvdimm.h
@@ -25,6 +25,7 @@
 
 #include "hw/mem/pc-dimm.h"
 #include "hw/acpi/bios-linker-loader.h"
+#include "qemu/uuid.h"
 
 #define NVDIMM_DEBUG 0
 #define nvdimm_debug(fmt, ...)                                \
@@ -49,6 +50,7 @@
                                                TYPE_NVDIMM)
 
 #define NVDIMM_LABEL_SIZE_PROP "label-size"
+#define NVDIMM_UUID_PROP       "uuid"
 #define NVDIMM_UNARMED_PROP    "unarmed"
 
 struct NVDIMMDevice {
@@ -83,6 +85,11 @@ struct NVDIMMDevice {
      * the guest write persistence.
      */
     bool unarmed;
+
+    /*
+     * The PPC64 - spapr requires each nvdimm device to have a uuid.
+     */
+    QemuUUID uuid;
 };
 typedef struct NVDIMMDevice NVDIMMDevice;
 
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index 03111fd55b..a8cb3513d0 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -811,6 +811,8 @@ int spapr_core_dt_populate(SpaprDrc *drc, SpaprMachineState *spapr,
 void spapr_lmb_release(DeviceState *dev);
 int spapr_lmb_dt_populate(SpaprDrc *drc, SpaprMachineState *spapr,
                           void *fdt, int *fdt_start_offset, Error **errp);
+int spapr_pmem_dt_populate(SpaprDrc *drc, SpaprMachineState *spapr,
+                           void *fdt, int *fdt_start_offset, Error **errp);
 void spapr_phb_release(DeviceState *dev);
 int spapr_phb_dt_populate(SpaprDrc *drc, SpaprMachineState *spapr,
                           void *fdt, int *fdt_start_offset, Error **errp);
@@ -846,6 +848,15 @@ int spapr_rtc_import_offset(SpaprRtcState *rtc, int64_t legacy_offset);
 #define SPAPR_LMB_FLAGS_DRC_INVALID 0x00000020
 #define SPAPR_LMB_FLAGS_RESERVED 0x00000080
 
+/*
+ * The nvdimm size should be aligned to SCM block size.
+ * The SCM block size should be aligned to SPAPR_MEMORY_BLOCK_SIZE
+ * in order to have the SCM regions not overlap with dimm memory regions.
+ * The SCM devices can have variable block sizes. For now, fixing the
+ * block size to the minimum value.
+ */
+#define SPAPR_MINIMUM_SCM_BLOCK_SIZE SPAPR_MEMORY_BLOCK_SIZE
+
 void spapr_do_system_reset_on_cpu(CPUState *cs, run_on_cpu_data arg);
 
 #define HTAB_SIZE(spapr)        (1ULL << ((spapr)->htab_shift))
diff --git a/include/hw/ppc/spapr_drc.h b/include/hw/ppc/spapr_drc.h
index 83f03cc577..df3d958a66 100644
--- a/include/hw/ppc/spapr_drc.h
+++ b/include/hw/ppc/spapr_drc.h
@@ -78,6 +78,13 @@
 #define SPAPR_DRC_PHB(obj) OBJECT_CHECK(SpaprDrc, (obj), \
                                         TYPE_SPAPR_DRC_PHB)
 
+#define TYPE_SPAPR_DRC_PMEM "spapr-drc-pmem"
+#define SPAPR_DRC_PMEM_GET_CLASS(obj) \
+        OBJECT_GET_CLASS(SpaprDrcClass, obj, TYPE_SPAPR_DRC_PMEM)
+#define SPAPR_DRC_PMEM_CLASS(klass) \
+        OBJECT_CLASS_CHECK(SpaprDrcClass, klass, TYPE_SPAPR_DRC_PMEM)
+#define SPAPR_DRC_PMEM(obj) OBJECT_CHECK(SpaprDrc, (obj), \
+                                         TYPE_SPAPR_DRC_PMEM)
 /*
  * Various hotplug types managed by SpaprDrc
  *
@@ -95,6 +102,7 @@ typedef enum {
     SPAPR_DR_CONNECTOR_TYPE_SHIFT_VIO = 3,
     SPAPR_DR_CONNECTOR_TYPE_SHIFT_PCI = 4,
     SPAPR_DR_CONNECTOR_TYPE_SHIFT_LMB = 8,
+    SPAPR_DR_CONNECTOR_TYPE_SHIFT_PMEM = 9,
 } SpaprDrcTypeShift;
 
 typedef enum {
@@ -104,6 +112,7 @@ typedef enum {
     SPAPR_DR_CONNECTOR_TYPE_VIO = 1 << SPAPR_DR_CONNECTOR_TYPE_SHIFT_VIO,
     SPAPR_DR_CONNECTOR_TYPE_PCI = 1 << SPAPR_DR_CONNECTOR_TYPE_SHIFT_PCI,
     SPAPR_DR_CONNECTOR_TYPE_LMB = 1 << SPAPR_DR_CONNECTOR_TYPE_SHIFT_LMB,
+    SPAPR_DR_CONNECTOR_TYPE_PMEM = 1 << SPAPR_DR_CONNECTOR_TYPE_SHIFT_PMEM,
 } SpaprDrcType;
 
 /*

From nobody Sun Apr 28 05:39:16 2024
Subject: [PATCH v3 3/3] spapr: Add Hcalls to support PAPR NVDIMM device
From: Shivaprasad G Bhat
To: imammedo@redhat.com, david@gibson.dropbear.id.au, qemu-ppc@nongnu.org,
    xiaoguangrong.eric@gmail.com, mst@redhat.com
Cc: sbhat@linux.vnet.ibm.com, qemu-devel@nongnu.org
Date: Mon, 14 Oct 2019 13:38:16 -0500
Message-Id: <157107827730.27733.6442960086351627702.stgit@lep8c.aus.stglabs.ibm.com>
In-Reply-To: <157107820388.27733.3565652855304038259.stgit@lep8c.aus.stglabs.ibm.com>
References: <157107820388.27733.3565652855304038259.stgit@lep8c.aus.stglabs.ibm.com>

This patch implements a few of the necessary hcalls for the NVDIMM
support.

PAPR semantics are such that each NVDIMM device consists of multiple
SCM (Storage Class Memory) blocks. The guest requests the hypervisor
to bind each of the SCM blocks of the NVDIMM device using hcalls.
There can be SCM block unbind requests in case of driver errors or
unplug (not supported now) use cases. The NVDIMM label reads and
writes are also done through hcalls.

Since each virtual NVDIMM device is divided into multiple SCM blocks,
the bind, unbind, and queries using hcalls on those blocks can come
independently. This doesn't fit well into the qemu device semantics,
where the map/unmap are done at the (whole) device/object level
granularity. The patch therefore doesn't actually bind/unbind on
hcalls, but lets that happen at the device_add/del phase itself
instead.

The guest kernel makes bind/unbind requests for the virtual NVDIMM
device at the region level granularity. Without interleaving, each
virtual NVDIMM device is presented as a separate region. There is no
way to configure virtual NVDIMM interleaving for the guests today, so
there is no way a partial bind/unbind request can come for the vNVDIMM
in a hcall for a subset of SCM blocks of a virtual NVDIMM. Hence it is
safe to do bind/unbind of everything during the device_add/del.
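For reference, this is the register-level interface the handlers below
implement, summarized from the code itself (argument layout per the
args[] array); it is a reading of the patch, not normative PAPR text.
For the 1 GiB example device of patch 2/3, size /
SPAPR_MINIMUM_SCM_BLOCK_SIZE gives 4 SCM blocks of 256 MiB each.

 H_SCM_READ_METADATA (0x3E4):
   in:  drc_index, offset, numBytesToRead (1, 2, 4 or 8)
   out: the label datum, returned in args[0]
 H_SCM_WRITE_METADATA (0x3E8):
   in:  drc_index, offset, data, numBytesToWrite (1, 2, 4 or 8)
 H_SCM_BIND_MEM (0x3EC):
   in:  drc_index, starting block index, number of blocks to bind,
        target logical address (must be ~0ULL; qemu assigns the address),
        continue_token (must be 0)
   out: bound logical address in args[1], block count in args[2]
 H_SCM_UNBIND_MEM (0x3F0):
   in:  drc_index, starting SCM logical address, number of blocks to
        unbind, continue_token (must be 0)
   out: unbound block count in args[1]
 H_SCM_UNBIND_ALL (0x3FC):
   in:  target_scope (0x1 = all devices, 0x2 = one DRC), drc_index,
        continue_token (must be 0)
   out: unbound block count in args[1]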
Signed-off-by: Shivaprasad G Bhat
---
 hw/ppc/spapr_hcall.c   |  300 ++++++++++++++++++++++++++++++++++++++++++++++++
 include/hw/ppc/spapr.h |    8 +
 2 files changed, 307 insertions(+), 1 deletion(-)

diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index 23e4bdb829..4e9ad96f7c 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -18,6 +18,10 @@
 #include "hw/ppc/spapr_ovec.h"
 #include "mmu-book3s-v3.h"
 #include "hw/mem/memory-device.h"
+#include "hw/ppc/spapr_drc.h"
+#include "hw/mem/nvdimm.h"
+#include "qemu/range.h"
+#include "qemu/nvdimm-utils.h"
 
 static bool has_spr(PowerPCCPU *cpu, int spr)
 {
@@ -1961,6 +1965,295 @@ static target_ulong h_update_dt(PowerPCCPU *cpu, SpaprMachineState *spapr,
     return H_SUCCESS;
 }
 
+static target_ulong h_scm_read_metadata(PowerPCCPU *cpu,
+                                        SpaprMachineState *spapr,
+                                        target_ulong opcode,
+                                        target_ulong *args)
+{
+    uint32_t drc_index = args[0];
+    uint64_t offset = args[1];
+    uint64_t numBytesToRead = args[2];
+    SpaprDrc *drc = spapr_drc_by_index(drc_index);
+    NVDIMMDevice *nvdimm;
+    NVDIMMClass *ddc;
+    __be64 data_be = 0;
+    uint64_t data = 0;
+
+    if (drc && spapr_drc_type(drc) != SPAPR_DR_CONNECTOR_TYPE_PMEM) {
+        return H_PARAMETER;
+    }
+
+    if (numBytesToRead != 1 && numBytesToRead != 2 &&
+        numBytesToRead != 4 && numBytesToRead != 8) {
+        return H_P3;
+    }
+
+    nvdimm = NVDIMM(drc->dev);
+    if ((offset + numBytesToRead < offset) ||
+        (nvdimm->label_size < numBytesToRead + offset)) {
+        return H_P2;
+    }
+
+    ddc = NVDIMM_GET_CLASS(nvdimm);
+    ddc->read_label_data(nvdimm, &data_be, numBytesToRead, offset);
+
+    switch (numBytesToRead) {
+    case 1:
+        data = data_be & 0xff;
+        break;
+    case 2:
+        data = be16_to_cpu(data_be & 0xffff);
+        break;
+    case 4:
+        data = be32_to_cpu(data_be & 0xffffffff);
+        break;
+    case 8:
+        data = be64_to_cpu(data_be);
+        break;
+    default:
+        break;
+    }
+
+    args[0] = data;
+
+    return H_SUCCESS;
+}
+
+static target_ulong h_scm_write_metadata(PowerPCCPU *cpu,
+                                         SpaprMachineState *spapr,
+                                         target_ulong opcode,
+                                         target_ulong *args)
+{
+    uint32_t drc_index = args[0];
+    uint64_t offset = args[1];
+    uint64_t data = args[2];
+    uint64_t numBytesToWrite = args[3];
+    SpaprDrc *drc = spapr_drc_by_index(drc_index);
+    NVDIMMDevice *nvdimm;
+    DeviceState *dev;
+    NVDIMMClass *ddc;
+    __be64 data_be = 0;
+
+    if (drc && spapr_drc_type(drc) != SPAPR_DR_CONNECTOR_TYPE_PMEM) {
+        return H_PARAMETER;
+    }
+
+    if (numBytesToWrite != 1 && numBytesToWrite != 2 &&
+        numBytesToWrite != 4 && numBytesToWrite != 8) {
+        return H_P4;
+    }
+
+    dev = drc->dev;
+    nvdimm = NVDIMM(dev);
+
+    switch (numBytesToWrite) {
+    case 1:
+        if (data & 0xffffffffffffff00) {
+            return H_P2;
+        }
+        data_be = data & 0xff;
+        break;
+    case 2:
+        if (data & 0xffffffffffff0000) {
+            return H_P2;
+        }
+        data_be = cpu_to_be16(data & 0xffff);
+        break;
+    case 4:
+        if (data & 0xffffffff00000000) {
+            return H_P2;
+        }
+        data_be = cpu_to_be32(data & 0xffffffff);
+        break;
+    case 8:
+        data_be = cpu_to_be64(data);
+        break;
+    default: /* lint */
+        break;
+    }
+
+    ddc = NVDIMM_GET_CLASS(nvdimm);
+    ddc->write_label_data(nvdimm, &data_be, numBytesToWrite, offset);
+
+    return H_SUCCESS;
+}
+
+static target_ulong h_scm_bind_mem(PowerPCCPU *cpu, SpaprMachineState *spapr,
+                                   target_ulong opcode, target_ulong *args)
+{
+    uint32_t drc_index = args[0];
+    uint64_t starting_idx = args[1];
+    uint64_t no_of_scm_blocks_to_bind = args[2];
+    uint64_t target_logical_mem_addr = args[3];
+    uint64_t continue_token = args[4];
+    uint64_t size;
+    uint64_t total_no_of_scm_blocks;
+    SpaprDrc *drc = spapr_drc_by_index(drc_index);
+    hwaddr addr;
+    DeviceState *dev;
+    PCDIMMDevice *dimm;
+    Error *local_err = NULL;
+
+    if (drc && spapr_drc_type(drc) != SPAPR_DR_CONNECTOR_TYPE_PMEM) {
+        return H_PARAMETER;
+    }
+
+    dev = drc->dev;
+    dimm = PC_DIMM(dev);
+
+    size = object_property_get_uint(OBJECT(dimm),
+                                    PC_DIMM_SIZE_PROP, &local_err);
+    if (local_err) {
+        error_report_err(local_err);
+        return H_PARAMETER;
+    }
+
+    total_no_of_scm_blocks = size / SPAPR_MINIMUM_SCM_BLOCK_SIZE;
+
+    if ((starting_idx > total_no_of_scm_blocks) ||
+        (no_of_scm_blocks_to_bind > total_no_of_scm_blocks)) {
+        return H_P2;
+    }
+
+    if (((starting_idx + no_of_scm_blocks_to_bind) < starting_idx) ||
+        ((starting_idx + no_of_scm_blocks_to_bind) > total_no_of_scm_blocks)) {
+        return H_P3;
+    }
+
+    /* Currently qemu assigns the address. */
+    if (target_logical_mem_addr != 0xffffffffffffffff) {
+        return H_OVERLAP;
+    }
+
+    /*
+     * Currently the continue token should be zero; qemu has already
+     * bound everything and this hcall doesn't return H_BUSY.
+     */
+    if (continue_token > 0) {
+        return H_P5;
+    }
+
+    addr = object_property_get_uint(OBJECT(dimm),
+                                    PC_DIMM_ADDR_PROP, &local_err);
+    if (local_err) {
+        error_report_err(local_err);
+        return H_PARAMETER;
+    }
+
+    addr += starting_idx * SPAPR_MINIMUM_SCM_BLOCK_SIZE;
+
+    /* Already bound, return the target logical address in R4 */
+    args[1] = addr;
+    args[2] = no_of_scm_blocks_to_bind;
+
+    return H_SUCCESS;
+}
+
+static target_ulong h_scm_unbind_mem(PowerPCCPU *cpu, SpaprMachineState *spapr,
+                                     target_ulong opcode, target_ulong *args)
+{
+    uint32_t drc_index = args[0];
+    uint64_t starting_scm_logical_addr = args[1];
+    uint64_t no_of_scm_blocks_to_unbind = args[2];
+    uint64_t continue_token = args[3];
+    uint64_t size_to_unbind;
+    Range blockrange = range_empty;
+    Range nvdimmrange = range_empty;
+    SpaprDrc *drc = spapr_drc_by_index(drc_index);
+    DeviceState *dev;
+    PCDIMMDevice *dimm;
+    uint64_t size, addr;
+
+    if (drc && spapr_drc_type(drc) != SPAPR_DR_CONNECTOR_TYPE_PMEM) {
+        return H_PARAMETER;
+    }
+
+    /* Check if starting_scm_logical_addr is block aligned */
+    if (!QEMU_IS_ALIGNED(starting_scm_logical_addr,
+                         SPAPR_MINIMUM_SCM_BLOCK_SIZE)) {
+        return H_P2;
+    }
+
+    dev = drc->dev;
+    dimm = PC_DIMM(dev);
+    size = object_property_get_int(OBJECT(dimm), PC_DIMM_SIZE_PROP, NULL);
+    addr = object_property_get_int(OBJECT(dimm), PC_DIMM_ADDR_PROP, NULL);
+
+    range_init_nofail(&nvdimmrange, addr, size);
+
+    size_to_unbind = no_of_scm_blocks_to_unbind * SPAPR_MINIMUM_SCM_BLOCK_SIZE;
+
+    range_init_nofail(&blockrange, starting_scm_logical_addr, size_to_unbind);
+
+    if (!range_contains_range(&nvdimmrange, &blockrange)) {
+        return H_P3;
+    }
+
+    /* continue_token should be zero as this hcall doesn't return H_BUSY. */
+    if (continue_token > 0) {
+        return H_P3;
+    }
+
+    args[1] = no_of_scm_blocks_to_unbind;
+
+    /* let unplug take care of actual unbind */
+    return H_SUCCESS;
+}
+
+#define H_UNBIND_SCOPE_ALL 0x1
+#define H_UNBIND_SCOPE_DRC 0x2
+
+static target_ulong h_scm_unbind_all(PowerPCCPU *cpu, SpaprMachineState *spapr,
+                                     target_ulong opcode, target_ulong *args)
+{
+    uint64_t target_scope = args[0];
+    uint32_t drc_index = args[1];
+    uint64_t continue_token = args[2];
+    NVDIMMDevice *nvdimm;
+    uint64_t size;
+    uint64_t no_of_scm_blocks_unbound = 0;
+
+    if (target_scope == H_UNBIND_SCOPE_DRC) {
+        DeviceState *dev;
+        SpaprDrc *drc = spapr_drc_by_index(drc_index);
+
+        if (drc && spapr_drc_type(drc) != SPAPR_DR_CONNECTOR_TYPE_PMEM) {
+            return H_P2;
+        }
+
+        dev = drc->dev;
+        nvdimm = NVDIMM(dev);
+        size = object_property_get_int(OBJECT(nvdimm), PC_DIMM_SIZE_PROP, NULL);
+
+        no_of_scm_blocks_unbound = size / SPAPR_MINIMUM_SCM_BLOCK_SIZE;
+    } else if (target_scope == H_UNBIND_SCOPE_ALL) {
+        GSList *list, *dimms;
+
+        dimms = nvdimm_get_device_list();
+        for (list = dimms; list; list = list->next) {
+            nvdimm = list->data;
+            size = object_property_get_int(OBJECT(nvdimm), PC_DIMM_SIZE_PROP,
+                                           NULL);
+
+            no_of_scm_blocks_unbound += size / SPAPR_MINIMUM_SCM_BLOCK_SIZE;
+        }
+        g_slist_free(dimms);
+    } else {
+        return H_PARAMETER;
+    }
+
+    /* continue_token should be zero as this hcall doesn't return H_BUSY. */
+    if (continue_token > 0) {
+        return H_P4;
+    }
+
+    args[1] = no_of_scm_blocks_unbound;
+
+    /* let unplug take care of actual unbind */
+    return H_SUCCESS;
+}
+
 static spapr_hcall_fn papr_hypercall_table[(MAX_HCALL_OPCODE / 4) + 1];
 static spapr_hcall_fn kvmppc_hypercall_table[KVMPPC_HCALL_MAX - KVMPPC_HCALL_BASE + 1];
 static spapr_hcall_fn svm_hypercall_table[(SVM_HCALL_MAX - SVM_HCALL_BASE) / 4 + 1];
@@ -2079,6 +2372,13 @@ static void hypercall_register_types(void)
     /* qemu/KVM-PPC specific hcalls */
     spapr_register_hypercall(KVMPPC_H_RTAS, h_rtas);
 
+    /* qemu/scm specific hcalls */
+    spapr_register_hypercall(H_SCM_READ_METADATA, h_scm_read_metadata);
+    spapr_register_hypercall(H_SCM_WRITE_METADATA, h_scm_write_metadata);
+    spapr_register_hypercall(H_SCM_BIND_MEM, h_scm_bind_mem);
+    spapr_register_hypercall(H_SCM_UNBIND_MEM, h_scm_unbind_mem);
+    spapr_register_hypercall(H_SCM_UNBIND_ALL, h_scm_unbind_all);
+
     /* ibm,client-architecture-support support */
     spapr_register_hypercall(KVMPPC_H_CAS, h_client_architecture_support);
 
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index a8cb3513d0..e1933e877d 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -286,6 +286,7 @@ struct SpaprMachineState {
 #define H_P7              -60
 #define H_P8              -61
 #define H_P9              -62
+#define H_OVERLAP         -68
 #define H_UNSUPPORTED_FLAG -256
 #define H_MULTI_THREADS_ACTIVE -9005
 
@@ -493,8 +494,13 @@ struct SpaprMachineState {
 #define H_INT_ESB               0x3C8
 #define H_INT_SYNC              0x3CC
 #define H_INT_RESET             0x3D0
+#define H_SCM_READ_METADATA     0x3E4
+#define H_SCM_WRITE_METADATA    0x3E8
+#define H_SCM_BIND_MEM          0x3EC
+#define H_SCM_UNBIND_MEM        0x3F0
+#define H_SCM_UNBIND_ALL        0x3FC
 
-#define MAX_HCALL_OPCODE        H_INT_RESET
+#define MAX_HCALL_OPCODE        H_SCM_UNBIND_ALL
 
 /* The hcalls above are standardized in PAPR and implemented by pHyp
  * as well.