From: Shivaprasad G Bhat
To: imammedo@redhat.com, david@gibson.dropbear.id.au, xiaoguangrong.eric@gmail.com, mst@redhat.com
Date: Mon, 13 May 2019 04:27:05 -0500
In-Reply-To: <155773946961.49142.5208084426066783536.stgit@lep8c.aus.stglabs.ibm.com>
References: <155773946961.49142.5208084426066783536.stgit@lep8c.aus.stglabs.ibm.com>
Message-Id: <155773955841.49142.7575207917992666491.stgit@lep8c.aus.stglabs.ibm.com>
Subject: [Qemu-devel] [RFC v2 PATCH 1/3] mem: make nvdimm_device_list global
Cc: qemu-ppc@nongnu.org, qemu-devel@nongnu.org, sbhat@linux.ibm.com

nvdimm_device_list() is required for parsing the list of NVDIMM devices
in subsequent patches. Move it to a common area.

Signed-off-by: Shivaprasad G Bhat
Reviewed-by: Igor Mammedov
---
This looks to break the mips*-softmmu build. The MIPS targets depend on
CONFIG_NVDIMM_ACPI, and adding CONFIG_NVDIMM looks wrong. Is there some
CONFIG tweak I need to do here, or should I move these functions to
utilities as I have done here:
https://github.com/ShivaprasadGBhat/qemu/commit/1b8eaea132a8b19c90b4fcc4d93da356029f4667 ?
---
 hw/acpi/nvdimm.c        |   27 ---------------------------
 hw/mem/nvdimm.c         |   27 +++++++++++++++++++++++++++
 include/hw/mem/nvdimm.h |    2 ++
 3 files changed, 29 insertions(+), 27 deletions(-)

diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index 9fdad6dc3f..94baba1b8f 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -33,33 +33,6 @@
 #include "hw/nvram/fw_cfg.h"
 #include "hw/mem/nvdimm.h"
 
-static int nvdimm_device_list(Object *obj, void *opaque)
-{
-    GSList **list = opaque;
-
-    if (object_dynamic_cast(obj, TYPE_NVDIMM)) {
-        *list = g_slist_append(*list, DEVICE(obj));
-    }
-
-    object_child_foreach(obj, nvdimm_device_list, opaque);
-    return 0;
-}
-
-/*
- * inquire NVDIMM devices and link them into the list which is
- * returned to the caller.
- *
- * Note: it is the caller's responsibility to free the list to avoid
- * memory leak.
- */
-static GSList *nvdimm_get_device_list(void)
-{
-    GSList *list = NULL;
-
-    object_child_foreach(qdev_get_machine(), nvdimm_device_list, &list);
-    return list;
-}
-
 #define NVDIMM_UUID_LE(a, b, c, d0, d1, d2, d3, d4, d5, d6, d7)             \
    { (a) & 0xff, ((a) >> 8) & 0xff, ((a) >> 16) & 0xff, ((a) >> 24) & 0xff, \
      (b) & 0xff, ((b) >> 8) & 0xff, (c) & 0xff, ((c) >> 8) & 0xff,          \
diff --git a/hw/mem/nvdimm.c b/hw/mem/nvdimm.c
index bf2adf5e16..f221ec7a9a 100644
--- a/hw/mem/nvdimm.c
+++ b/hw/mem/nvdimm.c
@@ -29,6 +29,33 @@
 #include "hw/mem/nvdimm.h"
 #include "hw/mem/memory-device.h"
 
+static int nvdimm_device_list(Object *obj, void *opaque)
+{
+    GSList **list = opaque;
+
+    if (object_dynamic_cast(obj, TYPE_NVDIMM)) {
+        *list = g_slist_append(*list, DEVICE(obj));
+    }
+
+    object_child_foreach(obj, nvdimm_device_list, opaque);
+    return 0;
+}
+
+/*
+ * inquire NVDIMM devices and link them into the list which is
+ * returned to the caller.
+ *
+ * Note: it is the caller's responsibility to free the list to avoid
+ * memory leak.
+ */
+GSList *nvdimm_get_device_list(void)
+{
+    GSList *list = NULL;
+
+    object_child_foreach(qdev_get_machine(), nvdimm_device_list, &list);
+    return list;
+}
+
 static void nvdimm_get_label_size(Object *obj, Visitor *v, const char *name,
                                   void *opaque, Error **errp)
 {
diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
index 523a9b3d4a..bad4fc04b5 100644
--- a/include/hw/mem/nvdimm.h
+++ b/include/hw/mem/nvdimm.h
@@ -150,4 +150,6 @@ void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
                        uint32_t ram_slots);
 void nvdimm_plug(NVDIMMState *state);
 void nvdimm_acpi_plug_cb(HotplugHandler *hotplug_dev, DeviceState *dev);
+GSList *nvdimm_get_device_list(void);
+
 #endif
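The helper made global by this patch returns a GSList that the caller owns and must free, and later patches in the series walk it exactly that way. Below is a minimal, self-contained sketch of that caller-side contract. GLib and the QOM object tree are not available in a standalone build, so the `GSList` type and the `g_slist_append()`/`g_slist_free()` helpers here are small stand-ins written for this sketch, not GLib's actual implementation; the traversal and ownership pattern is the point.

```c
/* Sketch of the caller-side contract of nvdimm_get_device_list():
 * the function hands back a singly linked list; the caller walks it
 * via ->next and then frees the list nodes (not the device objects). */
#include <stdlib.h>
#include <stddef.h>

typedef struct GSList {        /* stand-in for GLib's GSList */
    void *data;
    struct GSList *next;
} GSList;

/* Append to the tail, returning the (possibly new) head. */
static GSList *g_slist_append(GSList *list, void *data)
{
    GSList *node = malloc(sizeof(*node));
    node->data = data;
    node->next = NULL;
    if (!list) {
        return node;
    }
    GSList *it = list;
    while (it->next) {
        it = it->next;
    }
    it->next = node;
    return list;
}

/* Free the list nodes only; the pointed-to devices are untouched. */
static void g_slist_free(GSList *list)
{
    while (list) {
        GSList *next = list->next;
        free(list);
        list = next;
    }
}

/* Walk the list the way a consumer of nvdimm_get_device_list() would. */
static size_t count_devices(GSList *dimms)
{
    size_t n = 0;
    for (GSList *it = dimms; it; it = it->next) {
        n++;
    }
    return n;
}
```

A caller collects device pointers into the list, iterates over `->next`, and finally frees only the nodes, which is what the "caller's responsibility to free the list" comment on the moved function requires.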
From: Shivaprasad G Bhat
To: imammedo@redhat.com, david@gibson.dropbear.id.au, xiaoguangrong.eric@gmail.com, mst@redhat.com
Date: Mon, 13 May 2019 04:28:02 -0500
In-Reply-To: <155773946961.49142.5208084426066783536.stgit@lep8c.aus.stglabs.ibm.com>
References: <155773946961.49142.5208084426066783536.stgit@lep8c.aus.stglabs.ibm.com>
Message-Id: <155773963257.49142.17067912880307967487.stgit@lep8c.aus.stglabs.ibm.com>
Subject: [Qemu-devel] [RFC v2 PATCH 2/3] spapr: Add NVDIMM device support
Cc: qemu-ppc@nongnu.org, qemu-devel@nongnu.org, sbhat@linux.ibm.com

Add support for NVDIMM devices on sPAPR. Piggyback on the existing nvdimm
device interface in QEMU to support virtual NVDIMM devices for Power (may
have to re-look at this later). Create the required DT entries for the
device (some entries have dummy values right now).

The patch creates the required DT node and sends a hotplug interrupt to
the guest. The guest is expected to undertake the normal DR resource add
path in response and start issuing PAPR SCM hcalls.

This is how it can be used:

Add nvdimm=on to the qemu machine argument.
  Ex : -machine pseries,nvdimm=on

For coldplug, add the device on the qemu command line as shown below:
-object memory-backend-file,id=memnvdimm0,prealloc=yes,mem-path=/tmp/nvdimm0,share=yes,size=1073872896
-device nvdimm,label-size=128k,uuid=75a3cdd7-6a2f-4791-8d15-fe0a920e8e9e,memdev=memnvdimm0,id=nvdimm0,slot=0

For hotplug, add the device from the monitor as below:
object_add memory-backend-file,id=memnvdimm0,prealloc=yes,mem-path=/tmp/nvdimm0,share=yes,size=1073872896
device_add nvdimm,label-size=128k,uuid=75a3cdd7-6a2f-4791-8d15-fe0a920e8e9e,memdev=memnvdimm0,id=nvdimm0,slot=0

Signed-off-by: Shivaprasad G Bhat
Signed-off-by: Bharata B Rao [Early implementation]
---
 default-configs/ppc64-softmmu.mak |    1
 hw/mem/Kconfig                    |    2
 hw/mem/nvdimm.c                   |   43 ++++++++
 hw/ppc/spapr.c                    |  202 +++++++++++++++++++++++++++++++++--
 hw/ppc/spapr_drc.c                |   18 +++
 hw/ppc/spapr_events.c             |    4 +
 include/hw/mem/nvdimm.h           |    6 +
 include/hw/ppc/spapr.h            |   12 ++
 include/hw/ppc/spapr_drc.h        |    9 ++
 9 files changed, 286 insertions(+), 11 deletions(-)

diff --git a/default-configs/ppc64-softmmu.mak b/default-configs/ppc64-softmmu.mak
index cca52665d9..ae0841fa3a 100644
--- a/default-configs/ppc64-softmmu.mak
+++ b/default-configs/ppc64-softmmu.mak
@@ -8,3 +8,4 @@ CONFIG_POWERNV=y
 
 # For pSeries
 CONFIG_PSERIES=y
+CONFIG_NVDIMM=y
diff --git a/hw/mem/Kconfig b/hw/mem/Kconfig
index 620fd4cb59..2ad052a536 100644
--- a/hw/mem/Kconfig
+++ b/hw/mem/Kconfig
@@ -8,4 +8,4 @@ config MEM_DEVICE
 config NVDIMM
     bool
     default y
-    depends on PC
+    depends on (PC || PSERIES)
diff --git a/hw/mem/nvdimm.c b/hw/mem/nvdimm.c
index f221ec7a9a..deaebaaaa5 100644
--- a/hw/mem/nvdimm.c
+++ b/hw/mem/nvdimm.c
@@ -93,11 +93,54 @@ out:
     error_propagate(errp, local_err);
 }
 
+static void nvdimm_get_uuid(Object *obj, Visitor *v, const char *name,
+                            void *opaque, Error **errp)
+{
+    NVDIMMDevice *nvdimm = NVDIMM(obj);
+    char *value = NULL;
+
+    value = qemu_uuid_unparse_strdup(&nvdimm->uuid);
+
+    visit_type_str(v, name, &value, errp);
+}
+
+
+static void nvdimm_set_uuid(Object *obj, Visitor *v, const char *name,
+                            void *opaque, Error **errp)
+{
+    NVDIMMDevice *nvdimm = NVDIMM(obj);
+    Error *local_err = NULL;
+    char *value;
+
+    visit_type_str(v, name, &value, &local_err);
+    if (local_err) {
+        goto out;
+    }
+
+    if (strcmp(value, "") == 0) {
+        error_setg(&local_err, "Property '%s.%s' requires a non-empty UUID",
+                   object_get_typename(obj), name);
+        goto out;
+    }
+
+    if (qemu_uuid_parse(value, &nvdimm->uuid) != 0) {
+        error_setg(errp, "Invalid UUID");
+        return;
+    }
+out:
+    error_propagate(errp, local_err);
+}
+
+
 static void nvdimm_init(Object *obj)
 {
     object_property_add(obj, NVDIMM_LABEL_SIZE_PROP, "int",
                         nvdimm_get_label_size, nvdimm_set_label_size, NULL,
                         NULL, NULL);
+
+    object_property_add(obj, NVDIMM_UUID_PROP, "QemuUUID", nvdimm_get_uuid,
+                        nvdimm_set_uuid, NULL, NULL, NULL);
 }
 
 static void nvdimm_finalize(Object *obj)
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 2ef3ce4362..b6951577e7 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -74,6 +74,7 @@
 #include "qemu/cutils.h"
 #include "hw/ppc/spapr_cpu_core.h"
 #include "hw/mem/memory-device.h"
+#include "hw/mem/nvdimm.h"
 
 #include <libfdt.h>
 
@@ -699,6 +700,7 @@ static int spapr_populate_drmem_v2(SpaprMachineState *spapr, void *fdt,
     uint8_t *int_buf, *cur_index;
     int ret;
     uint64_t lmb_size = SPAPR_MEMORY_BLOCK_SIZE;
+    uint64_t scm_block_size = SPAPR_MINIMUM_SCM_BLOCK_SIZE;
     uint64_t addr, cur_addr, size;
     uint32_t nr_boot_lmbs = (machine->device_memory->base / lmb_size);
     uint64_t mem_end = machine->device_memory->base +
@@ -735,12 +737,20 @@ static int spapr_populate_drmem_v2(SpaprMachineState *spapr, void *fdt,
             nr_entries++;
         }
 
-        /* Entry for DIMM */
-        drc = spapr_drc_by_id(TYPE_SPAPR_DRC_LMB, addr / lmb_size);
-        g_assert(drc);
-        elem = spapr_get_drconf_cell(size / lmb_size, addr,
-                                     spapr_drc_index(drc), node,
-                                     SPAPR_LMB_FLAGS_ASSIGNED);
+        if (info->value->type == MEMORY_DEVICE_INFO_KIND_NVDIMM) {
+            /* Entry for NVDIMM */
+            drc = spapr_drc_by_id(TYPE_SPAPR_DRC_PMEM, addr / scm_block_size);
+            g_assert(drc);
+            elem = spapr_get_drconf_cell(size / scm_block_size, addr,
+                                         spapr_drc_index(drc), -1, 0);
+        } else {
+            /* Entry for DIMM */
+            drc = spapr_drc_by_id(TYPE_SPAPR_DRC_LMB, addr / lmb_size);
+            g_assert(drc);
+            elem = spapr_get_drconf_cell(size / lmb_size, addr,
+                                         spapr_drc_index(drc), node,
+                                         SPAPR_LMB_FLAGS_ASSIGNED);
+        }
         QSIMPLEQ_INSERT_TAIL(&drconf_queue, elem, entry);
         nr_entries++;
         cur_addr = addr + size;
@@ -1235,6 +1245,87 @@ static void spapr_dt_hypervisor(SpaprMachineState *spapr, void *fdt)
     }
 }
 
+static int spapr_populate_nvdimm_node(void *fdt, int parent_offset,
+                                      NVDIMMDevice *nvdimm)
+{
+    int child_offset;
+    char buf[40];
+    SpaprDrc *drc;
+    uint32_t drc_idx;
+    uint32_t node = object_property_get_uint(OBJECT(nvdimm), PC_DIMM_NODE_PROP,
+                                             &error_abort);
+    uint64_t addr = object_property_get_uint(OBJECT(nvdimm), PC_DIMM_ADDR_PROP,
+                                             &error_abort);
+    uint32_t associativity[] = {
+        cpu_to_be32(0x4), /* length */
+        cpu_to_be32(0x0), cpu_to_be32(0x0),
+        cpu_to_be32(0x0), cpu_to_be32(node)
+    };
+    uint64_t lsize = nvdimm->label_size;
+    uint64_t size = object_property_get_int(OBJECT(nvdimm), PC_DIMM_SIZE_PROP,
+                                            NULL);
+
+    drc = spapr_drc_by_id(TYPE_SPAPR_DRC_PMEM,
+                          addr / SPAPR_MINIMUM_SCM_BLOCK_SIZE);
+    g_assert(drc);
+
+    drc_idx = spapr_drc_index(drc);
+
+    sprintf(buf, "pmem@%x", drc_idx);
+    child_offset = fdt_add_subnode(fdt, parent_offset, buf);
+    _FDT(child_offset);
+
+    _FDT((fdt_setprop_cell(fdt, child_offset, "reg", drc_idx)));
+    _FDT((fdt_setprop_string(fdt, child_offset, "compatible", "ibm,pmemory")));
+    _FDT((fdt_setprop_string(fdt, child_offset, "device_type", "ibm,pmemory")));
+
+    _FDT((fdt_setprop(fdt, child_offset, "ibm,associativity", associativity,
+                      sizeof(associativity))));
+
+    qemu_uuid_unparse(&nvdimm->uuid, buf);
+    _FDT((fdt_setprop_string(fdt, child_offset, "ibm,unit-guid", buf)));
+
+    _FDT((fdt_setprop_cell(fdt, child_offset, "ibm,my-drc-index", drc_idx)));
+
+    /* NB: What should it be? */
+    _FDT(fdt_setprop_cell(fdt, child_offset, "ibm,latency-attribute", 828));
+
+    _FDT((fdt_setprop_u64(fdt, child_offset, "ibm,block-size",
+                          SPAPR_MINIMUM_SCM_BLOCK_SIZE)));
+    _FDT((fdt_setprop_u64(fdt, child_offset, "ibm,number-of-blocks",
+                          size / SPAPR_MINIMUM_SCM_BLOCK_SIZE)));
+    _FDT((fdt_setprop_cell(fdt, child_offset, "ibm,metadata-size", lsize)));
+
+    return child_offset;
+}
+
+static void spapr_dt_nvdimm(void *fdt)
+{
+    int offset = fdt_subnode_offset(fdt, 0, "persistent-memory");
+    GSList *dimms = NULL;
+
+    if (offset < 0) {
+        offset = fdt_add_subnode(fdt, 0, "persistent-memory");
+        _FDT(offset);
+        _FDT((fdt_setprop_cell(fdt, offset, "#address-cells", 0x2)));
+        _FDT((fdt_setprop_cell(fdt, offset, "#size-cells", 0x0)));
+        _FDT((fdt_setprop_string(fdt, offset, "device_type",
+                                 "ibm,persistent-memory")));
+    }
+
+    /* NB: Add drc-info array here */
+
+    /* Create DT entries for cold plugged NVDIMM devices */
+    dimms = nvdimm_get_device_list();
+    for (; dimms; dimms = dimms->next) {
+        NVDIMMDevice *nvdimm = dimms->data;
+
+        spapr_populate_nvdimm_node(fdt, offset, nvdimm);
+    }
+    g_slist_free(dimms);
+    return;
+}
+
 static void *spapr_build_fdt(SpaprMachineState *spapr)
 {
     MachineState *machine = MACHINE(spapr);
@@ -1368,6 +1459,11 @@ static void *spapr_build_fdt(SpaprMachineState *spapr)
         }
     }
 
+    /* NVDIMM devices */
+    if (spapr->nvdimm_enabled) {
+        spapr_dt_nvdimm(fdt);
+    }
+
     return fdt;
 }
 
@@ -3324,6 +3420,20 @@ static void spapr_set_host_serial(Object *obj, const char *value, Error **errp)
     spapr->host_serial = g_strdup(value);
 }
 
+static bool spapr_get_nvdimm(Object *obj, Error **errp)
+{
+    SpaprMachineState *spapr = SPAPR_MACHINE(obj);
+
+    return spapr->nvdimm_enabled;
+}
+
+static void spapr_set_nvdimm(Object *obj, bool value, Error **errp)
+{
+    SpaprMachineState *spapr = SPAPR_MACHINE(obj);
+
+    spapr->nvdimm_enabled = value;
+}
+
 static void spapr_instance_init(Object *obj)
 {
     SpaprMachineState *spapr = SPAPR_MACHINE(obj);
@@ -3380,6 +3490,12 @@ static void spapr_instance_init(Object *obj)
                                     &error_abort);
     object_property_set_description(obj, "host-serial",
         "Host serial number to advertise in guest device tree", &error_abort);
+
+    object_property_add_bool(obj, "nvdimm",
+                             spapr_get_nvdimm, spapr_set_nvdimm, NULL);
+    object_property_set_description(obj, "nvdimm",
+                                    "Enable support for nvdimm devices",
+                                    NULL);
 }
 
 static void spapr_machine_finalizefn(Object *obj)
@@ -3404,6 +3520,16 @@ static void spapr_nmi(NMIState *n, int cpu_index, Error **errp)
     }
 }
 
+int spapr_pmem_dt_populate(SpaprDrc *drc, SpaprMachineState *spapr,
+                           void *fdt, int *fdt_start_offset, Error **errp)
+{
+    NVDIMMDevice *nvdimm = NVDIMM(drc->dev);
+
+    *fdt_start_offset = spapr_populate_nvdimm_node(fdt, 0, nvdimm);
+
+    return 0;
+}
+
 int spapr_lmb_dt_populate(SpaprDrc *drc, SpaprMachineState *spapr,
                           void *fdt, int *fdt_start_offset, Error **errp)
 {
@@ -3466,12 +3592,37 @@ static void spapr_add_lmbs(DeviceState *dev, uint64_t addr_start, uint64_t size,
     }
 }
 
+static void spapr_add_nvdimm(DeviceState *dev, uint64_t addr, Error **errp)
+{
+    SpaprMachineState *spapr = SPAPR_MACHINE(qdev_get_hotplug_handler(dev));
+    SpaprDrc *drc;
+    bool hotplugged = spapr_drc_hotplugged(dev);
+    Error *local_err = NULL;
+
+    spapr_dr_connector_new(OBJECT(spapr), TYPE_SPAPR_DRC_PMEM,
+                           addr / SPAPR_MINIMUM_SCM_BLOCK_SIZE);
+    drc = spapr_drc_by_id(TYPE_SPAPR_DRC_PMEM,
+                          addr / SPAPR_MINIMUM_SCM_BLOCK_SIZE);
+    g_assert(drc);
+
+    spapr_drc_attach(drc, dev, &local_err);
+    if (local_err) {
+        error_propagate(errp, local_err);
+        return;
+    }
+
+    if (hotplugged) {
+        spapr_hotplug_req_add_by_index(drc);
+    }
+}
+
 static void spapr_memory_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
                               Error **errp)
 {
     Error *local_err = NULL;
     SpaprMachineState *ms = SPAPR_MACHINE(hotplug_dev);
     PCDIMMDevice *dimm = PC_DIMM(dev);
+    bool is_nvdimm = object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM);
     uint64_t size, addr;
 
     size = memory_device_get_region_size(MEMORY_DEVICE(dev), &error_abort);
@@ -3487,8 +3638,14 @@ static void spapr_memory_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
         goto out_unplug;
     }
 
-    spapr_add_lmbs(dev, addr, size, spapr_ovec_test(ms->ov5_cas, OV5_HP_EVT),
-                   &local_err);
+    if (!is_nvdimm) {
+        spapr_add_lmbs(dev, addr, size,
+                       spapr_ovec_test(ms->ov5_cas, OV5_HP_EVT),
+                       &local_err);
+    } else {
+        spapr_add_nvdimm(dev, addr, &local_err);
+    }
+
     if (local_err) {
         goto out_unplug;
     }
@@ -3506,6 +3663,7 @@ static void spapr_memory_pre_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
 {
     const SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(hotplug_dev);
     SpaprMachineState *spapr = SPAPR_MACHINE(hotplug_dev);
+    bool is_nvdimm = object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM);
     PCDIMMDevice *dimm = PC_DIMM(dev);
     Error *local_err = NULL;
     uint64_t size;
@@ -3523,10 +3681,28 @@ static void spapr_memory_pre_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
         return;
     }
 
-    if (size % SPAPR_MEMORY_BLOCK_SIZE) {
+    if (!is_nvdimm && size % SPAPR_MEMORY_BLOCK_SIZE) {
         error_setg(errp, "Hotplugged memory size must be a multiple of "
-                   "%" PRIu64 " MB", SPAPR_MEMORY_BLOCK_SIZE / MiB);
+                   "%" PRIu64 " MB", SPAPR_MEMORY_BLOCK_SIZE / MiB);
         return;
+    } else if (is_nvdimm) {
+        char *uuidstr = NULL;
+        QemuUUID uuid;
+        if (size % SPAPR_MINIMUM_SCM_BLOCK_SIZE) {
+            error_setg(errp, "NVDIMM memory size excluding the label area"
+                       " must be a multiple of "
+                       "%" PRIu64 "MB",
+                       SPAPR_MINIMUM_SCM_BLOCK_SIZE / MiB);
+            return;
+        }
+
+        uuidstr = object_property_get_str(OBJECT(dimm), NVDIMM_UUID_PROP, NULL);
+        qemu_uuid_parse(uuidstr, &uuid);
+        if (qemu_uuid_is_null(&uuid)) {
+            error_setg(errp, "NVDIMM device requires the uuid to be set");
+            g_free(uuidstr);
+            return;
+        }
+        g_free(uuidstr);
     }
 
     memdev = object_property_get_link(OBJECT(dimm), PC_DIMM_MEMDEV_PROP,
@@ -3666,6 +3842,12 @@ static void spapr_memory_unplug_request(HotplugHandler *hotplug_dev,
     int i;
     SpaprDrc *drc;
 
+    if (object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM)) {
+        error_setg(&local_err,
+                   "nvdimm device hot unplug is not supported yet.");
+        goto out;
+    }
+
     size = memory_device_get_region_size(MEMORY_DEVICE(dimm), &error_abort);
     nr_lmbs = size / SPAPR_MEMORY_BLOCK_SIZE;
 
diff --git a/hw/ppc/spapr_drc.c b/hw/ppc/spapr_drc.c
index 597f236b9c..983440a711 100644
--- a/hw/ppc/spapr_drc.c
+++ b/hw/ppc/spapr_drc.c
@@ -707,6 +707,17 @@ static void spapr_drc_phb_class_init(ObjectClass *k, void *data)
     drck->dt_populate = spapr_phb_dt_populate;
 }
 
+static void spapr_drc_pmem_class_init(ObjectClass *k, void *data)
+{
+    SpaprDrcClass *drck = SPAPR_DR_CONNECTOR_CLASS(k);
+
+    drck->typeshift = SPAPR_DR_CONNECTOR_TYPE_SHIFT_PMEM;
+    drck->typename = "MEM";
+    drck->drc_name_prefix = "PMEM ";
+    drck->release = NULL;
+    drck->dt_populate = spapr_pmem_dt_populate;
+}
+
 static const TypeInfo spapr_dr_connector_info = {
     .name      = TYPE_SPAPR_DR_CONNECTOR,
     .parent    = TYPE_DEVICE,
@@ -757,6 +768,12 @@ static const TypeInfo spapr_drc_phb_info = {
     .class_init = spapr_drc_phb_class_init,
 };
 
+static const TypeInfo spapr_drc_pmem_info = {
+    .name      = TYPE_SPAPR_DRC_PMEM,
+    .parent    = TYPE_SPAPR_DRC_LOGICAL,
+    .class_init = spapr_drc_pmem_class_init,
+};
+
 /* helper functions for external users */
 
 SpaprDrc *spapr_drc_by_index(uint32_t index)
@@ -1226,6 +1243,7 @@ static void spapr_drc_register_types(void)
     type_register_static(&spapr_drc_pci_info);
     type_register_static(&spapr_drc_lmb_info);
    type_register_static(&spapr_drc_phb_info);
+    type_register_static(&spapr_drc_pmem_info);
 
     spapr_rtas_register(RTAS_SET_INDICATOR, "set-indicator",
                         rtas_set_indicator);
diff --git a/hw/ppc/spapr_events.c b/hw/ppc/spapr_events.c
index ae0f093f59..1141203a87 100644
--- a/hw/ppc/spapr_events.c
+++ b/hw/ppc/spapr_events.c
@@ -193,6 +193,7 @@ struct rtas_event_log_v6_hp {
 #define RTAS_LOG_V6_HP_TYPE_SLOT                         3
 #define RTAS_LOG_V6_HP_TYPE_PHB                          4
 #define RTAS_LOG_V6_HP_TYPE_PCI                          5
+#define RTAS_LOG_V6_HP_TYPE_PMEM                         6
     uint8_t hotplug_action;
 #define RTAS_LOG_V6_HP_ACTION_ADD                        1
 #define RTAS_LOG_V6_HP_ACTION_REMOVE                     2
@@ -529,6 +530,9 @@ static void spapr_hotplug_req_event(uint8_t hp_id, uint8_t hp_action,
     case SPAPR_DR_CONNECTOR_TYPE_PHB:
         hp->hotplug_type = RTAS_LOG_V6_HP_TYPE_PHB;
         break;
+    case SPAPR_DR_CONNECTOR_TYPE_PMEM:
+        hp->hotplug_type = RTAS_LOG_V6_HP_TYPE_PMEM;
+        break;
     default:
         /* we shouldn't be signaling hotplug events for resources
          * that don't support them
diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
index bad4fc04b5..3089615e17 100644
--- a/include/hw/mem/nvdimm.h
+++ b/include/hw/mem/nvdimm.h
@@ -49,6 +49,7 @@
                                                TYPE_NVDIMM)
 
 #define NVDIMM_LABEL_SIZE_PROP "label-size"
+#define NVDIMM_UUID_PROP       "uuid"
 #define NVDIMM_UNARMED_PROP    "unarmed"
 
 struct NVDIMMDevice {
@@ -83,6 +84,11 @@ struct NVDIMMDevice {
      * the guest write persistence.
      */
     bool unarmed;
+
+    /*
+     * The PPC64 - spapr requires each nvdimm device to have a uuid.
+     */
+    QemuUUID uuid;
 };
 typedef struct NVDIMMDevice NVDIMMDevice;
 
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index 7e32f309c2..394ea26335 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -202,6 +202,7 @@ struct SpaprMachineState {
     SpaprCapabilities def, eff, mig;
 
     unsigned gpu_numa_id;
+    bool nvdimm_enabled;
 };
 
 #define H_SUCCESS 0
@@ -794,6 +795,8 @@ int spapr_core_dt_populate(SpaprDrc *drc, SpaprMachineState *spapr,
 void spapr_lmb_release(DeviceState *dev);
 int spapr_lmb_dt_populate(SpaprDrc *drc, SpaprMachineState *spapr,
                           void *fdt, int *fdt_start_offset, Error **errp);
+int spapr_pmem_dt_populate(SpaprDrc *drc, SpaprMachineState *spapr,
+                           void *fdt, int *fdt_start_offset, Error **errp);
 void spapr_phb_release(DeviceState *dev);
 int spapr_phb_dt_populate(SpaprDrc *drc, SpaprMachineState *spapr,
                           void *fdt, int *fdt_start_offset, Error **errp);
@@ -829,6 +832,15 @@ int spapr_rtc_import_offset(SpaprRtcState *rtc, int64_t legacy_offset);
 #define SPAPR_LMB_FLAGS_DRC_INVALID 0x00000020
 #define SPAPR_LMB_FLAGS_RESERVED 0x00000080
 
+/*
+ * The nvdimm size should be aligned to the SCM block size.
+ * The SCM block size should be aligned to SPAPR_MEMORY_BLOCK_SIZE
+ * in order to keep SCM regions from overlapping with dimm memory regions.
+ * SCM devices can have variable block sizes. For now, fix the
+ * block size to the minimum value.
+ */
+#define SPAPR_MINIMUM_SCM_BLOCK_SIZE SPAPR_MEMORY_BLOCK_SIZE
+
 void spapr_do_system_reset_on_cpu(CPUState *cs, run_on_cpu_data arg);
 
 #define HTAB_SIZE(spapr)        (1ULL << ((spapr)->htab_shift))
diff --git a/include/hw/ppc/spapr_drc.h b/include/hw/ppc/spapr_drc.h
index fad0a887f9..8b7ce41a0f 100644
--- a/include/hw/ppc/spapr_drc.h
+++ b/include/hw/ppc/spapr_drc.h
@@ -79,6 +79,13 @@
 #define SPAPR_DRC_PHB(obj) OBJECT_CHECK(SpaprDrc, (obj), \
                                         TYPE_SPAPR_DRC_PHB)
 
+#define TYPE_SPAPR_DRC_PMEM "spapr-drc-pmem"
+#define SPAPR_DRC_PMEM_GET_CLASS(obj) \
+        OBJECT_GET_CLASS(SpaprDrcClass, obj, TYPE_SPAPR_DRC_PMEM)
+#define SPAPR_DRC_PMEM_CLASS(klass) \
+        OBJECT_CLASS_CHECK(SpaprDrcClass, klass, TYPE_SPAPR_DRC_PMEM)
+#define SPAPR_DRC_PMEM(obj) OBJECT_CHECK(SpaprDrc, (obj), \
+                                         TYPE_SPAPR_DRC_PMEM)
 /*
  * Various hotplug types managed by SpaprDrc
  *
@@ -96,6 +103,7 @@ typedef enum {
     SPAPR_DR_CONNECTOR_TYPE_SHIFT_VIO = 3,
     SPAPR_DR_CONNECTOR_TYPE_SHIFT_PCI = 4,
     SPAPR_DR_CONNECTOR_TYPE_SHIFT_LMB = 8,
+    SPAPR_DR_CONNECTOR_TYPE_SHIFT_PMEM = 9,
 } SpaprDrcTypeShift;
 
 typedef enum {
@@ -105,6 +113,7 @@ typedef enum {
     SPAPR_DR_CONNECTOR_TYPE_VIO = 1 << SPAPR_DR_CONNECTOR_TYPE_SHIFT_VIO,
     SPAPR_DR_CONNECTOR_TYPE_PCI = 1 << SPAPR_DR_CONNECTOR_TYPE_SHIFT_PCI,
     SPAPR_DR_CONNECTOR_TYPE_LMB = 1 << SPAPR_DR_CONNECTOR_TYPE_SHIFT_LMB,
+    SPAPR_DR_CONNECTOR_TYPE_PMEM = 1 << SPAPR_DR_CONNECTOR_TYPE_SHIFT_PMEM,
 } SpaprDrcType;
 
 /*
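The pre-plug path in this patch rejects an NVDIMM whose guest-visible size (the backend minus the label area) is not a multiple of `SPAPR_MINIMUM_SCM_BLOCK_SIZE`, which the series pins to `SPAPR_MEMORY_BLOCK_SIZE`. A standalone sketch of that arithmetic follows; the 256 MiB block size is an assumed value for illustration, and the 128 KiB label matches the command-line example in the commit message.

```c
/* Sketch of the spapr_memory_pre_plug() size check: the region that
 * backs the guest-visible NVDIMM (label area already subtracted) must
 * be a whole number of SCM blocks. */
#include <stdbool.h>
#include <stdint.h>

#define KiB ((uint64_t)1 << 10)
#define MiB ((uint64_t)1 << 20)

/* Assumed value for this sketch: 256 MiB memory blocks on pseries. */
#define SPAPR_MEMORY_BLOCK_SIZE      (256 * MiB)
#define SPAPR_MINIMUM_SCM_BLOCK_SIZE SPAPR_MEMORY_BLOCK_SIZE

static bool scm_size_is_valid(uint64_t region_size)
{
    return region_size % SPAPR_MINIMUM_SCM_BLOCK_SIZE == 0;
}
```

The example backend of 1073872896 bytes is 1 GiB plus a 128 KiB label area; once the label is carved off, the remaining 1 GiB (four 256 MiB SCM blocks) passes the check, while the raw backend size on its own would not.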
From: Shivaprasad G Bhat
To: imammedo@redhat.com, david@gibson.dropbear.id.au,
    xiaoguangrong.eric@gmail.com, mst@redhat.com
Date: Mon, 13 May 2019 04:28:36 -0500
In-Reply-To: <155773946961.49142.5208084426066783536.stgit@lep8c.aus.stglabs.ibm.com>
References: <155773946961.49142.5208084426066783536.stgit@lep8c.aus.stglabs.ibm.com>
User-Agent: StGit/0.17.1-dirty
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Message-Id: <155773968985.49142.1164691973469833295.stgit@lep8c.aus.stglabs.ibm.com>
Subject: [Qemu-devel] [RFC v2 PATCH 3/3] spapr: Add Hcalls to support PAPR NVDIMM device
Cc: qemu-ppc@nongnu.org, qemu-devel@nongnu.org, sbhat@linux.ibm.com

This patch implements a few of the hcalls needed for NVDIMM support.

PAPR semantics are such that each NVDIMM device comprises multiple
SCM (Storage Class Memory) blocks. The guest requests the hypervisor to
bind each of the SCM blocks of the NVDIMM device using hcalls. There can
also be SCM block unbind requests, in case of driver errors or unplug
(not supported now) use cases. NVDIMM label reads and writes are likewise
done through hcalls.

Since each virtual NVDIMM device is divided into multiple SCM blocks,
the bind, unbind, and query hcalls on those blocks can arrive
independently. This doesn't fit well into the QEMU device semantics,
where map/unmap are done at whole-device/object granularity. The patch
therefore doesn't actually bind/unbind on each hcall, but lets that
happen at the object_add/del phase itself instead.
The guest kernel makes bind/unbind requests for the virtual NVDIMM device
at region-level granularity. Without interleaving, each virtual NVDIMM
device is presented as a separate region, and there is currently no way to
configure virtual NVDIMM interleaving for guests. So a bind/unbind hcall
can never cover only a subset of a virtual NVDIMM's SCM blocks, and it is
safe to bind/unbind everything during object_add/del.

Signed-off-by: Shivaprasad G Bhat
---
 hw/ppc/spapr_hcall.c   |  202 ++++++++++++++++++++++++++++++++++++++++++++++++
 include/hw/ppc/spapr.h |    7 +-
 2 files changed, 208 insertions(+), 1 deletion(-)

diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index 6c16d2b120..b6e7d04dcf 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -3,11 +3,13 @@
 #include "sysemu/hw_accel.h"
 #include "sysemu/sysemu.h"
 #include "qemu/log.h"
+#include "qemu/range.h"
 #include "qemu/error-report.h"
 #include "cpu.h"
 #include "exec/exec-all.h"
 #include "helper_regs.h"
 #include "hw/ppc/spapr.h"
+#include "hw/ppc/spapr_drc.h"
 #include "hw/ppc/spapr_cpu_core.h"
 #include "mmu-hash64.h"
 #include "cpu-models.h"
@@ -16,6 +18,7 @@
 #include "hw/ppc/spapr_ovec.h"
 #include "mmu-book3s-v3.h"
 #include "hw/mem/memory-device.h"
+#include "hw/mem/nvdimm.h"

 static bool has_spr(PowerPCCPU *cpu, int spr)
 {
@@ -1795,6 +1798,199 @@ static target_ulong h_update_dt(PowerPCCPU *cpu, SpaprMachineState *spapr,
     return H_SUCCESS;
 }

+static target_ulong h_scm_read_metadata(PowerPCCPU *cpu,
+                                        SpaprMachineState *spapr,
+                                        target_ulong opcode,
+                                        target_ulong *args)
+{
+    uint32_t drc_index = args[0];
+    uint64_t offset = args[1];
+    uint64_t numBytesToRead = args[2];
+    SpaprDrc *drc = spapr_drc_by_index(drc_index);
+    NVDIMMDevice *nvdimm = NULL;
+    NVDIMMClass *ddc = NULL;
+
+    if (drc && spapr_drc_type(drc) != SPAPR_DR_CONNECTOR_TYPE_PMEM) {
+        return H_PARAMETER;
+    }
+
+    if (numBytesToRead != 1 &&
+        numBytesToRead != 2 &&
+        numBytesToRead != 4 && numBytesToRead != 8) {
+        return H_P3;
+    }
+
+    nvdimm = NVDIMM(drc->dev);
+    if ((offset + numBytesToRead < offset) ||
+        (nvdimm->label_size < numBytesToRead + offset)) {
+        return H_P2;
+    }
+
+    ddc = NVDIMM_GET_CLASS(nvdimm);
+    ddc->read_label_data(nvdimm, &args[0], numBytesToRead, offset);
+
+    return H_SUCCESS;
+}
+
+static target_ulong h_scm_write_metadata(PowerPCCPU *cpu,
+                                         SpaprMachineState *spapr,
+                                         target_ulong opcode,
+                                         target_ulong *args)
+{
+    uint32_t drc_index = args[0];
+    uint64_t offset = args[1];
+    uint64_t data = args[2];
+    int8_t numBytesToWrite = args[3];
+    SpaprDrc *drc = spapr_drc_by_index(drc_index);
+    NVDIMMDevice *nvdimm = NULL;
+    DeviceState *dev = NULL;
+    NVDIMMClass *ddc = NULL;
+
+    if (drc && spapr_drc_type(drc) != SPAPR_DR_CONNECTOR_TYPE_PMEM) {
+        return H_PARAMETER;
+    }
+
+    if (numBytesToWrite != 1 && numBytesToWrite != 2 &&
+        numBytesToWrite != 4 && numBytesToWrite != 8) {
+        return H_P4;
+    }
+
+    dev = drc->dev;
+    nvdimm = NVDIMM(dev);
+    if ((nvdimm->label_size < numBytesToWrite + offset) ||
+        (offset + numBytesToWrite < offset)) {
+        return H_P2;
+    }
+
+    ddc = NVDIMM_GET_CLASS(nvdimm);
+    ddc->write_label_data(nvdimm, &data, numBytesToWrite, offset);
+
+    return H_SUCCESS;
+}
+
+static target_ulong h_scm_bind_mem(PowerPCCPU *cpu, SpaprMachineState *spapr,
+                                   target_ulong opcode,
+                                   target_ulong *args)
+{
+    uint32_t drc_index = args[0];
+    uint64_t starting_idx = args[1];
+    uint64_t no_of_scm_blocks_to_bind = args[2];
+    uint64_t target_logical_mem_addr = args[3];
+    uint64_t continue_token = args[4];
+    uint64_t size;
+    uint64_t total_no_of_scm_blocks;
+
+    SpaprDrc *drc = spapr_drc_by_index(drc_index);
+    hwaddr addr;
+    DeviceState *dev = NULL;
+    PCDIMMDevice *dimm = NULL;
+    Error *local_err = NULL;
+
+    if (drc && spapr_drc_type(drc) != SPAPR_DR_CONNECTOR_TYPE_PMEM) {
+        return H_PARAMETER;
+    }
+
+    dev = drc->dev;
+    dimm =
+        PC_DIMM(dev);
+
+    size = object_property_get_uint(OBJECT(dimm),
+                                    PC_DIMM_SIZE_PROP, &local_err);
+    if (local_err) {
+        error_report_err(local_err);
+        return H_PARAMETER;
+    }
+
+    total_no_of_scm_blocks = size / SPAPR_MINIMUM_SCM_BLOCK_SIZE;
+
+    if ((starting_idx > total_no_of_scm_blocks) ||
+        (no_of_scm_blocks_to_bind > total_no_of_scm_blocks)) {
+        return H_P2;
+    }
+
+    if (((starting_idx + no_of_scm_blocks_to_bind) < starting_idx) ||
+        ((starting_idx + no_of_scm_blocks_to_bind) > total_no_of_scm_blocks)) {
+        return H_P3;
+    }
+
+    /* Currently qemu assigns the address. */
+    if (target_logical_mem_addr != 0xffffffffffffffff) {
+        return H_OVERLAP;
+    }
+
+    /*
+     * Currently the continue token should be zero: qemu has already bound
+     * everything and this hcall doesn't return H_BUSY.
+     */
+    if (continue_token > 0) {
+        return H_P5;
+    }
+
+    /* NB: already bound, return the target logical address in R4 */
+    addr = object_property_get_uint(OBJECT(dimm),
+                                    PC_DIMM_ADDR_PROP, &local_err);
+    if (local_err) {
+        error_report_err(local_err);
+        return H_PARAMETER;
+    }
+
+    args[1] = addr;
+    args[2] = no_of_scm_blocks_to_bind;
+
+    return H_SUCCESS;
+}
+
+static target_ulong h_scm_unbind_mem(PowerPCCPU *cpu, SpaprMachineState *spapr,
+                                     target_ulong opcode,
+                                     target_ulong *args)
+{
+    uint32_t drc_index = args[0];
+    uint64_t starting_scm_logical_addr = args[1];
+    uint64_t no_of_scm_blocks_to_unbind = args[2];
+    uint64_t size_to_unbind;
+    uint64_t continue_token = args[3];
+    Range blockrange = range_empty;
+    Range nvdimmrange = range_empty;
+    SpaprDrc *drc = spapr_drc_by_index(drc_index);
+    DeviceState *dev = NULL;
+    PCDIMMDevice *dimm = NULL;
+    uint64_t size, addr;
+
+    if (drc && spapr_drc_type(drc) != SPAPR_DR_CONNECTOR_TYPE_PMEM) {
+        return H_PARAMETER;
+    }
+
+    /* Check if starting_scm_logical_addr is block aligned */
+    if (!QEMU_IS_ALIGNED(starting_scm_logical_addr,
+                         SPAPR_MINIMUM_SCM_BLOCK_SIZE)) {
+        return H_P2;
+    }
+
+    dev = drc->dev;
+    dimm =
+        PC_DIMM(dev);
+    size = object_property_get_int(OBJECT(dimm), PC_DIMM_SIZE_PROP, NULL);
+    addr = object_property_get_int(OBJECT(dimm), PC_DIMM_ADDR_PROP, NULL);
+
+    range_init_nofail(&nvdimmrange, addr, size);
+
+    size_to_unbind = no_of_scm_blocks_to_unbind * SPAPR_MINIMUM_SCM_BLOCK_SIZE;
+
+    range_init_nofail(&blockrange, starting_scm_logical_addr, size_to_unbind);
+
+    if (!range_contains_range(&nvdimmrange, &blockrange)) {
+        return H_P3;
+    }
+
+    if (continue_token > 0) {
+        return H_P3;
+    }
+
+    args[1] = no_of_scm_blocks_to_unbind;
+
+    /* NB: don't do anything, let object_del take care of this for now. */
+    return H_SUCCESS;
+}
+
 static spapr_hcall_fn papr_hypercall_table[(MAX_HCALL_OPCODE / 4) + 1];
 static spapr_hcall_fn kvmppc_hypercall_table[KVMPPC_HCALL_MAX - KVMPPC_HCALL_BASE + 1];

@@ -1894,6 +2090,12 @@ static void hypercall_register_types(void)
     /* qemu/KVM-PPC specific hcalls */
     spapr_register_hypercall(KVMPPC_H_RTAS, h_rtas);

+    /* qemu/scm specific hcalls */
+    spapr_register_hypercall(H_SCM_READ_METADATA, h_scm_read_metadata);
+    spapr_register_hypercall(H_SCM_WRITE_METADATA, h_scm_write_metadata);
+    spapr_register_hypercall(H_SCM_BIND_MEM, h_scm_bind_mem);
+    spapr_register_hypercall(H_SCM_UNBIND_MEM, h_scm_unbind_mem);
+
     /* ibm,client-architecture-support support */
     spapr_register_hypercall(KVMPPC_H_CAS, h_client_architecture_support);

diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index 394ea26335..48e2cc9d67 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -283,6 +283,7 @@ struct SpaprMachineState {
 #define H_P7              -60
 #define H_P8              -61
 #define H_P9              -62
+#define H_OVERLAP         -68
 #define H_UNSUPPORTED_FLAG -256
 #define H_MULTI_THREADS_ACTIVE -9005

@@ -490,8 +491,12 @@ struct SpaprMachineState {
 #define H_INT_ESB               0x3C8
 #define H_INT_SYNC              0x3CC
 #define H_INT_RESET             0x3D0
+#define H_SCM_READ_METADATA     0x3E4
+#define H_SCM_WRITE_METADATA    0x3E8
+#define H_SCM_BIND_MEM          0x3EC
+#define H_SCM_UNBIND_MEM
0x3F0

-#define MAX_HCALL_OPCODE         H_INT_RESET
+#define MAX_HCALL_OPCODE         H_SCM_UNBIND_MEM

 /* The hcalls above are standardized in PAPR and implemented by pHyp
  * as well.