From: Shivaprasad G Bhat
To: qemu-devel@nongnu.org
Cc: xiaoguangrong.eric@gmail.com, sbhat@linux.ibm.com, mst@redhat.com, bharata@linux.ibm.com, qemu-ppc@nongnu.org, imammedo@redhat.com, vaibhav@linux.ibm.com, david@gibson.dropbear.id.au
Date: Tue, 05 Feb 2019 23:25:54 -0600
Subject: [Qemu-devel] [RFC PATCH 1/4] mem: make nvdimm_device_list global
Message-Id: <154943065253.27958.18316807886952418325.stgit@lep8c.aus.stglabs.ibm.com>
In-Reply-To: <154943058200.27958.11497653677605446596.stgit@lep8c.aus.stglabs.ibm.com>

nvdimm_device_list() is required by subsequent patches for parsing the
list of NVDIMM devices. Move it to common code.
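For illustration only (not part of this patch), a caller outside hw/acpi
could consume the exported helper as sketched below; the loop mirrors the
iteration the later sPAPR patch performs, and the caller must free the
list as the function comment notes. The variable names are illustrative.

    GSList *list = nvdimm_get_device_list();
    GSList *iter;

    for (iter = list; iter; iter = iter->next) {
        NVDIMMDevice *nvdimm = iter->data;

        /* ... emit an ACPI or device-tree entry for this NVDIMM ... */
    }
    g_slist_free(list);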
Signed-off-by: Shivaprasad G Bhat
Reviewed-by: Igor Mammedov
---
 hw/acpi/nvdimm.c        |   27 ---------------------------
 hw/mem/nvdimm.c         |   27 +++++++++++++++++++++++++++
 include/hw/mem/nvdimm.h |    2 ++
 3 files changed, 29 insertions(+), 27 deletions(-)

diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index e53b2cb681..34322298c2 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -33,33 +33,6 @@
 #include "hw/nvram/fw_cfg.h"
 #include "hw/mem/nvdimm.h"
 
-static int nvdimm_device_list(Object *obj, void *opaque)
-{
-    GSList **list = opaque;
-
-    if (object_dynamic_cast(obj, TYPE_NVDIMM)) {
-        *list = g_slist_append(*list, DEVICE(obj));
-    }
-
-    object_child_foreach(obj, nvdimm_device_list, opaque);
-    return 0;
-}
-
-/*
- * inquire NVDIMM devices and link them into the list which is
- * returned to the caller.
- *
- * Note: it is the caller's responsibility to free the list to avoid
- * memory leak.
- */
-static GSList *nvdimm_get_device_list(void)
-{
-    GSList *list = NULL;
-
-    object_child_foreach(qdev_get_machine(), nvdimm_device_list, &list);
-    return list;
-}
-
 #define NVDIMM_UUID_LE(a, b, c, d0, d1, d2, d3, d4, d5, d6, d7)             \
    { (a) & 0xff, ((a) >> 8) & 0xff, ((a) >> 16) & 0xff, ((a) >> 24) & 0xff, \
      (b) & 0xff, ((b) >> 8) & 0xff, (c) & 0xff, ((c) >> 8) & 0xff,          \
diff --git a/hw/mem/nvdimm.c b/hw/mem/nvdimm.c
index bf2adf5e16..f221ec7a9a 100644
--- a/hw/mem/nvdimm.c
+++ b/hw/mem/nvdimm.c
@@ -29,6 +29,33 @@
 #include "hw/mem/nvdimm.h"
 #include "hw/mem/memory-device.h"
 
+static int nvdimm_device_list(Object *obj, void *opaque)
+{
+    GSList **list = opaque;
+
+    if (object_dynamic_cast(obj, TYPE_NVDIMM)) {
+        *list = g_slist_append(*list, DEVICE(obj));
+    }
+
+    object_child_foreach(obj, nvdimm_device_list, opaque);
+    return 0;
+}
+
+/*
+ * inquire NVDIMM devices and link them into the list which is
+ * returned to the caller.
+ *
+ * Note: it is the caller's responsibility to free the list to avoid
+ * memory leak.
+ */
+GSList *nvdimm_get_device_list(void)
+{
+    GSList *list = NULL;
+
+    object_child_foreach(qdev_get_machine(), nvdimm_device_list, &list);
+    return list;
+}
+
 static void nvdimm_get_label_size(Object *obj, Visitor *v, const char *name,
                                   void *opaque, Error **errp)
 {
diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
index c5c9b3c7f8..e8b086f2df 100644
--- a/include/hw/mem/nvdimm.h
+++ b/include/hw/mem/nvdimm.h
@@ -150,4 +150,6 @@ void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
                        uint32_t ram_slots);
 void nvdimm_plug(AcpiNVDIMMState *state);
 void nvdimm_acpi_plug_cb(HotplugHandler *hotplug_dev, DeviceState *dev);
+GSList *nvdimm_get_device_list(void);
+
 #endif

From: Shivaprasad G Bhat
To: qemu-devel@nongnu.org
Cc: xiaoguangrong.eric@gmail.com, sbhat@linux.ibm.com, mst@redhat.com, bharata@linux.ibm.com, qemu-ppc@nongnu.org, imammedo@redhat.com, vaibhav@linux.ibm.com, david@gibson.dropbear.id.au
Date: Tue, 05 Feb 2019 23:26:14 -0600
Subject: [Qemu-devel] [RFC PATCH 2/4] mem: implement memory_device_set_region_size
Message-Id: <154943076146.27958.8619995020189724984.stgit@lep8c.aus.stglabs.ibm.com>
In-Reply-To: <154943058200.27958.11497653677605446596.stgit@lep8c.aus.stglabs.ibm.com>

Required for the PAPR NVDIMM implementation, which needs
memory_device_set_region_size() to align the device's region size to the
SCM block size.
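As an illustration (not part of this patch), the sPAPR pre-plug code in a
later patch uses the new helper roughly as follows to shrink the region to
an SCM-block multiple; dev, size, errp and scm_block_size are placeholders
for the caller's context:

    Error *local_err = NULL;

    memory_device_set_region_size(MEMORY_DEVICE(dev),
                                  QEMU_ALIGN_DOWN(size, scm_block_size),
                                  &local_err);
    if (local_err) {
        error_propagate(errp, local_err);
        return;
    }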
Signed-off-by: Shivaprasad G Bhat
---
 hw/mem/memory-device.c         |   15 +++++++++++++++
 include/hw/mem/memory-device.h |    2 ++
 2 files changed, 17 insertions(+)

diff --git a/hw/mem/memory-device.c b/hw/mem/memory-device.c
index 5f2c408036..ad0419e203 100644
--- a/hw/mem/memory-device.c
+++ b/hw/mem/memory-device.c
@@ -330,6 +330,21 @@ uint64_t memory_device_get_region_size(const MemoryDeviceState *md,
     return memory_region_size(mr);
 }
 
+void memory_device_set_region_size(const MemoryDeviceState *md,
+                                   uint64_t size, Error **errp)
+{
+    const MemoryDeviceClass *mdc = MEMORY_DEVICE_GET_CLASS(md);
+    MemoryRegion *mr;
+
+    /* dropping const here is fine as we don't touch the memory region */
+    mr = mdc->get_memory_region((MemoryDeviceState *)md, errp);
+    if (!mr) {
+        return;
+    }
+
+    memory_region_set_size(mr, size);
+}
+
 static const TypeInfo memory_device_info = {
     .name          = TYPE_MEMORY_DEVICE,
     .parent        = TYPE_INTERFACE,
diff --git a/include/hw/mem/memory-device.h b/include/hw/mem/memory-device.h
index 0293a96abb..ba9b72fd28 100644
--- a/include/hw/mem/memory-device.h
+++ b/include/hw/mem/memory-device.h
@@ -103,5 +103,7 @@ void memory_device_plug(MemoryDeviceState *md, MachineState *ms);
 void memory_device_unplug(MemoryDeviceState *md, MachineState *ms);
 uint64_t memory_device_get_region_size(const MemoryDeviceState *md,
                                        Error **errp);
+void memory_device_set_region_size(const MemoryDeviceState *md,
+                                   uint64_t size, Error **errp);
 
 #endif

From: Shivaprasad G Bhat
To: qemu-devel@nongnu.org
Cc: xiaoguangrong.eric@gmail.com, sbhat@linux.ibm.com, mst@redhat.com, bharata@linux.ibm.com, qemu-ppc@nongnu.org, imammedo@redhat.com, vaibhav@linux.ibm.com, david@gibson.dropbear.id.au
Date: Tue, 05 Feb 2019 23:26:27 -0600
Subject: [Qemu-devel] [RFC PATCH 3/4] spapr: Add NVDIMM device support
Message-Id: <154943078167.27958.5009288263168039462.stgit@lep8c.aus.stglabs.ibm.com>
In-Reply-To: <154943058200.27958.11497653677605446596.stgit@lep8c.aus.stglabs.ibm.com>

Add support for NVDIMM devices for sPAPR. Piggyback on the existing nvdimm
device interface in QEMU to support virtual NVDIMM devices for Power (this
may have to be revisited later). Create the required DT entries for the
device (some entries have dummy values right now).

The patch creates the required DT node and sends a hotplug interrupt to
the guest. The guest is expected to follow the normal DR resource-add path
in response and start issuing PAPR SCM hcalls.

This is how it can be used: add nvdimm=on to the QEMU machine argument.
Ex : -machine pseries,nvdimm=3Don For coldplug, the device to be added in qemu command line as shown below -object memory-backend-file,id=3Dmemnvdimm0,prealloc=3Dyes,mem-path=3D/tmp/= nvdimm0.img,share=3Dyes,size=3D512m -device nvdimm,label-size=3D128k,memdev=3Dmemnvdimm0,id=3Dnvdimm0,slot=3D0 For hotplug, the device to be added from monitor as below object_add memory-backend-file,id=3Dmemnvdimm0,prealloc=3Dyes,mem-path=3D/t= mp/nvdimm0.img,share=3Dyes,size=3D512m device_add nvdimm,label-size=3D128k,memdev=3Dmemnvdimm0,id=3Dnvdimm0,slot= =3D0 Signed-off-by: Shivaprasad G Bhat Signed-off-by: Bharata B Rao [Early implementation] --- default-configs/ppc64-softmmu.mak | 1=20 hw/ppc/spapr.c | 212 +++++++++++++++++++++++++++++++++= ++-- hw/ppc/spapr_drc.c | 17 +++ hw/ppc/spapr_events.c | 4 + include/hw/ppc/spapr.h | 10 ++ include/hw/ppc/spapr_drc.h | 9 ++ 6 files changed, 241 insertions(+), 12 deletions(-) diff --git a/default-configs/ppc64-softmmu.mak b/default-configs/ppc64-soft= mmu.mak index 7f34ad0528..b6e1aa5125 100644 --- a/default-configs/ppc64-softmmu.mak +++ b/default-configs/ppc64-softmmu.mak @@ -20,4 +20,5 @@ CONFIG_XIVE=3D$(CONFIG_PSERIES) CONFIG_XIVE_SPAPR=3D$(CONFIG_PSERIES) CONFIG_MEM_DEVICE=3Dy CONFIG_DIMM=3Dy +CONFIG_NVDIMM=3Dy CONFIG_SPAPR_RNG=3Dy diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c index 0fcdd35cbe..7e7a1a8041 100644 --- a/hw/ppc/spapr.c +++ b/hw/ppc/spapr.c @@ -73,6 +73,7 @@ #include "qemu/cutils.h" #include "hw/ppc/spapr_cpu_core.h" #include "hw/mem/memory-device.h" +#include "hw/mem/nvdimm.h" =20 #include =20 @@ -690,6 +691,7 @@ static int spapr_populate_drmem_v2(sPAPRMachineState *s= papr, void *fdt, uint8_t *int_buf, *cur_index, buf_len; int ret; uint64_t lmb_size =3D SPAPR_MEMORY_BLOCK_SIZE; + uint64_t scm_block_size =3D SPAPR_MINIMUM_SCM_BLOCK_SIZE; uint64_t addr, cur_addr, size; uint32_t nr_boot_lmbs =3D (machine->device_memory->base / lmb_size); uint64_t mem_end =3D machine->device_memory->base + @@ -726,15 +728,24 @@ static int spapr_populate_drmem_v2(sPAPRMachineState = *spapr, void *fdt, nr_entries++; } =20 - /* Entry for DIMM */ - drc =3D spapr_drc_by_id(TYPE_SPAPR_DRC_LMB, addr / lmb_size); - g_assert(drc); - elem =3D spapr_get_drconf_cell(size / lmb_size, addr, - spapr_drc_index(drc), node, - SPAPR_LMB_FLAGS_ASSIGNED); + if (info->value->type =3D=3D MEMORY_DEVICE_INFO_KIND_NVDIMM) { + /* Entry for NVDIMM */ + drc =3D spapr_drc_by_id(TYPE_SPAPR_DRC_PMEM, addr / scm_block_= size); + g_assert(drc); + elem =3D spapr_get_drconf_cell(size / scm_block_size, addr, + spapr_drc_index(drc), -1, 0); + cur_addr =3D ROUND_UP(addr + size, scm_block_size); + } else { + /* Entry for DIMM */ + drc =3D spapr_drc_by_id(TYPE_SPAPR_DRC_LMB, addr / lmb_size); + g_assert(drc); + elem =3D spapr_get_drconf_cell(size / lmb_size, addr, + spapr_drc_index(drc), node, + SPAPR_LMB_FLAGS_ASSIGNED); + cur_addr =3D addr + size; + } QSIMPLEQ_INSERT_TAIL(&drconf_queue, elem, entry); nr_entries++; - cur_addr =3D addr + size; } =20 /* Entry for remaining hotpluggable area */ @@ -1225,6 +1236,42 @@ static void spapr_dt_hypervisor(sPAPRMachineState *s= papr, void *fdt) } } =20 +static int spapr_populate_nvdimm_node(void *fdt, int fdt_offset, + uint32_t node, uint64_t addr, + uint64_t size, uint64_t label_size); +static void spapr_create_nvdimm(void *fdt) +{ + int offset =3D fdt_subnode_offset(fdt, 0, "persistent-memory"); + GSList *dimms =3D NULL; + + if (offset < 0) { + offset =3D fdt_add_subnode(fdt, 0, "persistent-memory"); + _FDT(offset); + _FDT((fdt_setprop_cell(fdt, offset, 
"#address-cells", 0x2))); + _FDT((fdt_setprop_cell(fdt, offset, "#size-cells", 0x0))); + _FDT((fdt_setprop_string(fdt, offset, "name", "persistent-memory")= )); + _FDT((fdt_setprop_string(fdt, offset, "device_type", + "ibm,persistent-memory"))); + } + + /*NB : Add drc-info array here */ + + /* Create DT entries for cold plugged NVDIMM devices */ + dimms =3D nvdimm_get_device_list(); + for (; dimms; dimms =3D dimms->next) { + NVDIMMDevice *nvdimm =3D dimms->data; + PCDIMMDevice *di =3D PC_DIMM(nvdimm); + uint64_t lsize =3D nvdimm->label_size; + int size =3D object_property_get_int(OBJECT(nvdimm), PC_DIMM_SIZE_= PROP, + NULL); + + spapr_populate_nvdimm_node(fdt, offset, di->node, di->addr, + size, lsize); + } + g_slist_free(dimms); + return; +} + static void *spapr_build_fdt(sPAPRMachineState *spapr) { MachineState *machine =3D MACHINE(spapr); @@ -1348,6 +1395,11 @@ static void *spapr_build_fdt(sPAPRMachineState *spap= r) exit(1); } =20 + /* NVDIMM devices */ + if (spapr->nvdimm_enabled) { + spapr_create_nvdimm(fdt); + } + return fdt; } =20 @@ -3143,6 +3195,20 @@ static void spapr_set_ic_mode(Object *obj, const cha= r *value, Error **errp) } } =20 +static bool spapr_get_nvdimm(Object *obj, Error **errp) +{ + sPAPRMachineState *spapr =3D SPAPR_MACHINE(obj); + + return spapr->nvdimm_enabled; +} + +static void spapr_set_nvdimm(Object *obj, bool value, Error **errp) +{ + sPAPRMachineState *spapr =3D SPAPR_MACHINE(obj); + + spapr->nvdimm_enabled =3D value; +} + static void spapr_instance_init(Object *obj) { sPAPRMachineState *spapr =3D SPAPR_MACHINE(obj); @@ -3188,6 +3254,11 @@ static void spapr_instance_init(Object *obj) object_property_set_description(obj, "ic-mode", "Specifies the interrupt controller mode (xics, xive, dua= l)", NULL); + object_property_add_bool(obj, "nvdimm", + spapr_get_nvdimm, spapr_set_nvdimm, NULL); + object_property_set_description(obj, "nvdimm", + "Enable support for nvdimm devices", + NULL); } =20 static void spapr_machine_finalizefn(Object *obj) @@ -3267,12 +3338,103 @@ static void spapr_add_lmbs(DeviceState *dev, uint6= 4_t addr_start, uint64_t size, } } =20 +static int spapr_populate_nvdimm_node(void *fdt, int fdt_offset, uint32_t = node, + uint64_t addr, uint64_t size, + uint64_t label_size) +{ + int offset; + char buf[40]; + GString *lcode =3D g_string_sized_new(10); + sPAPRDRConnector *drc; + QemuUUID uuid; + uint32_t drc_idx; + uint32_t associativity[] =3D { + cpu_to_be32(0x4), /* length */ + cpu_to_be32(0x0), cpu_to_be32(0x0), + cpu_to_be32(0x0), cpu_to_be32(node) + }; + + drc =3D spapr_drc_by_id(TYPE_SPAPR_DRC_PMEM, + addr / SPAPR_MINIMUM_SCM_BLOCK_SIZE); + g_assert(drc); + + drc_idx =3D spapr_drc_index(drc); + + sprintf(buf, "pmem@%x", drc_idx); + offset =3D fdt_add_subnode(fdt, fdt_offset, buf); + _FDT(offset); + + _FDT((fdt_setprop_cell(fdt, offset, "reg", drc_idx))); + _FDT((fdt_setprop_string(fdt, offset, "compatible", "ibm,pmemory"))); + _FDT((fdt_setprop_string(fdt, offset, "name", "pmem"))); + _FDT((fdt_setprop_string(fdt, offset, "device_type", "ibm,pmemory"))); + + /*NB : Supposed to be random strings. Currently empty 10 strings! 
*/ + _FDT((fdt_setprop(fdt, offset, "ibm,loc-code", lcode->str, lcode->len)= )); + g_string_free(lcode, TRUE); + + _FDT((fdt_setprop(fdt, offset, "ibm,associativity", associativity, + sizeof(associativity)))); + g_random_set_seed(drc_idx); + qemu_uuid_generate(&uuid); + + qemu_uuid_unparse(&uuid, buf); + _FDT((fdt_setprop_string(fdt, offset, "ibm,unit-guid", buf))); + + _FDT((fdt_setprop_cell(fdt, offset, "ibm,my-drc-index", drc_idx))); + + /*NB : What it should be? */ + _FDT(fdt_setprop_cell(fdt, offset, "ibm,latency-attribute", 828)); + + _FDT((fdt_setprop_u64(fdt, offset, "ibm,block-size", + SPAPR_MINIMUM_SCM_BLOCK_SIZE))); + _FDT((fdt_setprop_u64(fdt, offset, "ibm,number-of-blocks", + size / SPAPR_MINIMUM_SCM_BLOCK_SIZE))); + _FDT((fdt_setprop_cell(fdt, offset, "ibm,metadata-size", label_size))); + + return offset; +} + +static void spapr_add_nvdimm(DeviceState *dev, uint64_t addr, + uint64_t size, uint32_t node, + Error **errp) +{ + sPAPRMachineState *spapr =3D SPAPR_MACHINE(qdev_get_hotplug_handler(de= v)); + sPAPRDRConnector *drc; + bool hotplugged =3D spapr_drc_hotplugged(dev); + NVDIMMDevice *nvdimm =3D NVDIMM(OBJECT(dev)); + void *fdt; + int fdt_offset, fdt_size; + Error *local_err =3D NULL; + + spapr_dr_connector_new(OBJECT(spapr), TYPE_SPAPR_DRC_PMEM, + addr / SPAPR_MINIMUM_SCM_BLOCK_SIZE); + drc =3D spapr_drc_by_id(TYPE_SPAPR_DRC_PMEM, + addr / SPAPR_MINIMUM_SCM_BLOCK_SIZE); + g_assert(drc); + + fdt =3D create_device_tree(&fdt_size); + fdt_offset =3D spapr_populate_nvdimm_node(fdt, 0, node, addr, + size, nvdimm->label_size); + + spapr_drc_attach(drc, dev, fdt, fdt_offset, &local_err); + if (local_err) { + error_propagate(errp, local_err); + return; + } + + if (hotplugged) { + spapr_hotplug_req_add_by_index(drc); + } +} + static void spapr_memory_plug(HotplugHandler *hotplug_dev, DeviceState *de= v, Error **errp) { Error *local_err =3D NULL; sPAPRMachineState *ms =3D SPAPR_MACHINE(hotplug_dev); PCDIMMDevice *dimm =3D PC_DIMM(dev); + bool is_nvdimm =3D object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM); uint64_t size, addr; uint32_t node; =20 @@ -3291,9 +3453,14 @@ static void spapr_memory_plug(HotplugHandler *hotplu= g_dev, DeviceState *dev, =20 node =3D object_property_get_uint(OBJECT(dev), PC_DIMM_NODE_PROP, &error_abort); - spapr_add_lmbs(dev, addr, size, node, - spapr_ovec_test(ms->ov5_cas, OV5_HP_EVT), - &local_err); + if (!is_nvdimm) { + spapr_add_lmbs(dev, addr, size, node, + spapr_ovec_test(ms->ov5_cas, OV5_HP_EVT), + &local_err); + } else { + spapr_add_nvdimm(dev, addr, size, node, &local_err); + } + if (local_err) { goto out_unplug; } @@ -3311,6 +3478,7 @@ static void spapr_memory_pre_plug(HotplugHandler *hot= plug_dev, DeviceState *dev, { const sPAPRMachineClass *smc =3D SPAPR_MACHINE_GET_CLASS(hotplug_dev); sPAPRMachineState *spapr =3D SPAPR_MACHINE(hotplug_dev); + bool is_nvdimm =3D object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM); PCDIMMDevice *dimm =3D PC_DIMM(dev); Error *local_err =3D NULL; uint64_t size; @@ -3328,10 +3496,30 @@ static void spapr_memory_pre_plug(HotplugHandler *h= otplug_dev, DeviceState *dev, return; } =20 - if (size % SPAPR_MEMORY_BLOCK_SIZE) { + if (!is_nvdimm && size % SPAPR_MEMORY_BLOCK_SIZE) { error_setg(errp, "Hotplugged memory size must be a multiple of " - "%" PRIu64 " MB", SPAPR_MEMORY_BLOCK_SIZE / MiB); + "%" PRIu64 " MB", SPAPR_MEMORY_BLOCK_SIZE / MiB); return; + } else if (is_nvdimm) { + NVDIMMDevice *nvdimm =3D NVDIMM(OBJECT(dev)); + if ((nvdimm->label_size + size) % SPAPR_MINIMUM_SCM_BLOCK_SIZE) { + error_setg(errp, "NVDIMM memory size 
must be a multiple of " + "%" PRIu64 "MB", SPAPR_MINIMUM_SCM_BLOCK_SIZE / MiB= ); + return; + } + if (((nvdimm->label_size + size) / SPAPR_MINIMUM_SCM_BLOCK_SIZE) = =3D=3D 1) { + error_setg(errp, "NVDIMM size must be atleast " + "%" PRIu64 "MB", 2 * SPAPR_MINIMUM_SCM_BLOCK_SIZE /= MiB); + return; + } + + /* Align to scm block size, exclude the label */ + memory_device_set_region_size(MEMORY_DEVICE(nvdimm), + QEMU_ALIGN_DOWN(size, SPAPR_MINIMUM_SCM_BLOCK_SIZE), &local= _err); + if (local_err) { + error_propagate(errp, local_err); + return; + } } =20 memdev =3D object_property_get_link(OBJECT(dimm), PC_DIMM_MEMDEV_PROP, diff --git a/hw/ppc/spapr_drc.c b/hw/ppc/spapr_drc.c index 2edb7d1e9c..94ddd102cc 100644 --- a/hw/ppc/spapr_drc.c +++ b/hw/ppc/spapr_drc.c @@ -696,6 +696,16 @@ static void spapr_drc_lmb_class_init(ObjectClass *k, v= oid *data) drck->release =3D spapr_lmb_release; } =20 +static void spapr_drc_pmem_class_init(ObjectClass *k, void *data) +{ + sPAPRDRConnectorClass *drck =3D SPAPR_DR_CONNECTOR_CLASS(k); + + drck->typeshift =3D SPAPR_DR_CONNECTOR_TYPE_SHIFT_PMEM; + drck->typename =3D "MEM"; + drck->drc_name_prefix =3D "PMEM "; + drck->release =3D NULL; +} + static const TypeInfo spapr_dr_connector_info =3D { .name =3D TYPE_SPAPR_DR_CONNECTOR, .parent =3D TYPE_DEVICE, @@ -739,6 +749,12 @@ static const TypeInfo spapr_drc_lmb_info =3D { .class_init =3D spapr_drc_lmb_class_init, }; =20 +static const TypeInfo spapr_drc_pmem_info =3D { + .name =3D TYPE_SPAPR_DRC_PMEM, + .parent =3D TYPE_SPAPR_DRC_LOGICAL, + .class_init =3D spapr_drc_pmem_class_init, +}; + /* helper functions for external users */ =20 sPAPRDRConnector *spapr_drc_by_index(uint32_t index) @@ -1189,6 +1205,7 @@ static void spapr_drc_register_types(void) type_register_static(&spapr_drc_cpu_info); type_register_static(&spapr_drc_pci_info); type_register_static(&spapr_drc_lmb_info); + type_register_static(&spapr_drc_pmem_info); =20 spapr_rtas_register(RTAS_SET_INDICATOR, "set-indicator", rtas_set_indicator); diff --git a/hw/ppc/spapr_events.c b/hw/ppc/spapr_events.c index 32719a1b72..a4fed84346 100644 --- a/hw/ppc/spapr_events.c +++ b/hw/ppc/spapr_events.c @@ -193,6 +193,7 @@ struct rtas_event_log_v6_hp { #define RTAS_LOG_V6_HP_TYPE_SLOT 3 #define RTAS_LOG_V6_HP_TYPE_PHB 4 #define RTAS_LOG_V6_HP_TYPE_PCI 5 +#define RTAS_LOG_V6_HP_TYPE_PMEM 6 uint8_t hotplug_action; #define RTAS_LOG_V6_HP_ACTION_ADD 1 #define RTAS_LOG_V6_HP_ACTION_REMOVE 2 @@ -526,6 +527,9 @@ static void spapr_hotplug_req_event(uint8_t hp_id, uint= 8_t hp_action, case SPAPR_DR_CONNECTOR_TYPE_CPU: hp->hotplug_type =3D RTAS_LOG_V6_HP_TYPE_CPU; break; + case SPAPR_DR_CONNECTOR_TYPE_PMEM: + hp->hotplug_type =3D RTAS_LOG_V6_HP_TYPE_PMEM; + break; default: /* we shouldn't be signaling hotplug events for resources * that don't support them diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h index a947a0a0dc..21a9709afe 100644 --- a/include/hw/ppc/spapr.h +++ b/include/hw/ppc/spapr.h @@ -187,6 +187,7 @@ struct sPAPRMachineState { =20 bool cmd_line_caps[SPAPR_CAP_NUM]; sPAPRCapabilities def, eff, mig; + bool nvdimm_enabled; }; =20 #define H_SUCCESS 0 @@ -798,6 +799,15 @@ int spapr_rtc_import_offset(sPAPRRTCState *rtc, int64_= t legacy_offset); #define SPAPR_LMB_FLAGS_DRC_INVALID 0x00000020 #define SPAPR_LMB_FLAGS_RESERVED 0x00000080 =20 +/* + * The nvdimm size should be aligned to SCM block size. + * The SCM block size should be aligned to SPAPR_MEMORY_BLOCK_SIZE + * inorder to have SCM regions not to overlap with dimm memory regions. 
+ * The SCM devices can have variable block sizes. For now, fixing the
+ * block size to the minimum value.
+ */
+#define SPAPR_MINIMUM_SCM_BLOCK_SIZE SPAPR_MEMORY_BLOCK_SIZE
+
 void spapr_do_system_reset_on_cpu(CPUState *cs, run_on_cpu_data arg);
 
 #define HTAB_SIZE(spapr)        (1ULL << ((spapr)->htab_shift))
diff --git a/include/hw/ppc/spapr_drc.h b/include/hw/ppc/spapr_drc.h
index f6ff32e7e2..65925d00b1 100644
--- a/include/hw/ppc/spapr_drc.h
+++ b/include/hw/ppc/spapr_drc.h
@@ -70,6 +70,13 @@
 #define SPAPR_DRC_LMB(obj) OBJECT_CHECK(sPAPRDRConnector, (obj), \
                                         TYPE_SPAPR_DRC_LMB)
 
+#define TYPE_SPAPR_DRC_PMEM "spapr-drc-pmem"
+#define SPAPR_DRC_PMEM_GET_CLASS(obj) \
+        OBJECT_GET_CLASS(sPAPRDRConnectorClass, obj, TYPE_SPAPR_DRC_PMEM)
+#define SPAPR_DRC_PMEM_CLASS(klass) \
+        OBJECT_CLASS_CHECK(sPAPRDRConnectorClass, klass, TYPE_SPAPR_DRC_PMEM)
+#define SPAPR_DRC_PMEM(obj) OBJECT_CHECK(sPAPRDRConnector, (obj), \
+                                         TYPE_SPAPR_DRC_PMEM)
 /*
  * Various hotplug types managed by sPAPRDRConnector
  *
@@ -87,6 +94,7 @@ typedef enum {
     SPAPR_DR_CONNECTOR_TYPE_SHIFT_VIO = 3,
     SPAPR_DR_CONNECTOR_TYPE_SHIFT_PCI = 4,
     SPAPR_DR_CONNECTOR_TYPE_SHIFT_LMB = 8,
+    SPAPR_DR_CONNECTOR_TYPE_SHIFT_PMEM = 9,
 } sPAPRDRConnectorTypeShift;
 
 typedef enum {
@@ -96,6 +104,7 @@ typedef enum {
     SPAPR_DR_CONNECTOR_TYPE_VIO = 1 << SPAPR_DR_CONNECTOR_TYPE_SHIFT_VIO,
     SPAPR_DR_CONNECTOR_TYPE_PCI = 1 << SPAPR_DR_CONNECTOR_TYPE_SHIFT_PCI,
     SPAPR_DR_CONNECTOR_TYPE_LMB = 1 << SPAPR_DR_CONNECTOR_TYPE_SHIFT_LMB,
+    SPAPR_DR_CONNECTOR_TYPE_PMEM = 1 << SPAPR_DR_CONNECTOR_TYPE_SHIFT_PMEM,
 } sPAPRDRConnectorType;
 
 /*

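An editorial aside, not part of the patch above: the requirement spelled
out in the new spapr.h comment (the SCM block size must remain a multiple
of SPAPR_MEMORY_BLOCK_SIZE so SCM regions cannot overlap DIMM regions)
could be enforced at build time, assuming QEMU's QEMU_BUILD_BUG_ON macro
is acceptable at that spot:

    QEMU_BUILD_BUG_ON(SPAPR_MINIMUM_SCM_BLOCK_SIZE % SPAPR_MEMORY_BLOCK_SIZE != 0);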
From: Shivaprasad G Bhat
To: qemu-devel@nongnu.org
Cc: xiaoguangrong.eric@gmail.com, sbhat@linux.ibm.com, mst@redhat.com, bharata@linux.ibm.com, qemu-ppc@nongnu.org, imammedo@redhat.com, vaibhav@linux.ibm.com, david@gibson.dropbear.id.au
Date: Tue, 05 Feb 2019 23:26:41 -0600
Subject: [Qemu-devel] [RFC PATCH 4/4] spapr: Add Hcalls to support PAPR NVDIMM device
Message-Id: <154943079488.27958.9812294887340963535.stgit@lep8c.aus.stglabs.ibm.com>
In-Reply-To: <154943058200.27958.11497653677605446596.stgit@lep8c.aus.stglabs.ibm.com>

This patch implements a few of the hcalls necessary for NVDIMM support.

In PAPR semantics, each NVDIMM device consists of multiple SCM (Storage
Class Memory) blocks. The guest requests the hypervisor to bind each SCM
block of the NVDIMM device using hcalls. SCM blocks can also be unbound,
for example on driver errors or unplug (not supported now). NVDIMM label
reads and writes are likewise done through hcalls.
Since each virtual NVDIMM device is divided into multiple SCM blocks, the b= ind, unbind, and queries using hcalls on those blocks can come independently. Th= is doesn't fit well into the qemu device semantics, where the map/unmap are do= ne at the (whole)device/object level granularity. The patch doesnt actually bind/unbind on hcalls but let it happen at the object_add/del phase itself instead. The guest kernel makes bind/unbind requests for the virtual NVDIMM device a= t the region level granularity. Without interleaving, each virtual NVDIMM device = is presented as separate region. There is no way to configure the virtual NVDI= MM interleaving for the guests today. So, there is no way a partial bind/unbind request can come for the vNVDIMM in a hcall for a subset of SCM blocks of a virtual NVDIMM. Hence it is safe to do bind/unbind everything during the object_add/del. The kernel today is not using the hcalls - h_scm_mem_query, h_scm_mem_clear, h_scm_query_logical_mem_binding and h_scm_query_block_mem_binding. They are= just stubs in this patch. Signed-off-by: Shivaprasad G Bhat --- hw/ppc/spapr_hcall.c | 230 ++++++++++++++++++++++++++++++++++++++++++++= ++++ include/hw/ppc/spapr.h | 12 ++- 2 files changed, 240 insertions(+), 2 deletions(-) diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c index 17bcaa3822..40553e80d6 100644 --- a/hw/ppc/spapr_hcall.c +++ b/hw/ppc/spapr_hcall.c @@ -3,11 +3,13 @@ #include "sysemu/hw_accel.h" #include "sysemu/sysemu.h" #include "qemu/log.h" +#include "qemu/range.h" #include "qemu/error-report.h" #include "cpu.h" #include "exec/exec-all.h" #include "helper_regs.h" #include "hw/ppc/spapr.h" +#include "hw/ppc/spapr_drc.h" #include "hw/ppc/spapr_cpu_core.h" #include "mmu-hash64.h" #include "cpu-models.h" @@ -16,6 +18,7 @@ #include "hw/ppc/spapr_ovec.h" #include "mmu-book3s-v3.h" #include "hw/mem/memory-device.h" +#include "hw/mem/nvdimm.h" =20 struct LPCRSyncState { target_ulong value; @@ -1808,6 +1811,222 @@ static target_ulong h_update_dt(PowerPCCPU *cpu, sP= APRMachineState *spapr, return H_SUCCESS; } =20 +static target_ulong h_scm_read_metadata(PowerPCCPU *cpu, + sPAPRMachineState *spapr, + target_ulong opcode, + target_ulong *args) +{ + uint32_t drc_index =3D args[0]; + uint64_t offset =3D args[1]; + uint8_t numBytesToRead =3D args[2]; + sPAPRDRConnector *drc =3D spapr_drc_by_index(drc_index); + NVDIMMDevice *nvdimm =3D NULL; + NVDIMMClass *ddc =3D NULL; + + if (numBytesToRead !=3D 1 && numBytesToRead !=3D 2 && + numBytesToRead !=3D 4 && numBytesToRead !=3D 8) { + return H_P3; + } + + if (offset & (numBytesToRead - 1)) { + return H_P2; + } + + if (drc && spapr_drc_type(drc) !=3D SPAPR_DR_CONNECTOR_TYPE_PMEM) { + return H_PARAMETER; + } + + nvdimm =3D NVDIMM(drc->dev); + ddc =3D NVDIMM_GET_CLASS(nvdimm); + + ddc->read_label_data(nvdimm, &args[0], numBytesToRead, offset); + + return H_SUCCESS; +} + + +static target_ulong h_scm_write_metadata(PowerPCCPU *cpu, + sPAPRMachineState *spapr, + target_ulong opcode, + target_ulong *args) +{ + uint32_t drc_index =3D args[0]; + uint64_t offset =3D args[1]; + uint64_t data =3D args[2]; + int8_t numBytesToWrite =3D args[3]; + sPAPRDRConnector *drc =3D spapr_drc_by_index(drc_index); + NVDIMMDevice *nvdimm =3D NULL; + DeviceState *dev =3D NULL; + NVDIMMClass *ddc =3D NULL; + + if (numBytesToWrite !=3D 1 && numBytesToWrite !=3D 2 && + numBytesToWrite !=3D 4 && numBytesToWrite !=3D 8) { + return H_P4; + } + + if (offset & (numBytesToWrite - 1)) { + return H_P2; + } + + if (drc && spapr_drc_type(drc) !=3D 
SPAPR_DR_CONNECTOR_TYPE_PMEM) { + return H_PARAMETER; + } + + dev =3D drc->dev; + nvdimm =3D NVDIMM(dev); + if (offset >=3D nvdimm->label_size) { + return H_P3; + } + + ddc =3D NVDIMM_GET_CLASS(nvdimm); + + ddc->write_label_data(nvdimm, &data, numBytesToWrite, offset); + + return H_SUCCESS; +} + +static target_ulong h_scm_bind_mem(PowerPCCPU *cpu, sPAPRMachineState *spa= pr, + target_ulong opcode, + target_ulong *args) +{ + uint32_t drc_index =3D args[0]; + uint64_t starting_index =3D args[1]; + uint64_t no_of_scm_blocks_to_bind =3D args[2]; + uint64_t target_logical_mem_addr =3D args[3]; + uint64_t continue_token =3D args[4]; + uint64_t size; + uint64_t total_no_of_scm_blocks; + + sPAPRDRConnector *drc =3D spapr_drc_by_index(drc_index); + hwaddr addr; + DeviceState *dev =3D NULL; + PCDIMMDevice *dimm =3D NULL; + Error *local_err =3D NULL; + + if (drc && spapr_drc_type(drc) !=3D SPAPR_DR_CONNECTOR_TYPE_PMEM) { + return H_PARAMETER; + } + + dev =3D drc->dev; + dimm =3D PC_DIMM(dev); + + size =3D object_property_get_uint(OBJECT(dimm), + PC_DIMM_SIZE_PROP, &local_err); + if (local_err) { + error_report_err(local_err); + return H_PARAMETER; + } + + total_no_of_scm_blocks =3D size / SPAPR_MINIMUM_SCM_BLOCK_SIZE; + + if (starting_index > total_no_of_scm_blocks) { + return H_P2; + } + + if ((starting_index + no_of_scm_blocks_to_bind) > total_no_of_scm_bloc= ks) { + return H_P3; + } + + /* Currently qemu assigns the address. */ + if (target_logical_mem_addr !=3D 0xffffffffffffffff) { + return H_OVERLAP; + } + + /* + * Currently continue token should be zero qemu has already bound + * everything and this hcall doesnt return H_BUSY. + */ + if (continue_token > 0) { + return H_P5; + } + + /* NB : Already bound, Return target logical address in R4 */ + addr =3D object_property_get_uint(OBJECT(dimm), + PC_DIMM_ADDR_PROP, &local_err); + if (local_err) { + error_report_err(local_err); + return H_PARAMETER; + } + + args[1] =3D addr; + + return H_SUCCESS; +} + +static target_ulong h_scm_unbind_mem(PowerPCCPU *cpu, sPAPRMachineState *s= papr, + target_ulong opcode, + target_ulong *args) +{ + uint64_t starting_scm_logical_addr =3D args[0]; + uint64_t no_of_scm_blocks_to_unbind =3D args[1]; + uint64_t size_to_unbind; + uint64_t continue_token =3D args[2]; + Range as =3D range_empty; + GSList *dimms =3D NULL; + bool valid =3D false; + + size_to_unbind =3D no_of_scm_blocks_to_unbind * SPAPR_MINIMUM_SCM_BLOC= K_SIZE; + + /* Check if starting_scm_logical_addr is block aligned */ + if (!QEMU_IS_ALIGNED(starting_scm_logical_addr, + SPAPR_MINIMUM_SCM_BLOCK_SIZE)) { + return H_PARAMETER; + } + + range_init_nofail(&as, starting_scm_logical_addr, size_to_unbind); + + dimms =3D nvdimm_get_device_list(); + for (; dimms; dimms =3D dimms->next) { + NVDIMMDevice *nvdimm =3D dimms->data; + Range tmp; + int size =3D object_property_get_int(OBJECT(nvdimm), PC_DIMM_SIZE_= PROP, + NULL); + int addr =3D object_property_get_int(OBJECT(nvdimm), PC_DIMM_ADDR_= PROP, + NULL); + range_init_nofail(&tmp, addr, size); + + if (range_contains_range(&tmp, &as)) { + valid =3D true; + break; + } + } + + if (!valid) { + return H_P2; + } + + if (continue_token > 0) { + return H_P3; + } + + /*NB : dont do anything, let object_del take care of this for now. 
*/ + + return H_SUCCESS; +} + +static target_ulong h_scm_query_block_mem_binding(PowerPCCPU *cpu, + sPAPRMachineState *spapr, + target_ulong opcode, + target_ulong *args) +{ + return H_SUCCESS; +} + +static target_ulong h_scm_query_logical_mem_binding(PowerPCCPU *cpu, + sPAPRMachineState *spa= pr, + target_ulong opcode, + target_ulong *args) +{ + return H_SUCCESS; +} + +static target_ulong h_scm_mem_query(PowerPCCPU *cpu, sPAPRMachineState *sp= apr, + target_ulong opcode, + target_ulong *args) +{ + return H_SUCCESS; +} + static spapr_hcall_fn papr_hypercall_table[(MAX_HCALL_OPCODE / 4) + 1]; static spapr_hcall_fn kvmppc_hypercall_table[KVMPPC_HCALL_MAX - KVMPPC_HCA= LL_BASE + 1]; =20 @@ -1907,6 +2126,17 @@ static void hypercall_register_types(void) /* qemu/KVM-PPC specific hcalls */ spapr_register_hypercall(KVMPPC_H_RTAS, h_rtas); =20 + /* qemu/scm specific hcalls */ + spapr_register_hypercall(H_SCM_READ_METADATA, h_scm_read_metadata); + spapr_register_hypercall(H_SCM_WRITE_METADATA, h_scm_write_metadata); + spapr_register_hypercall(H_SCM_BIND_MEM, h_scm_bind_mem); + spapr_register_hypercall(H_SCM_UNBIND_MEM, h_scm_unbind_mem); + spapr_register_hypercall(H_SCM_QUERY_BLOCK_MEM_BINDING, + h_scm_query_block_mem_binding); + spapr_register_hypercall(H_SCM_QUERY_LOGICAL_MEM_BINDING, + h_scm_query_logical_mem_binding); + spapr_register_hypercall(H_SCM_MEM_QUERY, h_scm_mem_query); + /* ibm,client-architecture-support support */ spapr_register_hypercall(KVMPPC_H_CAS, h_client_architecture_support); =20 diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h index 21a9709afe..28249567f4 100644 --- a/include/hw/ppc/spapr.h +++ b/include/hw/ppc/spapr.h @@ -268,6 +268,7 @@ struct sPAPRMachineState { #define H_P7 -60 #define H_P8 -61 #define H_P9 -62 +#define H_OVERLAP -68 #define H_UNSUPPORTED_FLAG -256 #define H_MULTI_THREADS_ACTIVE -9005 =20 @@ -473,8 +474,15 @@ struct sPAPRMachineState { #define H_INT_ESB 0x3C8 #define H_INT_SYNC 0x3CC #define H_INT_RESET 0x3D0 - -#define MAX_HCALL_OPCODE H_INT_RESET +#define H_SCM_READ_METADATA 0x3E4 +#define H_SCM_WRITE_METADATA 0x3E8 +#define H_SCM_BIND_MEM 0x3EC +#define H_SCM_UNBIND_MEM 0x3F0 +#define H_SCM_QUERY_BLOCK_MEM_BINDING 0x3F4 +#define H_SCM_QUERY_LOGICAL_MEM_BINDING 0x3F8 +#define H_SCM_MEM_QUERY 0x3FC + +#define MAX_HCALL_OPCODE H_SCM_MEM_QUERY =20 /* The hcalls above are standardized in PAPR and implemented by pHyp * as well.
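A closing editorial note, not part of the patch: MAX_HCALL_OPCODE moves
from H_INT_RESET to H_SCM_MEM_QUERY because the hypercall dispatch table
is sized from it, so registering the new H_SCM_* opcodes (0x3E4..0x3FC)
would otherwise index past the array. A simplified sketch of that
assumption follows; the real table and registration helper live in
hw/ppc/spapr_hcall.c and this is paraphrased, not verbatim:

    static spapr_hcall_fn papr_hypercall_table[(MAX_HCALL_OPCODE / 4) + 1];

    void spapr_register_hypercall(target_ulong opcode, spapr_hcall_fn fn)
    {
        /* only in bounds once MAX_HCALL_OPCODE covers the H_SCM_* range */
        assert((opcode & 0x3) == 0 && opcode <= MAX_HCALL_OPCODE);
        papr_hypercall_table[opcode / 4] = fn;
    }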