From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering, Ian Jackson, Wei Liu, Juergen Gross
Subject: [PATCH v20210616 25/36] tools: restore: split handle_page_data
Date: Wed, 16 Jun 2021 14:51:18 +0200
Message-Id: <20210616125129.26563-26-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210616125129.26563-1-olaf@aepfle.de>
References: <20210616125129.26563-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

handle_page_data must be able to read directly into mapped guest memory.
This will avoid unnecessary memcpy calls for data that can be consumed
verbatim.

Split the various steps of record processing:
- move processing to handle_buffered_page_data
- adjust xenforeignmemory_map to set errno in case of failure
- adjust verify mode to set errno in case of failure

This change is preparation for future changes in handle_page_data; no
change in behavior is intended.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/saverestore/common.h  |   9 +
 tools/libs/saverestore/restore.c | 343 ++++++++++++++++++++-----------
 2 files changed, 231 insertions(+), 121 deletions(-)

diff --git a/tools/libs/saverestore/common.h b/tools/libs/saverestore/common.h
index 2ced6f100d..d479f1a918 100644
--- a/tools/libs/saverestore/common.h
+++ b/tools/libs/saverestore/common.h
@@ -242,9 +242,14 @@ struct sr_restore_arrays {
     /* process_page_data */
     xen_pfn_t mfns[MAX_BATCH_SIZE];
     int map_errs[MAX_BATCH_SIZE];
+    void *guest_data[MAX_BATCH_SIZE];
+
     /* populate_pfns */
     xen_pfn_t pp_mfns[MAX_BATCH_SIZE];
     xen_pfn_t pp_pfns[MAX_BATCH_SIZE];
+
+    /* Must be the last member */
+    struct xc_sr_rec_page_data_header pages;
 };
 
 struct xc_sr_context
@@ -335,7 +340,11 @@ struct xc_sr_context
 
         /* Sender has invoked verify mode on the stream. */
         bool verify;
+        void *verify_buf;
+
         struct sr_restore_arrays *m;
+        void *guest_mapping;
+        uint32_t nr_mapped_pages;
     } restore;
 };
 
diff --git a/tools/libs/saverestore/restore.c b/tools/libs/saverestore/restore.c
index 2409c8d603..877fd19a9b 100644
--- a/tools/libs/saverestore/restore.c
+++ b/tools/libs/saverestore/restore.c
@@ -186,123 +186,18 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
     return rc;
 }
 
-/*
- * Given a list of pfns, their types, and a block of page data from the
- * stream, populate and record their types, map the relevant subset and copy
- * the data into the guest.
- */
-static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
-                             xen_pfn_t *pfns, uint32_t *types, void *page_data)
+static int handle_static_data_end_v2(struct xc_sr_context *ctx)
 {
-    xc_interface *xch = ctx->xch;
-    xen_pfn_t *mfns = ctx->restore.m->mfns;
-    int *map_errs = ctx->restore.m->map_errs;
-    int rc;
-    void *mapping = NULL, *guest_page = NULL;
-    unsigned int i, /* i indexes the pfns from the record. */
-                 j, /* j indexes the subset of pfns we decide to map. */
-                 nr_pages = 0;
-
-    rc = populate_pfns(ctx, count, pfns, types);
-    if ( rc )
-    {
-        ERROR("Failed to populate pfns for batch of %u pages", count);
-        goto err;
-    }
-
-    for ( i = 0; i < count; ++i )
-    {
-        ctx->restore.ops.set_page_type(ctx, pfns[i], types[i]);
-
-        if ( page_type_has_stream_data(types[i]) == true )
-            mfns[nr_pages++] = ctx->restore.ops.pfn_to_gfn(ctx, pfns[i]);
-    }
-
-    /* Nothing to do? */
-    if ( nr_pages == 0 )
-        goto done;
-
-    mapping = guest_page = xenforeignmemory_map(
-        xch->fmem, ctx->domid, PROT_READ | PROT_WRITE,
-        nr_pages, mfns, map_errs);
-    if ( !mapping )
-    {
-        rc = -1;
-        PERROR("Unable to map %u mfns for %u pages of data",
-               nr_pages, count);
-        goto err;
-    }
-
-    for ( i = 0, j = 0; i < count; ++i )
-    {
-        if ( page_type_has_stream_data(types[i]) == false )
-            continue;
-
-        if ( map_errs[j] )
-        {
-            rc = -1;
-            ERROR("Mapping pfn %#"PRIpfn" (mfn %#"PRIpfn", type %#"PRIx32") failed with %d",
-                  pfns[i], mfns[j], types[i], map_errs[j]);
-            goto err;
-        }
-
-        /* Undo page normalisation done by the saver. */
-        rc = ctx->restore.ops.localise_page(ctx, types[i], page_data);
-        if ( rc )
-        {
-            ERROR("Failed to localise pfn %#"PRIpfn" (type %#"PRIx32")",
-                  pfns[i], types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
-            goto err;
-        }
-
-        if ( ctx->restore.verify )
-        {
-            /* Verify mode - compare incoming data to what we already have. */
-            if ( memcmp(guest_page, page_data, PAGE_SIZE) )
-                ERROR("verify pfn %#"PRIpfn" failed (type %#"PRIx32")",
-                      pfns[i], types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
-        }
-        else
-        {
-            /* Regular mode - copy incoming data into place. */
-            memcpy(guest_page, page_data, PAGE_SIZE);
-        }
-
-        ++j;
-        guest_page += PAGE_SIZE;
-        page_data += PAGE_SIZE;
-    }
-
- done:
-    rc = 0;
-
- err:
-    if ( mapping )
-        xenforeignmemory_unmap(xch->fmem, mapping, nr_pages);
-
-    return rc;
-}
+    int rc = 0;
 
-/*
- * Validate a PAGE_DATA record from the stream, and pass the results to
- * process_page_data() to actually perform the legwork.
- */
-static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
-{
+#if defined(__i386__) || defined(__x86_64__)
     xc_interface *xch = ctx->xch;
-    struct xc_sr_rec_page_data_header *pages = rec->data;
-    unsigned int i, pages_of_data = 0;
-    int rc = -1;
-
-    xen_pfn_t *pfns = ctx->restore.m->pfns, pfn;
-    uint32_t *types = ctx->restore.m->types, type;
-
     /*
      * v2 compatibility only exists for x86 streams.  This is a bit of a
      * bodge, but it is less bad than duplicating handle_page_data() between
      * different architectures.
      */
-#if defined(__i386__) || defined(__x86_64__)
+
     /* v2 compat.  Infer the position of STATIC_DATA_END. */
     if ( ctx->restore.format_version < 3 && !ctx->restore.seen_static_data_end )
     {
@@ -320,12 +215,26 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         ERROR("No STATIC_DATA_END seen");
         goto err;
     }
+
+    rc = 0;
+err:
 #endif
 
-    if ( rec->length < sizeof(*pages) )
+    return rc;
+}
+
+static bool verify_rec_page_hdr(struct xc_sr_context *ctx, uint32_t rec_length,
+                                struct xc_sr_rec_page_data_header *pages)
+{
+    xc_interface *xch = ctx->xch;
+    bool ret = false;
+
+    errno = EINVAL;
+
+    if ( rec_length < sizeof(*pages) )
     {
         ERROR("PAGE_DATA record truncated: length %u, min %zu",
-              rec->length, sizeof(*pages));
+              rec_length, sizeof(*pages));
         goto err;
     }
 
@@ -335,13 +244,35 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         goto err;
     }
 
-    if ( rec->length < sizeof(*pages) + (pages->count * sizeof(uint64_t)) )
+    if ( pages->count > MAX_BATCH_SIZE )
+    {
+        ERROR("pfn count %u in PAGE_DATA record too large", pages->count);
+        errno = E2BIG;
+        goto err;
+    }
+
+    if ( rec_length < sizeof(*pages) + (pages->count * sizeof(uint64_t)) )
     {
         ERROR("PAGE_DATA record (length %u) too short to contain %u"
-              " pfns worth of information", rec->length, pages->count);
+              " pfns worth of information", rec_length, pages->count);
         goto err;
     }
 
+    ret = true;
+
+err:
+    return ret;
+}
+
+static bool verify_rec_page_pfns(struct xc_sr_context *ctx, uint32_t rec_length,
+                                 struct xc_sr_rec_page_data_header *pages)
+{
+    xc_interface *xch = ctx->xch;
+    uint32_t i, pages_of_data = 0;
+    xen_pfn_t pfn;
+    uint32_t type;
+    bool ret = false;
+
     for ( i = 0; i < pages->count; ++i )
     {
         pfn = pages->pfn[i] & PAGE_DATA_PFN_MASK;
@@ -364,23 +295,183 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
              * have a page worth of data in the record.
              */
            pages_of_data++;
 
-        pfns[i] = pfn;
-        types[i] = type;
+        ctx->restore.m->pfns[i] = pfn;
+        ctx->restore.m->types[i] = type;
     }
 
-    if ( rec->length != (sizeof(*pages) +
+    if ( rec_length != (sizeof(*pages) +
                          (sizeof(uint64_t) * pages->count) +
                          (PAGE_SIZE * pages_of_data)) )
     {
         ERROR("PAGE_DATA record wrong size: length %u, expected "
-              "%zu + %zu + %lu", rec->length, sizeof(*pages),
+              "%zu + %zu + %lu", rec_length, sizeof(*pages),
               (sizeof(uint64_t) * pages->count), (PAGE_SIZE * pages_of_data));
         goto err;
     }
 
-    rc = process_page_data(ctx, pages->count, pfns, types,
-                           &pages->pfn[pages->count]);
+    ret = true;
+
+err:
+    return ret;
+}
+
+/*
+ * Populate pfns, if required
+ * Fill m->guest_data with either mapped address or NULL
+ * The caller must unmap guest_mapping
+ */
+static int map_guest_pages(struct xc_sr_context *ctx,
+                           struct xc_sr_rec_page_data_header *pages)
+{
+    xc_interface *xch = ctx->xch;
+    struct sr_restore_arrays *m = ctx->restore.m;
+    uint32_t i, p;
+    int rc;
+
+    rc = populate_pfns(ctx, pages->count, m->pfns, m->types);
+    if ( rc )
+    {
+        ERROR("Failed to populate pfns for batch of %u pages", pages->count);
+        goto err;
+    }
+
+    ctx->restore.nr_mapped_pages = 0;
+
+    for ( i = 0; i < pages->count; i++ )
+    {
+        ctx->restore.ops.set_page_type(ctx, m->pfns[i], m->types[i]);
+
+        if ( page_type_has_stream_data(m->types[i]) == false )
+        {
+            m->guest_data[i] = NULL;
+            continue;
+        }
+
+        m->mfns[ctx->restore.nr_mapped_pages++] = ctx->restore.ops.pfn_to_gfn(ctx, m->pfns[i]);
+    }
+
+    /* Nothing to do? */
+    if ( ctx->restore.nr_mapped_pages == 0 )
+        goto done;
+
+    ctx->restore.guest_mapping = xenforeignmemory_map(xch->fmem, ctx->domid,
+            PROT_READ | PROT_WRITE, ctx->restore.nr_mapped_pages,
+            m->mfns, m->map_errs);
+    if ( !ctx->restore.guest_mapping )
+    {
+        rc = -1;
+        PERROR("Unable to map %u mfns for %u pages of data",
+               ctx->restore.nr_mapped_pages, pages->count);
+        goto err;
+    }
+
+    /* Verify mapping, and assign address to pfn data */
+    for ( i = 0, p = 0; i < pages->count; i++ )
+    {
+        if ( page_type_has_stream_data(m->types[i]) == false )
+            continue;
+
+        if ( m->map_errs[p] == 0 )
+        {
+            m->guest_data[i] = ctx->restore.guest_mapping + (p * PAGE_SIZE);
+            p++;
+            continue;
+        }
+
+        errno = m->map_errs[p];
+        rc = -1;
+        PERROR("Mapping pfn %#"PRIpfn" (mfn %#"PRIpfn", type %#"PRIx32") failed",
+               m->pfns[i], m->mfns[p], m->types[i]);
+        goto err;
+    }
+
+done:
+    rc = 0;
+
+err:
+    return rc;
+}
+
+/*
+ * Handle PAGE_DATA record from an existing buffer
+ * Given a list of pfns, their types, and a block of page data from the
+ * stream, populate and record their types, map the relevant subset and copy
+ * the data into the guest.
+ */
+static int handle_buffered_page_data(struct xc_sr_context *ctx,
+                                     struct xc_sr_record *rec)
+{
+    xc_interface *xch = ctx->xch;
+    struct xc_sr_rec_page_data_header *pages = rec->data;
+    struct sr_restore_arrays *m = ctx->restore.m;
+    void *p;
+    uint32_t i;
+    int rc = -1, idx;
+
+    rc = handle_static_data_end_v2(ctx);
+    if ( rc )
+        goto err;
+
+    /* First read and verify the header */
+    if ( verify_rec_page_hdr(ctx, rec->length, pages) == false )
+    {
+        rc = -1;
+        goto err;
+    }
+
+    /* Then read and verify the pfn numbers */
+    if ( verify_rec_page_pfns(ctx, rec->length, pages) == false )
+    {
+        rc = -1;
+        goto err;
+    }
+
+    /* Map the target pfn */
+    rc = map_guest_pages(ctx, pages);
+    if ( rc )
+        goto err;
+
+    for ( i = 0, idx = 0; i < pages->count; i++ )
+    {
+        if ( !m->guest_data[i] )
+            continue;
+
+        p = &pages->pfn[pages->count] + (idx * PAGE_SIZE);
+        rc = ctx->restore.ops.localise_page(ctx, m->types[i], p);
+        if ( rc )
+        {
+            ERROR("Failed to localise pfn %#"PRIpfn" (type %#"PRIx32")",
+                  m->pfns[i], m->types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
+            goto err;
+
+        }
+
+        if ( ctx->restore.verify )
+        {
+            if ( memcmp(m->guest_data[i], p, PAGE_SIZE) )
+            {
+                errno = EIO;
+                ERROR("verify pfn %#"PRIpfn" failed (type %#"PRIx32")",
+                      m->pfns[i], m->types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
+                goto err;
+            }
+        }
+        else
+        {
+            memcpy(m->guest_data[i], p, PAGE_SIZE);
+        }
+
+        idx++;
+    }
+
+    rc = 0;
+ err:
+    if ( ctx->restore.guest_mapping )
+    {
+        xenforeignmemory_unmap(xch->fmem, ctx->restore.guest_mapping, ctx->restore.nr_mapped_pages);
+        ctx->restore.guest_mapping = NULL;
+    }
     return rc;
 }
 
@@ -641,12 +732,21 @@ static int process_buffered_record(struct xc_sr_context *ctx, struct xc_sr_recor
         break;
 
     case REC_TYPE_PAGE_DATA:
-        rc = handle_page_data(ctx, rec);
+        rc = handle_buffered_page_data(ctx, rec);
         break;
 
     case REC_TYPE_VERIFY:
        DPRINTF("Verify mode enabled");
        ctx->restore.verify = true;
+        if ( !ctx->restore.verify_buf )
+        {
+            ctx->restore.verify_buf = malloc(MAX_BATCH_SIZE * PAGE_SIZE);
+            if ( !ctx->restore.verify_buf )
+            {
+                rc = -1;
+                PERROR("Unable to allocate verify_buf");
+            }
+        }
         break;
 
    case REC_TYPE_CHECKPOINT:
@@ -725,7 +825,8 @@ static int setup(struct xc_sr_context *ctx)
     }
     ctx->restore.allocated_rec_num = DEFAULT_BUF_RECORDS;
 
-    ctx->restore.m = malloc(sizeof(*ctx->restore.m));
+    ctx->restore.m = malloc(sizeof(*ctx->restore.m) +
+            (sizeof(*ctx->restore.m->pages.pfn) * MAX_BATCH_SIZE));
     if ( !ctx->restore.m ) {
         ERROR("Unable to allocate memory for arrays");
         rc = -1;
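
Editor's note (not part of the patch): the sketch below is a standalone toy
that models only the control flow handle_buffered_page_data() follows after
this split -- verify the record header, then the pfn list, map the target
pages, and finally copy each page or, in verify mode, compare it, signalling
failures through errno. All names here (rec_hdr, check_hdr, apply_page,
MAX_BATCH) are made-up stand-ins, not the real libxenguest types.

/* Standalone sketch with hypothetical stand-in types; compiles on its own. */
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE  4096
#define MAX_BATCH  4            /* stand-in for MAX_BATCH_SIZE */

struct rec_hdr {
    uint32_t count;
    uint64_t pfn[MAX_BATCH];
};

/* Header check: reject impossible counts and report the reason via errno. */
static bool check_hdr(const struct rec_hdr *h)
{
    if ( h->count > MAX_BATCH )
    {
        errno = E2BIG;
        return false;
    }
    return true;
}

/* Copy-or-verify step: verify mode compares instead of writing. */
static int apply_page(void *guest, const void *stream, bool verify)
{
    if ( verify )
    {
        if ( memcmp(guest, stream, PAGE_SIZE) )
        {
            errno = EIO;
            return -1;
        }
        return 0;
    }
    memcpy(guest, stream, PAGE_SIZE);
    return 0;
}

int main(void)
{
    static unsigned char guest[PAGE_SIZE], stream[PAGE_SIZE];
    struct rec_hdr h = { .count = 1 };

    if ( !check_hdr(&h) )
        return perror("check_hdr"), 1;

    memset(stream, 0xab, sizeof(stream));

    if ( apply_page(guest, stream, false) )   /* restore mode: copy */
        return perror("apply_page"), 1;

    if ( apply_page(guest, stream, true) )    /* verify mode: compare */
        return perror("verify"), 1;

    puts("record applied and verified");
    return 0;
}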