From nobody Tue Feb 10 05:46:06 2026
From: Laurent Vivier
To: qemu-devel@nongnu.org
Date: Tue, 13 Mar 2018 18:33:51 +0100
Message-Id: <20180313173355.4468-15-laurent@vivier.eu>
X-Mailer: git-send-email 2.14.3
In-Reply-To: <20180313173355.4468-1-laurent@vivier.eu>
References: <20180313173355.4468-1-laurent@vivier.eu>
Subject: [Qemu-devel] [PULL 14/18] linux-user: init_guest_space: Clarify page alignment logic
Cc: Luke Shumaker, Riku Voipio, Laurent Vivier
List-Id: qemu-devel@nongnu.org

From: Luke Shumaker

There are 3 parts to this change:

 - Add a comment showing the relative sizes and positions of the
   blocks of memory.
 - Introduce and use new aligned_{start,size} instead of adjusting
   real_{start,size}.
 - When we clean up (on failure), munmap(real_start, real_size)
   instead of munmap(aligned_start, aligned_size).
It *shouldn't* make any difference, but I will admit that this does
mean we are making the syscall with different values, so this isn't
quite a no-op patch.

Signed-off-by: Luke Shumaker
Message-Id: <20171228180814.9749-6-lukeshu@lukeshu.com>
Reviewed-by: Peter Maydell
Signed-off-by: Laurent Vivier
---
 linux-user/elfload.c | 43 +++++++++++++++++++++++++++++++++----------
 1 file changed, 33 insertions(+), 10 deletions(-)

diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index feecbd4163..653157876c 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -1801,7 +1801,7 @@ unsigned long init_guest_space(unsigned long host_start,
                                unsigned long guest_start,
                                bool fixed)
 {
-    unsigned long current_start, real_start;
+    unsigned long current_start, aligned_start;
     int flags;
 
     assert(host_start || host_size);
@@ -1827,7 +1827,8 @@ unsigned long init_guest_space(unsigned long host_start,
     /* Otherwise, a non-zero size region of memory needs to be mapped
      * and validated.  */
     while (1) {
-        unsigned long real_size = host_size;
+        unsigned long real_start, real_size, aligned_size;
+        aligned_size = real_size = host_size;
 
         /* Do not use mmap_find_vma here because that is limited to the
          * guest address space.  We are going to make the
@@ -1841,26 +1842,48 @@ unsigned long init_guest_space(unsigned long host_start,
 
         /* Ensure the address is properly aligned.  */
         if (real_start & ~qemu_host_page_mask) {
+            /* Ideally, we adjust like
+             *
+             *    pages: [  ][  ][  ][  ][  ]
+             *      old: [   real   ]
+             *           [ aligned  ]
+             *      new: [     real     ]
+             *              [ aligned ]
+             *
+             * But if there is something else mapped right after it,
+             * then obviously it won't have room to grow, and the
+             * kernel will put the new larger real someplace else with
+             * unknown alignment (if we made it to here, then
+             * fixed=false).  Which is why we grow real by a full page
+             * size, instead of by part of one; so that even if we get
+             * moved, we can still guarantee alignment.  But this does
+             * mean that there is a padding of < 1 page both before
+             * and after the aligned range; the "after" could
+             * cause problems for ARM emulation where it could butt in
+             * to where we need to put the commpage.
+             */
             munmap((void *)real_start, host_size);
-            real_size = host_size + qemu_host_page_size;
+            real_size = aligned_size + qemu_host_page_size;
             real_start = (unsigned long)
                 mmap((void *)real_start, real_size, PROT_NONE, flags, -1, 0);
             if (real_start == (unsigned long)-1) {
                 return (unsigned long)-1;
             }
-            real_start = HOST_PAGE_ALIGN(real_start);
+            aligned_start = HOST_PAGE_ALIGN(real_start);
+        } else {
+            aligned_start = real_start;
         }
 
         /* Check to see if the address is valid.  */
-        if (!host_start || real_start == current_start) {
+        if (!host_start || aligned_start == current_start) {
 #if defined(TARGET_ARM) && !defined(TARGET_AARCH64)
             /* On 32-bit ARM, we need to also be able to map the commpage.  */
-            int valid = init_guest_commpage(real_start - guest_start,
-                                            real_size + guest_start);
+            int valid = init_guest_commpage(aligned_start - guest_start,
+                                            aligned_size + guest_start);
             if (valid == 1) {
                 break;
             } else if (valid == -1) {
-                munmap((void *)real_start, host_size);
+                munmap((void *)real_start, real_size);
                 return (unsigned long)-1;
             }
             /* valid == 0, so try again. */
@@ -1879,7 +1902,7 @@ unsigned long init_guest_space(unsigned long host_start,
          * address space randomization put a shared library somewhere
          * inconvenient.
          */
-        munmap((void *)real_start, host_size);
+        munmap((void *)real_start, real_size);
         current_start += qemu_host_page_size;
         if (host_start == current_start) {
             /* Theoretically possible if host doesn't have any suitably
@@ -1891,7 +1914,7 @@ unsigned long init_guest_space(unsigned long host_start,
 
     qemu_log_mask(CPU_LOG_PAGE, "Reserved 0x%lx bytes of guest address space\n", host_size);
 
-    return real_start;
+    return aligned_start;
 }
 
 static void probe_guest_base(const char *image_name,
-- 
2.14.3