From: Richard Henderson
To: qemu-devel@nongnu.org
Date: Mon, 9 Jul 2018 12:15:29 -0700
Message-Id: <20180709191529.13080-1-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.17.1
Subject: [Qemu-devel] [PATCH] linux-user: Fix shmat emulation by honoring host SHMLBA
Cc: laurent@vivier.eu

For those hosts with SHMLBA > getpagesize, we don't automatically
select a guest address that is compatible with the host.  We can
achieve this by boosting the alignment of guest_base and by adding
an extra alignment argument to mmap_find_vma.

Signed-off-by: Richard Henderson
---

Found by running check-tcg on an arm32 host, which defines the (host)
SHMLBA as 16K, while executing the "native", non-cross-compiled,
linux-test.  Pushing that back to x86 sniffed out more fixes required
for mips, which defines the (guest) SHMLBA as 256K.

While this is newly exposed by check-tcg, the problem has been latent
for quite a while.  It probably doesn't warrant rushing in for 3.0.
r~

---
 linux-user/qemu.h    |  2 +-
 linux-user/elfload.c | 17 +++++-----
 linux-user/mmap.c    | 74 +++++++++++++++++++++++---------------------
 linux-user/syscall.c |  3 +-
 4 files changed, 52 insertions(+), 44 deletions(-)

diff --git a/linux-user/qemu.h b/linux-user/qemu.h
index bb85c81aa4..b79ac8bd01 100644
--- a/linux-user/qemu.h
+++ b/linux-user/qemu.h
@@ -436,7 +436,7 @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
                        abi_ulong new_addr);
 extern unsigned long last_brk;
 extern abi_ulong mmap_next_start;
-abi_ulong mmap_find_vma(abi_ulong, abi_ulong);
+abi_ulong mmap_find_vma(abi_ulong, abi_ulong, abi_ulong);
 void mmap_fork_start(void);
 void mmap_fork_end(int child);
 
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index 942a1b661f..3467e3512e 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -3,6 +3,7 @@
 #include <sys/param.h>
 
 #include <sys/resource.h>
+#include <sys/shm.h>
 
 #include "qemu.h"
 #include "disas/disas.h"
@@ -1929,6 +1930,8 @@ unsigned long init_guest_space(unsigned long host_start,
                               unsigned long guest_start,
                               bool fixed)
 {
+    /* In order to use host shmat, we must be able to honor SHMLBA.  */
+    unsigned long align = MAX(SHMLBA, qemu_host_page_size);
     unsigned long current_start, aligned_start;
     int flags;
 
@@ -1946,7 +1949,7 @@ unsigned long init_guest_space(unsigned long host_start,
     }
 
     /* Setup the initial flags and start address.  */
-    current_start = host_start & qemu_host_page_mask;
+    current_start = host_start & -align;
     flags = MAP_ANONYMOUS | MAP_PRIVATE | MAP_NORESERVE;
     if (fixed) {
         flags |= MAP_FIXED;
@@ -1982,8 +1985,8 @@ unsigned long init_guest_space(unsigned long host_start,
             return (unsigned long)-1;
         }
         munmap((void *)real_start, host_full_size);
-        if (real_start & ~qemu_host_page_mask) {
-            /* The same thing again, but with an extra qemu_host_page_size
+        if (real_start & (align - 1)) {
+            /* The same thing again, but with extra
              * so that we can shift around alignment.
              */
             unsigned long real_size = host_full_size + qemu_host_page_size;
@@ -1996,7 +1999,7 @@ unsigned long init_guest_space(unsigned long host_start,
                 return (unsigned long)-1;
             }
             munmap((void *)real_start, real_size);
-            real_start = HOST_PAGE_ALIGN(real_start);
+            real_start = ROUND_UP(real_start, align);
         }
         current_start = real_start;
     }
@@ -2023,7 +2026,7 @@ unsigned long init_guest_space(unsigned long host_start,
     }
 
     /* Ensure the address is properly aligned.  */
-    if (real_start & ~qemu_host_page_mask) {
+    if (real_start & (align - 1)) {
         /* Ideally, we adjust like
          *
          *    pages: [  ][  ][  ][  ][  ]
@@ -2051,7 +2054,7 @@ unsigned long init_guest_space(unsigned long host_start,
         if (real_start == (unsigned long)-1) {
             return (unsigned long)-1;
         }
-        aligned_start = HOST_PAGE_ALIGN(real_start);
+        aligned_start = ROUND_UP(real_start, align);
     } else {
         aligned_start = real_start;
     }
@@ -2088,7 +2091,7 @@ unsigned long init_guest_space(unsigned long host_start,
          * because of trouble with ARM commpage setup.
          */
         munmap((void *)real_start, real_size);
-        current_start += qemu_host_page_size;
+        current_start += align;
         if (host_start == current_start) {
             /* Theoretically possible if host doesn't have any suitably
              * aligned areas.  Normally the first mmap will fail.
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index d0c50e4888..c800c9ab82 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -202,49 +202,52 @@ unsigned long last_brk;
 
 /* Subroutine of mmap_find_vma, used when we have pre-allocated a chunk
    of guest address space.  */
-static abi_ulong mmap_find_vma_reserved(abi_ulong start, abi_ulong size)
+static abi_ulong mmap_find_vma_reserved(abi_ulong start, abi_ulong size,
+                                        abi_ulong align)
 {
-    abi_ulong addr;
-    abi_ulong end_addr;
+    abi_ulong addr, end_addr, incr = qemu_host_page_size;
     int prot;
-    int looped = 0;
+    bool looped = false;
 
     if (size > reserved_va) {
         return (abi_ulong)-1;
     }
 
-    size = HOST_PAGE_ALIGN(size);
-    end_addr = start + size;
-    if (end_addr > reserved_va) {
-        end_addr = reserved_va;
-    }
-    addr = end_addr - qemu_host_page_size;
+    /* Note that start and size have already been aligned by mmap_find_vma.  */
 
+    end_addr = start + size;
+    if (start > reserved_va - size) {
+        /* Start at the top of the address space.  */
+        end_addr = ((reserved_va - size) & -align) + size;
+        looped = true;
+    }
+
+    /* Search downward from END_ADDR, checking to see if a page is in use.  */
+    addr = end_addr;
     while (1) {
+        addr -= incr;
         if (addr > end_addr) {
             if (looped) {
+                /* Failure.  The entire address space has been searched.  */
                 return (abi_ulong)-1;
             }
-            end_addr = reserved_va;
-            addr = end_addr - qemu_host_page_size;
-            looped = 1;
-            continue;
+            /* Re-start at the top of the address space.  */
+            addr = end_addr = ((reserved_va - size) & -align) + size;
+            looped = true;
+        } else {
+            prot = page_get_flags(addr);
+            if (prot) {
+                /* Page in use.  Restart below this page.  */
+                addr = end_addr = ((addr - size) & -align) + size;
+            } else if (addr && addr + size == end_addr) {
+                /* Success!  All pages between ADDR and END_ADDR are free.  */
+                if (start == mmap_next_start) {
+                    mmap_next_start = addr;
+                }
+                return addr;
+            }
         }
-        prot = page_get_flags(addr);
-        if (prot) {
-            end_addr = addr;
-        }
-        if (addr && addr + size == end_addr) {
-            break;
-        }
-        addr -= qemu_host_page_size;
     }
-
-    if (start == mmap_next_start) {
-        mmap_next_start = addr;
-    }
-
-    return addr;
 }
 
 /*
@@ -253,7 +256,7 @@ static abi_ulong mmap_find_vma_reserved(abi_ulong start, abi_ulong size)
  * It must be called with mmap_lock() held.
  * Return -1 if error.
  */
-abi_ulong mmap_find_vma(abi_ulong start, abi_ulong size)
+abi_ulong mmap_find_vma(abi_ulong start, abi_ulong size, abi_ulong align)
 {
     void *ptr, *prev;
     abi_ulong addr;
@@ -265,11 +268,12 @@ abi_ulong mmap_find_vma(abi_ulong start, abi_ulong size)
     } else {
         start &= qemu_host_page_mask;
     }
+    start = ROUND_UP(start, align);
 
     size = HOST_PAGE_ALIGN(size);
 
     if (reserved_va) {
-        return mmap_find_vma_reserved(start, size);
+        return mmap_find_vma_reserved(start, size, align);
     }
 
     addr = start;
@@ -299,7 +303,7 @@ abi_ulong mmap_find_vma(abi_ulong start, abi_ulong size)
         if (h2g_valid(ptr + size - 1)) {
             addr = h2g(ptr);
 
-            if ((addr & ~TARGET_PAGE_MASK) == 0) {
+            if ((addr & (align - 1)) == 0) {
                 /* Success.  */
                 if (start == mmap_next_start && addr >= TASK_UNMAPPED_BASE) {
                     mmap_next_start = addr + size;
@@ -313,12 +317,12 @@ abi_ulong mmap_find_vma(abi_ulong start, abi_ulong size)
             /* Assume the result that the kernel gave us is the
               first with enough free space, so start again at the
               next higher target page.  */
-            addr = TARGET_PAGE_ALIGN(addr);
+            addr = ROUND_UP(addr, align);
             break;
         case 1:
             /* Sometimes the kernel decides to perform the allocation
               at the top end of memory instead.  */
-            addr &= TARGET_PAGE_MASK;
+            addr &= -align;
             break;
         case 2:
             /* Start over at low memory.  */
@@ -407,7 +411,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int prot,
     if (!(flags & MAP_FIXED)) {
         host_len = len + offset - host_offset;
         host_len = HOST_PAGE_ALIGN(host_len);
-        start = mmap_find_vma(real_start, host_len);
+        start = mmap_find_vma(real_start, host_len, TARGET_PAGE_SIZE);
         if (start == (abi_ulong)-1) {
             errno = ENOMEM;
             goto fail;
@@ -701,7 +705,7 @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
     } else if (flags & MREMAP_MAYMOVE) {
        abi_ulong mmap_start;
 
-        mmap_start = mmap_find_vma(0, new_size);
+        mmap_start = mmap_find_vma(0, new_size, TARGET_PAGE_SIZE);
 
         if (mmap_start == -1) {
             errno = ENOMEM;
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index 5822e03e28..b6c37cbb97 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -5000,7 +5000,8 @@ static inline abi_ulong do_shmat(CPUArchState *cpu_env,
     else {
         abi_ulong mmap_start;
 
-        mmap_start = mmap_find_vma(0, shm_info.shm_segsz);
+        /* In order to use the host shmat, we need to honor host SHMLBA.  */
+        mmap_start = mmap_find_vma(0, shm_info.shm_segsz, MAX(SHMLBA, shmlba));
 
         if (mmap_start == -1) {
             errno = ENOMEM;
-- 
2.17.1