From: Vlastimil Babka
To: stable@vger.kernel.org
Date: Fri, 2 Aug 2019 18:06:14 +0200
Message-Id: <20190802160614.8089-1-vbabka@suse.cz>
Subject: [Xen-devel] [PATCH STABLE 4.9] x86, mm, gup: prevent get_page() race with munmap in paravirt guest
Cc: Juergen Gross, Ben Hutchings, Dave Hansen, Jann Horn, Peter Zijlstra,
 x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 Ingo Molnar, Borislav Petkov,
Shutemov" , Andy Lutomirski , xen-devel@lists.xenproject.org, Thomas Gleixner , Linus Torvalds , Vitaly Kuznetsov , Vlastimil Babka , Oscar Salvador Content-Type: text/plain; charset="utf-8" Content-Transfer-Encoding: quoted-printable Errors-To: xen-devel-bounces@lists.xenproject.org Sender: "Xen-devel" The x86 version of get_user_pages_fast() relies on disabled interrupts to synchronize gup_pte_range() between gup_get_pte(ptep); and get_page() again= st a parallel munmap. The munmap side nulls the pte, then flushes TLBs, then releases the page. As TLB flush is done synchronously via IPI disabling interrupts blocks the page release, and get_page(), which assumes existing reference on page, is thus safe. However when TLB flush is done by a hypercall, e.g. in a Xen PV guest, ther= e is no blocking thanks to disabled interrupts, and get_page() can succeed on a = page that was already freed or even reused. We have recently seen this happen with our 4.4 and 4.12 based kernels, with userspace (java) that exits a thread, where mm_release() performs a futex_w= ake() on tsk->clear_child_tid, and another thread in parallel unmaps the page whe= re tsk->clear_child_tid points to. The spurious get_page() succeeds, but futex= code immediately releases the page again, while it's already on a freelist. Symp= toms include a bad page state warning, general protection faults acessing a pois= oned list prev/next pointer in the freelist, or free page pcplists of two cpus j= oined together in a single list. Oscar has also reproduced this scenario, with a patch inserting delays before the get_page() to make the race window larger. Fix this by removing the dependency on TLB flush interrupts the same way as= the generic get_user_pages_fast() code by using page_cache_add_speculative() and revalidating the PTE contents after pinning the page. Mainline is safe since 4.13 where the x86 gup code was removed in favor of the common code. Access= ing the page table itself safely also relies on disabled interrupts and TLB flu= sh IPIs that don't happen with hypercalls, which was acknowledged in commit 9e52fc2b50de ("x86/mm: Enable RCU based page table freeing (CONFIG_HAVE_RCU_TABLE_FREE=3Dy)"). That commit with follups should also be backported for full safety, although our reproducer didn't hit a problem without that backport. Reproduced-by: Oscar Salvador Signed-off-by: Vlastimil Babka Cc: Thomas Gleixner Cc: Ingo Molnar Cc: Peter Zijlstra Cc: Juergen Gross Cc: Kirill A. Shutemov Cc: Vitaly Kuznetsov Cc: Linus Torvalds Cc: Borislav Petkov Cc: Dave Hansen Cc: Andy Lutomirski --- Hi, I'm sending this stable-only patch for consideration because it's proba= bly unrealistic to backport the 4.13 switch to generic GUP. I can look at 4.4 a= nd 3.16 if accepted. The RCU page table freeing could be also considered. Note the patch also includes page refcount protection. 
I found out that the backport of 8fde12ca79af ("mm: prevent
get_user_pages() from overflowing page refcount") to 4.9 missed the
arch-specific gup implementations:
https://lore.kernel.org/lkml/6650323f-dbc9-f069-000b-f6b0f941a065@suse.cz/

 arch/x86/mm/gup.c | 32 ++++++++++++++++++++++++++++++--
 1 file changed, 30 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/gup.c b/arch/x86/mm/gup.c
index 1680768d392c..d7db45bdfb3b 100644
--- a/arch/x86/mm/gup.c
+++ b/arch/x86/mm/gup.c
@@ -97,6 +97,20 @@ static inline int pte_allows_gup(unsigned long pteval, int write)
 	return 1;
 }
 
+/*
+ * Return the compound head page with ref appropriately incremented,
+ * or NULL if that failed.
+ */
+static inline struct page *try_get_compound_head(struct page *page, int refs)
+{
+	struct page *head = compound_head(page);
+	if (WARN_ON_ONCE(page_ref_count(head) < 0))
+		return NULL;
+	if (unlikely(!page_cache_add_speculative(head, refs)))
+		return NULL;
+	return head;
+}
+
 /*
  * The performance critical leaf functions are made noinline otherwise gcc
  * inlines everything into a single function which results in too much
@@ -112,7 +126,7 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
 	ptep = pte_offset_map(&pmd, addr);
 	do {
 		pte_t pte = gup_get_pte(ptep);
-		struct page *page;
+		struct page *head, *page;
 
 		/* Similar to the PMD case, NUMA hinting must take slow path */
 		if (pte_protnone(pte)) {
@@ -138,7 +152,21 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
 		}
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 		page = pte_page(pte);
-		get_page(page);
+
+		head = try_get_compound_head(page, 1);
+		if (!head) {
+			put_dev_pagemap(pgmap);
+			pte_unmap(ptep);
+			return 0;
+		}
+
+		if (unlikely(pte_val(pte) != pte_val(*ptep))) {
+			put_page(head);
+			put_dev_pagemap(pgmap);
+			pte_unmap(ptep);
+			return 0;
+		}
+
 		put_dev_pagemap(pgmap);
 		SetPageReferenced(page);
 		pages[*nr] = page;
-- 
2.22.0