From: Miaohe Lin <linmiaohe@huawei.com>
To: 
CC: , , , , , , 
Subject: [PATCH] lib/test_hmm: avoid accessing uninitialized pages
Date: Thu, 9 Jun 2022 21:08:35 +0800
Message-ID: <20220609130835.35110-1-linmiaohe@huawei.com>

If make_device_exclusive_range() fails, or marks fewer pages for
exclusive access than required, the remaining entries of the pages
array are left uninitialized, and dmirror_atomic_map() will then
access those uninitialized entries. Fix this by calling
dmirror_atomic_map() only when all pages have been marked for
exclusive access (we break out of the loop anyway when mapped is
less than required), so the uninitialized entries are never touched.

Fixes: b659baea7546 ("mm: selftests for exclusive device memory")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
---
 lib/test_hmm.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index 7930853e7fc5..e3965cafd27c 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -797,7 +797,7 @@ static int dmirror_exclusive(struct dmirror *dmirror,
 
 	mmap_read_lock(mm);
 	for (addr = start; addr < end; addr = next) {
-		unsigned long mapped;
+		unsigned long mapped = 0;
 		int i;
 
 		if (end < addr + (ARRAY_SIZE(pages) << PAGE_SHIFT))
@@ -806,7 +806,13 @@ static int dmirror_exclusive(struct dmirror *dmirror,
 			next = addr + (ARRAY_SIZE(pages) << PAGE_SHIFT);
 
 		ret = make_device_exclusive_range(mm, addr, next, pages, NULL);
-		mapped = dmirror_atomic_map(addr, next, pages, dmirror);
+		/*
+		 * Do dmirror_atomic_map() iff all pages are marked for
+		 * exclusive access to avoid accessing uninitialized
+		 * fields of pages.
+		 */
+		if (ret == (next - addr) >> PAGE_SHIFT)
+			mapped = dmirror_atomic_map(addr, next, pages, dmirror);
 		for (i = 0; i < ret; i++) {
 			if (pages[i]) {
 				unlock_page(pages[i]);
-- 
2.23.0
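
For context, the failure mode the patch closes is easy to reproduce in
plain C. Below is a minimal userspace sketch, not kernel code: the
helpers fill_range() and consume_range() are hypothetical stand-ins for
make_device_exclusive_range() (which may initialize fewer output array
entries than requested) and dmirror_atomic_map() (which reads every
entry it is given).

#include <stdio.h>

#define NPAGES 8

/* May stop early; returns how many entries were actually filled. */
static int fill_range(int *pages, int want)
{
	int n = want > 5 ? 5 : want;	/* simulate partial success */
	int i;

	for (i = 0; i < n; i++)
		pages[i] = i;
	return n;
}

/* Reads all `want` entries, like dmirror_atomic_map() reading pages[]. */
static long consume_range(const int *pages, int want)
{
	long sum = 0;
	int i;

	for (i = 0; i < want; i++)
		sum += pages[i];
	return sum;
}

int main(void)
{
	int pages[NPAGES];	/* deliberately not zero-initialized */
	long mapped = 0;
	int ret = fill_range(pages, NPAGES);

	/*
	 * Unconditionally calling consume_range(pages, NPAGES) here would
	 * read the uninitialized entries pages[5..7], which is undefined
	 * behavior; the guard mirrors the patch's ret check.
	 */
	if (ret == NPAGES)
		mapped = consume_range(pages, NPAGES);
	printf("ret=%d mapped=%ld\n", ret, mapped);
	return 0;
}

The shape of the guard is the same as in the patch: the producer reports
how much of the output array it filled, and the consumer runs only when
that count matches the full range, so partially initialized arrays are
never read.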