From nobody Wed Apr 29 00:44:02 2026
Date: Fri, 27 May 2022 10:23:31 -0000
From: "tip-bot2 for Fanjun Kong" tip-bot2@linutronix.de
Reply-To: linux-kernel@vger.kernel.org
To: linux-tip-commits@vger.kernel.org
Cc: Fanjun Kong, Ingo Molnar, x86@kernel.org, linux-kernel@vger.kernel.org
Subject: [tip: x86/mm] x86/mm: Use PAGE_ALIGNED(x) instead of IS_ALIGNED(x, PAGE_SIZE)
In-Reply-To: <20220526142038.1582839-1-bh1scw@gmail.com>
References: <20220526142038.1582839-1-bh1scw@gmail.com>
Message-ID: <165364701137.4207.15691426598646044587.tip-bot2@tip-bot2>
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

The following commit has been merged into the x86/mm branch of tip:

Commit-ID:     e19d11267f0e6c8aff2d15d2dfed12365b4c9184
Gitweb:        https://git.kernel.org/tip/e19d11267f0e6c8aff2d15d2dfed12365b4c9184
Author:        Fanjun Kong
AuthorDate:    Thu, 26 May 2022 22:20:39 +08:00
Committer:     Ingo Molnar
CommitterDate: Fri, 27 May 2022 12:19:56 +02:00

x86/mm: Use PAGE_ALIGNED(x) instead of IS_ALIGNED(x, PAGE_SIZE)

The <linux/mm.h> header already provides the PAGE_ALIGNED() macro. Use
this macro instead of calling IS_ALIGNED() and passing PAGE_SIZE
directly.

No change in functionality.

[ mingo: Tweak changelog.
]

Signed-off-by: Fanjun Kong
Signed-off-by: Ingo Molnar
Link: https://lore.kernel.org/r/20220526142038.1582839-1-bh1scw@gmail.com
---
 arch/x86/mm/init_64.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 61d0ab1..8779d6b 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1240,8 +1240,8 @@ remove_pagetable(unsigned long start, unsigned long end, bool direct,
 void __ref vmemmap_free(unsigned long start, unsigned long end,
 		struct vmem_altmap *altmap)
 {
-	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
-	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+	VM_BUG_ON(!PAGE_ALIGNED(start));
+	VM_BUG_ON(!PAGE_ALIGNED(end));
 
 	remove_pagetable(start, end, false, altmap);
 }
@@ -1605,8 +1605,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 {
 	int err;
 
-	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
-	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+	VM_BUG_ON(!PAGE_ALIGNED(start));
+	VM_BUG_ON(!PAGE_ALIGNED(end));
 
 	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
 		err = vmemmap_populate_basepages(start, end, node, NULL);