From: Rebecca Mckeever
To: Mike Rapoport , linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: David Hildenbrand , Rebecca Mckeever
Subject: [PATCH v2 1/4] memblock tests: add simulation of physical memory with multiple NUMA nodes
Date: Fri, 19 Aug 2022 02:05:31 -0700
Message-Id: <0cfb3c69ba6ca9ff55e1fc2528d18d108416ba57.1660897864.git.remckee0@gmail.com>

Add functions setup_numa_memblock_generic() and setup_numa_memblock()
for setting up a memory layout with multiple NUMA nodes in a previously
allocated dummy physical memory. These functions can be used in place
of setup_memblock() in tests that need to simulate a NUMA system.

setup_numa_memblock_generic():
- allows for setting up a custom memory layout by specifying the amount
  of memory in each node, the number of nodes, and a factor that will
  be used to scale the memory in each node

setup_numa_memblock():
- allows for setting up a default memory layout

Introduce constant MEM_FACTOR, which is used to scale the default memory
layout based on MEM_SIZE.

Set CONFIG_NODES_SHIFT to 4 when building with NUMA=1 to allow for up to
16 NUMA nodes.

Signed-off-by: Rebecca Mckeever
---
 .../testing/memblock/scripts/Makefile.include |  2 +-
 tools/testing/memblock/tests/common.c         | 38 +++++++++++++++++++
 tools/testing/memblock/tests/common.h         |  9 ++++-
 3 files changed, 47 insertions(+), 2 deletions(-)

diff --git a/tools/testing/memblock/scripts/Makefile.include b/tools/testing/memblock/scripts/Makefile.include
index aa6d82d56a23..998281723590 100644
--- a/tools/testing/memblock/scripts/Makefile.include
+++ b/tools/testing/memblock/scripts/Makefile.include
@@ -3,7 +3,7 @@
 
 # Simulate CONFIG_NUMA=y
 ifeq ($(NUMA), 1)
-	CFLAGS += -D CONFIG_NUMA
+	CFLAGS += -D CONFIG_NUMA -D CONFIG_NODES_SHIFT=4
 endif
 
 # Use 32 bit physical addresses.
diff --git a/tools/testing/memblock/tests/common.c b/tools/testing/memblock/tests/common.c
index eec6901081af..15d8767dc70c 100644
--- a/tools/testing/memblock/tests/common.c
+++ b/tools/testing/memblock/tests/common.c
@@ -34,6 +34,10 @@ static const char * const help_opts[] = {
 
 static int verbose;
 
+static const phys_addr_t node_sizes[] = {
+	SZ_4K, SZ_1K, SZ_2K, SZ_2K, SZ_1K, SZ_1K, SZ_4K, SZ_1K
+};
+
 /* sets global variable returned by movable_node_is_enabled() stub */
 bool movable_node_enabled;
 
@@ -72,6 +76,40 @@ void setup_memblock(void)
 	fill_memblock();
 }
 
+/**
+ * setup_numa_memblock_generic:
+ * Set up a memory layout with multiple NUMA nodes in a previously allocated
+ * dummy physical memory.
+ * @nodes: an array containing the amount of memory in each node
+ * @node_cnt: the size of @nodes
+ * @factor: a factor that will be used to scale the memory in each node
+ *
+ * The nids will be set to 0 through node_cnt - 1.
+ */
+void setup_numa_memblock_generic(const phys_addr_t nodes[],
+				 int node_cnt, int factor)
+{
+	phys_addr_t base;
+	int flags;
+
+	reset_memblock_regions();
+	base = (phys_addr_t)memory_block.base;
+	flags = (movable_node_is_enabled()) ?
+				MEMBLOCK_NONE : MEMBLOCK_HOTPLUG;
+
+	for (int i = 0; i < node_cnt; i++) {
+		phys_addr_t size = factor * nodes[i];
+
+		memblock_add_node(base, size, i, flags);
+		base += size;
+	}
+	fill_memblock();
+}
+
+void setup_numa_memblock(void)
+{
+	setup_numa_memblock_generic(node_sizes, NUMA_NODES, MEM_FACTOR);
+}
+
 void dummy_physical_memory_init(void)
 {
 	memory_block.base = malloc(MEM_SIZE);
diff --git a/tools/testing/memblock/tests/common.h b/tools/testing/memblock/tests/common.h
index 4fd3534ff955..e5117d959d6c 100644
--- a/tools/testing/memblock/tests/common.h
+++ b/tools/testing/memblock/tests/common.h
@@ -10,7 +10,11 @@
 #include
 #include <../selftests/kselftest.h>
 
-#define MEM_SIZE SZ_16K
+#define MEM_SIZE	SZ_16K
+#define NUMA_NODES	8
+
+/* used to resize values that need to scale with MEM_SIZE */
+#define MEM_FACTOR	(MEM_SIZE / SZ_16K)
 
 enum test_flags {
 	TEST_ZEROED = 0x0,
@@ -100,6 +104,9 @@ struct region {
 void reset_memblock_regions(void);
 void reset_memblock_attributes(void);
 void setup_memblock(void);
+void setup_numa_memblock_generic(const phys_addr_t nodes[],
+				 int node_cnt, int factor);
+void setup_numa_memblock(void);
 void dummy_physical_memory_init(void);
 void dummy_physical_memory_cleanup(void);
 void parse_args(int argc, char **argv);
-- 
2.25.1
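For readers who want to see the new helpers in context before the NUMA test
patches that follow, here is a minimal sketch of how a test case might call
them in place of setup_memblock(). It is illustrative only: the check
function name and the custom two-node layout are hypothetical, while
setup_numa_memblock(), setup_numa_memblock_generic(),
dummy_physical_memory_init(), NUMA_NODES, MEM_FACTOR, and the node_sizes[]
values come from the patch above. The region-count assertions assume a
NUMA=1 build, where regions with different node ids are not merged.

/* Illustrative sketch -- not part of the series. */
static int example_numa_setup_check(void)
{
	/* hypothetical custom layout: two nodes, no scaling */
	const phys_addr_t two_nodes[] = { SZ_8K, SZ_8K };

	dummy_physical_memory_init();

	/* default layout: NUMA_NODES nodes scaled from node_sizes[] */
	setup_numa_memblock();
	ASSERT_EQ(memblock.memory.cnt, NUMA_NODES);
	ASSERT_EQ(memblock.memory.regions[0].size, SZ_4K * MEM_FACTOR);

	/* custom layout via the generic helper */
	setup_numa_memblock_generic(two_nodes, 2, 1);
	ASSERT_EQ(memblock.memory.cnt, 2);

	dummy_physical_memory_cleanup();

	return 0;
}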
ACgBeo2m46T0gcTLkJCSvSX/RIQcoFI8J/Ral013nZ4SBbzkDXyPgEef e+Q/6WiER2qzq8TU/0tKtEE= X-Google-Smtp-Source: AA6agR7++R7PoGaHnzMLczJKdAbqkO79vS/9pFCHqEXHUZwKpI8GebRJyxA84Bl39LzjAHBA2TxI7w== X-Received: by 2002:ac8:5a12:0:b0:343:6d1b:eb3d with SMTP id n18-20020ac85a12000000b003436d1beb3dmr5676287qta.364.1660900335457; Fri, 19 Aug 2022 02:12:15 -0700 (PDT) Received: from sophie ([89.46.62.64]) by smtp.gmail.com with ESMTPSA id p123-20020a37bf81000000b006a6d7c3a82esm3266126qkf.15.2022.08.19.02.12.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 19 Aug 2022 02:12:15 -0700 (PDT) From: Rebecca Mckeever To: Mike Rapoport , linux-mm@kvack.org, linux-kernel@vger.kernel.org Cc: David Hildenbrand , Rebecca Mckeever Subject: [PATCH v2 2/4] memblock tests: add top-down NUMA tests for memblock_alloc_try_nid* Date: Fri, 19 Aug 2022 02:05:32 -0700 Message-Id: <957966f06474e3885796247ad1beaa6b3841ebd1.1660897864.git.remckee0@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Add tests for memblock_alloc_try_nid() and memblock_alloc_try_nid_raw() where the simulated physical memory is set up with multiple NUMA nodes. Additionally, all of these tests set nid !=3D NUMA_NO_NODE. These tests are run with a top-down allocation direction. The tested scenarios are: Range unrestricted: - region can be allocated in the specific node requested: + there are no previously reserved regions + the requested node is partially reserved but has enough space - the specific node requested cannot accommodate the request, but the region can be allocated in a different node: + there are no previously reserved regions, but node is too small + the requested node is fully reserved + the requested node is partially reserved and does not have enough space Range restricted: - region can be allocated in the specific node requested after dropping min_addr: + range partially overlaps with two different nodes, where the first node is the requested node + range partially overlaps with two different nodes, where the requested node ends before min_addr - region cannot be allocated in the specific node requested, but it can be allocated in the requested range: + range overlaps with multiple nodes along node boundaries, and the requested node ends before min_addr + range overlaps with multiple nodes along node boundaries, and the requested node starts after max_addr - region cannot be allocated in the specific node requested, but it can be allocated after dropping min_addr: + range partially overlaps with two different nodes, where the second node is the requested node Signed-off-by: Rebecca Mckeever --- tools/testing/memblock/tests/alloc_nid_api.c | 702 ++++++++++++++++++- tools/testing/memblock/tests/alloc_nid_api.h | 16 + tools/testing/memblock/tests/common.h | 18 + 3 files changed, 725 insertions(+), 11 deletions(-) diff --git a/tools/testing/memblock/tests/alloc_nid_api.c b/tools/testing/m= emblock/tests/alloc_nid_api.c index 2c1d5035e057..a410f1318402 100644 --- a/tools/testing/memblock/tests/alloc_nid_api.c +++ b/tools/testing/memblock/tests/alloc_nid_api.c @@ -1102,7 +1102,7 @@ static int alloc_try_nid_bottom_up_cap_min_check(void) return 0; } =20 -/* Test case wrappers */ +/* Test case wrappers for range tests */ static int alloc_try_nid_simple_check(void) { test_print("\tRunning %s...\n", __func__); @@ -1234,17 +1234,10 @@ static 
int alloc_try_nid_low_max_check(void) return 0; } =20 -static int memblock_alloc_nid_checks_internal(int flags) +static int memblock_alloc_nid_range_checks(void) { - const char *func =3D get_func_testing(flags); - - alloc_nid_test_flags =3D flags; - prefix_reset(); - prefix_push(func); - test_print("Running %s tests...\n", func); - - reset_memblock_attributes(); - dummy_physical_memory_init(); + test_print("Running %s range tests...\n", + get_func_testing(alloc_nid_test_flags)); =20 alloc_try_nid_simple_check(); alloc_try_nid_misaligned_check(); @@ -1261,6 +1254,693 @@ static int memblock_alloc_nid_checks_internal(int f= lags) alloc_try_nid_reserved_all_check(); alloc_try_nid_low_max_check(); =20 + return 0; +} + +/* + * A test that tries to allocate a memory region in a specific NUMA node t= hat + * has enough memory to allocate a region of the requested size. + * Expect to allocate an aligned region at the end of the requested node. + */ +static int alloc_try_nid_top_down_numa_simple_check(void) +{ + int nid_req =3D 3; + struct memblock_region *new_rgn =3D &memblock.reserved.regions[0]; + struct memblock_region *req_node =3D &memblock.memory.regions[nid_req]; + void *allocated_ptr =3D NULL; + + PREFIX_PUSH(); + + phys_addr_t size; + phys_addr_t min_addr; + phys_addr_t max_addr; + + setup_numa_memblock(); + + ASSERT_LE(SZ_4, req_node->size); + size =3D req_node->size / SZ_4; + min_addr =3D memblock_start_of_DRAM(); + max_addr =3D memblock_end_of_DRAM(); + + allocated_ptr =3D run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES, + min_addr, max_addr, nid_req); + + ASSERT_NE(allocated_ptr, NULL); + verify_mem_content(allocated_ptr, size, alloc_nid_test_flags); + + ASSERT_EQ(new_rgn->size, size); + ASSERT_EQ(new_rgn->base, region_end(req_node) - size); + ASSERT_LE(req_node->base, new_rgn->base); + + ASSERT_EQ(memblock.reserved.cnt, 1); + ASSERT_EQ(memblock.reserved.total_size, size); + + test_pass_pop(); + + return 0; +} + +/* + * A test that tries to allocate a memory region in a specific NUMA node t= hat + * does not have enough memory to allocate a region of the requested size: + * + * | +-----+ +------------------+ | + * | | req | | expected | | + * +---+-----+----------+------------------+-----+ + * + * | +---------+ | + * | | rgn | | + * +-----------------------------+---------+-----+ + * + * Expect to allocate an aligned region at the end of the last node that h= as + * enough memory (in this case, nid =3D 6) after falling back to NUMA_NO_N= ODE. 
+ */ +static int alloc_try_nid_top_down_numa_small_node_check(void) +{ + int nid_req =3D 1; + int nid_exp =3D 6; + struct memblock_region *new_rgn =3D &memblock.reserved.regions[0]; + struct memblock_region *exp_node =3D &memblock.memory.regions[nid_exp]; + void *allocated_ptr =3D NULL; + + PREFIX_PUSH(); + + phys_addr_t size; + phys_addr_t min_addr; + phys_addr_t max_addr; + + setup_numa_memblock(); + + size =3D SZ_2K * MEM_FACTOR; + min_addr =3D memblock_start_of_DRAM(); + max_addr =3D memblock_end_of_DRAM(); + + allocated_ptr =3D run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES, + min_addr, max_addr, nid_req); + + ASSERT_NE(allocated_ptr, NULL); + verify_mem_content(allocated_ptr, size, alloc_nid_test_flags); + + ASSERT_EQ(new_rgn->size, size); + ASSERT_EQ(new_rgn->base, region_end(exp_node) - size); + ASSERT_LE(exp_node->base, new_rgn->base); + + ASSERT_EQ(memblock.reserved.cnt, 1); + ASSERT_EQ(memblock.reserved.total_size, size); + + test_pass_pop(); + + return 0; +} + +/* + * A test that tries to allocate a memory region in a specific NUMA node t= hat + * is fully reserved: + * + * | +---------+ +------------------+ | + * | |requested| | expected | | + * +--------------+---------+------------+------------------+-----+ + * + * | +---------+ +---------+ | + * | | reserved| | new | | + * +--------------+---------+---------------------+---------+-----+ + * + * Expect to allocate an aligned region at the end of the last node that is + * large enough and has enough unreserved memory (in this case, nid =3D 6)= after + * falling back to NUMA_NO_NODE. The region count and total size get updat= ed. + */ +static int alloc_try_nid_top_down_numa_node_reserved_check(void) +{ + int nid_req =3D 2; + int nid_exp =3D 6; + struct memblock_region *new_rgn =3D &memblock.reserved.regions[1]; + struct memblock_region *req_node =3D &memblock.memory.regions[nid_req]; + struct memblock_region *exp_node =3D &memblock.memory.regions[nid_exp]; + void *allocated_ptr =3D NULL; + + PREFIX_PUSH(); + + phys_addr_t size; + phys_addr_t min_addr; + phys_addr_t max_addr; + + setup_numa_memblock(); + + size =3D SZ_2K * MEM_FACTOR; + min_addr =3D memblock_start_of_DRAM(); + max_addr =3D memblock_end_of_DRAM(); + + memblock_reserve(req_node->base, req_node->size); + allocated_ptr =3D run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES, + min_addr, max_addr, nid_req); + + ASSERT_NE(allocated_ptr, NULL); + verify_mem_content(allocated_ptr, size, alloc_nid_test_flags); + + ASSERT_EQ(new_rgn->size, size); + ASSERT_EQ(new_rgn->base, region_end(exp_node) - size); + ASSERT_LE(exp_node->base, new_rgn->base); + + ASSERT_EQ(memblock.reserved.cnt, 2); + ASSERT_EQ(memblock.reserved.total_size, size + req_node->size); + + test_pass_pop(); + + return 0; +} + +/* + * A test that tries to allocate a memory region in a specific NUMA node t= hat + * is partially reserved but has enough memory for the allocated region: + * + * | +---------------------------------------+ | + * | | requested | | + * +-----------+---------------------------------------+----------+ + * + * | +------------------+ +-----+ | + * | | reserved | | new | | + * +-----------+------------------+--------------+-----+----------+ + * + * Expect to allocate an aligned region at the end of the requested node. = The + * region count and total size get updated. 
+ */ +static int alloc_try_nid_top_down_numa_part_reserved_check(void) +{ + int nid_req =3D 4; + struct memblock_region *new_rgn =3D &memblock.reserved.regions[1]; + struct memblock_region *req_node =3D &memblock.memory.regions[nid_req]; + void *allocated_ptr =3D NULL; + struct region r1; + + PREFIX_PUSH(); + + phys_addr_t size; + phys_addr_t min_addr; + phys_addr_t max_addr; + + setup_numa_memblock(); + + r1.base =3D req_node->base; + r1.size =3D SZ_512 * MEM_FACTOR; + size =3D SZ_128 * MEM_FACTOR; + min_addr =3D memblock_start_of_DRAM(); + max_addr =3D memblock_end_of_DRAM(); + + memblock_reserve(r1.base, r1.size); + allocated_ptr =3D run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES, + min_addr, max_addr, nid_req); + + ASSERT_NE(allocated_ptr, NULL); + verify_mem_content(allocated_ptr, size, alloc_nid_test_flags); + + ASSERT_EQ(new_rgn->size, size); + ASSERT_EQ(new_rgn->base, region_end(req_node) - size); + ASSERT_LE(req_node->base, new_rgn->base); + + ASSERT_EQ(memblock.reserved.cnt, 2); + ASSERT_EQ(memblock.reserved.total_size, size + r1.size); + + test_pass_pop(); + + return 0; +} + +/* + * A test that tries to allocate a memory region in a specific NUMA node t= hat + * is partially reserved and does not have enough contiguous memory for the + * allocated region: + * + * | +-----------------------+ +----------------------| + * | | requested | | expected | + * +-----------+-----------------------+---------+----------------------+ + * + * | +----------+ +-----------| + * | | reserved | | new | + * +-----------------+----------+---------------------------+-----------+ + * + * Expect to allocate an aligned region at the end of the last node that is + * large enough and has enough unreserved memory (in this case, + * nid =3D NUMA_NODES - 1) after falling back to NUMA_NO_NODE. The region = count + * and total size get updated. 
+ */ +static int alloc_try_nid_top_down_numa_part_reserved_fallback_check(void) +{ + int nid_req =3D 4; + int nid_exp =3D NUMA_NODES - 1; + struct memblock_region *new_rgn =3D &memblock.reserved.regions[1]; + struct memblock_region *req_node =3D &memblock.memory.regions[nid_req]; + struct memblock_region *exp_node =3D &memblock.memory.regions[nid_exp]; + void *allocated_ptr =3D NULL; + struct region r1; + + PREFIX_PUSH(); + + phys_addr_t size; + phys_addr_t min_addr; + phys_addr_t max_addr; + + setup_numa_memblock(); + + size =3D SZ_512 * MEM_FACTOR; + r1.base =3D req_node->base + SZ_256 * MEM_FACTOR; + r1.size =3D size; + + min_addr =3D memblock_start_of_DRAM(); + max_addr =3D memblock_end_of_DRAM(); + + memblock_reserve(r1.base, r1.size); + allocated_ptr =3D run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES, + min_addr, max_addr, nid_req); + + ASSERT_NE(allocated_ptr, NULL); + verify_mem_content(allocated_ptr, size, alloc_nid_test_flags); + + ASSERT_EQ(new_rgn->size, size); + ASSERT_EQ(new_rgn->base, region_end(exp_node) - size); + ASSERT_LE(exp_node->base, new_rgn->base); + + ASSERT_EQ(memblock.reserved.cnt, 2); + ASSERT_EQ(memblock.reserved.total_size, size + r1.size); + + test_pass_pop(); + + return 0; +} + +/* + * A test that tries to allocate a memory region that spans over the min_a= ddr + * and max_addr range and overlaps with two different nodes, where the fir= st + * node is the requested node: + * + * min_addr + * | max_addr + * | | + * v v + * | +-----------------------+-----------+ | + * | | requested | node3 | | + * +-----------+-----------------------+-----------+--------------+ + * + + + * | +-----------+ | + * | | rgn | | + * +-----------------------+-----------+--------------------------+ + * + * Expect to drop the lower limit and allocate a cleared memory region that + * ends at the end of the requested node. + */ +static int alloc_try_nid_top_down_numa_split_range_low_check(void) +{ + int nid_req =3D 2; + struct memblock_region *new_rgn =3D &memblock.reserved.regions[0]; + struct memblock_region *req_node =3D &memblock.memory.regions[nid_req]; + void *allocated_ptr =3D NULL; + + PREFIX_PUSH(); + + phys_addr_t size =3D SZ_512; + phys_addr_t min_addr; + phys_addr_t max_addr; + phys_addr_t req_node_end; + + setup_numa_memblock(); + + req_node_end =3D region_end(req_node); + min_addr =3D req_node_end - SZ_256; + max_addr =3D min_addr + size; + + allocated_ptr =3D run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES, + min_addr, max_addr, nid_req); + + ASSERT_NE(allocated_ptr, NULL); + verify_mem_content(allocated_ptr, size, alloc_nid_test_flags); + + ASSERT_EQ(new_rgn->size, size); + ASSERT_EQ(new_rgn->base, req_node_end - size); + ASSERT_LE(req_node->base, new_rgn->base); + + ASSERT_EQ(memblock.reserved.cnt, 1); + ASSERT_EQ(memblock.reserved.total_size, size); + + test_pass_pop(); + + return 0; +} + +/* + * A test that tries to allocate a memory region that spans over the min_a= ddr + * and max_addr range and overlaps with two different nodes, where the sec= ond + * node is the requested node: + * + * min_addr + * | max_addr + * | | + * v v + * | +--------------------------+---------+ | + * | | expected |requested| | + * +------+--------------------------+---------+----------------+ + * + + + * | +---------+ | + * | | rgn | | + * +-----------------------+---------+--------------------------+ + * + * Expect to drop the lower limit and allocate a cleared memory region that + * ends at the end of the first node that overlaps with the range. 
+ */ +static int alloc_try_nid_top_down_numa_split_range_high_check(void) +{ + int nid_req =3D 3; + int nid_exp =3D nid_req - 1; + struct memblock_region *new_rgn =3D &memblock.reserved.regions[0]; + struct memblock_region *exp_node =3D &memblock.memory.regions[nid_exp]; + void *allocated_ptr =3D NULL; + + PREFIX_PUSH(); + + phys_addr_t size =3D SZ_512; + phys_addr_t min_addr; + phys_addr_t max_addr; + phys_addr_t exp_node_end; + + setup_numa_memblock(); + + exp_node_end =3D region_end(exp_node); + min_addr =3D exp_node_end - SZ_256; + max_addr =3D min_addr + size; + + allocated_ptr =3D run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES, + min_addr, max_addr, nid_req); + + ASSERT_NE(allocated_ptr, NULL); + verify_mem_content(allocated_ptr, size, alloc_nid_test_flags); + + ASSERT_EQ(new_rgn->size, size); + ASSERT_EQ(new_rgn->base, exp_node_end - size); + ASSERT_LE(exp_node->base, new_rgn->base); + + ASSERT_EQ(memblock.reserved.cnt, 1); + ASSERT_EQ(memblock.reserved.total_size, size); + + test_pass_pop(); + + return 0; +} + +/* + * A test that tries to allocate a memory region that spans over the min_a= ddr + * and max_addr range and overlaps with two different nodes, where the req= uested + * node ends before min_addr: + * + * min_addr + * | max_addr + * | | + * v v + * | +---------------+ +-------------+---------+ | + * | | requested | | node1 | node2 | | + * +----+---------------+--------+-------------+---------+----------+ + * + + + * | +---------+ | + * | | rgn | | + * +----------+---------+-------------------------------------------+ + * + * Expect to drop the lower limit and allocate a cleared memory region that + * ends at the end of the requested node. + */ +static int alloc_try_nid_top_down_numa_no_overlap_split_check(void) +{ + int nid_req =3D 2; + struct memblock_region *new_rgn =3D &memblock.reserved.regions[0]; + struct memblock_region *req_node =3D &memblock.memory.regions[nid_req]; + struct memblock_region *node2 =3D &memblock.memory.regions[6]; + void *allocated_ptr =3D NULL; + + PREFIX_PUSH(); + + phys_addr_t size; + phys_addr_t min_addr; + phys_addr_t max_addr; + + setup_numa_memblock(); + + size =3D SZ_512; + min_addr =3D node2->base - SZ_256; + max_addr =3D min_addr + size; + + allocated_ptr =3D run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES, + min_addr, max_addr, nid_req); + + ASSERT_NE(allocated_ptr, NULL); + verify_mem_content(allocated_ptr, size, alloc_nid_test_flags); + + ASSERT_EQ(new_rgn->size, size); + ASSERT_EQ(new_rgn->base, region_end(req_node) - size); + ASSERT_LE(req_node->base, new_rgn->base); + + ASSERT_EQ(memblock.reserved.cnt, 1); + ASSERT_EQ(memblock.reserved.total_size, size); + + test_pass_pop(); + + return 0; +} + +/* + * A test that tries to allocate memory within min_addr and max_add range = when + * the requested node and the range do not overlap, and requested node ends + * before min_addr. The range overlaps with multiple nodes along node + * boundaries: + * + * min_addr + * | max_addr + * | | + * v v + * |-----------+ +----------+----...----+----------+ | + * | requested | | min node | ... | max node | | + * +-----------+-----------+----------+----...----+----------+------+ + * + + + * | +-----+ | + * | | rgn | | + * +---------------------------------------------------+-----+------+ + * + * Expect to allocate a cleared memory region at the end of the final node= in + * the range after falling back to NUMA_NO_NODE. 
+ */ +static int alloc_try_nid_numa_top_down_no_overlap_low_check(void) +{ + int nid_req =3D 0; + struct memblock_region *new_rgn =3D &memblock.reserved.regions[0]; + struct memblock_region *min_node =3D &memblock.memory.regions[2]; + struct memblock_region *max_node =3D &memblock.memory.regions[5]; + void *allocated_ptr =3D NULL; + + PREFIX_PUSH(); + + phys_addr_t size =3D SZ_64; + phys_addr_t max_addr; + phys_addr_t min_addr; + + setup_numa_memblock(); + + min_addr =3D min_node->base; + max_addr =3D region_end(max_node); + + allocated_ptr =3D run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES, + min_addr, max_addr, nid_req); + + ASSERT_NE(allocated_ptr, NULL); + verify_mem_content(allocated_ptr, size, alloc_nid_test_flags); + + ASSERT_EQ(new_rgn->size, size); + ASSERT_EQ(new_rgn->base, max_addr - size); + ASSERT_LE(max_node->base, new_rgn->base); + + ASSERT_EQ(memblock.reserved.cnt, 1); + ASSERT_EQ(memblock.reserved.total_size, size); + + test_pass_pop(); + + return 0; +} + +/* + * A test that tries to allocate memory within min_addr and max_add range = when + * the requested node and the range do not overlap, and requested node sta= rts + * after max_addr. The range overlaps with multiple nodes along node + * boundaries: + * + * min_addr + * | max_addr + * | | + * v v + * | +----------+----...----+----------+ +-----------+ | + * | | min node | ... | max node | | requested | | + * +-----+----------+----...----+----------+--------+-----------+---+ + * + + + * | +-----+ | + * | | rgn | | + * +---------------------------------+-----+------------------------+ + * + * Expect to allocate a cleared memory region at the end of the final node= in + * the range after falling back to NUMA_NO_NODE. + */ +static int alloc_try_nid_numa_top_down_no_overlap_high_check(void) +{ + int nid_req =3D 7; + struct memblock_region *new_rgn =3D &memblock.reserved.regions[0]; + struct memblock_region *min_node =3D &memblock.memory.regions[2]; + struct memblock_region *max_node =3D &memblock.memory.regions[5]; + void *allocated_ptr =3D NULL; + + PREFIX_PUSH(); + + phys_addr_t size =3D SZ_64; + phys_addr_t max_addr; + phys_addr_t min_addr; + + setup_numa_memblock(); + + min_addr =3D min_node->base; + max_addr =3D region_end(max_node); + + allocated_ptr =3D run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES, + min_addr, max_addr, nid_req); + + ASSERT_NE(allocated_ptr, NULL); + verify_mem_content(allocated_ptr, size, alloc_nid_test_flags); + + ASSERT_EQ(new_rgn->size, size); + ASSERT_EQ(new_rgn->base, max_addr - size); + ASSERT_LE(max_node->base, new_rgn->base); + + ASSERT_EQ(memblock.reserved.cnt, 1); + ASSERT_EQ(memblock.reserved.total_size, size); + + test_pass_pop(); + + return 0; +} + +/* Test case wrappers for NUMA tests */ +static int alloc_try_nid_numa_simple_check(void) +{ + test_print("\tRunning %s...\n", __func__); + memblock_set_bottom_up(false); + alloc_try_nid_top_down_numa_simple_check(); + + return 0; +} + +static int alloc_try_nid_numa_small_node_check(void) +{ + test_print("\tRunning %s...\n", __func__); + memblock_set_bottom_up(false); + alloc_try_nid_top_down_numa_small_node_check(); + + return 0; +} + +static int alloc_try_nid_numa_node_reserved_check(void) +{ + test_print("\tRunning %s...\n", __func__); + memblock_set_bottom_up(false); + alloc_try_nid_top_down_numa_node_reserved_check(); + + return 0; +} + +static int alloc_try_nid_numa_part_reserved_check(void) +{ + test_print("\tRunning %s...\n", __func__); + memblock_set_bottom_up(false); + alloc_try_nid_top_down_numa_part_reserved_check(); + + 
return 0; +} + +static int alloc_try_nid_numa_part_reserved_fallback_check(void) +{ + test_print("\tRunning %s...\n", __func__); + memblock_set_bottom_up(false); + alloc_try_nid_top_down_numa_part_reserved_fallback_check(); + + return 0; +} + +static int alloc_try_nid_numa_split_range_low_check(void) +{ + test_print("\tRunning %s...\n", __func__); + memblock_set_bottom_up(false); + alloc_try_nid_top_down_numa_split_range_low_check(); + + return 0; +} + +static int alloc_try_nid_numa_split_range_high_check(void) +{ + test_print("\tRunning %s...\n", __func__); + memblock_set_bottom_up(false); + alloc_try_nid_top_down_numa_split_range_high_check(); + + return 0; +} + +static int alloc_try_nid_numa_no_overlap_split_check(void) +{ + test_print("\tRunning %s...\n", __func__); + memblock_set_bottom_up(false); + alloc_try_nid_top_down_numa_no_overlap_split_check(); + + return 0; +} + +static int alloc_try_nid_numa_no_overlap_low_check(void) +{ + test_print("\tRunning %s...\n", __func__); + memblock_set_bottom_up(false); + alloc_try_nid_numa_top_down_no_overlap_low_check(); + + return 0; +} + +static int alloc_try_nid_numa_no_overlap_high_check(void) +{ + test_print("\tRunning %s...\n", __func__); + memblock_set_bottom_up(false); + alloc_try_nid_numa_top_down_no_overlap_high_check(); + + return 0; +} + +int __memblock_alloc_nid_numa_checks(void) +{ + test_print("Running %s NUMA tests...\n", + get_func_testing(alloc_nid_test_flags)); + + alloc_try_nid_numa_simple_check(); + alloc_try_nid_numa_small_node_check(); + alloc_try_nid_numa_node_reserved_check(); + alloc_try_nid_numa_part_reserved_check(); + alloc_try_nid_numa_part_reserved_fallback_check(); + alloc_try_nid_numa_split_range_low_check(); + alloc_try_nid_numa_split_range_high_check(); + + alloc_try_nid_numa_no_overlap_split_check(); + alloc_try_nid_numa_no_overlap_low_check(); + alloc_try_nid_numa_no_overlap_high_check(); + + return 0; +} + +static int memblock_alloc_nid_checks_internal(int flags) +{ + alloc_nid_test_flags =3D flags; + prefix_reset(); + prefix_push(get_func_testing(flags)); + + reset_memblock_attributes(); + dummy_physical_memory_init(); + + memblock_alloc_nid_range_checks(); + memblock_alloc_nid_numa_checks(); + dummy_physical_memory_cleanup(); =20 prefix_pop(); diff --git a/tools/testing/memblock/tests/alloc_nid_api.h b/tools/testing/m= emblock/tests/alloc_nid_api.h index b35cf3c3f489..92d07d230e18 100644 --- a/tools/testing/memblock/tests/alloc_nid_api.h +++ b/tools/testing/memblock/tests/alloc_nid_api.h @@ -5,5 +5,21 @@ #include "common.h" =20 int memblock_alloc_nid_checks(void); +int __memblock_alloc_nid_numa_checks(void); + +#ifdef CONFIG_NUMA +static inline int memblock_alloc_nid_numa_checks(void) +{ + __memblock_alloc_nid_numa_checks(); + return 0; +} + +#else +static inline int memblock_alloc_nid_numa_checks(void) +{ + return 0; +} + +#endif /* CONFIG_NUMA */ =20 #endif diff --git a/tools/testing/memblock/tests/common.h b/tools/testing/memblock= /tests/common.h index e5117d959d6c..2de928a54e6f 100644 --- a/tools/testing/memblock/tests/common.h +++ b/tools/testing/memblock/tests/common.h @@ -60,6 +60,19 @@ enum test_flags { assert((_expected) < (_seen)); \ } while (0) =20 +/** + * ASSERT_LE(): + * Check the condition + * @_expected <=3D @_seen + * If false, print failed test message (if running with --verbose) and then + * assert. 
+ */
+#define ASSERT_LE(_expected, _seen) do { \
+	if ((_expected) > (_seen)) \
+		test_fail(); \
+	assert((_expected) <= (_seen)); \
+} while (0)
+
 /**
  * ASSERT_MEM_EQ():
  * Check that the first @_size bytes of @_seen are all equal to @_expected.
@@ -101,6 +114,11 @@ struct region {
 	phys_addr_t size;
 };
 
+static inline phys_addr_t __maybe_unused region_end(struct memblock_region *rgn)
+{
+	return rgn->base + rgn->size;
+}
+
 void reset_memblock_regions(void);
 void reset_memblock_attributes(void);
 void setup_memblock(void);
-- 
2.25.1

From: Rebecca Mckeever
To: Mike Rapoport , linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: David Hildenbrand , Rebecca Mckeever
Subject: [PATCH v2 3/4] memblock tests: add
bottom-up NUMA tests for memblock_alloc_try_nid* Date: Fri, 19 Aug 2022 02:05:33 -0700 Message-Id: X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Add tests for memblock_alloc_try_nid() and memblock_alloc_try_nid_raw() where the simulated physical memory is set up with multiple NUMA nodes. Additionally, all of these tests set nid !=3D NUMA_NO_NODE. These tests are run with a bottom-up allocation direction. The tested scenarios are: Range unrestricted: - region can be allocated in the specific node requested: + there are no previously reserved regions + the requested node is partially reserved but has enough space - the specific node requested cannot accommodate the request, but the region can be allocated in a different node: + there are no previously reserved regions, but node is too small + the requested node is fully reserved + the requested node is partially reserved and does not have enough space Range restricted: - region can be allocated in the specific node requested after dropping min_addr: + range partially overlaps with two different nodes, where the first node is the requested node + range partially overlaps with two different nodes, where the requested node ends before min_addr - region cannot be allocated in the specific node requested, but it can be allocated in the requested range: + range overlaps with multiple nodes along node boundaries, and the requested node ends before min_addr + range overlaps with multiple nodes along node boundaries, and the requested node starts after max_addr - region cannot be allocated in the specific node requested, but it can be allocated after dropping min_addr: + range partially overlaps with two different nodes, where the second node is the requested node Signed-off-by: Rebecca Mckeever --- tools/testing/memblock/tests/alloc_nid_api.c | 584 +++++++++++++++++++ 1 file changed, 584 insertions(+) diff --git a/tools/testing/memblock/tests/alloc_nid_api.c b/tools/testing/m= emblock/tests/alloc_nid_api.c index a410f1318402..0a7a7494a157 100644 --- a/tools/testing/memblock/tests/alloc_nid_api.c +++ b/tools/testing/memblock/tests/alloc_nid_api.c @@ -1818,12 +1818,578 @@ static int alloc_try_nid_numa_top_down_no_overlap_= high_check(void) return 0; } =20 +/* + * A test that tries to allocate a memory region in a specific NUMA node t= hat + * has enough memory to allocate a region of the requested size. + * Expect to allocate an aligned region at the beginning of the requested = node. 
+ */ +static int alloc_try_nid_bottom_up_numa_simple_check(void) +{ + int nid_req =3D 3; + struct memblock_region *new_rgn =3D &memblock.reserved.regions[0]; + struct memblock_region *req_node =3D &memblock.memory.regions[nid_req]; + void *allocated_ptr =3D NULL; + + PREFIX_PUSH(); + + phys_addr_t size; + phys_addr_t min_addr; + phys_addr_t max_addr; + + setup_numa_memblock(); + + ASSERT_LE(SZ_4, req_node->size); + size =3D req_node->size / SZ_4; + min_addr =3D memblock_start_of_DRAM(); + max_addr =3D memblock_end_of_DRAM(); + + allocated_ptr =3D run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES, + min_addr, max_addr, nid_req); + + ASSERT_NE(allocated_ptr, NULL); + verify_mem_content(allocated_ptr, size, alloc_nid_test_flags); + + ASSERT_EQ(new_rgn->size, size); + ASSERT_EQ(new_rgn->base, req_node->base); + ASSERT_LE(region_end(new_rgn), region_end(req_node)); + + ASSERT_EQ(memblock.reserved.cnt, 1); + ASSERT_EQ(memblock.reserved.total_size, size); + + test_pass_pop(); + + return 0; +} + +/* + * A test that tries to allocate a memory region in a specific NUMA node t= hat + * does not have enough memory to allocate a region of the requested size: + * + * |----------------------+-----+ | + * | expected | req | | + * +----------------------+-----+----------------+ + * + * |---------+ | + * | rgn | | + * +---------+-----------------------------------+ + * + * Expect to allocate an aligned region at the beginning of the first node= that + * has enough memory (in this case, nid =3D 0) after falling back to NUMA_= NO_NODE. + */ +static int alloc_try_nid_bottom_up_numa_small_node_check(void) +{ + int nid_req =3D 1; + int nid_exp =3D 0; + struct memblock_region *new_rgn =3D &memblock.reserved.regions[0]; + struct memblock_region *exp_node =3D &memblock.memory.regions[nid_exp]; + void *allocated_ptr =3D NULL; + + PREFIX_PUSH(); + + phys_addr_t size =3D SZ_2K * MEM_FACTOR; + phys_addr_t min_addr; + phys_addr_t max_addr; + + setup_numa_memblock(); + + min_addr =3D memblock_start_of_DRAM(); + max_addr =3D memblock_end_of_DRAM(); + + allocated_ptr =3D run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES, + min_addr, max_addr, nid_req); + + ASSERT_NE(allocated_ptr, NULL); + verify_mem_content(allocated_ptr, size, alloc_nid_test_flags); + + ASSERT_EQ(new_rgn->size, size); + ASSERT_EQ(new_rgn->base, exp_node->base); + ASSERT_LE(region_end(new_rgn), region_end(exp_node)); + + ASSERT_EQ(memblock.reserved.cnt, 1); + ASSERT_EQ(memblock.reserved.total_size, size); + + test_pass_pop(); + + return 0; +} + +/* + * A test that tries to allocate a memory region in a specific NUMA node t= hat + * is fully reserved: + * + * |----------------------+ +-----------+ | + * | expected | | requested | | + * +----------------------+-----+-----------+--------------------+ + * + * |-----------+ +-----------+ | + * | new | | reserved | | + * +-----------+----------------+-----------+--------------------+ + * + * Expect to allocate an aligned region at the beginning of the first node= that + * is large enough and has enough unreserved memory (in this case, nid =3D= 0) + * after falling back to NUMA_NO_NODE. The region count and total size get + * updated. 
+ */ +static int alloc_try_nid_bottom_up_numa_node_reserved_check(void) +{ + int nid_req =3D 2; + int nid_exp =3D 0; + struct memblock_region *new_rgn =3D &memblock.reserved.regions[0]; + struct memblock_region *req_node =3D &memblock.memory.regions[nid_req]; + struct memblock_region *exp_node =3D &memblock.memory.regions[nid_exp]; + void *allocated_ptr =3D NULL; + + PREFIX_PUSH(); + + phys_addr_t size =3D SZ_2K * MEM_FACTOR; + phys_addr_t min_addr; + phys_addr_t max_addr; + + setup_numa_memblock(); + + min_addr =3D memblock_start_of_DRAM(); + max_addr =3D memblock_end_of_DRAM(); + + memblock_reserve(req_node->base, req_node->size); + allocated_ptr =3D run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES, + min_addr, max_addr, nid_req); + + ASSERT_NE(allocated_ptr, NULL); + verify_mem_content(allocated_ptr, size, alloc_nid_test_flags); + + ASSERT_EQ(new_rgn->size, size); + ASSERT_EQ(new_rgn->base, exp_node->base); + ASSERT_LE(region_end(new_rgn), region_end(exp_node)); + + ASSERT_EQ(memblock.reserved.cnt, 2); + ASSERT_EQ(memblock.reserved.total_size, size + req_node->size); + + test_pass_pop(); + + return 0; +} + +/* + * A test that tries to allocate a memory region in a specific NUMA node t= hat + * is partially reserved but has enough memory for the allocated region: + * + * | +---------------------------------------+ | + * | | requested | | + * +-----------+---------------------------------------+---------+ + * + * | +------------------+-----+ | + * | | reserved | new | | + * +-----------+------------------+-----+------------------------+ + * + * Expect to allocate an aligned region in the requested node that merges = with + * the existing reserved region. The total size gets updated. + */ +static int alloc_try_nid_bottom_up_numa_part_reserved_check(void) +{ + int nid_req =3D 4; + struct memblock_region *new_rgn =3D &memblock.reserved.regions[0]; + struct memblock_region *req_node =3D &memblock.memory.regions[nid_req]; + void *allocated_ptr =3D NULL; + struct region r1; + + PREFIX_PUSH(); + + phys_addr_t size; + phys_addr_t min_addr; + phys_addr_t max_addr; + phys_addr_t total_size; + + setup_numa_memblock(); + + r1.base =3D req_node->base; + r1.size =3D SZ_512 * MEM_FACTOR; + size =3D SZ_128 * MEM_FACTOR; + + min_addr =3D memblock_start_of_DRAM(); + max_addr =3D memblock_end_of_DRAM(); + total_size =3D size + r1.size; + + memblock_reserve(r1.base, r1.size); + allocated_ptr =3D run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES, + min_addr, max_addr, nid_req); + + ASSERT_NE(allocated_ptr, NULL); + verify_mem_content(allocated_ptr, size, alloc_nid_test_flags); + + ASSERT_EQ(new_rgn->size, total_size); + ASSERT_EQ(new_rgn->base, req_node->base); + ASSERT_LE(region_end(new_rgn), region_end(req_node)); + + ASSERT_EQ(memblock.reserved.cnt, 1); + ASSERT_EQ(memblock.reserved.total_size, total_size); + + test_pass_pop(); + + return 0; +} + +/* + * A test that tries to allocate a memory region in a specific NUMA node t= hat + * is partially reserved and does not have enough contiguous memory for the + * allocated region: + * + * |----------------------+ +-----------------------+ | + * | expected | | requested | | + * +----------------------+-------+-----------------------+---------+ + * + * |-----------+ +----------+ | + * | new | | reserved | | + * +-----------+------------------------+----------+----------------+ + * + * Expect to allocate an aligned region at the beginning of the first + * node that is large enough and has enough unreserved memory (in this cas= e, + * nid =3D 0) after falling back 
to NUMA_NO_NODE. The region count and tot= al size + * get updated. + */ +static int alloc_try_nid_bottom_up_numa_part_reserved_fallback_check(void) +{ + int nid_req =3D 4; + int nid_exp =3D 0; + struct memblock_region *new_rgn =3D &memblock.reserved.regions[0]; + struct memblock_region *req_node =3D &memblock.memory.regions[nid_req]; + struct memblock_region *exp_node =3D &memblock.memory.regions[nid_exp]; + void *allocated_ptr =3D NULL; + struct region r1; + + PREFIX_PUSH(); + + phys_addr_t size; + phys_addr_t min_addr; + phys_addr_t max_addr; + + setup_numa_memblock(); + + size =3D SZ_512 * MEM_FACTOR; + r1.base =3D req_node->base + SZ_256 * MEM_FACTOR; + r1.size =3D size; + + min_addr =3D memblock_start_of_DRAM(); + max_addr =3D memblock_end_of_DRAM(); + + memblock_reserve(r1.base, r1.size); + allocated_ptr =3D run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES, + min_addr, max_addr, nid_req); + + ASSERT_NE(allocated_ptr, NULL); + verify_mem_content(allocated_ptr, size, alloc_nid_test_flags); + + ASSERT_EQ(new_rgn->size, size); + ASSERT_EQ(new_rgn->base, exp_node->base); + ASSERT_LE(region_end(new_rgn), region_end(exp_node)); + + ASSERT_EQ(memblock.reserved.cnt, 2); + ASSERT_EQ(memblock.reserved.total_size, size + r1.size); + + test_pass_pop(); + + return 0; +} + +/* + * A test that tries to allocate a memory region that spans over the min_a= ddr + * and max_addr range and overlaps with two different nodes, where the fir= st + * node is the requested node: + * + * min_addr + * | max_addr + * | | + * v v + * | +-----------------------+-----------+ | + * | | requested | node3 | | + * +-----------+-----------------------+-----------+--------------+ + * + + + * | +-----------+ | + * | | rgn | | + * +-----------+-----------+--------------------------------------+ + * + * Expect to drop the lower limit and allocate a cleared memory region at = the + * beginning of the requested node. 
+ */ +static int alloc_try_nid_bottom_up_numa_split_range_low_check(void) +{ + int nid_req =3D 2; + struct memblock_region *new_rgn =3D &memblock.reserved.regions[0]; + struct memblock_region *req_node =3D &memblock.memory.regions[nid_req]; + void *allocated_ptr =3D NULL; + + PREFIX_PUSH(); + + phys_addr_t size =3D SZ_512; + phys_addr_t min_addr; + phys_addr_t max_addr; + phys_addr_t req_node_end; + + setup_numa_memblock(); + + req_node_end =3D region_end(req_node); + min_addr =3D req_node_end - SZ_256; + max_addr =3D min_addr + size; + + allocated_ptr =3D run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES, + min_addr, max_addr, nid_req); + + ASSERT_NE(allocated_ptr, NULL); + verify_mem_content(allocated_ptr, size, alloc_nid_test_flags); + + ASSERT_EQ(new_rgn->size, size); + ASSERT_EQ(new_rgn->base, req_node->base); + ASSERT_LE(region_end(new_rgn), req_node_end); + + ASSERT_EQ(memblock.reserved.cnt, 1); + ASSERT_EQ(memblock.reserved.total_size, size); + + test_pass_pop(); + + return 0; +} + +/* + * A test that tries to allocate a memory region that spans over the min_a= ddr + * and max_addr range and overlaps with two different nodes, where the sec= ond + * node is the requested node: + * + * min_addr + * | max_addr + * | | + * v v + * |------------------+ +----------------------+---------+ | + * | expected | | previous |requested| | + * +------------------+--------+----------------------+---------+------+ + * + + + * |---------+ | + * | rgn | | + * +---------+---------------------------------------------------------+ + * + * Expect to drop the lower limit and allocate a cleared memory region at = the + * beginning of the first node that has enough memory. + */ +static int alloc_try_nid_bottom_up_numa_split_range_high_check(void) +{ + int nid_req =3D 3; + int nid_exp =3D 0; + struct memblock_region *new_rgn =3D &memblock.reserved.regions[0]; + struct memblock_region *req_node =3D &memblock.memory.regions[nid_req]; + struct memblock_region *exp_node =3D &memblock.memory.regions[nid_exp]; + void *allocated_ptr =3D NULL; + + PREFIX_PUSH(); + + phys_addr_t size =3D SZ_512; + phys_addr_t min_addr; + phys_addr_t max_addr; + phys_addr_t exp_node_end; + + setup_numa_memblock(); + + exp_node_end =3D region_end(req_node); + min_addr =3D req_node->base - SZ_256; + max_addr =3D min_addr + size; + + allocated_ptr =3D run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES, + min_addr, max_addr, nid_req); + + ASSERT_NE(allocated_ptr, NULL); + verify_mem_content(allocated_ptr, size, alloc_nid_test_flags); + + ASSERT_EQ(new_rgn->size, size); + ASSERT_EQ(new_rgn->base, exp_node->base); + ASSERT_LE(region_end(new_rgn), exp_node_end); + + ASSERT_EQ(memblock.reserved.cnt, 1); + ASSERT_EQ(memblock.reserved.total_size, size); + + test_pass_pop(); + + return 0; +} + +/* + * A test that tries to allocate a memory region that spans over the min_a= ddr + * and max_addr range and overlaps with two different nodes, where the req= uested + * node ends before min_addr: + * + * min_addr + * | max_addr + * | | + * v v + * | +---------------+ +-------------+---------+ | + * | | requested | | node1 | node2 | | + * +----+---------------+--------+-------------+---------+---------+ + * + + + * | +---------+ | + * | | rgn | | + * +----+---------+------------------------------------------------+ + * + * Expect to drop the lower limit and allocate a cleared memory region that + * starts at the beginning of the requested node. 
+ */ +static int alloc_try_nid_bottom_up_numa_no_overlap_split_check(void) +{ + int nid_req =3D 2; + struct memblock_region *new_rgn =3D &memblock.reserved.regions[0]; + struct memblock_region *req_node =3D &memblock.memory.regions[nid_req]; + struct memblock_region *node2 =3D &memblock.memory.regions[6]; + void *allocated_ptr =3D NULL; + + PREFIX_PUSH(); + + phys_addr_t size; + phys_addr_t min_addr; + phys_addr_t max_addr; + + setup_numa_memblock(); + + size =3D SZ_512; + min_addr =3D node2->base - SZ_256; + max_addr =3D min_addr + size; + + allocated_ptr =3D run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES, + min_addr, max_addr, nid_req); + + ASSERT_NE(allocated_ptr, NULL); + verify_mem_content(allocated_ptr, size, alloc_nid_test_flags); + + ASSERT_EQ(new_rgn->size, size); + ASSERT_EQ(new_rgn->base, req_node->base); + ASSERT_LE(region_end(new_rgn), region_end(req_node)); + + ASSERT_EQ(memblock.reserved.cnt, 1); + ASSERT_EQ(memblock.reserved.total_size, size); + + test_pass_pop(); + + return 0; +} + +/* + * A test that tries to allocate memory within min_addr and max_add range = when + * the requested node and the range do not overlap, and requested node ends + * before min_addr. The range overlaps with multiple nodes along node + * boundaries: + * + * min_addr + * | max_addr + * | | + * v v + * |-----------+ +----------+----...----+----------+ | + * | requested | | min node | ... | max node | | + * +-----------+-----------+----------+----...----+----------+------+ + * + + + * | +-----+ | + * | | rgn | | + * +-----------------------+-----+----------------------------------+ + * + * Expect to allocate a cleared memory region at the beginning of the firs= t node + * in the range after falling back to NUMA_NO_NODE. + */ +static int alloc_try_nid_numa_bottom_up_no_overlap_low_check(void) +{ + int nid_req =3D 0; + struct memblock_region *new_rgn =3D &memblock.reserved.regions[0]; + struct memblock_region *min_node =3D &memblock.memory.regions[2]; + struct memblock_region *max_node =3D &memblock.memory.regions[5]; + void *allocated_ptr =3D NULL; + + PREFIX_PUSH(); + + phys_addr_t size =3D SZ_64; + phys_addr_t max_addr; + phys_addr_t min_addr; + + setup_numa_memblock(); + + min_addr =3D min_node->base; + max_addr =3D region_end(max_node); + + allocated_ptr =3D run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES, + min_addr, max_addr, nid_req); + + ASSERT_NE(allocated_ptr, NULL); + verify_mem_content(allocated_ptr, size, alloc_nid_test_flags); + + ASSERT_EQ(new_rgn->size, size); + ASSERT_EQ(new_rgn->base, min_addr); + ASSERT_LE(region_end(new_rgn), region_end(min_node)); + + ASSERT_EQ(memblock.reserved.cnt, 1); + ASSERT_EQ(memblock.reserved.total_size, size); + + test_pass_pop(); + + return 0; +} + +/* + * A test that tries to allocate memory within min_addr and max_add range = when + * the requested node and the range do not overlap, and requested node sta= rts + * after max_addr. The range overlaps with multiple nodes along node + * boundaries: + * + * min_addr + * | max_addr + * | | + * v v + * | +----------+----...----+----------+ +---------+ | + * | | min node | ... | max node | |requested| | + * +-----+----------+----...----+----------+---------+---------+---+ + * + + + * | +-----+ | + * | | rgn | | + * +-----+-----+---------------------------------------------------+ + * + * Expect to allocate a cleared memory region at the beginning of the firs= t node + * in the range after falling back to NUMA_NO_NODE. 
+ */
+static int alloc_try_nid_numa_bottom_up_no_overlap_high_check(void)
+{
+	int nid_req = 7;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[0];
+	struct memblock_region *min_node = &memblock.memory.regions[2];
+	struct memblock_region *max_node = &memblock.memory.regions[5];
+	void *allocated_ptr = NULL;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size = SZ_64;
+	phys_addr_t max_addr;
+	phys_addr_t min_addr;
+
+	setup_numa_memblock();
+
+	min_addr = min_node->base;
+	max_addr = region_end(max_node);
+
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size, alloc_nid_test_flags);
+
+	ASSERT_EQ(new_rgn->size, size);
+	ASSERT_EQ(new_rgn->base, min_addr);
+	ASSERT_LE(region_end(new_rgn), region_end(min_node));
+
+	ASSERT_EQ(memblock.reserved.cnt, 1);
+	ASSERT_EQ(memblock.reserved.total_size, size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
 /* Test case wrappers for NUMA tests */
 static int alloc_try_nid_numa_simple_check(void)
 {
 	test_print("\tRunning %s...\n", __func__);
 	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_numa_simple_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_numa_simple_check();
 
 	return 0;
 }
@@ -1833,6 +2399,8 @@ static int alloc_try_nid_numa_small_node_check(void)
 	test_print("\tRunning %s...\n", __func__);
 	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_numa_small_node_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_numa_small_node_check();
 
 	return 0;
 }
@@ -1842,6 +2410,8 @@ static int alloc_try_nid_numa_node_reserved_check(void)
 	test_print("\tRunning %s...\n", __func__);
 	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_numa_node_reserved_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_numa_node_reserved_check();
 
 	return 0;
 }
@@ -1851,6 +2421,8 @@ static int alloc_try_nid_numa_part_reserved_check(void)
 	test_print("\tRunning %s...\n", __func__);
 	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_numa_part_reserved_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_numa_part_reserved_check();
 
 	return 0;
 }
@@ -1860,6 +2432,8 @@ static int alloc_try_nid_numa_part_reserved_fallback_check(void)
 	test_print("\tRunning %s...\n", __func__);
 	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_numa_part_reserved_fallback_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_numa_part_reserved_fallback_check();
 
 	return 0;
 }
@@ -1869,6 +2443,8 @@ static int alloc_try_nid_numa_split_range_low_check(void)
 	test_print("\tRunning %s...\n", __func__);
 	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_numa_split_range_low_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_numa_split_range_low_check();
 
 	return 0;
 }
@@ -1878,6 +2454,8 @@ static int alloc_try_nid_numa_split_range_high_check(void)
 	test_print("\tRunning %s...\n", __func__);
 	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_numa_split_range_high_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_numa_split_range_high_check();
 
 	return 0;
 }
@@ -1887,6 +2465,8 @@ static int alloc_try_nid_numa_no_overlap_split_check(void)
 	test_print("\tRunning %s...\n", __func__);
 	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_numa_no_overlap_split_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_numa_no_overlap_split_check();
 
 	return 0;
 }
@@ -1896,6 +2476,8 @@ static int alloc_try_nid_numa_no_overlap_low_check(void)
 	test_print("\tRunning %s...\n", __func__);
 	memblock_set_bottom_up(false);
 	alloc_try_nid_numa_top_down_no_overlap_low_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_numa_bottom_up_no_overlap_low_check();
 
 	return 0;
 }
@@ -1905,6 +2487,8 @@ static int alloc_try_nid_numa_no_overlap_high_check(void)
 	test_print("\tRunning %s...\n", __func__);
 	memblock_set_bottom_up(false);
 	alloc_try_nid_numa_top_down_no_overlap_high_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_numa_bottom_up_no_overlap_high_check();
 
 	return 0;
 }
-- 
2.25.1

From nobody Fri Apr 10 20:15:58 2026
From: Rebecca Mckeever
To: Mike Rapoport , linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: David Hildenbrand , Rebecca Mckeever
Subject: [PATCH v2 4/4] memblock tests: add generic NUMA tests for memblock_alloc_try_nid*
Date: Fri, 19 Aug 2022 02:05:34 -0700
Message-Id: <16b7fc2d5cee8590c9f255b0fd3c69c443431c0b.1660897864.git.remckee0@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To:
References:

Add tests for memblock_alloc_try_nid() and memblock_alloc_try_nid_raw()
where the simulated physical memory is set up with multiple NUMA nodes.
Additionally, two of these tests set nid != NUMA_NO_NODE. All tests are
run for both top-down and bottom-up allocation directions.

The tested scenarios are:

Range unrestricted:
- region cannot be allocated:
  + none of the nodes have enough memory to allocate the region

Range restricted:
- region can be allocated in the specific node requested without dropping
  min_addr:
  + the range fully overlaps with the node, and there are adjacent reserved
    regions
- region cannot be allocated:
  + nid is set to NUMA_NO_NODE and the total range can fit the region, but
    the range is split between two nodes and everything else is reserved

Signed-off-by: Rebecca Mckeever
---
 tools/testing/memblock/tests/alloc_nid_api.c | 203 +++++++++++++++++++
 1 file changed, 203 insertions(+)

diff --git a/tools/testing/memblock/tests/alloc_nid_api.c b/tools/testing/memblock/tests/alloc_nid_api.c
index 0a7a7494a157..c11c467cece3 100644
--- a/tools/testing/memblock/tests/alloc_nid_api.c
+++ b/tools/testing/memblock/tests/alloc_nid_api.c
@@ -2382,6 +2382,179 @@ static int alloc_try_nid_numa_bottom_up_no_overlap_high_check(void)
 	return 0;
 }
 
+/*
+ * A test that tries to allocate a memory region in a specific NUMA node that
+ * does not have enough memory to allocate a region of the requested size.
+ * Additionally, none of the nodes have enough memory to allocate the region:
+ *
+ *                          +-----------------------------------+
+ *                          |                new                |
+ *                          +-----------------------------------+
+ *  |-------+-------+-------+-------+-------+-------+-------+-------|
+ *  | node0 | node1 | node2 | node3 | node4 | node5 | node6 | node7 |
+ *  +-------+-------+-------+-------+-------+-------+-------+-------+
+ *
+ * Expect no allocation to happen.
+ */
+static int alloc_try_nid_generic_numa_large_region_check(void)
+{
+	int nid_req = 3;
+	void *allocated_ptr = NULL;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size = SZ_8K * MEM_FACTOR;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+
+	setup_numa_memblock();
+
+	min_addr = memblock_start_of_DRAM();
+	max_addr = memblock_end_of_DRAM();
+
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+	ASSERT_EQ(allocated_ptr, NULL);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate memory within min_addr and max_addr range when
+ * there are two reserved regions at the borders. The requested node starts at
+ * min_addr and ends at max_addr and is the same size as the region to be
+ * allocated:
+ *
+ *                     min_addr
+ *                     |                       max_addr
+ *                     |                       |
+ *                     v                       v
+ *  |    +-----------+-----------------------+-----------------------|
+ *  |    |   node5   |       requested       |         node7         |
+ *  +----+-----------+-----------------------+-----------------------+
+ *                   +                       +
+ *  |           +----+-----------------------+----+                  |
+ *  |           | r2 |          new          | r1 |                  |
+ *  +-----------+----+-----------------------+----+------------------+
+ *
+ * Expect to merge all of the regions into one. The region counter and total
+ * size fields get updated.
+ */
+static int alloc_try_nid_numa_reserved_full_merge_generic_check(void)
+{
+	int nid_req = 6;
+	int nid_next = nid_req + 1;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[0];
+	struct memblock_region *req_node = &memblock.memory.regions[nid_req];
+	struct memblock_region *next_node = &memblock.memory.regions[nid_next];
+	void *allocated_ptr = NULL;
+	struct region r1, r2;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size = req_node->size;
+	phys_addr_t total_size;
+	phys_addr_t max_addr;
+	phys_addr_t min_addr;
+
+	setup_numa_memblock();
+
+	r1.base = next_node->base;
+	r1.size = SZ_128;
+
+	r2.size = SZ_128;
+	r2.base = r1.base - (size + r2.size);
+
+	total_size = r1.size + r2.size + size;
+	min_addr = r2.base + r2.size;
+	max_addr = r1.base;
+
+	memblock_reserve(r1.base, r1.size);
+	memblock_reserve(r2.base, r2.size);
+
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size, alloc_nid_test_flags);
+
+	ASSERT_EQ(new_rgn->size, total_size);
+	ASSERT_EQ(new_rgn->base, r2.base);
+
+	ASSERT_LE(new_rgn->base, req_node->base);
+	ASSERT_LE(region_end(req_node), region_end(new_rgn));
+
+	ASSERT_EQ(memblock.reserved.cnt, 1);
+	ASSERT_EQ(memblock.reserved.total_size, total_size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate memory within min_addr and max_addr range,
+ * where the total range can fit the region, but it is split between two nodes
+ * and everything else is reserved. Additionally, nid is set to NUMA_NO_NODE
+ * instead of requesting a specific node:
+ *
+ *                           +-----------+
+ *                           |    new    |
+ *                           +-----------+
+ *  |      +---------------------+-----------|
+ *  |      |      prev node      | next node |
+ *  +------+---------------------+-----------+
+ *                           +           +
+ *  |----------------------+           +-----|
+ *  |          r1          |           |  r2 |
+ *  +----------------------+-----------+-----+
+ *                           ^           ^
+ *                           |           |
+ *                           |           max_addr
+ *                           |
+ *                           min_addr
+ *
+ * Expect no allocation to happen.
+ */
+static int alloc_try_nid_numa_split_all_reserved_generic_check(void)
+{
+	void *allocated_ptr = NULL;
+	struct memblock_region *next_node = &memblock.memory.regions[7];
+	struct region r1, r2;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size = SZ_256;
+	phys_addr_t max_addr;
+	phys_addr_t min_addr;
+
+	setup_numa_memblock();
+
+	r2.base = next_node->base + SZ_128;
+	r2.size = memblock_end_of_DRAM() - r2.base;
+
+	r1.size = MEM_SIZE - (r2.size + size);
+	r1.base = memblock_start_of_DRAM();
+
+	min_addr = r1.base + r1.size;
+	max_addr = r2.base;
+
+	memblock_reserve(r1.base, r1.size);
+	memblock_reserve(r2.base, r2.size);
+
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr,
+						   NUMA_NO_NODE);
+
+	ASSERT_EQ(allocated_ptr, NULL);
+
+	test_pass_pop();
+
+	return 0;
+}
+
 /* Test case wrappers for NUMA tests */
 static int alloc_try_nid_numa_simple_check(void)
 {
@@ -2493,6 +2666,33 @@ static int alloc_try_nid_numa_no_overlap_high_check(void)
 	return 0;
 }
 
+static int alloc_try_nid_numa_large_region_check(void)
+{
+	test_print("\tRunning %s...\n", __func__);
+	run_top_down(alloc_try_nid_generic_numa_large_region_check);
+	run_bottom_up(alloc_try_nid_generic_numa_large_region_check);
+
+	return 0;
+}
+
+static int alloc_try_nid_numa_reserved_full_merge_check(void)
+{
+	test_print("\tRunning %s...\n", __func__);
+	run_top_down(alloc_try_nid_numa_reserved_full_merge_generic_check);
+	run_bottom_up(alloc_try_nid_numa_reserved_full_merge_generic_check);
+
+	return 0;
+}
+
+static int alloc_try_nid_numa_split_all_reserved_check(void)
+{
+	test_print("\tRunning %s...\n", __func__);
+	run_top_down(alloc_try_nid_numa_split_all_reserved_generic_check);
+	run_bottom_up(alloc_try_nid_numa_split_all_reserved_generic_check);
+
+	return 0;
+}
+
 int __memblock_alloc_nid_numa_checks(void)
 {
 	test_print("Running %s NUMA tests...\n",
@@ -2509,6 +2709,9 @@ int __memblock_alloc_nid_numa_checks(void)
 	alloc_try_nid_numa_no_overlap_split_check();
 	alloc_try_nid_numa_no_overlap_low_check();
 	alloc_try_nid_numa_no_overlap_high_check();
+	alloc_try_nid_numa_large_region_check();
+	alloc_try_nid_numa_reserved_full_merge_check();
+	alloc_try_nid_numa_split_all_reserved_check();
 
 	return 0;
 }
-- 
2.25.1
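
The run_top_down() and run_bottom_up() helpers used by the wrappers in this patch are not defined here; they presumably come from the test suite's shared helpers (e.g. tools/testing/memblock/tests/common.h). Assuming they do no more than force one allocation direction around the supplied test function, mirroring the older wrappers that call memblock_set_bottom_up() directly, a minimal sketch of their shape could be the following; the real definitions may also push a prefix for the test output:

/* Sketch only: assumed shape of the direction-selecting test runners. */
#define run_top_down(func)			\
	do {					\
		memblock_set_bottom_up(false);	\
		func();				\
	} while (0)

#define run_bottom_up(func)			\
	do {					\
		memblock_set_bottom_up(true);	\
		func();				\
	} while (0)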