From: Rebecca Mckeever
To: Mike Rapoport, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: David Hildenbrand, Rebecca Mckeever
Subject: [PATCH 1/4] memblock tests: add simulation of physical memory with multiple NUMA nodes
Date: Sun, 14 Aug 2022 01:06:15 -0500

Add functions setup_numa_memblock_generic() and setup_numa_memblock()
for setting up a memory layout with multiple NUMA nodes in a previously
allocated dummy physical memory. These functions can be used in place of
setup_memblock() in tests that need to simulate a NUMA system.

setup_numa_memblock_generic():
- allows for setting up a custom memory layout by specifying the amount
  of memory in each node, the number of nodes, and a factor that will be
  used to scale the memory in each node

setup_numa_memblock():
- allows for setting up a default memory layout

Introduce constant MEM_FACTOR, which is used to scale the default memory
layout based on MEM_SIZE.
Set CONFIG_NODES_SHIFT to 4 when building with NUMA=1 to allow for up to
16 NUMA nodes.

Signed-off-by: Rebecca Mckeever
---
 .../testing/memblock/scripts/Makefile.include |  2 +-
 tools/testing/memblock/tests/common.c         | 38 +++++++++++++++++++
 tools/testing/memblock/tests/common.h         |  9 ++++-
 3 files changed, 47 insertions(+), 2 deletions(-)

diff --git a/tools/testing/memblock/scripts/Makefile.include b/tools/testing/memblock/scripts/Makefile.include
index aa6d82d56a23..998281723590 100644
--- a/tools/testing/memblock/scripts/Makefile.include
+++ b/tools/testing/memblock/scripts/Makefile.include
@@ -3,7 +3,7 @@
 
 # Simulate CONFIG_NUMA=y
 ifeq ($(NUMA), 1)
-	CFLAGS += -D CONFIG_NUMA
+	CFLAGS += -D CONFIG_NUMA -D CONFIG_NODES_SHIFT=4
 endif
 
 # Use 32 bit physical addresses.

diff --git a/tools/testing/memblock/tests/common.c b/tools/testing/memblock/tests/common.c
index 0ca26fe12c38..179b9b4a8fc8 100644
--- a/tools/testing/memblock/tests/common.c
+++ b/tools/testing/memblock/tests/common.c
@@ -34,6 +34,10 @@ static const char * const help_opts[] = {
 
 static int verbose;
 
+static const phys_addr_t node_sizes[] = {
+	SZ_4K, SZ_1K, SZ_2K, SZ_2K, SZ_1K, SZ_1K, SZ_4K, SZ_1K
+};
+
 /* sets global variable returned by movable_node_is_enabled() stub */
 bool movable_node_enabled;
 
@@ -72,6 +76,40 @@ void setup_memblock(void)
 	fill_memblock();
 }
 
+/**
+ * setup_numa_memblock_generic:
+ * Set up a memory layout with multiple NUMA nodes in a previously allocated
+ * dummy physical memory.
+ * @nodes: an array containing the amount of memory in each node
+ * @node_cnt: the size of @nodes
+ * @factor: a factor that will be used to scale the memory in each node
+ *
+ * The nids will be set to 0 through node_cnt - 1.
+ */
+void setup_numa_memblock_generic(const phys_addr_t nodes[],
+				 int node_cnt, int factor)
+{
+	phys_addr_t base;
+	int flags;
+
+	reset_memblock_regions();
+	base = (phys_addr_t)memory_block.base;
+	flags = (movable_node_is_enabled()) ? MEMBLOCK_NONE : MEMBLOCK_HOTPLUG;
+
+	for (int i = 0; i < node_cnt; i++) {
+		phys_addr_t size = factor * nodes[i];
+
+		memblock_add_node(base, size, i, flags);
+		base += size;
+	}
+	fill_memblock();
+}
+
+void setup_numa_memblock(void)
+{
+	setup_numa_memblock_generic(node_sizes, NUMA_NODES, MEM_FACTOR);
+}
+
 void dummy_physical_memory_init(void)
 {
 	memory_block.base = malloc(MEM_SIZE);

diff --git a/tools/testing/memblock/tests/common.h b/tools/testing/memblock/tests/common.h
index a0594f1e4fe3..abd77beff06c 100644
--- a/tools/testing/memblock/tests/common.h
+++ b/tools/testing/memblock/tests/common.h
@@ -10,7 +10,11 @@
 #include
 #include <../selftests/kselftest.h>
 
-#define MEM_SIZE SZ_16K
+#define MEM_SIZE	SZ_16K
+#define NUMA_NODES	8
+
+/* used to resize values that need to scale with MEM_SIZE */
+#define MEM_FACTOR	(MEM_SIZE / SZ_16K)
 
 enum test_flags {
 	TEST_ZEROED = 0x0,
@@ -101,6 +105,9 @@ void reset_memblock_regions(void);
 void reset_memblock_attributes(void);
 void fill_memblock(void);
 void setup_memblock(void);
+void setup_numa_memblock_generic(const phys_addr_t nodes[],
+				 int node_cnt, int factor);
+void setup_numa_memblock(void);
 void dummy_physical_memory_init(void);
 void dummy_physical_memory_cleanup(void);
 void parse_args(int argc, char **argv);
-- 
2.25.1

From: Rebecca Mckeever
To: Mike Rapoport, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: David Hildenbrand, Rebecca Mckeever
Subject: [PATCH 2/4] memblock tests: add top-down NUMA tests for memblock_alloc_try_nid*
Date: Sun, 14 Aug 2022 01:06:16 -0500
Message-Id: <2928d1620051636f86b6ebe7d1bb29a36ce97771.1660454970.git.remckee0@gmail.com>

Add tests for memblock_alloc_try_nid() and memblock_alloc_try_nid_raw()
where the simulated physical memory is set up with multiple NUMA nodes.
Additionally, all of these tests set nid != NUMA_NO_NODE. These tests
are run with a top-down allocation direction.
The tested scenarios are:

Range unrestricted:
- region can be allocated in the specific node requested:
  + there are no previously reserved regions
  + the requested node is partially reserved but has enough space
- the specific node requested cannot accommodate the request, but the
  region can be allocated in a different node:
  + there are no previously reserved regions, but node is too small
  + the requested node is fully reserved
  + the requested node is partially reserved and does not have enough
    space

Range restricted:
- region can be allocated in the specific node requested after dropping
  min_addr:
  + range partially overlaps with two different nodes, where the first
    node is the requested node
  + range partially overlaps with two different nodes, where the
    requested node ends before min_addr
- region cannot be allocated in the specific node requested, but it can
  be allocated in the requested range:
  + range overlaps with multiple nodes along node boundaries, and the
    requested node ends before min_addr
  + range overlaps with multiple nodes along node boundaries, and the
    requested node starts after max_addr
- region cannot be allocated in the specific node requested, but it can
  be allocated after dropping min_addr:
  + range partially overlaps with two different nodes, where the second
    node is the requested node

Signed-off-by: Rebecca Mckeever
---
 tools/testing/memblock/tests/alloc_nid_api.c | 702 ++++++++++++++++++-
 tools/testing/memblock/tests/alloc_nid_api.h |  16 +
 tools/testing/memblock/tests/common.h        |  18 +
 3 files changed, 725 insertions(+), 11 deletions(-)

diff --git a/tools/testing/memblock/tests/alloc_nid_api.c b/tools/testing/memblock/tests/alloc_nid_api.c
index e16106d8446b..3ffd042298f1 100644
--- a/tools/testing/memblock/tests/alloc_nid_api.c
+++ b/tools/testing/memblock/tests/alloc_nid_api.c
@@ -1110,7 +1110,7 @@ static int alloc_try_nid_bottom_up_cap_min_check(void)
 	return 0;
 }
 
-/* Test case wrappers */
+/* Test case wrappers for range tests */
 static int alloc_try_nid_simple_check(void)
 {
 	test_print("\tRunning %s...\n", __func__);
@@ -1242,17 +1242,10 @@ static int alloc_try_nid_low_max_check(void)
 	return 0;
 }
 
-static int memblock_alloc_nid_checks_internal(int flags)
+static int memblock_alloc_nid_range_checks(void)
 {
-	const char *func = get_func_testing(flags);
-
-	alloc_nid_test_flags = flags;
-	prefix_reset();
-	prefix_push(func);
-	test_print("Running %s tests...\n", func);
-
-	reset_memblock_attributes();
-	dummy_physical_memory_init();
+	test_print("Running %s range tests...\n",
+		   get_func_testing(alloc_nid_test_flags));
 
 	alloc_try_nid_simple_check();
 	alloc_try_nid_misaligned_check();
@@ -1269,6 +1262,693 @@ static int memblock_alloc_nid_checks_internal(int flags)
 	alloc_try_nid_reserved_all_check();
 	alloc_try_nid_low_max_check();
 
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region in a specific NUMA node that
+ * has enough memory to allocate a region of the requested size.
+ * Expect to allocate an aligned region at the end of the requested node.
+ */
+static int alloc_try_nid_top_down_numa_simple_check(void)
+{
+	int nid_req = 3;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[0];
+	struct memblock_region *req_node = &memblock.memory.regions[nid_req];
+	void *allocated_ptr = NULL;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+
+	setup_numa_memblock();
+
+	ASSERT_LE(SZ_4, req_node->size);
+	size = req_node->size / SZ_4;
+	min_addr = memblock_start_of_DRAM();
+	max_addr = memblock_end_of_DRAM();
+
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size);
+
+	ASSERT_EQ(new_rgn->size, size);
+	ASSERT_EQ(new_rgn->base, region_end(req_node) - size);
+	ASSERT_LE(req_node->base, new_rgn->base);
+
+	ASSERT_EQ(memblock.reserved.cnt, 1);
+	ASSERT_EQ(memblock.reserved.total_size, size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region in a specific NUMA node that
+ * does not have enough memory to allocate a region of the requested size:
+ *
+ *  |   +-----+          +------------------+     |
+ *  |   | req |          |     expected     |     |
+ *  +---+-----+----------+------------------+-----+
+ *
+ *  |                             +---------+     |
+ *  |                             |   rgn   |     |
+ *  +-----------------------------+---------+-----+
+ *
+ * Expect to allocate an aligned region at the end of the last node that has
+ * enough memory (in this case, nid = 6) after falling back to NUMA_NO_NODE.
+ */
+static int alloc_try_nid_top_down_numa_small_node_check(void)
+{
+	int nid_req = 1;
+	int nid_exp = 6;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[0];
+	struct memblock_region *exp_node = &memblock.memory.regions[nid_exp];
+	void *allocated_ptr = NULL;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+
+	setup_numa_memblock();
+
+	size = SZ_2K * MEM_FACTOR;
+	min_addr = memblock_start_of_DRAM();
+	max_addr = memblock_end_of_DRAM();
+
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size);
+
+	ASSERT_EQ(new_rgn->size, size);
+	ASSERT_EQ(new_rgn->base, region_end(exp_node) - size);
+	ASSERT_LE(exp_node->base, new_rgn->base);
+
+	ASSERT_EQ(memblock.reserved.cnt, 1);
+	ASSERT_EQ(memblock.reserved.total_size, size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region in a specific NUMA node that
+ * is fully reserved:
+ *
+ *  |              +---------+            +------------------+     |
+ *  |              |requested|            |     expected     |     |
+ *  +--------------+---------+------------+------------------+-----+
+ *
+ *  |              +---------+                     +---------+     |
+ *  |              | reserved|                     |   new   |     |
+ *  +--------------+---------+---------------------+---------+-----+
+ *
+ * Expect to allocate an aligned region at the end of the last node that is
+ * large enough and has enough unreserved memory (in this case, nid = 6) after
+ * falling back to NUMA_NO_NODE. The region count and total size get updated.
+ */
+static int alloc_try_nid_top_down_numa_node_reserved_check(void)
+{
+	int nid_req = 2;
+	int nid_exp = 6;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[1];
+	struct memblock_region *req_node = &memblock.memory.regions[nid_req];
+	struct memblock_region *exp_node = &memblock.memory.regions[nid_exp];
+	void *allocated_ptr = NULL;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+
+	setup_numa_memblock();
+
+	size = SZ_2K * MEM_FACTOR;
+	min_addr = memblock_start_of_DRAM();
+	max_addr = memblock_end_of_DRAM();
+
+	memblock_reserve(req_node->base, req_node->size);
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size);
+
+	ASSERT_EQ(new_rgn->size, size);
+	ASSERT_EQ(new_rgn->base, region_end(exp_node) - size);
+	ASSERT_LE(exp_node->base, new_rgn->base);
+
+	ASSERT_EQ(memblock.reserved.cnt, 2);
+	ASSERT_EQ(memblock.reserved.total_size, size + req_node->size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region in a specific NUMA node that
+ * is partially reserved but has enough memory for the allocated region:
+ *
+ *  |           +---------------------------------------+          |
+ *  |           |               requested               |          |
+ *  +-----------+---------------------------------------+----------+
+ *
+ *  |           +------------------+              +-----+          |
+ *  |           |     reserved     |              | new |          |
+ *  +-----------+------------------+--------------+-----+----------+
+ *
+ * Expect to allocate an aligned region at the end of the requested node. The
+ * region count and total size get updated.
+ */
+static int alloc_try_nid_top_down_numa_part_reserved_check(void)
+{
+	int nid_req = 4;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[1];
+	struct memblock_region *req_node = &memblock.memory.regions[nid_req];
+	void *allocated_ptr = NULL;
+	struct region r1;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+
+	setup_numa_memblock();
+
+	r1.base = req_node->base;
+	r1.size = SZ_512 * MEM_FACTOR;
+	size = SZ_128 * MEM_FACTOR;
+	min_addr = memblock_start_of_DRAM();
+	max_addr = memblock_end_of_DRAM();
+
+	memblock_reserve(r1.base, r1.size);
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size);
+
+	ASSERT_EQ(new_rgn->size, size);
+	ASSERT_EQ(new_rgn->base, region_end(req_node) - size);
+	ASSERT_LE(req_node->base, new_rgn->base);
+
+	ASSERT_EQ(memblock.reserved.cnt, 2);
+	ASSERT_EQ(memblock.reserved.total_size, size + r1.size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region in a specific NUMA node that
+ * is partially reserved and does not have enough contiguous memory for the
+ * allocated region:
+ *
+ *  |           +-----------------------+         +----------------------|
+ *  |           |       requested       |         |       expected       |
+ *  +-----------+-----------------------+---------+----------------------+
+ *
+ *  |                 +----------+                           +-----------|
+ *  |                 | reserved |                           |    new    |
+ *  +-----------------+----------+---------------------------+-----------+
+ *
+ * Expect to allocate an aligned region at the end of the last node that is
+ * large enough and has enough unreserved memory (in this case,
+ * nid = NUMA_NODES - 1) after falling back to NUMA_NO_NODE. The region count
+ * and total size get updated.
+ */
+static int alloc_try_nid_top_down_numa_part_reserved_fallback_check(void)
+{
+	int nid_req = 4;
+	int nid_exp = NUMA_NODES - 1;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[1];
+	struct memblock_region *req_node = &memblock.memory.regions[nid_req];
+	struct memblock_region *exp_node = &memblock.memory.regions[nid_exp];
+	void *allocated_ptr = NULL;
+	struct region r1;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+
+	setup_numa_memblock();
+
+	size = SZ_512 * MEM_FACTOR;
+	r1.base = req_node->base + SZ_256 * MEM_FACTOR;
+	r1.size = size;
+
+	min_addr = memblock_start_of_DRAM();
+	max_addr = memblock_end_of_DRAM();
+
+	memblock_reserve(r1.base, r1.size);
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size);
+
+	ASSERT_EQ(new_rgn->size, size);
+	ASSERT_EQ(new_rgn->base, region_end(exp_node) - size);
+	ASSERT_LE(exp_node->base, new_rgn->base);
+
+	ASSERT_EQ(memblock.reserved.cnt, 2);
+	ASSERT_EQ(memblock.reserved.total_size, size + r1.size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region that spans over the min_addr
+ * and max_addr range and overlaps with two different nodes, where the first
+ * node is the requested node:
+ *
+ *                                min_addr
+ *                                |           max_addr
+ *                                |           |
+ *                                v           v
+ *  |           +-----------------------+-----------+              |
+ *  |           |       requested       |   node3   |              |
+ *  +-----------+-----------------------+-----------+--------------+
+ *                                +           +
+ *  |                       +-----------+                          |
+ *  |                       |    rgn    |                          |
+ *  +-----------------------+-----------+--------------------------+
+ *
+ * Expect to drop the lower limit and allocate a cleared memory region that
+ * ends at the end of the requested node.
+ */
+static int alloc_try_nid_top_down_numa_split_range_low_check(void)
+{
+	int nid_req = 2;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[0];
+	struct memblock_region *req_node = &memblock.memory.regions[nid_req];
+	void *allocated_ptr = NULL;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size = SZ_512;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+	phys_addr_t req_node_end;
+
+	setup_numa_memblock();
+
+	req_node_end = region_end(req_node);
+	min_addr = req_node_end - SZ_256;
+	max_addr = min_addr + size;
+
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size);
+
+	ASSERT_EQ(new_rgn->size, size);
+	ASSERT_EQ(new_rgn->base, req_node_end - size);
+	ASSERT_LE(req_node->base, new_rgn->base);
+
+	ASSERT_EQ(memblock.reserved.cnt, 1);
+	ASSERT_EQ(memblock.reserved.total_size, size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region that spans over the min_addr
+ * and max_addr range and overlaps with two different nodes, where the second
+ * node is the requested node:
+ *
+ *                                   min_addr
+ *                                   |         max_addr
+ *                                   |         |
+ *                                   v         v
+ *  |      +--------------------------+---------+                |
+ *  |      |         expected         |requested|                |
+ *  +------+--------------------------+---------+----------------+
+ *                                    +         +
+ *  |                       +---------+                          |
+ *  |                       |   rgn   |                          |
+ *  +-----------------------+---------+--------------------------+
+ *
+ * Expect to drop the lower limit and allocate a cleared memory region that
+ * ends at the end of the first node that overlaps with the range.
+ */
+static int alloc_try_nid_top_down_numa_split_range_high_check(void)
+{
+	int nid_req = 3;
+	int nid_exp = nid_req - 1;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[0];
+	struct memblock_region *exp_node = &memblock.memory.regions[nid_exp];
+	void *allocated_ptr = NULL;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size = SZ_512;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+	phys_addr_t exp_node_end;
+
+	setup_numa_memblock();
+
+	exp_node_end = region_end(exp_node);
+	min_addr = exp_node_end - SZ_256;
+	max_addr = min_addr + size;
+
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size);
+
+	ASSERT_EQ(new_rgn->size, size);
+	ASSERT_EQ(new_rgn->base, exp_node_end - size);
+	ASSERT_LE(exp_node->base, new_rgn->base);
+
+	ASSERT_EQ(memblock.reserved.cnt, 1);
+	ASSERT_EQ(memblock.reserved.total_size, size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region that spans over the min_addr
+ * and max_addr range and overlaps with two different nodes, where the requested
+ * node ends before min_addr:
+ *
+ *                                  min_addr
+ *                                  |         max_addr
+ *                                  |         |
+ *                                  v         v
+ *  |    +---------------+          +-------------+---------+         |
+ *  |    |   requested   |          |    node1    |  node2  |         |
+ *  +----+---------------+----------+-------------+---------+---------+
+ *                                  +         +
+ *  |          +---------+                                            |
+ *  |          |   rgn   |                                            |
+ *  +----------+---------+--------------------------------------------+
+ *
+ * Expect to drop the lower limit and allocate a cleared memory region that
+ * ends at the end of the requested node.
+ */
+static int alloc_try_nid_top_down_numa_no_overlap_split_check(void)
+{
+	int nid_req = 2;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[0];
+	struct memblock_region *req_node = &memblock.memory.regions[nid_req];
+	struct memblock_region *node2 = &memblock.memory.regions[6];
+	void *allocated_ptr = NULL;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+
+	setup_numa_memblock();
+
+	size = SZ_512;
+	min_addr = node2->base - SZ_256;
+	max_addr = min_addr + size;
+
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size);
+
+	ASSERT_EQ(new_rgn->size, size);
+	ASSERT_EQ(new_rgn->base, region_end(req_node) - size);
+	ASSERT_LE(req_node->base, new_rgn->base);
+
+	ASSERT_EQ(memblock.reserved.cnt, 1);
+	ASSERT_EQ(memblock.reserved.total_size, size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate memory within min_addr and max_add range when
+ * the requested node and the range do not overlap, and requested node ends
+ * before min_addr. The range overlaps with multiple nodes along node
+ * boundaries:
+ *
+ *                           min_addr
+ *                           |                       max_addr
+ *                           |                       |
+ *                           v                       v
+ *  |-----------+           +----------+----...----+----------+      |
+ *  | requested |           | min node |    ...    | max node |      |
+ *  +-----------+-----------+----------+----...----+----------+------+
+ *                          +                                 +
+ *  |                                                   +-----+      |
+ *  |                                                   | rgn |      |
+ *  +---------------------------------------------------+-----+------+
+ *
+ * Expect to allocate a cleared memory region at the end of the final node in
+ * the range after falling back to NUMA_NO_NODE.
+ */
+static int alloc_try_nid_numa_top_down_no_overlap_low_check(void)
+{
+	int nid_req = 0;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[0];
+	struct memblock_region *min_node = &memblock.memory.regions[2];
+	struct memblock_region *max_node = &memblock.memory.regions[5];
+	void *allocated_ptr = NULL;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size = SZ_64;
+	phys_addr_t max_addr;
+	phys_addr_t min_addr;
+
+	setup_numa_memblock();
+
+	min_addr = min_node->base;
+	max_addr = region_end(max_node);
+
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size);
+
+	ASSERT_EQ(new_rgn->size, size);
+	ASSERT_EQ(new_rgn->base, max_addr - size);
+	ASSERT_LE(max_node->base, new_rgn->base);
+
+	ASSERT_EQ(memblock.reserved.cnt, 1);
+	ASSERT_EQ(memblock.reserved.total_size, size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate memory within min_addr and max_add range when
+ * the requested node and the range do not overlap, and requested node starts
+ * after max_addr. The range overlaps with multiple nodes along node
+ * boundaries:
+ *
+ *        min_addr
+ *        |                      max_addr
+ *        |                      |
+ *        v                      v
+ *  |     +----------+----...----+----------+        +-----------+  |
+ *  |     | min node |    ...    | max node |        | requested |  |
+ *  +-----+----------+----...----+----------+--------+-----------+--+
+ *        +                      +
+ *  |                            +-----+                            |
+ *  |                            | rgn |                            |
+ *  +----------------------------+-----+----------------------------+
+ *
+ * Expect to allocate a cleared memory region at the end of the final node in
+ * the range after falling back to NUMA_NO_NODE.
+ */
+static int alloc_try_nid_numa_top_down_no_overlap_high_check(void)
+{
+	int nid_req = 7;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[0];
+	struct memblock_region *min_node = &memblock.memory.regions[2];
+	struct memblock_region *max_node = &memblock.memory.regions[5];
+	void *allocated_ptr = NULL;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size = SZ_64;
+	phys_addr_t max_addr;
+	phys_addr_t min_addr;
+
+	setup_numa_memblock();
+
+	min_addr = min_node->base;
+	max_addr = region_end(max_node);
+
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size);
+
+	ASSERT_EQ(new_rgn->size, size);
+	ASSERT_EQ(new_rgn->base, max_addr - size);
+	ASSERT_LE(max_node->base, new_rgn->base);
+
+	ASSERT_EQ(memblock.reserved.cnt, 1);
+	ASSERT_EQ(memblock.reserved.total_size, size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/* Test case wrappers for NUMA tests */
+static int alloc_try_nid_numa_simple_check(void)
+{
+	test_print("\tRunning %s...\n", __func__);
+	memblock_set_bottom_up(false);
+	alloc_try_nid_top_down_numa_simple_check();
+
+	return 0;
+}
+
+static int alloc_try_nid_numa_small_node_check(void)
+{
+	test_print("\tRunning %s...\n", __func__);
+	memblock_set_bottom_up(false);
+	alloc_try_nid_top_down_numa_small_node_check();
+
+	return 0;
+}
+
+static int alloc_try_nid_numa_node_reserved_check(void)
+{
+	test_print("\tRunning %s...\n", __func__);
+	memblock_set_bottom_up(false);
+	alloc_try_nid_top_down_numa_node_reserved_check();
+
+	return 0;
+}
+
+static int alloc_try_nid_numa_part_reserved_check(void)
+{
+	test_print("\tRunning %s...\n", __func__);
+	memblock_set_bottom_up(false);
+	alloc_try_nid_top_down_numa_part_reserved_check();
+
+	return 0;
+}
+
+static int alloc_try_nid_numa_part_reserved_fallback_check(void)
+{
+	test_print("\tRunning %s...\n", __func__);
+	memblock_set_bottom_up(false);
+	alloc_try_nid_top_down_numa_part_reserved_fallback_check();
+
+	return 0;
+}
+
+static int alloc_try_nid_numa_split_range_low_check(void)
+{
+	test_print("\tRunning %s...\n", __func__);
+	memblock_set_bottom_up(false);
+	alloc_try_nid_top_down_numa_split_range_low_check();
+
+	return 0;
+}
+
+static int alloc_try_nid_numa_split_range_high_check(void)
+{
+	test_print("\tRunning %s...\n", __func__);
+	memblock_set_bottom_up(false);
+	alloc_try_nid_top_down_numa_split_range_high_check();
+
+	return 0;
+}
+
+static int alloc_try_nid_numa_no_overlap_split_check(void)
+{
+	test_print("\tRunning %s...\n", __func__);
+	memblock_set_bottom_up(false);
+	alloc_try_nid_top_down_numa_no_overlap_split_check();
+
+	return 0;
+}
+
+static int alloc_try_nid_numa_no_overlap_low_check(void)
+{
+	test_print("\tRunning %s...\n", __func__);
+	memblock_set_bottom_up(false);
+	alloc_try_nid_numa_top_down_no_overlap_low_check();
+
+	return 0;
+}
+
+static int alloc_try_nid_numa_no_overlap_high_check(void)
+{
+	test_print("\tRunning %s...\n", __func__);
+	memblock_set_bottom_up(false);
+	alloc_try_nid_numa_top_down_no_overlap_high_check();
+
+	return 0;
+}
+
+int __memblock_alloc_nid_numa_checks(void)
+{
+	test_print("Running %s NUMA tests...\n",
+		   get_func_testing(alloc_nid_test_flags));
+
+	alloc_try_nid_numa_simple_check();
+	alloc_try_nid_numa_small_node_check();
+	alloc_try_nid_numa_node_reserved_check();
+	alloc_try_nid_numa_part_reserved_check();
+	alloc_try_nid_numa_part_reserved_fallback_check();
+	alloc_try_nid_numa_split_range_low_check();
+	alloc_try_nid_numa_split_range_high_check();
+
+	alloc_try_nid_numa_no_overlap_split_check();
+	alloc_try_nid_numa_no_overlap_low_check();
+	alloc_try_nid_numa_no_overlap_high_check();
+
+	return 0;
+}
+
+static int memblock_alloc_nid_checks_internal(int flags)
+{
+	alloc_nid_test_flags = flags;
+	prefix_reset();
+	prefix_push(get_func_testing(flags));
+
+	reset_memblock_attributes();
+	dummy_physical_memory_init();
+
+	memblock_alloc_nid_range_checks();
+	memblock_alloc_nid_numa_checks();
+
 	dummy_physical_memory_cleanup();
 
 	prefix_pop();

diff --git a/tools/testing/memblock/tests/alloc_nid_api.h b/tools/testing/memblock/tests/alloc_nid_api.h
index b35cf3c3f489..92d07d230e18 100644
--- a/tools/testing/memblock/tests/alloc_nid_api.h
+++ b/tools/testing/memblock/tests/alloc_nid_api.h
@@ -5,5 +5,21 @@
 #include "common.h"
 
 int memblock_alloc_nid_checks(void);
+int __memblock_alloc_nid_numa_checks(void);
+
+#ifdef CONFIG_NUMA
+static inline int memblock_alloc_nid_numa_checks(void)
+{
+	__memblock_alloc_nid_numa_checks();
+	return 0;
+}
+
+#else
+static inline int memblock_alloc_nid_numa_checks(void)
+{
+	return 0;
+}
+
+#endif /* CONFIG_NUMA */
 
 #endif

diff --git a/tools/testing/memblock/tests/common.h b/tools/testing/memblock/tests/common.h
index abd77beff06c..532489939ec2 100644
--- a/tools/testing/memblock/tests/common.h
+++ b/tools/testing/memblock/tests/common.h
@@ -60,6 +60,19 @@ enum test_flags {
 	assert((_expected) < (_seen));				\
 } while (0)
 
+/**
+ * ASSERT_LE():
+ * Check the condition
+ * @_expected <= @_seen
+ * If false, print failed test message (if running with --verbose) and then
+ * assert.
+ */
+#define ASSERT_LE(_expected, _seen) do {			\
+	if ((_expected) > (_seen))				\
+		test_fail();					\
+	assert((_expected) <= (_seen));				\
+} while (0)
+
 /**
  * ASSERT_MEM_EQ():
  * Check that the first @_size bytes of @_seen are all equal to @_expected.
@@ -101,6 +114,11 @@ struct region { phys_addr_t size; }; =20 +static inline phys_addr_t __maybe_unused region_end(struct memblock_region= *rgn) +{ + return rgn->base + rgn->size; +} + void reset_memblock_regions(void); void reset_memblock_attributes(void); void fill_memblock(void); --=20 2.25.1 From nobody Sat Apr 11 07:52:48 2026 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C1CF4C25B06 for ; Sun, 14 Aug 2022 06:07:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240169AbiHNGHQ (ORCPT ); Sun, 14 Aug 2022 02:07:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42072 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S240182AbiHNGHN (ORCPT ); Sun, 14 Aug 2022 02:07:13 -0400 Received: from mail-il1-x143.google.com (mail-il1-x143.google.com [IPv6:2607:f8b0:4864:20::143]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 58B395A8A5 for ; Sat, 13 Aug 2022 23:07:07 -0700 (PDT) Received: by mail-il1-x143.google.com with SMTP id x2so2492575ilp.10 for ; Sat, 13 Aug 2022 23:07:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc; bh=fGLf4alj+ilP9AiAsRyxj4qI01biyk8divJu7uSJukY=; b=MH3NnAQ+409j15U33mXtwd2kHEVZMvpJoH4odqwAa9C/Vn0JcrLz1mTYhgLAuz8o20 YyibqUeyzEBvjWdSAoAyjAKyatZHrBvTakDVY9crmeL0JJ9DVXqOXX/1SFgtapJLQBfC OYv4RsKwyh8DUoD5SiY8rK7cVPbQun8r7cWmoYPrgoQYK3mdXFvtwJTcJhV4LVFLbHUD HQTU9Ucg4VibCVAaH9MFyZNaRUb3WZQIp33oXGh3xmrABA8JyBo46P6crmc+LEr53FXt 43iaWcu2PYI2d7HZqOcOP+w2qqHPkRM8ERXRMvDk9gQq1So7Z3WOa25ZaPExw9yVH2we oA+A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; 
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc; bh=fGLf4alj+ilP9AiAsRyxj4qI01biyk8divJu7uSJukY=; b=haXVLCEBYPEHTsbU5dOT3V1DIDlvciZ/m8c1voXv8X1NBQgxI1/rUTTeYFTjBG9KEx OT7fTS+8kLTaGwx+k9Ib2wu7a7UTVKDdvoB1rw5D8WLQ5kynz5uR9krzDWWnpTmrxz1D JSOKoSPz4hvZ3e79O5Zp40Wxuktq/SRsv+uts9TrPBDzM9rEI2DTVy69hVK5H8fL3Qxw oc61lD5zEGS+pkpfRthlUMNGpW/9XUEWfWTtmHKHd1CO+3VOC830GRrw5iAAY9Tx5VCL L1p/FQIXFXOha/5bnYEeYjs8Zl6J6plZoGTd26b0o74HfuX54A3g6TL9SbWXgaSGvgKo 0MUQ== X-Gm-Message-State: ACgBeo32uwWCbqfIhJvhRDV2xZHNCv6uENi8VBBJbeEyLJWwJopRUlqR W5IfvVVYtgH8mJIyh4Fmmnk= X-Google-Smtp-Source: AA6agR5A2ATrg1D1U3EnY5ED49xXyDO4RMKiRm6Y64P7qOvp0xoa3qlT4+//1H3nmLZAMqYz/IDR+w== X-Received: by 2002:a05:6e02:1647:b0:2df:b870:a518 with SMTP id v7-20020a056e02164700b002dfb870a518mr4788773ilu.24.1660457226384; Sat, 13 Aug 2022 23:07:06 -0700 (PDT) Received: from sophie ([68.235.43.126]) by smtp.gmail.com with ESMTPSA id w15-20020a92c88f000000b002de87995de8sm2615859ilo.84.2022.08.13.23.07.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 13 Aug 2022 23:07:06 -0700 (PDT) From: Rebecca Mckeever To: Mike Rapoport , linux-mm@kvack.org, linux-kernel@vger.kernel.org Cc: David Hildenbrand , Rebecca Mckeever Subject: [PATCH 3/4] memblock tests: add bottom-up NUMA tests for memblock_alloc_try_nid* Date: Sun, 14 Aug 2022 01:06:17 -0500 Message-Id: <86e4808d21ace6608e0c3a5d26117ab8ccb4d065.1660454970.git.remckee0@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Type: text/plain; charset="utf-8" Add tests for memblock_alloc_try_nid() and memblock_alloc_try_nid_raw() where the simulated physical memory is set up with multiple NUMA nodes. Additionally, all of these tests set nid !=3D NUMA_NO_NODE. These tests are run with a bottom-up allocation direction. 
The tested scenarios are:

Range unrestricted:
- region can be allocated in the specific node requested:
  + there are no previously reserved regions
  + the requested node is partially reserved but has enough space
- the specific node requested cannot accommodate the request, but the
  region can be allocated in a different node:
  + there are no previously reserved regions, but node is too small
  + the requested node is fully reserved
  + the requested node is partially reserved and does not have enough
    space

Range restricted:
- region can be allocated in the specific node requested after dropping
  min_addr:
  + range partially overlaps with two different nodes, where the first
    node is the requested node
  + range partially overlaps with two different nodes, where the
    requested node ends before min_addr
- region cannot be allocated in the specific node requested, but it can
  be allocated in the requested range:
  + range overlaps with multiple nodes along node boundaries, and the
    requested node ends before min_addr
  + range overlaps with multiple nodes along node boundaries, and the
    requested node starts after max_addr
- region cannot be allocated in the specific node requested, but it can
  be allocated after dropping min_addr:
  + range partially overlaps with two different nodes, where the second
    node is the requested node

Signed-off-by: Rebecca Mckeever
---
 tools/testing/memblock/tests/alloc_nid_api.c | 584 +++++++++++++++++++
 1 file changed, 584 insertions(+)

diff --git a/tools/testing/memblock/tests/alloc_nid_api.c b/tools/testing/memblock/tests/alloc_nid_api.c
index 3ffd042298f1..112cd8018d7c 100644
--- a/tools/testing/memblock/tests/alloc_nid_api.c
+++ b/tools/testing/memblock/tests/alloc_nid_api.c
@@ -1826,12 +1826,578 @@ static int alloc_try_nid_numa_top_down_no_overlap_high_check(void)
 	return 0;
 }
 
+/*
+ * A test that tries to allocate a memory region in a specific NUMA node that
+ * has enough memory to allocate a region of the requested size.
+ * Expect to allocate an aligned region at the beginning of the requested node.
+ */
+static int alloc_try_nid_bottom_up_numa_simple_check(void)
+{
+	int nid_req = 3;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[0];
+	struct memblock_region *req_node = &memblock.memory.regions[nid_req];
+	void *allocated_ptr = NULL;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+
+	setup_numa_memblock();
+
+	ASSERT_LE(SZ_4, req_node->size);
+	size = req_node->size / SZ_4;
+	min_addr = memblock_start_of_DRAM();
+	max_addr = memblock_end_of_DRAM();
+
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size);
+
+	ASSERT_EQ(new_rgn->size, size);
+	ASSERT_EQ(new_rgn->base, req_node->base);
+	ASSERT_LE(region_end(new_rgn), region_end(req_node));
+
+	ASSERT_EQ(memblock.reserved.cnt, 1);
+	ASSERT_EQ(memblock.reserved.total_size, size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region in a specific NUMA node that
+ * does not have enough memory to allocate a region of the requested size:
+ *
+ * |----------------------+-----+                |
+ * |       expected       | req |                |
+ * +----------------------+-----+----------------+
+ *
+ * |---------+                                   |
+ * |   rgn   |                                   |
+ * +---------+-----------------------------------+
+ *
+ * Expect to allocate an aligned region at the beginning of the first node that
+ * has enough memory (in this case, nid = 0) after falling back to NUMA_NO_NODE.
+ */
+static int alloc_try_nid_bottom_up_numa_small_node_check(void)
+{
+	int nid_req = 1;
+	int nid_exp = 0;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[0];
+	struct memblock_region *exp_node = &memblock.memory.regions[nid_exp];
+	void *allocated_ptr = NULL;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size = SZ_2K * MEM_FACTOR;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+
+	setup_numa_memblock();
+
+	min_addr = memblock_start_of_DRAM();
+	max_addr = memblock_end_of_DRAM();
+
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size);
+
+	ASSERT_EQ(new_rgn->size, size);
+	ASSERT_EQ(new_rgn->base, exp_node->base);
+	ASSERT_LE(region_end(new_rgn), region_end(exp_node));
+
+	ASSERT_EQ(memblock.reserved.cnt, 1);
+	ASSERT_EQ(memblock.reserved.total_size, size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region in a specific NUMA node that
+ * is fully reserved:
+ *
+ * |----------------------+     +-----------+                    |
+ * |       expected       |     | requested |                    |
+ * +----------------------+-----+-----------+--------------------+
+ *
+ * |-----------+                +-----------+                    |
+ * |    new    |                |  reserved |                    |
+ * +-----------+----------------+-----------+--------------------+
+ *
+ * Expect to allocate an aligned region at the beginning of the first node that
+ * is large enough and has enough unreserved memory (in this case, nid = 0)
+ * after falling back to NUMA_NO_NODE. The region count and total size get
+ * updated.
+ */
+static int alloc_try_nid_bottom_up_numa_node_reserved_check(void)
+{
+	int nid_req = 2;
+	int nid_exp = 0;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[0];
+	struct memblock_region *req_node = &memblock.memory.regions[nid_req];
+	struct memblock_region *exp_node = &memblock.memory.regions[nid_exp];
+	void *allocated_ptr = NULL;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size = SZ_2K * MEM_FACTOR;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+
+	setup_numa_memblock();
+
+	min_addr = memblock_start_of_DRAM();
+	max_addr = memblock_end_of_DRAM();
+
+	memblock_reserve(req_node->base, req_node->size);
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size);
+
+	ASSERT_EQ(new_rgn->size, size);
+	ASSERT_EQ(new_rgn->base, exp_node->base);
+	ASSERT_LE(region_end(new_rgn), region_end(exp_node));
+
+	ASSERT_EQ(memblock.reserved.cnt, 2);
+	ASSERT_EQ(memblock.reserved.total_size, size + req_node->size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region in a specific NUMA node that
+ * is partially reserved but has enough memory for the allocated region:
+ *
+ * |           +---------------------------------------+         |
+ * |           |               requested               |         |
+ * +-----------+---------------------------------------+---------+
+ *
+ * |           +------------------+-----+                        |
+ * |           |     reserved     | new |                        |
+ * +-----------+------------------+-----+------------------------+
+ *
+ * Expect to allocate an aligned region in the requested node that merges with
+ * the existing reserved region. The total size gets updated.
+ */
+static int alloc_try_nid_bottom_up_numa_part_reserved_check(void)
+{
+	int nid_req = 4;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[0];
+	struct memblock_region *req_node = &memblock.memory.regions[nid_req];
+	void *allocated_ptr = NULL;
+	struct region r1;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+	phys_addr_t total_size;
+
+	setup_numa_memblock();
+
+	r1.base = req_node->base;
+	r1.size = SZ_512 * MEM_FACTOR;
+	size = SZ_128 * MEM_FACTOR;
+
+	min_addr = memblock_start_of_DRAM();
+	max_addr = memblock_end_of_DRAM();
+	total_size = size + r1.size;
+
+	memblock_reserve(r1.base, r1.size);
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size);
+
+	ASSERT_EQ(new_rgn->size, total_size);
+	ASSERT_EQ(new_rgn->base, req_node->base);
+	ASSERT_LE(region_end(new_rgn), region_end(req_node));
+
+	ASSERT_EQ(memblock.reserved.cnt, 1);
+	ASSERT_EQ(memblock.reserved.total_size, total_size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region in a specific NUMA node that
+ * is partially reserved and does not have enough contiguous memory for the
+ * allocated region:
+ *
+ * |----------------------+       +-----------------------+         |
+ * |       expected       |       |       requested       |         |
+ * +----------------------+-------+-----------------------+---------+
+ *
+ * |-----------+                        +----------+                |
+ * |    new    |                        | reserved |                |
+ * +-----------+------------------------+----------+----------------+
+ *
+ * Expect to allocate an aligned region at the beginning of the first
+ * node that is large enough and has enough unreserved memory (in this case,
+ * nid = 0) after falling back to NUMA_NO_NODE. The region count and total size
+ * get updated.
+ */
+static int alloc_try_nid_bottom_up_numa_part_reserved_fallback_check(void)
+{
+	int nid_req = 4;
+	int nid_exp = 0;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[0];
+	struct memblock_region *req_node = &memblock.memory.regions[nid_req];
+	struct memblock_region *exp_node = &memblock.memory.regions[nid_exp];
+	void *allocated_ptr = NULL;
+	struct region r1;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+
+	setup_numa_memblock();
+
+	size = SZ_512 * MEM_FACTOR;
+	r1.base = req_node->base + SZ_256 * MEM_FACTOR;
+	r1.size = size;
+
+	min_addr = memblock_start_of_DRAM();
+	max_addr = memblock_end_of_DRAM();
+
+	memblock_reserve(r1.base, r1.size);
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size);
+
+	ASSERT_EQ(new_rgn->size, size);
+	ASSERT_EQ(new_rgn->base, exp_node->base);
+	ASSERT_LE(region_end(new_rgn), region_end(exp_node));
+
+	ASSERT_EQ(memblock.reserved.cnt, 2);
+	ASSERT_EQ(memblock.reserved.total_size, size + r1.size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region that spans over the min_addr
+ * and max_addr range and overlaps with two different nodes, where the first
+ * node is the requested node:
+ *
+ *                                min_addr
+ *                                |           max_addr
+ *                                |           |
+ *                                v           v
+ * |           +-----------------------+-----------+              |
+ * |           |       requested       |   node3   |              |
+ * +-----------+-----------------------+-----------+--------------+
+ *                                +           +
+ * |           +-----------+                                      |
+ * |           |    rgn    |                                      |
+ * +-----------+-----------+--------------------------------------+
+ *
+ * Expect to drop the lower limit and allocate a cleared memory region at the
+ * beginning of the requested node.
+ */
+static int alloc_try_nid_bottom_up_numa_split_range_low_check(void)
+{
+	int nid_req = 2;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[0];
+	struct memblock_region *req_node = &memblock.memory.regions[nid_req];
+	void *allocated_ptr = NULL;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size = SZ_512;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+	phys_addr_t req_node_end;
+
+	setup_numa_memblock();
+
+	req_node_end = region_end(req_node);
+	min_addr = req_node_end - SZ_256;
+	max_addr = min_addr + size;
+
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size);
+
+	ASSERT_EQ(new_rgn->size, size);
+	ASSERT_EQ(new_rgn->base, req_node->base);
+	ASSERT_LE(region_end(new_rgn), req_node_end);
+
+	ASSERT_EQ(memblock.reserved.cnt, 1);
+	ASSERT_EQ(memblock.reserved.total_size, size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region that spans over the min_addr
+ * and max_addr range and overlaps with two different nodes, where the second
+ * node is the requested node:
+ *
+ *                               min_addr
+ *                               |         max_addr
+ *                               |         |
+ *                               v         v
+ * |------------------+    +----------------------+---------+      |
+ * |     expected     |    |       previous       |requested|      |
+ * +------------------+----+----------------------+---------+------+
+ *                               +         +
+ * |---------+                                                     |
+ * |   rgn   |                                                     |
+ * +---------+-----------------------------------------------------+
+ *
+ * Expect to drop the lower limit and allocate a cleared memory region at the
+ * beginning of the first node that has enough memory.
+ */
+static int alloc_try_nid_bottom_up_numa_split_range_high_check(void)
+{
+	int nid_req = 3;
+	int nid_exp = 0;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[0];
+	struct memblock_region *req_node = &memblock.memory.regions[nid_req];
+	struct memblock_region *exp_node = &memblock.memory.regions[nid_exp];
+	void *allocated_ptr = NULL;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size = SZ_512;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+	phys_addr_t exp_node_end;
+
+	setup_numa_memblock();
+
+	exp_node_end = region_end(req_node);
+	min_addr = req_node->base - SZ_256;
+	max_addr = min_addr + size;
+
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size);
+
+	ASSERT_EQ(new_rgn->size, size);
+	ASSERT_EQ(new_rgn->base, exp_node->base);
+	ASSERT_LE(region_end(new_rgn), exp_node_end);
+
+	ASSERT_EQ(memblock.reserved.cnt, 1);
+	ASSERT_EQ(memblock.reserved.total_size, size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate a memory region that spans over the min_addr
+ * and max_addr range and overlaps with two different nodes, where the requested
+ * node ends before min_addr:
+ *
+ *                                          min_addr
+ *                                          |    max_addr
+ *                                          |    |
+ *                                          v    v
+ * |    +---------------+        +-------------+---------+         |
+ * |    |   requested   |        |    node1    |  node2  |         |
+ * +----+---------------+--------+-------------+---------+---------+
+ *                                          +    +
+ * |    +---------+                                                |
+ * |    |   rgn   |                                                |
+ * +----+---------+------------------------------------------------+
+ *
+ * Expect to drop the lower limit and allocate a cleared memory region that
+ * starts at the beginning of the requested node.
+ */
+static int alloc_try_nid_bottom_up_numa_no_overlap_split_check(void)
+{
+	int nid_req = 2;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[0];
+	struct memblock_region *req_node = &memblock.memory.regions[nid_req];
+	struct memblock_region *node2 = &memblock.memory.regions[6];
+	void *allocated_ptr = NULL;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size;
+	phys_addr_t min_addr;
+	phys_addr_t max_addr;
+
+	setup_numa_memblock();
+
+	size = SZ_512;
+	min_addr = node2->base - SZ_256;
+	max_addr = min_addr + size;
+
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size);
+
+	ASSERT_EQ(new_rgn->size, size);
+	ASSERT_EQ(new_rgn->base, req_node->base);
+	ASSERT_LE(region_end(new_rgn), region_end(req_node));
+
+	ASSERT_EQ(memblock.reserved.cnt, 1);
+	ASSERT_EQ(memblock.reserved.total_size, size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate memory within min_addr and max_addr range when
+ * the requested node and the range do not overlap, and requested node ends
+ * before min_addr. The range overlaps with multiple nodes along node
+ * boundaries:
+ *
+ *                          min_addr
+ *                          |                                 max_addr
+ *                          |                                 |
+ *                          v                                 v
+ * |-----------+           +----------+----...----+----------+      |
+ * | requested |           | min node |    ...    | max node |      |
+ * +-----------+-----------+----------+----...----+----------+------+
+ *                          +                                 +
+ * |                       +-----+                                  |
+ * |                       | rgn |                                  |
+ * +-----------------------+-----+----------------------------------+
+ *
+ * Expect to allocate a cleared memory region at the beginning of the first node
+ * in the range after falling back to NUMA_NO_NODE.
+ */
+static int alloc_try_nid_numa_bottom_up_no_overlap_low_check(void)
+{
+	int nid_req = 0;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[0];
+	struct memblock_region *min_node = &memblock.memory.regions[2];
+	struct memblock_region *max_node = &memblock.memory.regions[5];
+	void *allocated_ptr = NULL;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size = SZ_64;
+	phys_addr_t max_addr;
+	phys_addr_t min_addr;
+
+	setup_numa_memblock();
+
+	min_addr = min_node->base;
+	max_addr = region_end(max_node);
+
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size);
+
+	ASSERT_EQ(new_rgn->size, size);
+	ASSERT_EQ(new_rgn->base, min_addr);
+	ASSERT_LE(region_end(new_rgn), region_end(min_node));
+
+	ASSERT_EQ(memblock.reserved.cnt, 1);
+	ASSERT_EQ(memblock.reserved.total_size, size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
+/*
+ * A test that tries to allocate memory within min_addr and max_addr range when
+ * the requested node and the range do not overlap, and requested node starts
+ * after max_addr. The range overlaps with multiple nodes along node
+ * boundaries:
+ *
+ *        min_addr
+ *        |                                 max_addr
+ *        |                                 |
+ *        v                                 v
+ * |     +----------+----...----+----------+        +---------+   |
+ * |     | min node |    ...    | max node |        |requested|   |
+ * +-----+----------+----...----+----------+--------+---------+---+
+ *        +                                 +
+ * |     +-----+                                                  |
+ * |     | rgn |                                                  |
+ * +-----+-----+--------------------------------------------------+
+ *
+ * Expect to allocate a cleared memory region at the beginning of the first node
+ * in the range after falling back to NUMA_NO_NODE.
+ */
+static int alloc_try_nid_numa_bottom_up_no_overlap_high_check(void)
+{
+	int nid_req = 7;
+	struct memblock_region *new_rgn = &memblock.reserved.regions[0];
+	struct memblock_region *min_node = &memblock.memory.regions[2];
+	struct memblock_region *max_node = &memblock.memory.regions[5];
+	void *allocated_ptr = NULL;
+
+	PREFIX_PUSH();
+
+	phys_addr_t size = SZ_64;
+	phys_addr_t max_addr;
+	phys_addr_t min_addr;
+
+	setup_numa_memblock();
+
+	min_addr = min_node->base;
+	max_addr = region_end(max_node);
+
+	allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+						   min_addr, max_addr, nid_req);
+
+	ASSERT_NE(allocated_ptr, NULL);
+	verify_mem_content(allocated_ptr, size);
+
+	ASSERT_EQ(new_rgn->size, size);
+	ASSERT_EQ(new_rgn->base, min_addr);
+	ASSERT_LE(region_end(new_rgn), region_end(min_node));
+
+	ASSERT_EQ(memblock.reserved.cnt, 1);
+	ASSERT_EQ(memblock.reserved.total_size, size);
+
+	test_pass_pop();
+
+	return 0;
+}
+
 /* Test case wrappers for NUMA tests */
 static int alloc_try_nid_numa_simple_check(void)
 {
 	test_print("\tRunning %s...\n", __func__);
 	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_numa_simple_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_numa_simple_check();
 
 	return 0;
 }
@@ -1841,6 +2407,8 @@ static int alloc_try_nid_numa_small_node_check(void)
 	test_print("\tRunning %s...\n", __func__);
 	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_numa_small_node_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_numa_small_node_check();
 
 	return 0;
 }
@@ -1850,6 +2418,8 @@ static int alloc_try_nid_numa_node_reserved_check(void)
 	test_print("\tRunning %s...\n", __func__);
 	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_numa_node_reserved_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_numa_node_reserved_check();
 
 	return 0;
 }
@@ -1859,6 +2429,8 @@ static int alloc_try_nid_numa_part_reserved_check(void)
 	test_print("\tRunning %s...\n", __func__);
 	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_numa_part_reserved_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_numa_part_reserved_check();
 
 	return 0;
 }
@@ -1868,6 +2440,8 @@ static int alloc_try_nid_numa_part_reserved_fallback_check(void)
 	test_print("\tRunning %s...\n", __func__);
 	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_numa_part_reserved_fallback_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_numa_part_reserved_fallback_check();
 
 	return 0;
 }
@@ -1877,6 +2451,8 @@ static int alloc_try_nid_numa_split_range_low_check(void)
 	test_print("\tRunning %s...\n", __func__);
 	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_numa_split_range_low_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_numa_split_range_low_check();
 
 	return 0;
 }
@@ -1886,6 +2462,8 @@ static int alloc_try_nid_numa_split_range_high_check(void)
 	test_print("\tRunning %s...\n", __func__);
 	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_numa_split_range_high_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_numa_split_range_high_check();
 
 	return 0;
 }
@@ -1895,6 +2473,8 @@ static int alloc_try_nid_numa_no_overlap_split_check(void)
 	test_print("\tRunning %s...\n", __func__);
 	memblock_set_bottom_up(false);
 	alloc_try_nid_top_down_numa_no_overlap_split_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_bottom_up_numa_no_overlap_split_check();
 
 	return 0;
 }
@@ -1904,6 +2484,8 @@ static int alloc_try_nid_numa_no_overlap_low_check(void)
 	test_print("\tRunning %s...\n", __func__);
 	memblock_set_bottom_up(false);
 	alloc_try_nid_numa_top_down_no_overlap_low_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_numa_bottom_up_no_overlap_low_check();
 
 	return 0;
 }
@@ -1913,6 +2495,8 @@ static int alloc_try_nid_numa_no_overlap_high_check(void)
 	test_print("\tRunning %s...\n", __func__);
 	memblock_set_bottom_up(false);
 	alloc_try_nid_numa_top_down_no_overlap_high_check();
+	memblock_set_bottom_up(true);
+	alloc_try_nid_numa_bottom_up_no_overlap_high_check();
 
 	return 0;
 }
-- 
2.25.1

From: Rebecca Mckeever
To: Mike Rapoport, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: David Hildenbrand, Rebecca Mckeever
Subject: [PATCH 4/4] memblock tests: add generic NUMA tests for memblock_alloc_try_nid*
Date: Sun, 14 Aug 2022 01:06:18 -0500

Add tests for memblock_alloc_try_nid() and memblock_alloc_try_nid_raw()
where the simulated physical memory is set up with multiple NUMA nodes.
Additionally, two of these tests set nid != NUMA_NO_NODE. All tests are
run for both top-down and bottom-up allocation directions.
The tested scenarios are:

Range unrestricted:
- region cannot be allocated:
  + none of the nodes have enough memory to allocate the region

Range restricted:
- region can be allocated in the specific node requested without dropping
  min_addr:
  + the range fully overlaps with the node, and there are adjacent
    reserved regions
- region cannot be allocated:
  + nid is set to NUMA_NO_NODE and the total range can fit the region,
    but the range is split between two nodes and everything else is
    reserved

Signed-off-by: Rebecca Mckeever
---
 tools/testing/memblock/tests/alloc_nid_api.c | 203 +++++++++++++++++++
 1 file changed, 203 insertions(+)

diff --git a/tools/testing/memblock/tests/alloc_nid_api.c b/tools/testing/memblock/tests/alloc_nid_api.c
index 112cd8018d7c..9cbc95ebe07d 100644
--- a/tools/testing/memblock/tests/alloc_nid_api.c
+++ b/tools/testing/memblock/tests/alloc_nid_api.c
@@ -2390,6 +2390,179 @@ static int alloc_try_nid_numa_bottom_up_no_overlap_high_check(void)
 	return 0;
 }
 
+/*
+ * A test that tries to allocate a memory region in a specific NUMA node that
+ * does not have enough memory to allocate a region of the requested size.
+ * Additionally, none of the nodes have enough memory to allocate the region:
+ *
+ * +-----------------------------------+
+ * |                new                |
+ * +-----------------------------------+
+ *     |-------+-------+-------+-------+-------+-------+-------+-------|
+ *     | node0 | node1 | node2 | node3 | node4 | node5 | node6 | node7 |
+ *     +-------+-------+-------+-------+-------+-------+-------+-------+
+ *
+ * Expect no allocation to happen.
+ */
+static int alloc_try_nid_generic_numa_large_region_check(void)
+{
+        int nid_req = 3;
+        void *allocated_ptr = NULL;
+
+        PREFIX_PUSH();
+
+        phys_addr_t size = SZ_8K * MEM_FACTOR;
+        phys_addr_t min_addr;
+        phys_addr_t max_addr;
+
+        setup_numa_memblock();
+
+        min_addr = memblock_start_of_DRAM();
+        max_addr = memblock_end_of_DRAM();
+
+        allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+                                                   min_addr, max_addr, nid_req);
+        ASSERT_EQ(allocated_ptr, NULL);
+
+        test_pass_pop();
+
+        return 0;
+}
+
+/*
+ * A test that tries to allocate memory within min_addr and max_addr range when
+ * there are two reserved regions at the borders. The requested node starts at
+ * min_addr and ends at max_addr and is the same size as the region to be
+ * allocated:
+ *
+ *                  min_addr
+ *                  |                       max_addr
+ *                  |                       |
+ *                  v                       v
+ * |    +-----------+-----------------------+-----------------------|
+ * |    |   node5   |       requested       |         node7         |
+ * +----+-----------+-----------------------+-----------------------+
+ *                  +                       +
+ * |           +----+-----------------------+----+                  |
+ * |           | r2 |          new          | r1 |                  |
+ * +-----------+----+-----------------------+----+------------------+
+ *
+ * Expect to merge all of the regions into one. The region counter and total
+ * size fields get updated.
+ */
+static int alloc_try_nid_numa_reserved_full_merge_generic_check(void)
+{
+        int nid_req = 6;
+        int nid_next = nid_req + 1;
+        struct memblock_region *new_rgn = &memblock.reserved.regions[0];
+        struct memblock_region *req_node = &memblock.memory.regions[nid_req];
+        struct memblock_region *next_node = &memblock.memory.regions[nid_next];
+        void *allocated_ptr = NULL;
+        struct region r1, r2;
+
+        PREFIX_PUSH();
+
+        phys_addr_t size = req_node->size;
+        phys_addr_t total_size;
+        phys_addr_t max_addr;
+        phys_addr_t min_addr;
+
+        setup_numa_memblock();
+
+        r1.base = next_node->base;
+        r1.size = SZ_128;
+
+        r2.size = SZ_128;
+        r2.base = r1.base - (size + r2.size);
+
+        total_size = r1.size + r2.size + size;
+        min_addr = r2.base + r2.size;
+        max_addr = r1.base;
+
+        memblock_reserve(r1.base, r1.size);
+        memblock_reserve(r2.base, r2.size);
+
+        allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+                                                   min_addr, max_addr, nid_req);
+
+        ASSERT_NE(allocated_ptr, NULL);
+        verify_mem_content(allocated_ptr, size);
+
+        ASSERT_EQ(new_rgn->size, total_size);
+        ASSERT_EQ(new_rgn->base, r2.base);
+
+        ASSERT_LE(new_rgn->base, req_node->base);
+        ASSERT_LE(region_end(req_node), region_end(new_rgn));
+
+        ASSERT_EQ(memblock.reserved.cnt, 1);
+        ASSERT_EQ(memblock.reserved.total_size, total_size);
+
+        test_pass_pop();
+
+        return 0;
+}
+
+/*
+ * A test that tries to allocate memory within min_addr and max_addr range,
+ * where the total range can fit the region, but it is split between two nodes
+ * and everything else is reserved.
+ * Additionally, nid is set to NUMA_NO_NODE instead of requesting a specific
+ * node:
+ *
+ *                        +-----------+
+ *                        |    new    |
+ *                        +-----------+
+ * |      +---------------------+-----------|
+ * |      |      prev node      | next node |
+ * +------+---------------------+-----------+
+ *                        +           +
+ * |----------------------+           +-----|
+ * |          r1          |           |  r2 |
+ * +----------------------+-----------+-----+
+ *                        ^           ^
+ *                        |           |
+ *                        |           max_addr
+ *                        |
+ *                        min_addr
+ *
+ * Expect no allocation to happen.
+ */
+static int alloc_try_nid_numa_split_all_reserved_generic_check(void)
+{
+        void *allocated_ptr = NULL;
+        struct memblock_region *next_node = &memblock.memory.regions[7];
+        struct region r1, r2;
+
+        PREFIX_PUSH();
+
+        phys_addr_t size = SZ_256;
+        phys_addr_t max_addr;
+        phys_addr_t min_addr;
+
+        setup_numa_memblock();
+
+        r2.base = next_node->base + SZ_128;
+        r2.size = memblock_end_of_DRAM() - r2.base;
+
+        r1.size = MEM_SIZE - (r2.size + size);
+        r1.base = memblock_start_of_DRAM();
+
+        min_addr = r1.base + r1.size;
+        max_addr = r2.base;
+
+        memblock_reserve(r1.base, r1.size);
+        memblock_reserve(r2.base, r2.size);
+
+        allocated_ptr = run_memblock_alloc_try_nid(size, SMP_CACHE_BYTES,
+                                                   min_addr, max_addr,
+                                                   NUMA_NO_NODE);
+
+        ASSERT_EQ(allocated_ptr, NULL);
+
+        test_pass_pop();
+
+        return 0;
+}
+
 /* Test case wrappers for NUMA tests */
 static int alloc_try_nid_numa_simple_check(void)
 {
@@ -2501,6 +2674,33 @@ static int alloc_try_nid_numa_no_overlap_high_check(void)
         return 0;
 }
 
+static int alloc_try_nid_numa_large_region_check(void)
+{
+        test_print("\tRunning %s...\n", __func__);
+        run_top_down(alloc_try_nid_generic_numa_large_region_check);
+        run_bottom_up(alloc_try_nid_generic_numa_large_region_check);
+
+        return 0;
+}
+
+static int alloc_try_nid_numa_reserved_full_merge_check(void)
+{
+        test_print("\tRunning %s...\n", __func__);
+        run_top_down(alloc_try_nid_numa_reserved_full_merge_generic_check);
+        run_bottom_up(alloc_try_nid_numa_reserved_full_merge_generic_check);
+
+        return 0;
+}
+
+static int alloc_try_nid_numa_split_all_reserved_check(void)
+{
+        test_print("\tRunning %s...\n", __func__);
+        run_top_down(alloc_try_nid_numa_split_all_reserved_generic_check);
+        run_bottom_up(alloc_try_nid_numa_split_all_reserved_generic_check);
+
+        return 0;
+}
+
 int __memblock_alloc_nid_numa_checks(void)
 {
         test_print("Running %s NUMA tests...\n",
@@ -2517,6 +2717,9 @@ int __memblock_alloc_nid_numa_checks(void)
         alloc_try_nid_numa_no_overlap_split_check();
         alloc_try_nid_numa_no_overlap_low_check();
         alloc_try_nid_numa_no_overlap_high_check();
+        alloc_try_nid_numa_large_region_check();
+        alloc_try_nid_numa_reserved_full_merge_check();
+        alloc_try_nid_numa_split_all_reserved_check();
 
         return 0;
 }
-- 
2.25.1