[PATCH v7 0/2] Dynamic Allocation of the reserved_mem array
Posted by Oreoluwa Babatunde 1 year, 6 months ago
The reserved_mem array is used to store data for the different
reserved memory regions defined in the DT of a device.  The array
stores information such as region name, node reference, start-address,
and size of the different reserved memory regions.

The array is currently statically allocated with a size of
MAX_RESERVED_REGIONS(64). This means that any system that specifies a
number of reserved memory regions greater than MAX_RESERVED_REGIONS(64)
will not have enough space to store the information for all the regions.
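
For context, the array and its entries currently look roughly like the
abridged sketch below (the exact definitions live in
include/linux/of_reserved_mem.h and drivers/of/of_reserved_mem.c):

#include <linux/types.h>	/* phys_addr_t */

#define MAX_RESERVED_REGIONS	64

/* Abridged view of one entry; some fields are omitted here. */
struct reserved_mem {
	const char	*name;		/* node name */
	unsigned long	fdt_node;	/* node offset in the flat DT */
	phys_addr_t	base;		/* start address of the region */
	phys_addr_t	size;		/* size of the region */
	/* ... */
};

static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS];
static int reserved_mem_count;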

This can be fixed by making the reserved_mem array a dynamically sized
array which is allocated using memblock_alloc() based on the exact
number of reserved memory regions defined in the DT.
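
A minimal sketch of what that allocation could look like, assuming a
hypothetical helper count_reserved_mem_nodes() that counts the children
of /reserved-memory in the flat DT (the actual series structures this
differently):

#include <linux/cache.h>
#include <linux/init.h>
#include <linux/memblock.h>

/* Replaces the fixed-size array: sized from the DT at boot. */
static struct reserved_mem *reserved_mem;

static void __init alloc_reserved_mem_array(void)
{
	/* count_reserved_mem_nodes() is made up for this example. */
	int count = count_reserved_mem_nodes();

	/*
	 * memblock_alloc() returns zeroed memory, or NULL on failure.
	 * As explained below, on arm64 this can only run after the
	 * page tables have been set up.
	 */
	reserved_mem = memblock_alloc(count * sizeof(*reserved_mem),
				      SMP_CACHE_BYTES);
	if (!reserved_mem)
		panic("failed to allocate reserved_mem array\n");
}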

On architectures such as arm64, memblock-allocated memory is not
writable until after the page tables have been set up.
This is an issue because the current implementation initializes the
reserved memory regions and stores their information in the array before
the page tables are set up. Hence, dynamically allocating the
reserved_mem array and attempting to write information to it at this
point will fail.

Therefore, the reserved_mem array needs to be allocated after the page
tables have been set up, which means that storing the information for
the reserved memory regions in the array also has to wait until after
that point.

When processing the reserved memory regions defined in the DT, these
regions are marked as reserved by calling memblock_reserve(base, size).
Where:  base = base address of the reserved region.
	size = the size of the reserved memory region.

If the region is also defined with the "no-map" property,
memblock_mark_nomap(base, size) is called as well.
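
Put differently, the early (pre-page-table) handling of each region
boils down to something like this sketch (the real code has additional
overlap and error checking, and treats some no-map cases specially):

#include <linux/init.h>
#include <linux/memblock.h>

/*
 * base/size come from the node's "reg" property or from an early
 * allocation; nomap reflects the presence of the "no-map" property.
 */
static void __init reserve_one_region(phys_addr_t base, phys_addr_t size,
				      bool nomap)
{
	memblock_reserve(base, size);
	if (nomap)
		memblock_mark_nomap(base, size);
}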

The "no-map" property is used to indicate to the operating system that a
mapping of the specified region must NOT be created. This also means
that no access (including speculative accesses) is allowed on this
region of memory except when it is coming from the device driver that
this region of memory is being reserved for.[1]

Therefore, it is important to call memblock_reserve() and
memblock_mark_nomap() on all the reserved memory regions before the
system sets up the page tables so that the system does not unknowingly
include any of the no-map reserved memory regions in the memory map.

There are two ways to define how/where a reserved memory region is
placed in memory:
i) Statically-placed reserved memory regions,
i.e. regions defined with a fixed start address and size using the
     "reg" property in the DT.
ii) Dynamically-placed reserved memory regions,
i.e. regions defined by specifying a range of addresses where they can
     be placed in memory using the "alloc-ranges" and "size" properties
     in the DT.

The dynamically-placed reserved memory regions get assigned a start
address only at runtime, and this needs to be done before the page
tables are set up so that memblock_reserve() and memblock_mark_nomap()
can be called on the allocated region as explained above.
Since the dynamically allocated reserved_mem array can only be
available after the page tables have been setup, the information for
the dynamically-placed reserved memory regions needs to be stored
somewhere temporarily until the reserved_mem array is available.
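
As a rough illustration (not the exact code from this series), picking a
base inside the "alloc-ranges" window before the page tables exist could
look like:

#include <linux/init.h>
#include <linux/memblock.h>

/*
 * memblock_phys_alloc_range() both finds a free base inside
 * [start, end] and reserves it, so only the no-map marking is left to
 * do here.  The real series handles alignment defaults and no-map
 * corner cases more carefully.
 */
static phys_addr_t __init place_dynamic_region(phys_addr_t size,
					       phys_addr_t align,
					       phys_addr_t start,
					       phys_addr_t end,
					       bool nomap)
{
	phys_addr_t base = memblock_phys_alloc_range(size, align, start, end);

	if (base && nomap)
		memblock_mark_nomap(base, size);

	return base;	/* 0 on failure */
}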

Therefore, this series makes use of a temporary static array to store
the information of the dynamically-placed reserved memory regions until
the reserved_mem array is allocated.
Once the reserved_mem array is available, the information is copied over
from the temporary array into the reserved_mem array, and the memory for
the temporary array is freed back to the system.
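
A sketch of that hand-over, with a made-up name (dyn_tmp[]) for the
scratch array; the key point is that it is __initdata, so it is released
together with the rest of the init memory, i.e. "freed back to the
system":

#include <linux/init.h>

/* Scratch entries recorded before the page tables were set up. */
static struct reserved_mem dyn_tmp[64] __initdata;
static int dyn_tmp_count __initdata;

/*
 * Called once reserved_mem has been memblock_alloc()'d;
 * reserved_mem/reserved_mem_count are the names used in the sketches
 * above.
 */
static void __init copy_dynamic_regions(void)
{
	int i;

	for (i = 0; i < dyn_tmp_count; i++)
		reserved_mem[reserved_mem_count++] = dyn_tmp[i];
}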

The information for the statically-placed reserved memory regions does
not need to be stored in a temporary array because their start
addresses are already stored in the devicetree.
Once the reserved_mem array is allocated, the information for the
statically-placed reserved memory regions is added to it directly.
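
For the statically-placed case, a simplified sketch of filling one entry
straight from the flat DT (assuming 2 address cells and 2 size cells
purely to keep the example short):

#include <linux/init.h>
#include <linux/of.h>
#include <linux/of_fdt.h>

static void __init add_static_region(unsigned long node, const char *uname)
{
	const __be32 *reg;
	int len;

	/* "reg" holds the fixed base and size chosen in the DT. */
	reg = of_get_flat_dt_prop(node, "reg", &len);
	if (!reg || len < 4 * sizeof(__be32))
		return;

	reserved_mem[reserved_mem_count].name = uname;
	reserved_mem[reserved_mem_count].fdt_node = node;
	reserved_mem[reserved_mem_count].base = of_read_number(reg, 2);
	reserved_mem[reserved_mem_count].size = of_read_number(reg + 2, 2);
	reserved_mem_count++;
}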

Note:
Because a temporary array is used to store the information for the
dynamically-placed reserved memory regions, a limit of 64 still applies
to this particular kind of reserved memory region.
From my observation, these regions are typically few in number, so I do
not expect this to be an issue for now.

Patch Versions:
v7:
- Make changes to initialize the reserved memory regions earlier in
  response to the issue reported in v6:
  https://lore.kernel.org/all/20240610213403.GA1697364@thelio-3990X/

- For the reserved regions to be set up properly,
  fdt_init_reserved_mem_node() needs to be called on each of the regions
  before the page tables are set up. Because the function requires a
  reference to the devicetree node of each region, we are not able to
  use the unflattened devicetree APIs, as they are not available until
  after the page tables have been set up.
  Hence, revert the use of the unflattened devicetree APIs because of
  this limitation, which was discovered in v6:
  https://lore.kernel.org/all/986361f4-f000-4129-8214-39f2fb4a90da@gmail.com/
  https://lore.kernel.org/all/DU0PR04MB9299C3EC247E1FE2C373440F80DE2@DU0PR04MB9299.eurprd04.prod.outlook.com/

v6:
https://lore.kernel.org/all/20240528223650.619532-1-quic_obabatun@quicinc.com/
- Rebased patchset on top of v6.10-rc1.
- Addressed comments received in v5 such as:
  1. Switched to using relevant typed functions such as
     of_property_read_u32(), of_property_present(), etc.
  2. Switched to using of_address_to_resource() to read the "reg"
     property of nodes.
  3. Renamed functions to use the "of_*" naming scheme instead of "dt_*".

v5:
https://lore.kernel.org/all/20240328211543.191876-1-quic_obabatun@quicinc.com/
- Rebased changes on top of v6.9-rc1.
- Addressed minor code comments from v4.

v4:
https://lore.kernel.org/all/20240308191204.819487-2-quic_obabatun@quicinc.com/
- Move fdt_init_reserved_mem() back into the unflatten_device_tree()
  function.
- Fix warnings found by Kernel test robot:
  https://lore.kernel.org/all/202401281219.iIhqs1Si-lkp@intel.com/
  https://lore.kernel.org/all/202401281304.tsu89Kcm-lkp@intel.com/
  https://lore.kernel.org/all/202401291128.e7tdNh5x-lkp@intel.com/

v3:
https://lore.kernel.org/all/20240126235425.12233-1-quic_obabatun@quicinc.com/
- Make use of __initdata to free the temporary static array after
  dynamically allocating memory for the reserved_mem array using memblock.
- Move call to fdt_init_reserved_mem() out of the
  unflatten_device_tree() function and into architecture specific setup
  code.
- Break up the changes for the individual architectures into separate
  patches.

v2:
https://lore.kernel.org/all/20231204041339.9902-1-quic_obabatun@quicinc.com/
- Extend changes to all other relevant architectures by moving
  fdt_init_reserved_mem() into the unflatten_device_tree() function.
- Add code to use the unflattened devicetree APIs to process the reserved
  memory regions.

v1:
https://lore.kernel.org/all/20231019184825.9712-1-quic_obabatun@quicinc.com/

References:
[1]
https://github.com/devicetree-org/dt-schema/blob/main/dtschema/schemas/reserved-memory/reserved-memory.yaml#L79

Oreoluwa Babatunde (2):
  of: reserved_mem: Restructure how the reserved memory regions are
    processed
  of: reserved_mem: Add code to dynamically allocate reserved_mem array

 drivers/of/fdt.c             |   5 +-
 drivers/of/of_private.h      |   3 +-
 drivers/of/of_reserved_mem.c | 231 +++++++++++++++++++++++++++--------
 3 files changed, 188 insertions(+), 51 deletions(-)

-- 
2.34.1
Re: [PATCH v7 0/2] Dynamic Allocation of the reserved_mem array
Posted by Andy Shevchenko 1 year, 5 months ago
On Fri, Aug 09, 2024 at 11:48:12AM -0700, Oreoluwa Babatunde wrote:
> The reserved_mem array is used to store data for the different
> reserved memory regions defined in the DT of a device.

...

This series (in particular the first patch) broke boot on Intel Meteor
Lake-P. Taking Linux next of 20240819 with these being reverted makes
things work again.

Taking into account the bisectability issue (that's how I noticed the
problem in the first place), I think it would be nice to have no such
patches at all in the respective subsystem tree. On my side, I can help
with testing whatever solution or next version is provided.

git bisect start
# status: waiting for both good and bad commits
# good: [47ac09b91befbb6a235ab620c32af719f8208399] Linux 6.11-rc4
git bisect good 47ac09b91befbb6a235ab620c32af719f8208399
# status: waiting for bad commit, 1 good commit known
# bad: [469f1bad3c1c6e268059f78c0eec7e9552b3894c] Add linux-next specific files for 20240819
git bisect bad 469f1bad3c1c6e268059f78c0eec7e9552b3894c
# good: [3f6ea50f8205eb79e4a321559c292eecb059bfaa] Merge branch 'spi-nor/next' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux.git
git bisect good 3f6ea50f8205eb79e4a321559c292eecb059bfaa
# good: [95ff8c994d58104a68eb12988d7bc24597876831] Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator.git
git bisect good 95ff8c994d58104a68eb12988d7bc24597876831
# bad: [9434b7c52128e9959dce1111b8e1078ffc91468d] Merge branch 'usb-next' of git://git.kernel.org/pub/scm/linux/kernel/git/johan/usb-serial.git
git bisect bad 9434b7c52128e9959dce1111b8e1078ffc91468d
# bad: [791ba08d6d977046e8c4a7f01dabd8770d1eb94d] Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git
git bisect bad 791ba08d6d977046e8c4a7f01dabd8770d1eb94d
# good: [2b3eb431609a479193044bba064090141a504b9a] Merge branch into tip/master: 'timers/core'
git bisect good 2b3eb431609a479193044bba064090141a504b9a
# good: [81b6ef7427cb4b90c913488c665414ba21bbe46d] Merge branch into tip/master: 'x86/timers'
git bisect good 81b6ef7427cb4b90c913488c665414ba21bbe46d
# bad: [f5d0a26ecd6875f02c6cf4fedf245812015b4cef] Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux.git
git bisect bad f5d0a26ecd6875f02c6cf4fedf245812015b4cef
# good: [5c80b13d27252446973a5ce14a5331b336556f28] Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/iommu/linux.git
git bisect good 5c80b13d27252446973a5ce14a5331b336556f28
# good: [84252c1d2c6efed706037e00f25455378fdda97c] dt-bindings: timer: nxp,lpc3220-timer: Convert to dtschema
git bisect good 84252c1d2c6efed706037e00f25455378fdda97c
# good: [ca35f2837927d73441cfb51174b824ae82a15f93] dt-bindings: soc: fsl: cpm_qe: convert network.txt to yaml
git bisect good ca35f2837927d73441cfb51174b824ae82a15f93
# bad: [a27afc7a6266f02703a6bd492e1f57e8d1ee069b] of: reserved_mem: Add code to dynamically allocate reserved_mem array
git bisect bad a27afc7a6266f02703a6bd492e1f57e8d1ee069b
# bad: [4be66e32070d1e8da72934dbe4dff44a49bd2e5f] of: reserved_mem: Restructure how the reserved memory regions are processed
git bisect bad 4be66e32070d1e8da72934dbe4dff44a49bd2e5f
# good: [d2a97be34548fc5643b4e9536ac8789d839f7374] scripts/dtc: Update to upstream version v1.7.0-95-gbcd02b523429
git bisect good d2a97be34548fc5643b4e9536ac8789d839f7374
# first bad commit: [4be66e32070d1e8da72934dbe4dff44a49bd2e5f] of: reserved_mem: Restructure how the reserved memory regions are processed

-- 
With Best Regards,
Andy Shevchenko
Re: [PATCH v7 0/2] Dynamic Allocation of the reserved_mem array
Posted by Oreoluwa Babatunde 1 year, 5 months ago
On 8/19/2024 10:23 AM, Andy Shevchenko wrote:
> On Fri, Aug 09, 2024 at 11:48:12AM -0700, Oreoluwa Babatunde wrote:

...

> This series (in particular the first patch) broke boot on Intel Meteor
> Lake-P. Taking Linux next of 20240819 with these being reverted makes
> things work again.
>
> Taking into account bisectability issue (that's how I noticed the issue
> in the first place) I think it would be nice to have no such patches at
> all in the respective subsystem tree. On my side I may help with testing
> whatever solution or next version provides.
Hi Andy,

I have re-uploaded another version of my patches.
Please can you help test it on your platform to confirm
that the issue is no longer present?
https://lore.kernel.org/all/20240830162857.2821502-1-quic_obabatun@quicinc.com/

Thank you!
Oreoluwa
Re: [PATCH v7 0/2] Dynamic Allocation of the reserved_mem array
Posted by Rob Herring 1 year, 5 months ago
On Mon, Aug 19, 2024 at 12:23 PM Andy Shevchenko
<andy@black.fi.intel.com> wrote:
>
> On Fri, Aug 09, 2024 at 11:48:12AM -0700, Oreoluwa Babatunde wrote:

...

> This series (in particular the first patch) broke boot on Intel Meteor
> Lake-P. Taking Linux next of 20240819 with these being reverted makes
> things work again.

Looks like this provides some detail:
https://lore.kernel.org/all/202408192157.8d8fe8a9-oliver.sang@intel.com/

I've dropped the patches for now.

> Taking into account bisectability issue (that's how I noticed the issue
> in the first place) I think it would be nice to have no such patches at
> all in the respective subsystem tree. On my side I may help with testing
> whatever solution or next version provides.

I don't follow what you are asking for. That the patches should be
bisectable? Well, yes, of course, but I don't typically verify that.
Patch 1 builds fine for me, so I'm not sure what issue you see.

Rob
Re: [PATCH v7 0/2] Dynamic Allocation of the reserved_mem array
Posted by Andy Shevchenko 1 year, 5 months ago
Mon, Aug 19, 2024 at 04:55:49PM -0500, Rob Herring wrote:
> On Mon, Aug 19, 2024 at 12:23 PM Andy Shevchenko
> <andy@black.fi.intel.com> wrote:

...

> > This series (in particular the first patch) broke boot on Intel Meteor
> > Lake-P. Taking Linux next of 20240819 with these being reverted makes
> > things work again.
> 
> Looks like this provides some detail:
> https://lore.kernel.org/all/202408192157.8d8fe8a9-oliver.sang@intel.com/
> 
> I've dropped the patches for now.

Thank you, that's what I have asked for!

> > Taking into account bisectability issue (that's how I noticed the issue
> > in the first place) I think it would be nice to have no such patches at
> > all in the respective subsystem tree. On my side I may help with testing
> > whatever solution or next version provides.
> 
> I don't follow what you are asking for? That the patches should be
> bisectable? Well, yes, of course, but I don't verify that typically.
> Patch 1 builds fine for m, so I'm not sure what issue you see.

There are two types of bisectability:
1) compile-time;
2) run-time.

People often forget about #2, and that's exactly what I'm complaining about.
While bisecting another issue, I stumbled over this one.

-- 
With Best Regards,
Andy Shevchenko


Re: [PATCH v7 0/2] Dynamic Allocation of the reserved_mem array
Posted by Klara Modin 1 year, 6 months ago
Hi,

On 2024-08-09 20:48, Oreoluwa Babatunde wrote:
> The reserved_mem array is used to store data for the different
> reserved memory regions defined in the DT of a device.

...

I did not see anything suspicious on my relevant machines with this 
patch series (Raspberry Pi 1 and 3, Edgerouter 6P).

Regards,
Tested-by: Klara Modin <klarasmodin@gmail.com>