[PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS

Jagdish Gediya posted 7 patches 4 years ago
Some systems (e.g. PowerVM) can have both DRAM-only (fast memory)
NUMA nodes, which are N_MEMORY, and slow-memory (persistent memory)
only NUMA nodes, which are also N_MEMORY. As the current demotion
target finding algorithm works based on N_MEMORY and best distance,
it will choose a DRAM-only NUMA node as the demotion target instead
of a persistent memory node on such systems. If the DRAM-only NUMA
node fills up with demoted pages, then at some point new allocations
can start falling back to persistent memory, so cold pages end up in
fast memory (due to demotion) while new pages land in slow memory.
This is why persistent memory nodes should be utilized for demotion,
and DRAM nodes should be avoided as demotion targets so that they
remain available for new allocations.

The current implementation works fine on systems where memory-only
NUMA nodes can only be persistent/slow memory, but it is not suitable
for systems like the ones mentioned above.

This patch series introduces a new node state, N_DEMOTION_TARGETS,
which is used to distinguish the nodes that can be used as demotion
targets; node_states[N_DEMOTION_TARGETS] holds the set of such
nodes.

The node state N_DEMOTION_TARGETS is also set from the dax kmem
driver. Certain types of memory which register through dax kmem
(e.g. HBM) may not be the right choice for demotion, so in the future
they should be distinguished based on certain attributes, and the dax
kmem driver should avoid setting them as N_DEMOTION_TARGETS. However,
the current implementation doesn't distinguish any such memory and
considers all N_MEMORY as demotion targets, so this patch series
doesn't modify the current behavior.

The command below can be used to view the available demotion targets
in the system:

$ cat /sys/devices/system/node/demotion_targets

This patch series sets N_DEMOTION_TARGETS from the dax kmem driver.
It is possible that some memory node desired as a demotion target is
not detected in the system through the dax kmem probe path. It is
also possible that some dax devices are not preferred as demotion
targets (e.g. HBM); for such devices, the node shouldn't be set in
N_DEMOTION_TARGETS. Support is therefore also added to set the
demotion target list from user space, so that the default behavior
can be overridden to manually add specific nodes to, or remove them
from, the demotion targets.

Override the demotion targets in the system (this sets
node_states[N_DEMOTION_TARGETS] in the kernel):
$ echo <node list> > /sys/devices/system/node/demotion_targets
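For user-space tooling, these sysfs files use the kernel's compact
node-list syntax (e.g. "0,2-4"). A small Python sketch for parsing and
producing that format (hypothetical helper names, not part of this
series):

```python
def parse_nodelist(s):
    """Parse a sysfs-style node list such as '0,2-4' into a set of ints."""
    nodes = set()
    s = s.strip()
    if not s:
        return nodes
    for part in s.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            nodes.update(range(int(lo), int(hi) + 1))
        else:
            nodes.add(int(part))
    return nodes

def format_nodelist(nodes):
    """Format a set of node ids back into the compact range syntax."""
    out, ns = [], sorted(nodes)
    i = 0
    while i < len(ns):
        j = i
        # Extend j while the ids stay consecutive.
        while j + 1 < len(ns) and ns[j + 1] == ns[j] + 1:
            j += 1
        out.append(str(ns[i]) if i == j else f"{ns[i]}-{ns[j]}")
        i = j + 1
    return ",".join(out)
```

With such helpers, a tool could read the current targets with
parse_nodelist(open("/sys/devices/system/node/demotion_targets").read())
and write format_nodelist(...) back to the same file (as root) to
override them.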

As node attributes under /sys/devices/system/node/ are read-only by
default, support is added to write node_states[] via sysfs so that
node_states[N_DEMOTION_TARGETS] can be modified from user space.

It is also helpful to know the per-node demotion target path prepared
by the kernel, to understand demotion behaviour during reclaim, so
this patch series also adds a
/sys/devices/system/node/nodeX/demotion_targets interface to view
per-node demotion targets via sysfs.

The current code which sets migration targets is modified in this
patch series to avoid some of the limitations on demotion target
sharing and to consider only N_DEMOTION_TARGETS nodes when finding
demotion targets.

Changelog
----------

v2:
In v1, only the 1st patch of this series was sent; it was implemented
to avoid some of the limitations on demotion target sharing. However,
for certain NUMA topologies, the demotion targets found by that patch
were not optimal, so the 1st patch in this series has been modified
according to suggestions from Huang and Baolin. Examples comparing
demotion lists between the existing and the changed implementation
can be found in the commit message of the 1st patch.

v3:
- Modify patch 1 subject to make it more specific
- Remove /sys/kernel/mm/numa/demotion_targets interface, use
  /sys/devices/system/node/demotion_targets instead and make
  it writable to override node_states[N_DEMOTION_TARGETS].
- Add support to view per node demotion targets via sysfs

Jagdish Gediya (7):
  mm: demotion: Fix demotion targets sharing among sources
  mm: demotion: Add new node state N_DEMOTION_TARGETS
  drivers/base/node: Add support to write node_states[] via sysfs
  device-dax/kmem: Set node state as N_DEMOTION_TARGETS
  mm: demotion: Build demotion list based on N_DEMOTION_TARGETS
  mm: demotion: expose per-node demotion targets via sysfs
  docs: numa: Add documentation for demotion

 Documentation/admin-guide/mm/index.rst        |  1 +
 .../admin-guide/mm/numa_demotion.rst          | 57 +++++++++++++++
 drivers/base/node.c                           | 70 ++++++++++++++++---
 drivers/dax/kmem.c                            |  2 +
 include/linux/migrate.h                       |  1 +
 include/linux/nodemask.h                      |  1 +
 mm/migrate.c                                  | 54 ++++++++++----
 7 files changed, 162 insertions(+), 24 deletions(-)
 create mode 100644 Documentation/admin-guide/mm/numa_demotion.rst

-- 
2.35.1
Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
Posted by ying.huang@intel.com 4 years ago
On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
> NUMA node which are N_MEMORY and slow memory(persistent memory)
> only NUMA node which are also N_MEMORY. As the current demotion
> target finding algorithm works based on N_MEMORY and best distance,
> it will choose DRAM only NUMA node as demotion target instead of
> persistent memory node on such systems. If DRAM only NUMA node is
> filled with demoted pages then at some point new allocations can
> start falling to persistent memory, so basically cold pages are in
> fast memor (due to demotion) and new pages are in slow memory, this
> is why persistent memory nodes should be utilized for demotion and
> dram node should be avoided for demotion so that they can be used
> for new allocations.
> 
> Current implementation can work fine on the system where the memory
> only numa nodes are possible only for persistent/slow memory but it
> is not suitable for the like of systems mentioned above.

Can you share the NUMA topology information of your machine?  And the
demotion order before and after your change?

Is it good to use the PMEM nodes as the demotion targets of the
DRAM-only node too?

Best Regards,
Huang, Ying

> [snip]


Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
Posted by Jagdish Gediya 4 years ago
On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:
> On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> > [snip]
> 
> Can you share the NUMA topology information of your machine?  And the
> demotion order before and after your change?
> 
> Whether it's good to use the PMEM nodes as the demotion targets of the
> DRAM-only node too?

$ numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 14272 MB
node 0 free: 13392 MB
node 1 cpus:
node 1 size: 2028 MB
node 1 free: 1971 MB
node distances:
node   0   1
  0:  10  40
  1:  40  10

1) Without the N_DEMOTION_TARGETS patch series, 1 is the demotion
   target for 0, even though 1 is a DRAM node, and there is no
   demotion target for 1.

$ cat /sys/bus/nd/devices/dax0.0/target_node
2
$
# cd /sys/bus/dax/drivers/
:/sys/bus/dax/drivers# ls
device_dax  kmem
:/sys/bus/dax/drivers# cd device_dax/
:/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
:/sys/bus/dax/drivers/device_dax# echo dax0.0 >  ../kmem/new_id
:/sys/bus/dax/drivers/device_dax# numactl -H
available: 3 nodes (0-2)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 14272 MB
node 0 free: 13380 MB
node 1 cpus:
node 1 size: 2028 MB
node 1 free: 1961 MB
node 2 cpus:
node 2 size: 0 MB
node 2 free: 0 MB
node distances:
node   0   1   2
  0:  10  40  80
  1:  40  10  80
  2:  80  80  10

2) Once this new node is brought online, without the
N_DEMOTION_TARGETS patch series, 1 is the demotion target for 0 and
2 is the demotion target for 1.

With this patch series applied:
1) There is no demotion target for either 0 or 1 before the dax
   device is online.
2) 2 is the demotion target for both 0 and 1 after the dax device is
   online.
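The before/after behaviour above can be modelled in a few lines of
user-space Python. This is only a toy sketch of "nearest node in
N_DEMOTION_TARGETS by SLIT distance", not the actual mm/migrate.c
logic:

```python
def find_demotion_target(src, distance, demotion_targets):
    """Return the nearest allowed demotion target for src, or None.

    distance: dict-of-dicts mirroring the 'node distances' SLIT table.
    demotion_targets: node ids in node_states[N_DEMOTION_TARGETS].
    """
    candidates = [n for n in demotion_targets if n != src]
    if not candidates:
        return None
    # Pick the candidate with the smallest SLIT distance from src.
    return min(candidates, key=lambda n: distance[src][n])

# SLIT table from the numactl -H output above
# (nodes 0 and 1 are DRAM, node 2 is the dax/kmem pmem node).
dist = {
    0: {0: 10, 1: 40, 2: 80},
    1: {0: 40, 1: 10, 2: 80},
    2: {0: 80, 1: 80, 2: 10},
}
```

With demotion_targets empty (dax device offline) no node gets a
target; with demotion_targets = {2}, both 0 and 1 demote to 2,
matching the results listed above.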

> [snip]
Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
Posted by ying.huang@intel.com 4 years ago
On Mon, 2022-04-25 at 16:45 +0530, Jagdish Gediya wrote:
> On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:
> > On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> > > [snip]
> > 
> > Can you share the NUMA topology information of your machine?  And the
> > demotion order before and after your change?
> > 
> > Whether it's good to use the PMEM nodes as the demotion targets of the
> > DRAM-only node too?
> 
> $ numactl -H
> available: 2 nodes (0-1)
> node 0 cpus: 0 1 2 3 4 5 6 7
> node 0 size: 14272 MB
> node 0 free: 13392 MB
> node 1 cpus:
> node 1 size: 2028 MB
> node 1 free: 1971 MB
> node distances:
> node   0   1
>   0:  10  40
>   1:  40  10
> 
> 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
>    for 0 even when 1 is DRAM node and there is no demotion targets for 1.
> 
> $ cat /sys/bus/nd/devices/dax0.0/target_node
> 2
> $
> # cd /sys/bus/dax/drivers/
> :/sys/bus/dax/drivers# ls
> device_dax  kmem
> :/sys/bus/dax/drivers# cd device_dax/
> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
> :/sys/bus/dax/drivers/device_dax# echo dax0.0 >  ../kmem/new_id
> :/sys/bus/dax/drivers/device_dax# numactl -H
> available: 3 nodes (0-2)
> node 0 cpus: 0 1 2 3 4 5 6 7
> node 0 size: 14272 MB
> node 0 free: 13380 MB
> node 1 cpus:
> node 1 size: 2028 MB
> node 1 free: 1961 MB
> node 2 cpus:
> node 2 size: 0 MB
> node 2 free: 0 MB
> node distances:
> node   0   1   2
>   0:  10  40  80
>   1:  40  10  80
>   2:  80  80  10
> 

This looks like a virtual machine, not a real machine.  That's
unfortunate.  I am looking forward to a real issue, not a
theoretically possible one.

> 2) Once this new node brought online,  without N_DEMOTION_TARGETS
> patch series, 1 is demotion target for 0 and 2 is demotion target
> for 1.
> 
> With this patch series applied,
> 1) No demotion target for either 0 or 1 before dax device is online
> 2) 2 is demotion target for both 0 and 1 after dax device is online.
> 

So with your change, if a node doesn't have N_DEMOTION_TARGETS, it
will become a top-tier demotion source even if it doesn't have N_CPU?
If so, I cannot clear N_DEMOTION_TARGETS for a node in the middle or
bottom tier?

Best Regards,
Huang, Ying

> > 
[snip]


Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
Posted by Jagdish Gediya 4 years ago
On Tue, Apr 26, 2022 at 03:55:36PM +0800, ying.huang@intel.com wrote:
> On Mon, 2022-04-25 at 16:45 +0530, Jagdish Gediya wrote:
> > On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:
> > > On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> > [snip]
> 
> > 2) Once this new node brought online,  without N_DEMOTION_TARGETS
> > patch series, 1 is demotion target for 0 and 2 is demotion target
> > for 1.
> > 
> > With this patch series applied,
> > 1) No demotion target for either 0 or 1 before dax device is online
> > 2) 2 is demotion target for both 0 and 1 after dax device is online.
> > 
> 
> So with your change, if a node doesn't have N_DEMOTION_TARGETS, it
> will become a top-tier demotion source even if it doesn't have N_CPU?
> If so, I cannot clear N_DEMOTION_TARGETS for a node in the middle or
> bottom tier?

Yes, a node that is only N_MEMORY also becomes a demotion source
because it is not N_DEMOTION_TARGETS. You can clear N_DEMOTION_TARGETS
from a middle or bottom node, but in that case, as the implementation
works based on passes, the cleared node will not be found as a
demotion target, and hence no demotion target will be found for it
either. But does it make sense to use faster persistent memory as a
demotion target while leaving slower persistent memory out of the
demotion list? If not, then this is not an issue.
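As a toy model of the pass-based construction described here (a
simplified sketch, not the kernel implementation): each pass maps the
current sources to their nearest still-unused N_DEMOTION_TARGETS
node, and those targets become the next pass's sources, so a node
cleared from N_DEMOTION_TARGETS simply drops out of the chain:

```python
def build_demotion_chain(nodes, distance, demotion_targets):
    """Toy multi-pass demotion-order builder (simplified model).

    Returns a {source: target} map; a node with no reachable target
    (e.g. one cleared from demotion_targets) gets no entry at all.
    """
    order = {}
    # With this series, every node that is not itself a demotion
    # target starts out as a top-tier source (even memory-only DRAM).
    sources = sorted(n for n in nodes if n not in demotion_targets)
    unused = set(demotion_targets)
    while sources and unused:
        next_sources = set()
        for src in sources:
            # Nearest not-yet-consumed target by SLIT distance.
            tgt = min(unused, key=lambda n: distance[src][n])
            order[src] = tgt
            next_sources.add(tgt)
        unused -= next_sources
        sources = sorted(next_sources)
    return order
```

On the topology discussed above (nodes 0/1 DRAM, node 2 pmem), with
demotion_targets = {2} both DRAM nodes chain to 2; with the set
empty, nothing demotes anywhere.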

> Best Regards,
> Huang, Ying
> 
> > > 
> [snip]
> 
> 
> 
Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
Posted by Aneesh Kumar K V 4 years ago
On 4/26/22 1:25 PM, ying.huang@intel.com wrote:
> On Mon, 2022-04-25 at 16:45 +0530, Jagdish Gediya wrote:
>> On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:
>>> On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
>>>> [snip]
> 
> This looks like a virtual machine, not a real machine.  That's
> unfortunate.  I am looking forward to a real issue, not a
> theoretically possible one.
> 

This is the source of confusion, I guess. A large class of ppc64
systems are virtualized. The firmware includes a hypervisor (PowerVM)
and the end user creates guests (aka LPARs) on them. That is the way
end users will use these systems. There is no baremetal access on
them (there is an OpenPower variant, but all new systems built by IBM
these days do have PowerVM on them).

So this is not a theoretical possibility.

-aneesh
Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
Posted by ying.huang@intel.com 4 years ago
On Tue, 2022-04-26 at 14:37 +0530, Aneesh Kumar K V wrote:
> On 4/26/22 1:25 PM, ying.huang@intel.com wrote:
> > On Mon, 2022-04-25 at 16:45 +0530, Jagdish Gediya wrote:
> > > On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:
> > > > On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> > > [snip]
> > 
> > This looks like a virtual machine, not a real machine.  That's
> > unfortunate.  I am looking forward to a real issue, not a theoretically
> > possible issue.
> > 
> 
> This is the source of confusion, I guess. A large class of ppc64 systems 
> are virtualized. The firmware includes a hypervisor (PowerVM) and the end 
> user creates guests (aka LPARs) on them. That is the way end users will 
> use these systems. There is no baremetal access on them (there is an 
> OpenPower variant, but all new systems built by IBM these days do have 
> PowerVM on them).
> 
> 
> So this is not a theoretical possibility.
> 

Now I get it.  Thanks for detailed explanation.

Best Regards,
Huang, Ying



Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
Posted by Jonathan Cameron 4 years ago
On Mon, 25 Apr 2022 16:45:38 +0530
Jagdish Gediya <jvgediya@linux.ibm.com> wrote:

> On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:
> > On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:  
> > > Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
> > > NUMA node which are N_MEMORY and slow memory(persistent memory)
> > > only NUMA node which are also N_MEMORY. As the current demotion
> > > target finding algorithm works based on N_MEMORY and best distance,
> > > it will choose DRAM only NUMA node as demotion target instead of
> > > persistent memory node on such systems. If DRAM only NUMA node is
> > > filled with demoted pages then at some point new allocations can
> > > start falling to persistent memory, so basically cold pages are in
> > > fast memory (due to demotion) and new pages are in slow memory, this
> > > is why persistent memory nodes should be utilized for demotion and
> > > dram node should be avoided for demotion so that they can be used
> > > for new allocations.
> > > 
> > > Current implementation can work fine on the system where the memory
> > > only numa nodes are possible only for persistent/slow memory but it
> > > is not suitable for the like of systems mentioned above.  
> > 
> > Can you share the NUMA topology information of your machine?  And the
> > demotion order before and after your change?
> > 
> > Whether it's good to use the PMEM nodes as the demotion targets of the
> > DRAM-only node too?  
> 
> $ numactl -H
> available: 2 nodes (0-1)
> node 0 cpus: 0 1 2 3 4 5 6 7
> node 0 size: 14272 MB
> node 0 free: 13392 MB
> node 1 cpus:
> node 1 size: 2028 MB
> node 1 free: 1971 MB
> node distances:
> node   0   1
>   0:  10  40
>   1:  40  10
> 
> 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
>    for 0 even when 1 is DRAM node and there is no demotion targets for 1.

I'm not convinced the distinction between DRAM and persistent memory is
valid. There will definitely be systems with a large pool
of remote DRAM (and potentially no NV memory) where the right choice
is to demote to that DRAM pool.

Basing the decision on whether the memory is from kmem or
normal DRAM doesn't provide sufficient information to make that choice.

> 
> $ cat /sys/bus/nd/devices/dax0.0/target_node
> 2
> $
> # cd /sys/bus/dax/drivers/
> :/sys/bus/dax/drivers# ls
> device_dax  kmem
> :/sys/bus/dax/drivers# cd device_dax/
> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
> :/sys/bus/dax/drivers/device_dax# echo dax0.0 >  ../kmem/new_id
> :/sys/bus/dax/drivers/device_dax# numactl -H
> available: 3 nodes (0-2)
> node 0 cpus: 0 1 2 3 4 5 6 7
> node 0 size: 14272 MB
> node 0 free: 13380 MB
> node 1 cpus:
> node 1 size: 2028 MB
> node 1 free: 1961 MB
> node 2 cpus:
> node 2 size: 0 MB
> node 2 free: 0 MB
> node distances:
> node   0   1   2
>   0:  10  40  80
>   1:  40  10  80
>   2:  80  80  10
> 
> 2) Once this new node brought online,  without N_DEMOTION_TARGETS
> patch series, 1 is demotion target for 0 and 2 is demotion target
> for 1.
> 
> With this patch series applied,
> 1) No demotion target for either 0 or 1 before dax device is online

I'd argue that is wrong.  At this stage you have a tiered memory system,
albeit one with just DRAM.  Using it as such is correct behavior that
we should not be preventing.  Sure, some use cases wouldn't want that
arrangement, but some do want it.

For your case we could add a heuristic along the lines of "the demotion
target should be at least as big as the starting point", but that would
be a bit hacky.
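The size heuristic above can be sketched in a few lines. This is only an illustration, not kernel code: the node sizes and distances come from the `numactl -H` output earlier in the thread, except node 2's size, which is hypothetical (the dax node reports 0 MB).

```python
# Hypothetical sketch of the size heuristic: a node qualifies as a
# demotion target for a source only if it is at least as big as the
# source; among qualifying nodes, the closest (by NUMA distance) wins.

def pick_demotion_target(src, sizes_mb, distances):
    """Return the nearest node at least as large as `src`, or None."""
    candidates = [n for n in sizes_mb
                  if n != src and sizes_mb[n] >= sizes_mb[src]]
    if not candidates:
        return None
    return min(candidates, key=lambda n: distances[src][n])

sizes = {0: 14272, 1: 2028, 2: 16384}   # node 2 size is made up here
dist = {0: {0: 10, 1: 40, 2: 80},
        1: {0: 40, 1: 10, 2: 80},
        2: {0: 80, 1: 80, 2: 10}}

print(pick_demotion_target(0, sizes, dist))  # -> 2 (only node 2 is big enough)
print(pick_demotion_target(1, sizes, dist))  # -> 0 (closer than node 2)
```

Note that node 1 ends up demoting to node 0 here, which is exactly why the heuristic is hacky: size alone says nothing about which memory is fast.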

Jonathan

> 2) 2 is demotion target for both 0 and 1 after dax device is online.
> 
> > Best Regards,
> > Huang, Ying
> >   
> > > This patch series introduces the new node state N_DEMOTION_TARGETS,
> > > which is used to distinguish the nodes which can be used as demotion
> > > targets, node_states[N_DEMOTION_TARGETS] is used to hold the list of
> > > nodes which can be used as demotion targets.
> > > 
> > > node state N_DEMOTION_TARGETS is also set from the dax kmem driver,
> > > certain type of memory which registers through dax kmem (e.g. HBM)
> > > may not be the right choices for demotion so in future they should
> > > be distinguished based on certain attributes and dax kmem driver
> > > should avoid setting them as N_DEMOTION_TARGETS, however current
> > > implementation also doesn't distinguish any  such memory and it
> > > considers all N_MEMORY as demotion targets so this patch series
> > > doesn't modify the current behavior.
> > > 
> > > below command can be used to view the available demotion targets in
> > > the system,
> > > 
> > > $ cat /sys/devices/system/node/demotion_targets
> > > 
> > > This patch series sets N_DEMOTION_TARGET from dax device kmem driver,
> > > It may be possible that some memory node desired as demotion target
> > > is not detected in the system from dax-device kmem probe path. It is
> > > also possible that some of the dax-devices are not preferred as
> > > demotion target e.g. HBM, for such devices, node shouldn't be set to
> > > N_DEMOTION_TARGETS, so support is also added to set the demotion
> > > target list from user space so that default behavior can be overridden
> > > to avoid or add specific node to demotion targets manually.
> > > 
> > > Override the demotion targets in the system (which sets the
> > > node_states[N_DEMOTION_TARGETS] in kernel),
> > > $ echo <node list> > /sys/devices/system/node/demotion_targets
> > > 
> > > As by default node attributes under /sys/devices/system/node/ are read-
> > > only, support is added to write node_states[] via sysfs so that
> > > node_states[N_DEMOTION_TARGETS] can be modified from user space via
> > > sysfs.
> > > 
> > > It is also helpful to know per node demotion target path prepared by
> > > kernel to understand the demotion behaviour during reclaim, so this
> > > patch series also adds a /sys/devices/system/node/nodeX/demotion_targets
> > > interface to view per-node demotion targets via sysfs.
> > > 
> > > Current code which sets migration targets is modified in
> > > this patch series to avoid some of the limitations on the demotion
> > > target sharing and to use N_DEMOTION_TARGETS only nodes while
> > > finding demotion targets.
> > > 
> > > Changelog
> > > ----------
> > > 
> > > v2:
> > > In v1, only 1st patch of this patch series was sent, which was
> > > implemented to avoid some of the limitations on the demotion
> > > target sharing, however for certain numa topology, the demotion
> > > targets found by that patch was not most optimal, so 1st patch
> > > in this series is modified according to suggestions from Huang
> > > and Baolin. Different examples of demotion list comparison
> > > between existing implementation and changed implementation can
> > > be found in the commit message of 1st patch.
> > > 
> > > v3:
> > > - Modify patch 1 subject to make it more specific
> > > - Remove /sys/kernel/mm/numa/demotion_targets interface, use
> > >   /sys/devices/system/node/demotion_targets instead and make
> > >   it writable to override node_states[N_DEMOTION_TARGETS].
> > > - Add support to view per node demotion targets via sysfs
> > > 
> > > Jagdish Gediya (7):
> > >   mm: demotion: Fix demotion targets sharing among sources
> > >   mm: demotion: Add new node state N_DEMOTION_TARGETS
> > >   drivers/base/node: Add support to write node_states[] via sysfs
> > >   device-dax/kmem: Set node state as N_DEMOTION_TARGETS
> > >   mm: demotion: Build demotion list based on N_DEMOTION_TARGETS
> > >   mm: demotion: expose per-node demotion targets via sysfs
> > >   docs: numa: Add documentation for demotion
> > > 
> > >  Documentation/admin-guide/mm/index.rst        |  1 +
> > >  .../admin-guide/mm/numa_demotion.rst          | 57 +++++++++++++++
> > >  drivers/base/node.c                           | 70 ++++++++++++++++---
> > >  drivers/dax/kmem.c                            |  2 +
> > >  include/linux/migrate.h                       |  1 +
> > >  include/linux/nodemask.h                      |  1 +
> > >  mm/migrate.c                                  | 54 ++++++++++----
> > >  7 files changed, 162 insertions(+), 24 deletions(-)
> > >  create mode 100644 Documentation/admin-guide/mm/numa_demotion.rst
> > >   
> > 
> > 
> >   
> 
Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
Posted by Aneesh Kumar K V 4 years ago
On 4/25/22 7:27 PM, Jonathan Cameron wrote:
> On Mon, 25 Apr 2022 16:45:38 +0530
> Jagdish Gediya <jvgediya@linux.ibm.com> wrote:
>

....

>> $ numactl -H
>> available: 2 nodes (0-1)
>> node 0 cpus: 0 1 2 3 4 5 6 7
>> node 0 size: 14272 MB
>> node 0 free: 13392 MB
>> node 1 cpus:
>> node 1 size: 2028 MB
>> node 1 free: 1971 MB
>> node distances:
>> node   0   1
>>    0:  10  40
>>    1:  40  10
>>
>> 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
>>     for 0 even when 1 is DRAM node and there is no demotion targets for 1.
> 
> I'm not convinced the distinction between DRAM and persistent memory is
> valid. There will definitely be systems with a large pool
> of remote DRAM (and potentially no NV memory) where the right choice
> is to demote to that DRAM pool.
> 
> Basing the decision on whether the memory is from kmem or
> normal DRAM doesn't provide sufficient information to make the decision.
> 
>>
>> $ cat /sys/bus/nd/devices/dax0.0/target_node
>> 2
>> $
>> # cd /sys/bus/dax/drivers/
>> :/sys/bus/dax/drivers# ls
>> device_dax  kmem
>> :/sys/bus/dax/drivers# cd device_dax/
>> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
>> :/sys/bus/dax/drivers/device_dax# echo dax0.0 >  ../kmem/new_id
>> :/sys/bus/dax/drivers/device_dax# numactl -H
>> available: 3 nodes (0-2)
>> node 0 cpus: 0 1 2 3 4 5 6 7
>> node 0 size: 14272 MB
>> node 0 free: 13380 MB
>> node 1 cpus:
>> node 1 size: 2028 MB
>> node 1 free: 1961 MB
>> node 2 cpus:
>> node 2 size: 0 MB
>> node 2 free: 0 MB
>> node distances:
>> node   0   1   2
>>    0:  10  40  80
>>    1:  40  10  80
>>    2:  80  80  10
>>
>> 2) Once this new node brought online,  without N_DEMOTION_TARGETS
>> patch series, 1 is demotion target for 0 and 2 is demotion target
>> for 1.
>>
>> With this patch series applied,
>> 1) No demotion target for either 0 or 1 before dax device is online
> 
> I'd argue that is wrong.  At this state you have a tiered memory system
> be it one with just DRAM.  Using it as such is correct behavior that
> we should not be preventing.  Sure some usecases wouldn't want that
> arrangement but some do want it.
> 

I missed this in my earlier reply. Are you suggesting that we would want 
Node 1 (a DRAM-only memory NUMA node) to act as a demotion target for Node 
0?  Any reason why we would want to do that? That is clearly the opposite 
of what we are trying to do here. IMHO not using Node 1 as a demotion 
target for Node 0 is a better default?



-aneesh
Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
Posted by Jonathan Cameron 4 years ago
On Mon, 25 Apr 2022 20:23:56 +0530
Aneesh Kumar K V <aneesh.kumar@linux.ibm.com> wrote:

> On 4/25/22 7:27 PM, Jonathan Cameron wrote:
> > On Mon, 25 Apr 2022 16:45:38 +0530
> > Jagdish Gediya <jvgediya@linux.ibm.com> wrote:
> >  
> 
> ....
> 
> >> $ numactl -H
> >> available: 2 nodes (0-1)
> >> node 0 cpus: 0 1 2 3 4 5 6 7
> >> node 0 size: 14272 MB
> >> node 0 free: 13392 MB
> >> node 1 cpus:
> >> node 1 size: 2028 MB
> >> node 1 free: 1971 MB
> >> node distances:
> >> node   0   1
> >>    0:  10  40
> >>    1:  40  10
> >>
> >> 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
> >>     for 0 even when 1 is DRAM node and there is no demotion targets for 1.  
> > 
> > I'm not convinced the distinction between DRAM and persistent memory is
> > valid. There will definitely be systems with a large pool
> > of remote DRAM (and potentially no NV memory) where the right choice
> > is to demote to that DRAM pool.
> > 
> > Basing the decision on whether the memory is from kmem or
> > normal DRAM doesn't provide sufficient information to make the decision.
> >   
> >>
> >> $ cat /sys/bus/nd/devices/dax0.0/target_node
> >> 2
> >> $
> >> # cd /sys/bus/dax/drivers/
> >> :/sys/bus/dax/drivers# ls
> >> device_dax  kmem
> >> :/sys/bus/dax/drivers# cd device_dax/
> >> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
> >> :/sys/bus/dax/drivers/device_dax# echo dax0.0 >  ../kmem/new_id
> >> :/sys/bus/dax/drivers/device_dax# numactl -H
> >> available: 3 nodes (0-2)
> >> node 0 cpus: 0 1 2 3 4 5 6 7
> >> node 0 size: 14272 MB
> >> node 0 free: 13380 MB
> >> node 1 cpus:
> >> node 1 size: 2028 MB
> >> node 1 free: 1961 MB
> >> node 2 cpus:
> >> node 2 size: 0 MB
> >> node 2 free: 0 MB
> >> node distances:
> >> node   0   1   2
> >>    0:  10  40  80
> >>    1:  40  10  80
> >>    2:  80  80  10
> >>
> >> 2) Once this new node brought online,  without N_DEMOTION_TARGETS
> >> patch series, 1 is demotion target for 0 and 2 is demotion target
> >> for 1.
> >>
> >> With this patch series applied,
> >> 1) No demotion target for either 0 or 1 before dax device is online  
> > 
> > I'd argue that is wrong.  At this state you have a tiered memory system
> > be it one with just DRAM.  Using it as such is correct behavior that
> > we should not be preventing.  Sure some usecases wouldn't want that
> > arrangement but some do want it.
> >   
> 
> I missed this in my earlier reply. Are you suggesting that we would want 
> Node 1 (DRAM only memory numa node) to act as demotion target for Node 
> 0?  Any reason why we would want to do that? That is clearly opposite of 
> what we are trying to do here. IMHO node using Node1 as demotion target 
> for Node0 is a better default?

In this case, because of the small size, that probably wouldn't make sense.
But if that were a CXL memory pool with multiple TB of DDR, then yes,
we would want the default case to use that memory for the demotion path.

So I don't think DDR vs NV via kmem alone is the right basis for a decision
on the default behavior.

Sure we can make this all a userspace problem.

Jonathan

> 
> 
> 
> -aneesh
Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
Posted by Aneesh Kumar K V 4 years ago
On 4/25/22 7:27 PM, Jonathan Cameron wrote:
> On Mon, 25 Apr 2022 16:45:38 +0530
> Jagdish Gediya <jvgediya@linux.ibm.com> wrote:
> 
>> On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:
>>> On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
>>>> Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
>>>> NUMA node which are N_MEMORY and slow memory(persistent memory)
>>>> only NUMA node which are also N_MEMORY. As the current demotion
>>>> target finding algorithm works based on N_MEMORY and best distance,
>>>> it will choose DRAM only NUMA node as demotion target instead of
>>>> persistent memory node on such systems. If DRAM only NUMA node is
>>>> filled with demoted pages then at some point new allocations can
>>>> start falling to persistent memory, so basically cold pages are in
>>>>>> fast memory (due to demotion) and new pages are in slow memory, this
>>>> is why persistent memory nodes should be utilized for demotion and
>>>> dram node should be avoided for demotion so that they can be used
>>>> for new allocations.
>>>>
>>>> Current implementation can work fine on the system where the memory
>>>> only numa nodes are possible only for persistent/slow memory but it
>>>> is not suitable for the like of systems mentioned above.
>>>
>>> Can you share the NUMA topology information of your machine?  And the
>>> demotion order before and after your change?
>>>
>>> Whether it's good to use the PMEM nodes as the demotion targets of the
>>> DRAM-only node too?
>>
>> $ numactl -H
>> available: 2 nodes (0-1)
>> node 0 cpus: 0 1 2 3 4 5 6 7
>> node 0 size: 14272 MB
>> node 0 free: 13392 MB
>> node 1 cpus:
>> node 1 size: 2028 MB
>> node 1 free: 1971 MB
>> node distances:
>> node   0   1
>>    0:  10  40
>>    1:  40  10
>>
>> 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
>>     for 0 even when 1 is DRAM node and there is no demotion targets for 1.
> 
> I'm not convinced the distinction between DRAM and persistent memory is
> valid. There will definitely be systems with a large pool
> of remote DRAM (and potentially no NV memory) where the right choice
> is to demote to that DRAM pool.
> 
> Basing the decision on whether the memory is from kmem or
> normal DRAM doesn't provide sufficient information to make the decision.
> 

Hence the suggestion for the ability to override this from userspace. 
Now, for example, we could build a system with memory from a remote 
machine (memory inception in the case of Power, which will mostly be 
plugged in as regular hotpluggable memory) and a slow CXL memory or 
OpenCAPI memory.

In the former case, we won't consider that for demotion with this series 
because that is not instantiated via dax kmem. So yes definitely we 
would need the ability to override this from userspace so that we could 
put these remote memory NUMA nodes as demotion targets if we want.

>>
>> $ cat /sys/bus/nd/devices/dax0.0/target_node
>> 2
>> $
>> # cd /sys/bus/dax/drivers/
>> :/sys/bus/dax/drivers# ls
>> device_dax  kmem
>> :/sys/bus/dax/drivers# cd device_dax/
>> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
>> :/sys/bus/dax/drivers/device_dax# echo dax0.0 >  ../kmem/new_id
>> :/sys/bus/dax/drivers/device_dax# numactl -H
>> available: 3 nodes (0-2)
>> node 0 cpus: 0 1 2 3 4 5 6 7
>> node 0 size: 14272 MB
>> node 0 free: 13380 MB
>> node 1 cpus:
>> node 1 size: 2028 MB
>> node 1 free: 1961 MB
>> node 2 cpus:
>> node 2 size: 0 MB
>> node 2 free: 0 MB
>> node distances:
>> node   0   1   2
>>    0:  10  40  80
>>    1:  40  10  80
>>    2:  80  80  10
>>
>> 2) Once this new node brought online,  without N_DEMOTION_TARGETS
>> patch series, 1 is demotion target for 0 and 2 is demotion target
>> for 1.
>>
>> With this patch series applied,
>> 1) No demotion target for either 0 or 1 before dax device is online
> 
> I'd argue that is wrong.  At this state you have a tiered memory system
> be it one with just DRAM.  Using it as such is correct behavior that
> we should not be preventing.  Sure some usecases wouldn't want that
> arrangement but some do want it.
> 
> For your case we could add a heuristic along the lines of the demotion
> target should be at least as big as the starting point but that would
> be a bit hacky.
> 

Hence the proposal to do a per-node demotion target override with the 
semantics that I explained here


https://lore.kernel.org/linux-mm/8735i1zurt.fsf@linux.ibm.com/

Let me know if that interface would be good to handle all the possible 
demotion target configs we would want to have.
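For reference, such an override file would take the usual Linux "N,M-K" node-list syntax (the kernel side would parse the write with nodelist_parse()). A userspace-side sketch with a hypothetical parse_nodelist() helper, just to make the accepted values concrete:

```python
# Hypothetical userspace-side parser for the "N,M-K" node-list format,
# i.e. the kind of string one would echo into the proposed
# /sys/devices/system/node/demotion_targets file. Illustration only;
# the kernel uses its own nodelist_parse().

def parse_nodelist(s):
    """Parse a Linux-style node list such as '0,2-4' into a set of ints."""
    nodes = set()
    for part in s.strip().split(','):
        if not part:
            continue
        if '-' in part:
            lo, hi = part.split('-')
            nodes.update(range(int(lo), int(hi) + 1))
        else:
            nodes.add(int(part))
    return nodes

print(parse_nodelist('2'))      # e.g. make only the kmem node a target
print(parse_nodelist('0,2-4'))
```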

-aneesh
Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
Posted by ying.huang@intel.com 4 years ago
On Mon, 2022-04-25 at 20:14 +0530, Aneesh Kumar K V wrote:
> On 4/25/22 7:27 PM, Jonathan Cameron wrote:
> > On Mon, 25 Apr 2022 16:45:38 +0530
> > Jagdish Gediya <jvgediya@linux.ibm.com> wrote:
> > 
> > > On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:
> > > > On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> > > > > Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
> > > > > NUMA node which are N_MEMORY and slow memory(persistent memory)
> > > > > only NUMA node which are also N_MEMORY. As the current demotion
> > > > > target finding algorithm works based on N_MEMORY and best distance,
> > > > > it will choose DRAM only NUMA node as demotion target instead of
> > > > > persistent memory node on such systems. If DRAM only NUMA node is
> > > > > filled with demoted pages then at some point new allocations can
> > > > > start falling to persistent memory, so basically cold pages are in
> > > > > fast memory (due to demotion) and new pages are in slow memory, this
> > > > > is why persistent memory nodes should be utilized for demotion and
> > > > > dram node should be avoided for demotion so that they can be used
> > > > > for new allocations.
> > > > > 
> > > > > Current implementation can work fine on the system where the memory
> > > > > only numa nodes are possible only for persistent/slow memory but it
> > > > > is not suitable for the like of systems mentioned above.
> > > > 
> > > > Can you share the NUMA topology information of your machine?  And the
> > > > demotion order before and after your change?
> > > > 
> > > > Whether it's good to use the PMEM nodes as the demotion targets of the
> > > > DRAM-only node too?
> > > 
> > > $ numactl -H
> > > available: 2 nodes (0-1)
> > > node 0 cpus: 0 1 2 3 4 5 6 7
> > > node 0 size: 14272 MB
> > > node 0 free: 13392 MB
> > > node 1 cpus:
> > > node 1 size: 2028 MB
> > > node 1 free: 1971 MB
> > > node distances:
> > > node   0   1
> > >    0:  10  40
> > >    1:  40  10
> > > 
> > > 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
> > >     for 0 even when 1 is DRAM node and there is no demotion targets for 1.
> > 
> > I'm not convinced the distinction between DRAM and persistent memory is
> > valid. There will definitely be systems with a large pool
> > of remote DRAM (and potentially no NV memory) where the right choice
> > is to demote to that DRAM pool.
> > 
> > Basing the decision on whether the memory is from kmem or
> > normal DRAM doesn't provide sufficient information to make the decision.
> > 
> 
> Hence the suggestion for the ability to override this from userspace. 
> Now, for example, we could build a system with memory from the remote 
> machine (memory inception in case of power which will mostly be plugged 
> in as regular hotpluggable memory ) and a slow CXL memory or OpenCAPI 
> memory.
> 
> In the former case, we won't consider that for demotion with this series 
> because that is not instantiated via dax kmem. So yes definitely we 
> would need the ability to override this from userspace so that we could 
> put these remote memory NUMA nodes as demotion targets if we want.
> > > 

Is there a driver for the device (memory from the remote machine)?  If
so, we can adjust demotion order for it in the driver.

In general, I think that we can adjust demotion order inside kernel from
various information sources.  In addition to ACPI SLIT, we also have
HMAT, kmem driver, other drivers, etc.
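As an illustration of the in-kernel selection this series performs, here is a sketch (not the kernel implementation) that, for each source node, picks the nearest member of the demotion-target set by SLIT distance, using the 3-node topology shown earlier in the thread:

```python
# Sketch of distance-based demotion-target selection restricted to a
# demotion-target set (what N_DEMOTION_TARGETS provides). Illustrative
# only; mirrors the effect of the algorithm, not the kernel code.

def demotion_order(distances, demotion_targets):
    """Map each source node to its nearest demotion target (or None)."""
    order = {}
    for src in distances:
        candidates = [n for n in demotion_targets if n != src]
        if candidates:
            order[src] = min(candidates, key=lambda n: distances[src][n])
        else:
            order[src] = None
    return order

dist = {0: {0: 10, 1: 40, 2: 80},
        1: {0: 40, 1: 10, 2: 80},
        2: {0: 80, 1: 80, 2: 10}}

print(demotion_order(dist, {2}))  # -> {0: 2, 1: 2, 2: None}
```

With the target set {2}, both DRAM nodes demote to the kmem node and node 2 has no target, which matches the behavior described in the cover letter.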

> > 
Best Regards,
Huang, Ying

Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
Posted by Aneesh Kumar K V 4 years ago
On 4/27/22 6:59 AM, ying.huang@intel.com wrote:
> On Mon, 2022-04-25 at 20:14 +0530, Aneesh Kumar K V wrote:
>> On 4/25/22 7:27 PM, Jonathan Cameron wrote:
>>> On Mon, 25 Apr 2022 16:45:38 +0530
>>> Jagdish Gediya <jvgediya@linux.ibm.com> wrote:
>>>
>>>> On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:
>>>>> On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
>>>>>> Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
>>>>>> NUMA node which are N_MEMORY and slow memory(persistent memory)
>>>>>> only NUMA node which are also N_MEMORY. As the current demotion
>>>>>> target finding algorithm works based on N_MEMORY and best distance,
>>>>>> it will choose DRAM only NUMA node as demotion target instead of
>>>>>> persistent memory node on such systems. If DRAM only NUMA node is
>>>>>> filled with demoted pages then at some point new allocations can
>>>>>> start falling to persistent memory, so basically cold pages are in
>>>>>> fast memory (due to demotion) and new pages are in slow memory, this
>>>>>> is why persistent memory nodes should be utilized for demotion and
>>>>>> dram node should be avoided for demotion so that they can be used
>>>>>> for new allocations.
>>>>>>
>>>>>> Current implementation can work fine on the system where the memory
>>>>>> only numa nodes are possible only for persistent/slow memory but it
>>>>>> is not suitable for the like of systems mentioned above.
>>>>>
>>>>> Can you share the NUMA topology information of your machine?  And the
>>>>> demotion order before and after your change?
>>>>>
>>>>> Whether it's good to use the PMEM nodes as the demotion targets of the
>>>>> DRAM-only node too?
>>>>
>>>> $ numactl -H
>>>> available: 2 nodes (0-1)
>>>> node 0 cpus: 0 1 2 3 4 5 6 7
>>>> node 0 size: 14272 MB
>>>> node 0 free: 13392 MB
>>>> node 1 cpus:
>>>> node 1 size: 2028 MB
>>>> node 1 free: 1971 MB
>>>> node distances:
>>>> node   0   1
>>>>     0:  10  40
>>>>     1:  40  10
>>>>
>>>> 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
>>>>      for 0 even when 1 is DRAM node and there is no demotion targets for 1.
>>>
>>> I'm not convinced the distinction between DRAM and persistent memory is
>>> valid. There will definitely be systems with a large pool
>>> of remote DRAM (and potentially no NV memory) where the right choice
>>> is to demote to that DRAM pool.
>>>
>>> Basing the decision on whether the memory is from kmem or
>>> normal DRAM doesn't provide sufficient information to make the decision.
>>>
>>
>> Hence the suggestion for the ability to override this from userspace.
>> Now, for example, we could build a system with memory from the remote
>> machine (memory inception in case of power which will mostly be plugged
>> in as regular hotpluggable memory ) and a slow CXL memory or OpenCAPI
>> memory.
>>
>> In the former case, we won't consider that for demotion with this series
>> because that is not instantiated via dax kmem. So yes definitely we
>> would need the ability to override this from userspace so that we could
>> put these remote memory NUMA nodes as demotion targets if we want.
>>>>
> 
> Is there a driver for the device (memory from the remote machine)?  If
> so, we can adjust demotion order for it in the driver.
> 

At this point, it is managed by the hypervisor and is hotplugged into the 
LPAR with additional properties specified via device tree. So there 
is no inception-specific device driver.

> In general, I think that we can adjust demotion order inside kernel from
> various information sources.  In addition to ACPI SLIT, we also have
> HMAT, kmem driver, other drivers, etc.
> 

Managing inception memory will anyway require a userspace component to 
track the owner machine for the remote memory. So we should be OK to 
have userspace manage the demotion order.

-aneesh

Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
Posted by ying.huang@intel.com 4 years ago
On Wed, 2022-04-27 at 08:27 +0530, Aneesh Kumar K V wrote:
> On 4/27/22 6:59 AM, ying.huang@intel.com wrote:
> > On Mon, 2022-04-25 at 20:14 +0530, Aneesh Kumar K V wrote:
> > > On 4/25/22 7:27 PM, Jonathan Cameron wrote:
> > > > On Mon, 25 Apr 2022 16:45:38 +0530
> > > > Jagdish Gediya <jvgediya@linux.ibm.com> wrote:
> > > > 
> > > > > On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:
> > > > > > On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:
> > > > > > > Some systems(e.g. PowerVM) can have both DRAM(fast memory) only
> > > > > > > NUMA node which are N_MEMORY and slow memory(persistent memory)
> > > > > > > only NUMA node which are also N_MEMORY. As the current demotion
> > > > > > > target finding algorithm works based on N_MEMORY and best distance,
> > > > > > > it will choose DRAM only NUMA node as demotion target instead of
> > > > > > > persistent memory node on such systems. If DRAM only NUMA node is
> > > > > > > filled with demoted pages then at some point new allocations can
> > > > > > > start falling to persistent memory, so basically cold pages are in
> > > > > > > fast memory (due to demotion) and new pages are in slow memory, this
> > > > > > > is why persistent memory nodes should be utilized for demotion and
> > > > > > > dram node should be avoided for demotion so that they can be used
> > > > > > > for new allocations.
> > > > > > > 
> > > > > > > Current implementation can work fine on the system where the memory
> > > > > > > only numa nodes are possible only for persistent/slow memory but it
> > > > > > > is not suitable for the like of systems mentioned above.
> > > > > > 
> > > > > > Can you share the NUMA topology information of your machine?  And the
> > > > > > demotion order before and after your change?
> > > > > > 
> > > > > > Whether it's good to use the PMEM nodes as the demotion targets of the
> > > > > > DRAM-only node too?
> > > > > 
> > > > > $ numactl -H
> > > > > available: 2 nodes (0-1)
> > > > > node 0 cpus: 0 1 2 3 4 5 6 7
> > > > > node 0 size: 14272 MB
> > > > > node 0 free: 13392 MB
> > > > > node 1 cpus:
> > > > > node 1 size: 2028 MB
> > > > > node 1 free: 1971 MB
> > > > > node distances:
> > > > > node   0   1
> > > > >     0:  10  40
> > > > >     1:  40  10
> > > > > 
> > > > > 1) without N_DEMOTION_TARGETS patch series, 1 is demotion target
> > > > >      for 0 even when 1 is DRAM node and there is no demotion targets for 1.
> > > > 
> > > > I'm not convinced the distinction between DRAM and persistent memory is
> > > > valid. There will definitely be systems with a large pool
> > > > of remote DRAM (and potentially no NV memory) where the right choice
> > > > is to demote to that DRAM pool.
> > > > 
> > > > Basing the decision on whether the memory is from kmem or
> > > > normal DRAM doesn't provide sufficient information to make the decision.
> > > > 
> > > 
> > > Hence the suggestion for the ability to override this from userspace.
> > > Now, for example, we could build a system with memory from the remote
> > > machine (memory inception in case of power which will mostly be plugged
> > > in as regular hotpluggable memory ) and a slow CXL memory or OpenCAPI
> > > memory.
> > > 
> > > In the former case, we won't consider that for demotion with this series
> > > because that is not instantiated via dax kmem. So yes definitely we
> > > would need the ability to override this from userspace so that we could
> > > put these remote memory NUMA nodes as demotion targets if we want.
> > > > > 
> > 
> > Is there a driver for the device (memory from the remote machine)?  If
> > so, we can adjust demotion order for it in the driver.
> > 
> 
> At this point, it is managed by hypervisor, is hotplugged into the the 
> LPAR with more additional properties specified via device tree. So there 
> is no inception specific device driver.

Because there's information in device tree, I still think it's doable in
the kernel.  But it's up to you to choose the appropriate way.

Best Regards,
Huang, Ying

> > In general, I think that we can adjust demotion order inside kernel from
> > various information sources.  In addition to ACPI SLIT, we also have
> > HMAT, kmem driver, other drivers, etc.
> > 
> 
> Managing inception memory will any way requires a userspace component to 
> track the owner machine for the remote memory. So we should be ok to 
> have userspace manage demotion order.
> 
> -aneesh
> 


Re: [PATCH v3 0/7] mm: demotion: Introduce new node state N_DEMOTION_TARGETS
Posted by Jonathan Cameron 4 years ago
On Mon, 25 Apr 2022 20:14:58 +0530
Aneesh Kumar K V <aneesh.kumar@linux.ibm.com> wrote:

> On 4/25/22 7:27 PM, Jonathan Cameron wrote:
> > On Mon, 25 Apr 2022 16:45:38 +0530
> > Jagdish Gediya <jvgediya@linux.ibm.com> wrote:
> >   
> >> On Sun, Apr 24, 2022 at 11:19:53AM +0800, ying.huang@intel.com wrote:  
> >>> On Sat, 2022-04-23 at 01:25 +0530, Jagdish Gediya wrote:  
> >>>> Some systems (e.g. PowerVM) can have both DRAM (fast memory) only
> >>>> NUMA nodes which are N_MEMORY and slow memory (persistent memory)
> >>>> only NUMA nodes which are also N_MEMORY. As the current demotion
> >>>> target selection algorithm works based on N_MEMORY and best
> >>>> distance, it will choose a DRAM-only NUMA node as the demotion
> >>>> target instead of a persistent memory node on such systems. If the
> >>>> DRAM-only NUMA node fills up with demoted pages, then at some point
> >>>> new allocations can start falling back to persistent memory, so
> >>>> cold pages end up in fast memory (due to demotion) and new pages in
> >>>> slow memory. This is why persistent memory nodes should be used for
> >>>> demotion and DRAM nodes avoided, so that DRAM remains available for
> >>>> new allocations.
> >>>>
> >>>> The current implementation works fine on systems where memory-only
> >>>> NUMA nodes can only be persistent/slow memory, but it is not
> >>>> suitable for systems like those mentioned above.  
> >>>
> >>> Can you share the NUMA topology information of your machine?  And the
> >>> demotion order before and after your change?
> >>>
> >>> Whether it's good to use the PMEM nodes as the demotion targets of the
> >>> DRAM-only node too?  
> >>
> >> $ numactl -H
> >> available: 2 nodes (0-1)
> >> node 0 cpus: 0 1 2 3 4 5 6 7
> >> node 0 size: 14272 MB
> >> node 0 free: 13392 MB
> >> node 1 cpus:
> >> node 1 size: 2028 MB
> >> node 1 free: 1971 MB
> >> node distances:
> >> node   0   1
> >>    0:  10  40
> >>    1:  40  10
> >>
> >> 1) Without the N_DEMOTION_TARGETS patch series, 1 is the demotion target
> >>     for 0 even though 1 is a DRAM node, and there is no demotion target for 1.  
> > 
> > I'm not convinced the distinction between DRAM and persistent memory is
> > valid. There will definitely be systems with a large pool
> > of remote DRAM (and potentially no NV memory) where the right choice
> > is to demote to that DRAM pool.
> > 
> > Basing the decision on whether the memory is from kmem or
> > normal DRAM doesn't provide sufficient information to make the decision.
> >   
> 
> Hence the suggestion for the ability to override this from userspace. 
> Now, for example, we could build a system with memory from the remote 
> machine (memory inception in case of power which will mostly be plugged 
> in as regular hotpluggable memory ) and a slow CXL memory or OpenCAPI 
> memory.
> 
> In the former case, we won't consider that for demotion with this series 
> because that is not instantiated via dax kmem. So yes definitely we 
> would need the ability to override this from userspace so that we could 
> put these remote memory NUMA nodes as demotion targets if we want.


Agreed.  I would like to have a better 'guess' at the right default
though, if possible.  With hindsight my instinct would have been to
have a default of no demotion path at all and hence ensure that distros
carry appropriate userspace setup scripts.  Ah well, too late :)

> 
> >>
> >> $ cat /sys/bus/nd/devices/dax0.0/target_node
> >> 2
> >> $
> >> # cd /sys/bus/dax/drivers/
> >> :/sys/bus/dax/drivers# ls
> >> device_dax  kmem
> >> :/sys/bus/dax/drivers# cd device_dax/
> >> :/sys/bus/dax/drivers/device_dax# echo dax0.0 > unbind
> >> :/sys/bus/dax/drivers/device_dax# echo dax0.0 >  ../kmem/new_id
> >> :/sys/bus/dax/drivers/device_dax# numactl -H
> >> available: 3 nodes (0-2)
> >> node 0 cpus: 0 1 2 3 4 5 6 7
> >> node 0 size: 14272 MB
> >> node 0 free: 13380 MB
> >> node 1 cpus:
> >> node 1 size: 2028 MB
> >> node 1 free: 1961 MB
> >> node 2 cpus:
> >> node 2 size: 0 MB
> >> node 2 free: 0 MB
> >> node distances:
> >> node   0   1   2
> >>    0:  10  40  80
> >>    1:  40  10  80
> >>    2:  80  80  10
> >>
> >> 2) Once this new node is brought online, without the N_DEMOTION_TARGETS
> >> patch series, 1 is the demotion target for 0 and 2 is the demotion
> >> target for 1.
> >>
> >> With this patch series applied:
> >> 1) There is no demotion target for either 0 or 1 before the dax device is online  
> > 
> > I'd argue that is wrong.  At this stage you have a tiered memory
> > system, albeit one with just DRAM.  Using it as such is correct
> > behavior that we should not be preventing.  Sure, some use cases
> > wouldn't want that arrangement, but some do want it.
> > 
> > For your case we could add a heuristic along the lines of "the
> > demotion target should be at least as big as the starting point",
> > but that would be a bit hacky.
> >   
> 
> Hence the proposal to do a per-node demotion-target override with the 
> semantics that I explained here:
> 
> 
> https://lore.kernel.org/linux-mm/8735i1zurt.fsf@linux.ibm.com/
> 
> Let me know if that interface would be sufficient to handle all the 
> possible demotion-target configurations we would want to have.

At first glance it looks good to me.

Jonathan

> 
> -aneesh