[PATCH v3 11/26] x86/numa: use get_pfn_range_for_nid to verify that node spans memory

Mike Rapoport rppt at kernel.org
Tue Aug 6 06:35:22 AEST 2024


On Mon, Aug 05, 2024 at 01:03:56PM -0700, Dan Williams wrote:
> Mike Rapoport wrote:
> > From: "Mike Rapoport (Microsoft)" <rppt at kernel.org>
> > 
> > Instead of looping over numa_meminfo array to detect node's start and
> > end addresses use get_pfn_range_for_nid().
> > 
> > This is shorter and makes it easier to lift numa_memblks to generic code.
> > 
> > Signed-off-by: Mike Rapoport (Microsoft) <rppt at kernel.org>
> > Tested-by: Zi Yan <ziy at nvidia.com> # for x86_64 and arm64
> > ---
> >  arch/x86/mm/numa.c | 13 +++----------
> >  1 file changed, 3 insertions(+), 10 deletions(-)
> > 
> > diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
> > index edfc38803779..cfe7e5477cf8 100644
> > --- a/arch/x86/mm/numa.c
> > +++ b/arch/x86/mm/numa.c
> > @@ -521,17 +521,10 @@ static int __init numa_register_memblks(struct numa_meminfo *mi)
> >  
> >  	/* Finally register nodes. */
> >  	for_each_node_mask(nid, node_possible_map) {
> > -		u64 start = PFN_PHYS(max_pfn);
> > -		u64 end = 0;
> > +		unsigned long start_pfn, end_pfn;
> >  
> > -		for (i = 0; i < mi->nr_blks; i++) {
> > -			if (nid != mi->blk[i].nid)
> > -				continue;
> > -			start = min(mi->blk[i].start, start);
> > -			end = max(mi->blk[i].end, end);
> > -		}
> > -
> > -		if (start >= end)
> > +		get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
> > +		if (start_pfn >= end_pfn)
> 
> Assuming I understand why this works, would it be worth a comment like:
> 
> "Note, get_pfn_range_for_nid() depends on memblock_set_node() having
>  already happened"

Will add a comment, sure.
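
Something along these lines on top of that hunk, I'd say (the exact
comment wording and placement are just a sketch, and the rest of the loop
body is not shown in the quoted context):

	/* Finally register nodes. */
	for_each_node_mask(nid, node_possible_map) {
		unsigned long start_pfn, end_pfn;

		/*
		 * Note, get_pfn_range_for_nid() depends on
		 * memblock_set_node() having already happened
		 */
		get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
		if (start_pfn >= end_pfn)
			continue;

		/* ... rest of the loop body stays as is ... */
	}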
 
> ...at least that context was not part of the diff, so it took me a
> second to figure out how this works.
> 
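
To spell out why this works: by the time numa_register_memblks() gets
here, numa_meminfo has already been fed into memblock via
memblock_set_node(), so the node ids are recorded in the memblock
regions. The generic get_pfn_range_for_nid() then just walks those
regions; paraphrasing the helper from memory (mm/mm_init.c IIRC), it is
essentially:

void __init get_pfn_range_for_nid(unsigned int nid,
			unsigned long *start_pfn, unsigned long *end_pfn)
{
	unsigned long this_start_pfn, this_end_pfn;
	int i;

	*start_pfn = -1UL;
	*end_pfn = 0;

	/* iterate memblock.memory regions that belong to @nid */
	for_each_mem_pfn_range(i, nid, &this_start_pfn, &this_end_pfn, NULL) {
		*start_pfn = min(*start_pfn, this_start_pfn);
		*end_pfn = max(*end_pfn, this_end_pfn);
	}

	/* no memory on this node */
	if (*start_pfn == -1UL)
		*start_pfn = 0;
}

So for a node without memory it reports start_pfn == end_pfn == 0 and the
start_pfn >= end_pfn check above skips it, same as the old open-coded
loop over numa_meminfo did.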

-- 
Sincerely yours,
Mike.

