[PATCH V5 1/3] mm: Add get_user_pages_cma_migrate
Aneesh Kumar K.V
aneesh.kumar at linux.ibm.com
Thu Dec 20 17:26:31 AEDT 2018
On 12/20/18 11:50 AM, Alexey Kardashevskiy wrote:
>
>
> On 20/12/2018 16:52, Aneesh Kumar K.V wrote:
>> On 12/20/18 11:18 AM, Alexey Kardashevskiy wrote:
>>>
>>>
>>> On 20/12/2018 16:22, Aneesh Kumar K.V wrote:
>>>> On 12/20/18 9:49 AM, Alexey Kardashevskiy wrote:
>>>>>
>>>>>
>>>>> On 19/12/2018 14:40, Aneesh Kumar K.V wrote:
>>>>>> This helper does a get_user_pages_fast and, if it finds pages in
>>>>>> the CMA area, tries to migrate them before taking a page
>>>>>> reference. This makes sure that we don't keep non-movable pages
>>>>>> (pinned by their page reference count) in the CMA area. Being
>>>>>> unable to move pages out of the CMA area results in CMA
>>>>>> allocation failures.
>>>>>>
>>>>>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar at linux.ibm.com>
>>>>>
>>>>
>>>> .....
>>>>>> + * We did migrate all the pages. Try to take the page
>>>>>> + * references again, migrating any new CMA pages which we
>>>>>> + * failed to isolate earlier.
>>>>>> + */
>>>>>> + drain_allow = true;
>>>>>> + goto get_user_again;
>>>>>
>>>>>
>>>>> So it is possible to have pages pinned, then successfully migrated
>>>>> (migrate_pages() returned 0), then pinned again, and then some
>>>>> pages may end up in CMA again and migrate again, and nothing seems
>>>>> to prevent this loop from being endless. What am I missing?
>>>>>
>>>>
>>>> Pages used as target pages for migration won't be allocated from
>>>> the CMA region.
>>>
>>>
>>> Then migrate_allow should be set to "false" regardless of what
>>> migrate_pages() returned, and then I am totally missing the point of
>>> this goto: we go through the loop again even when we know for sure
>>> it won't do anything but check is_migrate_cma_page(), given that we
>>> know pages won't be allocated from CMA.
>>>
>>
>> Because we might have failed to isolate all of the pages in the
>> first attempt.
>
> isolate==migrate?
No. I mean the calls to isolate_lru_page() and isolate_huge_page().
They can fail because the per-CPU pagevec is not fully drained.
-aneesh
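To make the retry logic concrete, here is a minimal userspace model of the loop being discussed. This is not the kernel code: all names here (pin_pages, try_isolate, fake_migrate, on_pagevec, NR_PAGES) are invented stand-ins for get_user_pages_cma_migrate(), isolate_lru_page()/isolate_huge_page(), migrate_pages() and the per-CPU pagevec state. The sketch captures the two claims from the thread: isolation can fail until the pagevecs are drained (hence drain_allow and the goto), and migration targets are never allocated from CMA, so each pass strictly shrinks the set of CMA pages and the loop terminates.

```c
/* Simplified userspace model of the drain-and-retry loop; all
 * identifiers are hypothetical, not the real mm/ APIs. */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_PAGES 8

struct page {
	bool in_cma;     /* page currently sits in the CMA area */
	bool on_pagevec; /* stuck on a per-CPU pagevec; isolation fails */
};

static struct page pages[NR_PAGES];

/* Model of isolate_lru_page()/isolate_huge_page(): fails while the
 * page is still sitting on an undrained per-CPU pagevec. */
static bool try_isolate(struct page *p, bool drained)
{
	return drained || !p->on_pagevec;
}

/* Model of migrate_pages(): target pages are never allocated from
 * the CMA region, so a migrated page leaves CMA for good. */
static void fake_migrate(struct page *p)
{
	p->in_cma = false;
}

/* Model of the get_user_pages_cma_migrate() retry loop.  Returns the
 * number of passes taken before no CMA pages remained. */
static int pin_pages(void)
{
	bool drain_allow = false;
	int passes = 0;

get_user_again:
	passes++;
	bool found_cma = false;
	for (int i = 0; i < NR_PAGES; i++) {
		struct page *p = &pages[i];

		if (!p->in_cma)
			continue;
		found_cma = true;
		/* Pages we fail to isolate this pass stay in CMA and
		 * are picked up on the retry, after draining. */
		if (try_isolate(p, drain_allow))
			fake_migrate(p);
	}
	if (found_cma) {
		/* Retry after draining the pagevecs.  Since targets
		 * are non-CMA, the CMA set shrinks every pass and the
		 * loop cannot be endless. */
		drain_allow = true;
		goto get_user_again;
	}
	return passes;
}
```

With one page stuck on a pagevec and one freely movable, the loop needs one extra pass for the stuck page (isolation fails until drain_allow is set) plus a final pass that finds no CMA pages and exits, illustrating why the goto is bounded rather than endless.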