[PATCH v1 1/3] arch/powerpc/set_memory: Implement set_memory_xx routines
Aneesh Kumar K.V
aneesh.kumar at linux.vnet.ibm.com
Fri Aug 4 13:38:53 AEST 2017
Balbir Singh <bsingharora at gmail.com> writes:
> On Wed, Aug 2, 2017 at 8:09 PM, Aneesh Kumar K.V
> <aneesh.kumar at linux.vnet.ibm.com> wrote:
>> Balbir Singh <bsingharora at gmail.com> writes:
>>
>>> Add support for the set_memory_xx() routines. The STRICT_KERNEL_RWX
>>> feature already gave us the ability to change page permissions for
>>> pte ranges. This patch builds on that for both radix and hash, so
>>> that permissions can be changed via set/clear masks.
>>>
>>> A new helper is required for hash (hash__change_memory_range()
>>> is renamed to hash__change_boot_memory_range(), since it deals
>>> with bolted PTEs).
>>>
>>> The new hash__change_memory_range() handles PAGE_SIZE requests for
>>> permission changes on vmalloc'ed ranges. It does not invoke updatepp;
>>> instead it updates the software PTE and invalidates the hash PTE, so
>>> the next access faults in the new permissions.
>>>
>>> For radix, radix__change_memory_range() is set up to do the right
>>> thing for vmalloc'd addresses. It takes a new parameter to decide
>>> which attributes to set.
>>>
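For anyone following along: the set/clear-mask approach described in the
changelog above presumably reduces to thin set_memory_xx() wrappers that
pick a mask and hand it down to the per-MMU helpers. An illustrative sketch
(the change_memory_attr() name and the exact masks are my guesses, not
necessarily what the patch uses):

/*
 * Illustrative only: set_memory_xx() expressed as set/clear masks
 * passed down to the radix/hash helpers discussed in this thread.
 */
static int change_memory_attr(unsigned long addr, int numpages,
                              unsigned long set, unsigned long clear)
{
        unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
        unsigned long end = start + (unsigned long)numpages * PAGE_SIZE;

        if (radix_enabled())
                return radix__change_memory_range(start, end, set, clear);
        return hash__change_memory_range(start, end, set, clear);
}

int set_memory_ro(unsigned long addr, int numpages)
{
        /* read-only: drop the software write permission bit */
        return change_memory_attr(addr, numpages, 0, _PAGE_WRITE);
}

int set_memory_nx(unsigned long addr, int numpages)
{
        /* no-execute: drop the software exec permission bit */
        return change_memory_attr(addr, numpages, 0, _PAGE_EXEC);
}

set_memory_rw() and set_memory_x() would be the mirror images, passing the
same bits in the set mask instead.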
>> ....
>>
>>> +int hash__change_memory_range(unsigned long start, unsigned long end,
>>> +                              unsigned long set, unsigned long clear)
>>> +{
>>> +        unsigned long idx;
>>> +        pgd_t *pgdp;
>>> +        pud_t *pudp;
>>> +        pmd_t *pmdp;
>>> +        pte_t *ptep;
>>> +
>>> +        start = ALIGN_DOWN(start, PAGE_SIZE);
>>> +        end = PAGE_ALIGN(end); // aligns up
>>> +
>>> +        /*
>>> +         * Update the software PTE and flush the entry.
>>> +         * This should cause a new fault with the right
>>> +         * things set up in the hash page table.
>>> +         */
>>> +        pr_debug("Changing flags on range %lx-%lx setting 0x%lx removing 0x%lx\n",
>>> +                 start, end, set, clear);
>>> +
>>> +        for (idx = start; idx < end; idx += PAGE_SIZE) {
>>
>>
>>> +                pgdp = pgd_offset_k(idx);
>>> +                pudp = pud_alloc(&init_mm, pgdp, idx);
>>> +                if (!pudp)
>>> +                        return -1;
>>> +                pmdp = pmd_alloc(&init_mm, pudp, idx);
>>> +                if (!pmdp)
>>> +                        return -1;
>>> +                ptep = pte_alloc_kernel(pmdp, idx);
>>> +                if (!ptep)
>>> +                        return -1;
>>> +                hash__pte_update(&init_mm, idx, ptep, clear, set, 0);
>
> I think this does the needful; if H_PAGE_HASHPTE is set, the flush
> will happen.
>
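For reference, the flush Balbir mentions comes from hash__pte_update()
itself: when the old PTE value had H_PAGE_HASHPTE set, it calls
hpte_need_flush() on the way out. Roughly (paraphrased from memory, not a
verbatim quote of the header):

        /* tail of hash__pte_update(), paraphrased */
        old = be64_to_cpu(old_be);
        if (old & H_PAGE_HASHPTE)
                hpte_need_flush(mm, addr, ptep, old, huge);

        return old;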
>>> +                hash__flush_tlb_kernel_range(idx, idx + PAGE_SIZE);
>>> +        }
>>
>> You can use find_linux_pte_or_hugepte(), or with my recent patch
>> series, find_init_mm_pte()?
>>
>
> for pte_mkwrite and pte_wrprotect?
For walking the page table. I am not sure you really want to allocate
page tables in that function. If you do, what would the initial value
of the PTE be? We are asking to set and clear bits on an existing PTE
entry, right? If you find an empty page table entry, shouldn't that be
handled via a fault?
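For illustration, an untested sketch of the walk-only version (this assumes
the find_init_mm_pte() helper from the pte-walk series; the error handling
and the huge-page check are illustrative, not a finished implementation):

int hash__change_memory_range(unsigned long start, unsigned long end,
                              unsigned long set, unsigned long clear)
{
        unsigned long idx;
        unsigned int shift;
        pte_t *ptep;

        start = ALIGN_DOWN(start, PAGE_SIZE);
        end = PAGE_ALIGN(end);

        for (idx = start; idx < end; idx += PAGE_SIZE) {
                shift = 0;
                /* walk the existing kernel page table, never allocate */
                ptep = find_init_mm_pte(idx, &shift);
                if (!ptep || pte_none(*ptep) || shift) {
                        /* nothing (or a huge mapping) here; don't create one */
                        return -EINVAL;
                }
                hash__pte_update(&init_mm, idx, ptep, clear, set, 0);
                hash__flush_tlb_kernel_range(idx, idx + PAGE_SIZE);
        }
        return 0;
}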
-aneesh