[PATCH 4/4] powerpc/64: reuse PPC32 static inline flush_dcache_range()
Aneesh Kumar K.V
aneesh.kumar at linux.ibm.com
Tue Jul 9 12:51:54 AEST 2019
On 7/9/19 7:50 AM, Oliver O'Halloran wrote:
> On Tue, Jul 9, 2019 at 12:22 AM Aneesh Kumar K.V
> <aneesh.kumar at linux.ibm.com> wrote:
>> Christophe Leroy <christophe.leroy at c-s.fr> writes:
>>> + if (IS_ENABLED(CONFIG_PPC64))
>>> + isync();
>> Was checking with Michael about why we need that extra isync. Michael
>> pointed out this came via an earlier change for 970, which doesn't have a
>> coherent icache. So possibly the isync there is to flush the prefetched
>> instructions? But even so, we would need an icbi there before that isync.
> I don't think it's that, there's some magic in flush_icache_range() to
> handle dropping prefetched instructions on 970.
>> So overall wondering why we need those extra barriers there.
> I think the isync is needed there because the architecture only
> requires sync to provide ordering. A sync alone doesn't guarantee the
> dcbfs have actually completed so the isync is necessary to ensure the
> flushed cache lines are back in memory. That said, as far as I know
> all the IBM book3s chips from power4 onwards will wait for pending
> dcbfs when they hit a sync, but that might change in the future.
The ISA doesn't list that as the sequence. The only place where isync is
mentioned is w.r.t. icbi, where we want to discard the prefetch.
> If it's a problem we could add a cpu-feature section around the isync
> to no-op it in the common case. However, when I had a look with perf
> it always showed that the sync was the hotspot so I don't think it'll
> help much.
What about the preceding barriers (sync; isync;) before dcbf? Why are those
needed?