[PATCH] fix dmaengine_unmap failure.
Xuelin Shi
xuelin.shi at freescale.com
Wed Mar 19 17:39:59 EST 2014
Hi Dan,
In async_mult(...) of async_raid6_recov.c, a count of 3 is used to request the unmap data.
However, to_cnt and bidi_cnt are both set to 1 and from_cnt to 0, so the count recomputed
at unmap time is 2. The unmap path then selects a different mempool than the one the
structure was allocated from, and the unmap data is freed into the wrong pool.
In this patch, the mempool is stored in the unmap structure instead of being computed again,
which guarantees that the same pool is used when getting and putting the unmap data.
BTW: the mempool only manages the unmap structure itself, not the pages.
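The mismatch can be illustrated with a small, self-contained model. This is only a
sketch: the pool sizes, struct fields, and function names below are simplified
stand-ins for the dmaengine code, not the actual kernel API.

```c
#include <assert.h>

/* Simplified model: pools are chosen by how many addresses need unmapping. */
struct unmap_pool { int size; };

static struct unmap_pool pool_2  = { 2 };
static struct unmap_pool pool_16 = { 16 };

/* Pool selection by count; in the buggy flow this runs twice with
 * different counts. */
static struct unmap_pool *get_unmap_pool(int nr)
{
	return nr <= 2 ? &pool_2 : &pool_16;
}

/* The fixed structure remembers the pool it was allocated from. */
struct unmap_data {
	struct unmap_pool *pool;	/* recorded at allocation time */
	int to_cnt, from_cnt, bidi_cnt;
};

/* Allocation as in async_mult(): requested with nr = 3, but only two
 * of the per-direction counters end up nonzero. */
static void get_unmap_data(struct unmap_data *u, int nr)
{
	u->pool = get_unmap_pool(nr);
	u->to_cnt = 1;
	u->from_cnt = 0;
	u->bidi_cnt = 1;
}

/* Buggy unmap: recompute the pool from the per-direction counts
 * (1 + 0 + 1 = 2), which picks a different pool than nr = 3 did. */
static struct unmap_pool *unmap_pool_buggy(struct unmap_data *u)
{
	return get_unmap_pool(u->to_cnt + u->from_cnt + u->bidi_cnt);
}

/* Fixed unmap: free back to the pool remembered in the structure. */
static struct unmap_pool *unmap_pool_fixed(struct unmap_data *u)
{
	return u->pool;
}
```

With nr = 3 the allocation comes from the 16-entry pool, but the recomputed
count of 2 makes the buggy free path pick the 2-entry pool; keeping the pool
pointer in the structure removes the second computation entirely.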
Thanks,
Xuelin Shi
-----Original Message-----
From: Dan Williams [mailto:dan.j.williams at intel.com]
Sent: March 19, 2014 1:14
To: Shi Xuelin-B29237
Cc: Vinod Koul; linuxppc-dev; dmaengine at vger.kernel.org
Subject: Re: [PATCH] fix dmaengine_unmap failure.
On Tue, Mar 18, 2014 at 1:32 AM, <xuelin.shi at freescale.com> wrote:
> From: Xuelin Shi <xuelin.shi at freescale.com>
>
> The count passed to get_unmap_data may not be the same as the count
> computed in dmaengine_unmap, which causes the data to be freed into
> the wrong pool.
>
> This patch fixes the issue by keeping the pool in the unmap_data
> structure.
Won't this free the entire count anyway? In what scenario is the count different at unmap?