[PATCH] erofs: clean up a loop

Dan Carpenter dan.carpenter at oracle.com
Mon Jul 18 22:23:13 AEST 2022


On Mon, Jul 18, 2022 at 07:36:14PM +0800, Gao Xiang wrote:
> Hi Dan,
> 
> On Mon, Jul 18, 2022 at 02:20:08PM +0300, Dan Carpenter wrote:
> > It's easier to see what this loop is doing when the decrement is in
> > the normal place.
> > 
> > Signed-off-by: Dan Carpenter <dan.carpenter at oracle.com>
> > ---
> >  fs/erofs/zdata.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
> > index 601cfcb07c50..2691100eb231 100644
> > --- a/fs/erofs/zdata.c
> > +++ b/fs/erofs/zdata.c
> > @@ -419,8 +419,8 @@ static bool z_erofs_try_inplace_io(struct z_erofs_decompress_frontend *fe,
> >  {
> >  	struct z_erofs_pcluster *const pcl = fe->pcl;
> >  
> > -	while (fe->icur > 0) {
> > -		if (!cmpxchg(&pcl->compressed_bvecs[--fe->icur].page,
> > +	while (fe->icur--) {
> 
> Thanks for your patch!
> Yet at a quick glance, on my side, that isn't equivalent,
> to be honest...
> 
> .. What we're trying to do here is to find a free slot
> for in-place I/O, but we also need to leave fe->icur as 0
> when exiting the loop, since z_erofs_try_inplace_io()
> can be called again the next time another page is
> attached, and fe->icur would overflow then...

Ah.  Sorry.  I never thought about it being called twice in a row.
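To make the difference concrete, here is a minimal standalone sketch
(not the actual erofs code; the unsigned counter is an assumption based
on the overflow concern above) showing where the two loop shapes leave
the counter once it drains without finding a free slot:

#include <stdio.h>

static unsigned int drain_original(unsigned int icur)
{
	while (icur > 0) {
		--icur;		/* decrement in the body, as in the current code */
		/* pretend cmpxchg() never finds a free slot */
	}
	return icur;		/* always ends at 0 */
}

static unsigned int drain_proposed(unsigned int icur)
{
	while (icur--) {
		/* pretend cmpxchg() never finds a free slot */
	}
	return icur;		/* the final icur-- wraps it around to UINT_MAX */
}

int main(void)
{
	printf("original ends at %u\n", drain_original(4));	/* 0 */
	printf("proposed ends at %u\n", drain_proposed(4));	/* 4294967295 */
	return 0;
}

With the counter left at UINT_MAX rather than 0, a later call into the
same function would index compressed_bvecs[] far out of bounds, which
is the overflow described above.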

regards,
dan carpenter
