[bug report] staging: erofs: tidy up decompression frontend
Gao Xiang
hsiangkao at aol.com
Fri Nov 15 11:45:14 AEDT 2019
Hi Dan and Matthew,
On Thu, Nov 14, 2019 at 02:00:15PM -0800, Matthew Wilcox wrote:
> On Thu, Nov 14, 2019 at 10:10:03PM +0300, Dan Carpenter wrote:
> > fs/erofs/zdata.c:443 z_erofs_register_collection()
> > error: double unlocked 'cl->lock' (orig line 439)
> >
> > fs/erofs/zdata.c
> > 432 cl = z_erofs_primarycollection(pcl);
> > 433 cl->pageofs = map->m_la & ~PAGE_MASK;
> > 434
> > 435 /*
> > 436 * lock all primary followed works before visible to others
> > 437 * and mutex_trylock *never* fails for a new pcluster.
> > 438 */
> > 439 mutex_trylock(&cl->lock);
> > ^^^^^^^^^^^^^^^^^^^^^^^^
> > 440
> > 441 err = erofs_register_workgroup(inode->i_sb, &pcl->obj, 0);
> > 442 if (err) {
> > 443 mutex_unlock(&cl->lock);
> > ^^^^^^^^^^^^^^^^^^^^^^^
> > How can we unlock if we don't know that the trylock succeeded?
>
> The comment says it'll always succeed. That said, this is an uncommon
> pattern -- usually we just mutex_lock(). If there's a good reason to use
> mutex_trylock() instead, then I'd prefer it to be guarded with a BUG_ON.
>
I don't think there is an actual problem here; if I am wrong, please kindly point it out.
The quoted snippet is too short. The current full code is:
static struct z_erofs_collection *clregister(struct z_erofs_collector *clt,
					     struct inode *inode,
					     struct erofs_map_blocks *map)
{
	struct z_erofs_pcluster *pcl;
	struct z_erofs_collection *cl;
	int err;

	/* no available workgroup, let's allocate one */
	pcl = kmem_cache_alloc(pcluster_cachep, GFP_NOFS);
	if (!pcl)
		return ERR_PTR(-ENOMEM);
^ Note that this is a newly allocated object, so its lock is guaranteed to have been
left unlocked by the last free (and it is first initialized in init_once; see the
sketch after the function below).
	z_erofs_pcluster_init_always(pcl);
	pcl->obj.index = map->m_pa >> PAGE_SHIFT;

	pcl->length = (map->m_llen << Z_EROFS_PCLUSTER_LENGTH_BIT) |
		(map->m_flags & EROFS_MAP_FULL_MAPPED ?
			Z_EROFS_PCLUSTER_FULL_LENGTH : 0);

	if (map->m_flags & EROFS_MAP_ZIPPED)
		pcl->algorithmformat = Z_EROFS_COMPRESSION_LZ4;
	else
		pcl->algorithmformat = Z_EROFS_COMPRESSION_SHIFTED;

	pcl->clusterbits = EROFS_I(inode)->z_physical_clusterbits[0];
	pcl->clusterbits -= PAGE_SHIFT;

	/* new pclusters should be claimed as type 1, primary and followed */
	pcl->next = clt->owned_head;
	clt->mode = COLLECT_PRIMARY_FOLLOWED;

	cl = z_erofs_primarycollection(pcl);
	cl->pageofs = map->m_la & ~PAGE_MASK;

	/*
	 * lock all primary followed works before visible to others
	 * and mutex_trylock *never* fails for a new pcluster.
	 */
	mutex_trylock(&cl->lock);
^ That used to be guarded by a BUG_ON, but checkpatch.pl raised a warning;
I can use DBG_BUGON here instead.
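For example, something along these lines (just a sketch, not a final patch; it relies
on DBG_BUGON still evaluating its argument when CONFIG_EROFS_FS_DEBUG is disabled,
which is how it is currently defined in fs/erofs/internal.h):

	/* trylock cannot fail here since the pcluster is freshly allocated */
	DBG_BUGON(!mutex_trylock(&cl->lock));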
	err = erofs_register_workgroup(inode->i_sb, &pcl->obj, 0);
	if (err) {
		mutex_unlock(&cl->lock);
^ Unlock before freeing, as a convention, so the object goes back to the cache in
its initialized (unlocked) state, as noted above.
		kmem_cache_free(pcluster_cachep, pcl);
		return ERR_PTR(-EAGAIN);
	}
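
For reference, here is a rough sketch of the constructor side that makes the trylock
guarantee hold. A SLAB constructor runs when slab pages are populated, not on every
kmem_cache_alloc(), so objects must always be returned to the cache in their
constructed state; that is also why the error path above unlocks before freeing.
Please take the snippet below as an illustration from memory rather than a verbatim
quote of zdata.c:

static void init_once(void *ptr)
{
	struct z_erofs_pcluster *pcl = ptr;
	struct z_erofs_collection *cl = z_erofs_primarycollection(pcl);
	unsigned int i;

	/* every object starts out (and must be freed) with the lock released */
	mutex_init(&cl->lock);
	cl->nr_pages = 0;
	cl->vcnt = 0;
	for (i = 0; i < Z_EROFS_CLUSTER_MAX_PAGES; ++i)
		pcl->compressed_pages[i] = NULL;
}

	/* ... and the cache is created with init_once as its constructor ... */
	pcluster_cachep = kmem_cache_create("erofs_compress",
					    Z_EROFS_WORKGROUP_SIZE, 0,
					    SLAB_RECLAIM_ACCOUNT, init_once);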
Thanks,
Gao Xiang