<div dir="ltr">hi,<div>I am sorry,there is only one patch.</div><div><br></div><div>If two or more tasks are doing a shrinking job, there are different results for each task. </div><div>That is to say, the status of each task is not persistent each time, although they have</div><div>the same purpose to release memory.</div><div><br></div><div><div>And, If two tasks are shrinking, the erofs_sb_list will become no order, because each task will</div><div>move one sbi to tail, but sbi is random, so results are more complex.</div></div><div><br></div><div><div>Because of the use of the local variable 'run_no', it took me a long time to understand the </div><div>procedure of each task when they are concurrent. </div><div><br></div><div>There is the same issue for other fs, such as fs/ubifs/shrink.c、fs/f2fs/shrink.c.</div></div><div><br></div><div><div>If scan_objects cost a long time ,it will trigger a watchdog, shrinking should</div><div>not make work time-consuming. It should be done ASAP.</div><div>So, I add a new spin lock to let tasks shrink fs sequentially, it will just make all tasks shrink</div><div>one by one.</div></div><div><br></div><div><br></div><div>Thanks very much.</div><div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Jul 8, 2022 at 11:25 AM Gao Xiang <<a href="mailto:hsiangkao@linux.alibaba.com">hsiangkao@linux.alibaba.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi Guowei,<br>

On Fri, Jul 8, 2022 at 11:25 AM Gao Xiang <hsiangkao@linux.alibaba.com> wrote:

Hi Guowei,

On Fri, Jul 08, 2022 at 11:11:55AM +0800, Guowei Du wrote:
> From: duguowei <duguowei@xiaomi.com>
> 
> Because of 'list_move_tail', if two or more tasks are shrinking, there
> will be different results for them.

Thanks for the patch. Two quick questions:
1) where is the PATCH 1/2;
2) What problem is the current patch trying to resolve...

> 
> For example:
> After the first round, if the shrink_run_no of an entry equals the run_no
> of the task, the task will break directly at the beginning of the next
> round; if they are not equal, the task will continue to shrink until it
> encounters an entry which has the same number.
> 
> It is difficult to confirm the real results of all tasks, so add a lock
> to allow only one task to shrink at a time.
> 
> How to test:
> task1:
> root#echo 3 > /proc/sys/vm/drop_caches
> [743071.839051] Call Trace:
> [743071.839052] <TASK>
> [743071.839054] do_shrink_slab+0x112/0x300
> [743071.839058] shrink_slab+0x211/0x2a0
> [743071.839060] drop_slab+0x72/0xe0
> [743071.839061] drop_caches_sysctl_handler+0x50/0xb0
> [743071.839063] proc_sys_call_handler+0x173/0x250
> [743071.839066] proc_sys_write+0x13/0x20
> [743071.839067] new_sync_write+0x104/0x180
> [743071.839070] ? send_command+0xe0/0x270
> [743071.839073] vfs_write+0x247/0x2a0
> [743071.839074] ksys_write+0xa7/0xe0
> [743071.839075] ? fpregs_assert_state_consistent+0x23/0x50
> [743071.839078] __x64_sys_write+0x1a/0x20
> [743071.839079] do_syscall_64+0x3a/0x80
> [743071.839081] entry_SYSCALL_64_after_hwframe+0x46/0xb0
> 
> task2:
> root#echo 3 > /proc/sys/vm/drop_caches
> [743079.843214] Call Trace:
> [743079.843214] <TASK>
> [743079.843215] do_shrink_slab+0x112/0x300
> [743079.843219] shrink_slab+0x211/0x2a0
> [743079.843221] drop_slab+0x72/0xe0
> [743079.843222] drop_caches_sysctl_handler+0x50/0xb0
> [743079.843224] proc_sys_call_handler+0x173/0x250
> [743079.843227] proc_sys_write+0x13/0x20
> [743079.843228] new_sync_write+0x104/0x180
> [743079.843231] ? send_command+0xe0/0x270
> [743079.843233] vfs_write+0x247/0x2a0
> [743079.843234] ksys_write+0xa7/0xe0
> [743079.843235] ? fpregs_assert_state_consistent+0x23/0x50
> [743079.843238] __x64_sys_write+0x1a/0x20
> [743079.843239] do_syscall_64+0x3a/0x80
> [743079.843241] entry_SYSCALL_64_after_hwframe+0x46/0xb0
> 
> Signed-off-by: duguowei <duguowei@xiaomi.com>
> ---
> fs/erofs/utils.c | 16 ++++++++++------
> 1 file changed, 10 insertions(+), 6 deletions(-)
> 
> diff --git a/fs/erofs/utils.c b/fs/erofs/utils.c
> index ec9a1d780dc1..9eca13a7e594 100644
> --- a/fs/erofs/utils.c
> +++ b/fs/erofs/utils.c
> @@ -186,6 +186,8 @@ static unsigned int shrinker_run_no;
> 
> /* protects the mounted 'erofs_sb_list' */
> static DEFINE_SPINLOCK(erofs_sb_list_lock);
> +/* sequence each shrink task */
> +static DEFINE_SPINLOCK(erofs_sb_shrink_lock);
> static LIST_HEAD(erofs_sb_list);
> 
> void erofs_shrinker_register(struct super_block *sb)
> @@ -226,13 +228,14 @@ static unsigned long erofs_shrink_scan(struct shrinker *shrink,
> struct list_head *p;
> 
> unsigned long nr = sc->nr_to_scan;
> - unsigned int run_no;
> unsigned long freed = 0;
> 
> + spin_lock(&erofs_sb_shrink_lock);

Btw, we cannot run the whole shrinker under one spin_lock.

Thanks,
Gao Xiang