[PATCH v6 2/2] erofs-utils: optimize buffer allocation logic

Gao Xiang hsiangkao at redhat.com
Sat Jan 23 00:14:08 AEDT 2021

Hi Weiwen,

On Tue, Jan 19, 2021 at 01:49:51PM +0800, Hu Weiwen wrote:

...

>  	bb = NULL;
>
> -	list_for_each_entry(cur, &blkh.list, list) {
> -		unsigned int used_before, used;
> +	if (!used0 || alignsize == EROFS_BLKSIZ)
> +		goto alloc;
> +
> +	/* try to find a most-fit mapped buffer block first */
> +	used_before = EROFS_BLKSIZ -
> +		round_up(size + required_ext + inline_ext, alignsize);

Honestly, after seeing the above I feel I'm not good at math now,
since something about it smells strange to me. Apart from the pending
patch you raised [1], the algebra is

/* since all buffers should be aligned with alignsize */
erofs_off_t alignedoffset = roundup(used_before, alignsize);

and (alignedoffset + size + required_ext + inline_ext <= EROFS_BLKSIZ)

and I don't see why that can be equal to
used_before = EROFS_BLKSIZ - round_up(size + required_ext + inline_ext, alignsize);

Could you explain this in detail if possible? For example,
size = 3
inline_ext = 62
alignsize = 32

so 4096 - roundup(3 + 62, 32) = 4096 - 96 = 4000,
but the real used_before could even start at 4032, since
alignedoffset = roundup(4032, 32) = 4032, and
4032 + 62 = 4094 <= EROFS_BLKSIZ.

Am I still missing something?

IMO, I don't want to go too hard on such math; I'd like to just use
used_before = EROFS_BLKSIZ - (size + required_ext + inline_ext);
and simply skip the bb if __erofs_battach fails (as I said before,
the internal __erofs_battach can be changed, and I don't want to
imply that it always succeeds).

If you agree with that, I'll send out a revised version along
with a cleanup patch for erofs_balloc() as well, which has become
more complicated than before.

[1] https://lore.kernel.org/r/20210121162606.8168-1-sehuww@mail.scut.edu.cn/

Thanks,
Gao Xiang
