[RFC PATCH 05/21] ubifs: Pass worst-case buffer size to compression routines
Eric Biggers
ebiggers at kernel.org
Wed Jul 19 08:38:13 AEST 2023
On Tue, Jul 18, 2023 at 02:58:31PM +0200, Ard Biesheuvel wrote:
> Currently, the ubifs code allocates a worst case buffer size to
> recompress a data node, but does not pass the size of that buffer to the
> compression code. This means that the compression code will never use
> the additional space, and might fail spuriously due to lack of space.
>
> So let's multiply out_len by WORST_COMPR_FACTOR after allocating the
> buffer. Doing so is guaranteed not to overflow, given that the preceding
> kmalloc_array() call would have failed otherwise.
>
> Signed-off-by: Ard Biesheuvel <ardb at kernel.org>
> ---
> fs/ubifs/journal.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/fs/ubifs/journal.c b/fs/ubifs/journal.c
> index dc52ac0f4a345f30..4e5961878f336033 100644
> --- a/fs/ubifs/journal.c
> +++ b/fs/ubifs/journal.c
> @@ -1493,6 +1493,8 @@ static int truncate_data_node(const struct ubifs_info *c, const struct inode *in
>  	if (!buf)
>  		return -ENOMEM;
> 
> +	out_len *= WORST_COMPR_FACTOR;
> +
>  	dlen = le32_to_cpu(dn->ch.len) - UBIFS_DATA_NODE_SZ;
>  	data_size = dn_size - UBIFS_DATA_NODE_SZ;
>  	compr_type = le16_to_cpu(dn->compr_type);
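(As an aside, the overflow claim in the commit message checks out: kmalloc_array()
fails the allocation rather than letting n * size wrap. A minimal sketch of that
check, paraphrasing include/linux/slab.h with the constant-folding fast path
elided:

static inline void *kmalloc_array(size_t n, size_t size, gfp_t flags)
{
	size_t bytes;

	/* Fail the allocation if n * size would overflow size_t */
	if (unlikely(check_mul_overflow(n, size, &bytes)))
		return NULL;

	return kmalloc(bytes, flags);
}

So if kmalloc_array(out_len, WORST_COMPR_FACTOR, ...) succeeded, the later
out_len *= WORST_COMPR_FACTOR cannot overflow.)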
This looks like another case where data that would be expanded by compression
should just be stored uncompressed instead.
In fact, it seems that UBIFS does that already. ubifs_compress() has this:
	/*
	 * If the data compressed only slightly, it is better to leave it
	 * uncompressed to improve read speed.
	 */
	if (in_len - *out_len < UBIFS_MIN_COMPRESS_DIFF)
		goto no_compr;
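And when that check fires, the no_compr path stores the data as-is and marks it
UBIFS_COMPR_NONE; roughly this, sketching from memory of fs/ubifs/compress.c:

no_compr:
	memcpy(out_buf, in_buf, in_len);
	*out_len = in_len;
	*compr_type = UBIFS_COMPR_NONE;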
So it's unclear why the WORST_COMPR_FACTOR thing is needed at all.
- Eric