BMCWeb payloads larger than 64MB

Ed Tanous ed at tanous.net
Tue Aug 11 02:23:03 AEST 2020


On Mon, Aug 10, 2020 at 8:56 AM Joseph Reynolds <jrey at linux.ibm.com> wrote:
>
> On 8/5/20 6:26 PM, Joseph Reynolds wrote:
> > On 8/3/20 4:09 PM, Joseph Reynolds wrote:
> >> This is a reminder of the OpenBMC Security Working Group meeting
> >> scheduled for this Wednesday August 5 at 10:00am PDT.
> >>
> >> We'll discuss current development items, and anything else that comes
> >> up.
>
> ...snip...
>
> >> 4. Is there interest in enhancing OpenBMC firmware image update
> >> uploads using the Redfish-specified multipart HTTP push updates (that
> >> is, support the MultipartHttpPushUri property)?
> > Sounds good, but nobody is working on it.  We also discussed use cases
> > for golden/primary/active/alternate images.
> >
> Ed,
>
> You mentioned "For any payloads larger than 64MB, this stuff needs
> revisited" on Jul 22 in
> https://gerrit.openbmc-project.xyz/c/openbmc/bmcweb/+/34972 and I would
> like to learn more about what you think the direction should be.

Today, bmcweb needs to buffer the entire payload in memory, then make
a copy to tmpfs.  In practice, this ends up consuming 3X the amount of
RAM as the size of the file you're uploading.  This was mediocre when
payloads were 32 MB, and really bad when payloads are much bigger (even
64 MB is pushing it, as we've already found).
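
For context, the current flow is roughly the sketch below (simplified,
not the exact bmcweb code), which is where the extra copies come from:

    #include <boost/beast/core.hpp>
    #include <boost/beast/http.hpp>
    #include <fstream>

    // Simplified picture of today's approach, not the exact bmcweb
    // code: the whole body lands in a std::string first.
    void writeUploadToTmpfs(
        const boost::beast::http::request<
            boost::beast::http::string_body>& req)
    {
        // req.body() already holds the full payload in RAM by the
        // time the handler runs...
        std::ofstream out("/tmp/images/upload.tar", std::ios::binary);
        // ...and writing it out makes another full copy in tmpfs,
        // which is also backed by RAM.
        out << req.body();
    }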

Beast has utilities for avoiding this by streaming files directly
from network I/O to disk; they just need to be wired into the
http_connection class and tested.
https://www.boost.org/doc/libs/develop/libs/beast/doc/html/beast/ref/boost__beast__http__file_body.html
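
Very roughly, the wiring would look something like the sketch below
(untested; the function, socket, buffer, and temp-file path are
stand-ins for the real http_connection members, not existing bmcweb
names):

    #include <boost/asio/ip/tcp.hpp>
    #include <boost/beast/core.hpp>
    #include <boost/beast/http.hpp>
    #include <memory>

    namespace beast = boost::beast;
    namespace http = beast::http;

    // Sketch only: in bmcweb this would live inside http_connection;
    // the socket/buffer parameters stand in for its members.
    inline void readBodyToFile(boost::asio::ip::tcp::socket& stream,
                               beast::flat_buffer& buffer)
    {
        // The parser must outlive async_read; a connection member in
        // the real code, a shared_ptr here just for the sketch.
        auto parser = std::make_shared<
            http::request_parser<http::file_body>>();
        parser->body_limit(64 * 1024 * 1024);  // cap the upload up front

        beast::error_code ec;
        // Beast writes body bytes straight into this file as they
        // arrive, so the payload never has to be buffered in RAM.
        parser->get().body().open("/tmp/bmcweb_upload.tmp",
                                  beast::file_mode::write, ec);
        if (ec)
        {
            return;  // couldn't create the temp file; fail the request
        }

        http::async_read(stream, buffer, *parser,
                         [parser](beast::error_code readEc, std::size_t) {
                             // hand the temp file path off to the
                             // route handler from here
                         });
    }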

If it were me, I would start by creating a new route type (similar to
how request/response handlers and websockets are already separate
route types) for uploading files.
I'd make the API something like:

BMCWEB_ROUTE("/my/file/upload")
    .onfile([](std::filesystem::path path_to_temp_file) {
        // path_to_temp_file now points to a randomized temp file on disk.
    });

This would need to be plumbed through the router and through
http_connection to use the file_body type above.

Note: Depending on what you're trying to do, we might also need to
slot this into the MIME handler stuff as well, since multipart
requests can carry more than one file.
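
If that route is taken, the callback might end up receiving a list of
spooled files rather than a single path.  The shape below is purely
hypothetical (neither onfiles nor UploadedFile exists in bmcweb):

    // Purely hypothetical shape; none of these names exist in bmcweb.
    struct UploadedFile
    {
        std::string fieldName;           // multipart form field name
        std::filesystem::path tempPath;  // where the part was spooled
    };

    BMCWEB_ROUTE("/my/file/upload")
        .onfiles([](const std::vector<UploadedFile>& files) {
            // one entry per part of the multipart/form-data body
        });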

>
> In the mean time it seems like the current design can tolerate a 128Mb
> payload.

I'm not sure I agree with that.  The current design doesn't even
tolerate a 64MB payload well today.  Several bugs have been filed
about failed uploads, the timer system has been hacked up quite a bit
in response, and it still doesn't quite meet everyone's needs for that
payload size.  Doubling the size yet again seems like it's going to be
a problem.

>  Do you foresee additional problems other than we've already
> seen?  Examples: resource use, slow connections, and the attending
> security vulnerabilities.

One important patch in this regard landed recently, which James and I
worked on here:
https://github.com/openbmc/bmcweb/commit/3909dc82a003893812f598434d6c4558107afa28

This allows the connection handler to differentiate between a
logged-in user and a user that hasn't logged in yet.  This gives a lot
more flexibility, in that the connection handler can simply disallow
any file uploads if the user isn't logged in.  This has already been
used to apply different payload size limits, which should be a huge
win against DoS attacks.  You'll likely need to use this yet again to
come up with a scheme for checking the timer subsystem, and allowing
authenticated requests to be limited by a rate (bytes per second)
rather than an iteration count.  This might require moving back to the
async_read_some variants of the asio calls that we used to use.  I
haven't looked into it in enough detail to figure out whether that's
actually needed.
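
As a rough illustration of the rate-based idea (the names and the
64 KiB/s floor below are made up for the example, not anything bmcweb
has today):

    #include <boost/asio/steady_timer.hpp>
    #include <chrono>
    #include <cstddef>

    // Illustration only: the names and the 64 KiB/s threshold are
    // assumptions, not bmcweb's actual policy.
    struct RateBasedDeadline
    {
        boost::asio::steady_timer& timer;
        static constexpr std::size_t minBytesPerSecond = 64 * 1024;

        // Called once per second with the number of body bytes read
        // since the last call (e.g. from an async_read_some handler).
        void onInterval(std::size_t bytesThisInterval)
        {
            if (bytesThisInterval >= minBytesPerSecond)
            {
                // The client is keeping up a minimum transfer rate,
                // so push the deadline out instead of counting a
                // fixed number of timer iterations.
                timer.expires_after(std::chrono::seconds(5));
            }
            // Otherwise the pending timeout is left alone and will
            // fire, closing the slow or stalled connection.
        }
    };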
