hvconsole.c : possible problem
Hollis Blanchard
hollisb at us.ibm.com
Thu Apr 29 00:37:04 EST 2004
On Apr 28, 2004, at 12:05 AM, Benjamin Herrenschmidt wrote:
>
> I've been reviewing arch/ppc64/kernel/hvconsole.c from ameslab tree
> and I have a few comments:
Thanks Ben, this is all my stuff.
> - The use of static variable names "write", "read" and "ring" is a bit
> annoying for anybody trying to debug, since they will appear as-is
> in System.map. Even for static variables, it's useful to have a prefix.
Ok.
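(Concretely, something like this, just so the symbols are greppable in
System.map -- the struct name and ring size here are made up, not the real
declarations:)

    /* before: these show up in System.map as bare "write", "read", "ring" */
    static struct hvsi_buffer write;
    static struct hvsi_buffer read;
    static struct hvsi_buffer ring[NUM_RING_BUFS];

    /* after: prefixed so they're obviously ours */
    static struct hvsi_buffer hvsi_write;
    static struct hvsi_buffer hvsi_inbuf;
    static struct hvsi_buffer hvsi_ring[NUM_RING_BUFS];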
> - There is a potential buffer overflow, I think, in hvsi_load_chunk(),
> in this line:
>
> /* copy up to 16 bytes into the write buffer */
> chunklen = hvsi_read(vtty, write->data.pkt + write->got, 16);
>
> The packet is defined to be 256 bytes. Now, if "got" is 254 bytes for
> example and we call that routine, I let you figure out what happens. It
> may not be possible for valid packets but I suppose an attacker can
> always forge something.
Ok.
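(Something along these lines is what I have in mind -- sketch only, reusing
the names from the quoted code, with the 256 coming from the packet size Ben
mentions and the leftover-byte handling hand-waved for now:)

    #define HVSI_MAX_PACKET 256

    unsigned char chunk[16];
    int chunklen, fit;

    /* read into a scratch buffer first, since the HV can hand us up to
     * 16 bytes no matter how little room the packet has left */
    chunklen = hvsi_read(vtty, chunk, sizeof(chunk));

    fit = HVSI_MAX_PACKET - write->got;
    if (chunklen > fit) {
            /* the excess belongs to the *next* packet -- it has to be
             * kept somewhere, which is where the intermediary-buffer
             * idea further down comes in */
            chunklen = fit;
    }
    memcpy(write->data.pkt + write->got, chunk, chunklen);
    write->got += chunklen;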
> Also, you load the ring buffer without limit, which means you can
> eventually overflow and reuse a buffer that is filled with data
> instead of breaking out.
That's not true... oh, except that it can happen in the overflow logic,
yeah.
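(i.e. the load loop needs a "ring is full" exit, roughly like this --
ring_is_full() is an invented helper, and I'm assuming hvsi_load_chunk()
returns how much it consumed:)

    /* stop pulling data from the HV once every ring slot is occupied,
     * instead of wrapping around onto a buffer that still holds data */
    while (!ring_is_full(ring)) {
            if (hvsi_load_chunk(vtty) <= 0)
                    break;              /* nothing more to read */
    }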
> The whole function seems dodgy; it should
> rather ask for the exact amount it can fit in the packet, not more,
> and get rid of the overflow thing, don't you think?
Sorry, didn't follow that. The problem being solved is this: you do an
hvsi_read() and store 15 bytes into 'buf' as follows:
ff05000041 ff05000141 ff05000241
(that's "AAA" in three packets). We just put all three of those packets
into a single ring buffer, so now we have to go back and copy packets
two and three into subsequent buffers.
Another possibility is having an intermediary 256-byte buffer that we
read into, then copy from there into buffers. That would probably
simplify the code a bit, at the expense of always requiring at least
one memcpy (which is not the case now) even if we just get one packet.
I'll try that.
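Roughly this (sketch only -- hvsi_handle_packet() is an invented name, and
the parsing assumes the 4-byte type/len/seqno header from the example above,
with the len byte covering header plus data):

    static unsigned char staging[256];  /* intermediary buffer */
    static int staged;                  /* bytes currently held in it */

    int got, off = 0;

    /* pull whatever the HV has ready (up to 16 bytes per call) */
    if (staged <= (int)sizeof(staging) - 16) {
            got = hvsi_read(vtty, staging + staged, 16);
            staged += got;
    }

    /* peel off every complete packet; the one memcpy per packet happens
     * when hvsi_handle_packet() copies into its own buffer */
    while (staged - off >= 4 && staging[off + 1] >= 4
                    && staged - off >= staging[off + 1]) {
            hvsi_handle_packet(vtty, staging + off, staging[off + 1]);
            off += staging[off + 1];
    }

    /* slide any partial packet to the front for next time */
    memmove(staging, staging + off, staged - off);
    staged -= off;

That would keep all the packet-boundary handling in one place, which I think
is most of the simplification Ben is after.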
> A much simpler loop, basically: ask only for as much as is needed to
> fill the header if that isn't filled yet, then for as much as is needed
> to fill the data (maxed out at 16) until the packet is complete. That
> would allow us to always deal only with packet boundaries, let throttling
> work properly at the HV level, and simplify the code significantly...
We don't get to ask for a specific number of bytes; we're told after we
have them (up to 16). I'm noticing that the '16' parameter you're
looking at is never used, and should be removed all the way back to
drivers/char/hvc_console.c.
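In other words the interface would shrink to something like this (sketch;
the exact struct and type names here are approximate, not the literal
source):

    /* today: a max-count argument is accepted but ignored internally */
    extern int hvsi_read(struct hvsi_struct *vtty, char *buf, int count);

    /* reality: the HV hands us 0..16 bytes and we don't get to choose,
     * so the count argument can simply go away */
    extern int hvsi_read(struct hvsi_struct *vtty, char *buf);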
> - There is a historical hunt for null characters in hvterm_get_chars(),
> which paulus added because xmon was getting confused. Where do those NULL
> chars come from? I doubt HV itself is adding anything to the stream, so
> I suppose it's whatever console program sits at the other end?
That must be the case, yeah. HVSI data passes through the HV in the
same way as normal console data, but if any munging occurred there we
would be screwed. So it must be that Java "vterm" on the HMC.
> I'm
> asking because hvsi_read() doesn't do that, and so we end up with two
> routines having the exact same prototype: hvsi_read() and
> hvc_hvterm_get_chars(), both calling the same HV routine but not
> doing exactly the same thing, and I was wondering if this inconsistency
> was still necessary.
I've verified that those NULLs do still appear in the normal case, if
that's what you mean. I have not seen any NULLs come through the HVSI
code and they would be pretty noticeable. That is consistent with the
vterm-did-it theory, and yes, that means we have different behavior.
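(For reference, the hunt boils down to something like this after each read
-- a paraphrase, not the literal hvterm_get_chars() code:)

    /* squeeze out any NUL bytes so they never reach xmon */
    int i, j = 0;

    for (i = 0; i < got; i++)
            if (buf[i] != '\0')
                    buf[j++] = buf[i];
    got = j;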
> - The whole wait_for_packet() is completely synchronous (doesn't
> schedule), that doesn't sound very good.
True, hmm... I must have meant to come back to that later.
I guess I'll find out, but is there a limit on how early you can call
schedule() during boot (like kmalloc)? wait_for_packet() is only used
during initialization which happens pretty early.
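If it turns out to be safe, the shape I'd aim for is roughly this (sketch;
hvsi_have_packet() is an invented name, and checking system_state is just
one way to tell whether the scheduler is usable yet):

    while (!hvsi_have_packet(vtty)) {
            if (system_state == SYSTEM_RUNNING) {
                    /* scheduler is up: sleep instead of spinning */
                    set_current_state(TASK_INTERRUPTIBLE);
                    schedule_timeout(1);
            } else {
                    /* too early to schedule(): short busy-wait,
                     * which is what the code effectively does now */
                    udelay(100);
            }
    }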
--
Hollis Blanchard
IBM Linux Technology Center
** Sent via the linuxppc64-dev mail list. See http://lists.linuxppc.org/