[PATCH] 8xx_io/uart.c
Joakim Tjernlund
joakim.tjernlund at lumentis.se
Sat Feb 15 02:13:12 EST 2003
> I have checked the links above, and it looks like it is solved. As far as I
> can tell, alternative:
> 0. change alloc_skb()/skb_add_mtu() to force the allocated size to be a
> cache line multiple.
> has been implemented in the kernel:
> - skb_add_mtu() does not exist anymore.
> - alloc_skb() does L1 cache align the size:
>     size = requested_size + 16;
>     size = SKB_DATA_ALIGN(size);   /* this does L1 cache alignment */
>     data = kmalloc(size + sizeof(struct skb_shared_info), gfp_mask);
> This makes skb_shared_info always start on a new cache line, so there is
> no risk that it is invalidated by a dma_cache_inv() call.
>
> Jocke
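
For anyone who wants to convince themselves, here is a minimal userspace
sketch of the arithmetic quoted above (not the kernel code itself; the
SMP_CACHE_BYTES = 32 value and the 1518-byte request are just assumptions
typical of an 8xx Ethernet setup). The shared_info offset always comes out
as a cache line multiple, so an invalidate over the data area stops short
of the line holding shared_info:

    #include <stdio.h>

    #define SMP_CACHE_BYTES 32   /* assumed L1 line size, typical for MPC8xx */
    #define SKB_DATA_ALIGN(x) \
            (((x) + (SMP_CACHE_BYTES - 1)) & ~(SMP_CACHE_BYTES - 1))

    int main(void)
    {
            unsigned int requested_size = 1518;      /* example MTU-sized request */
            unsigned int size = requested_size + 16; /* headroom added by alloc_skb() */

            size = SKB_DATA_ALIGN(size);             /* round up to a line multiple */

            /* the kernel then does
             * kmalloc(size + sizeof(struct skb_shared_info), gfp_mask);
             * so shared_info lives at offset 'size', always a line multiple */
            printf("data area %u bytes, shared_info offset %% %d = %u\n",
                   size, SMP_CACHE_BYTES, size % SMP_CACHE_BYTES);
            return 0;
    }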
Dan, any doubts still?
Jocke