[PATCH] powerpc/mm: Limit allocation of SWIOTLB on server machines

Ram Pai linuxram at us.ibm.com
Thu Dec 24 14:14:09 AEDT 2020


On Wed, Dec 23, 2020 at 09:06:01PM -0300, Thiago Jung Bauermann wrote:
> 
> Hi Ram,
> 
> Thanks for reviewing this patch.
> 
> Ram Pai <linuxram at us.ibm.com> writes:
> 
> > On Fri, Dec 18, 2020 at 03:21:03AM -0300, Thiago Jung Bauermann wrote:
> >> On server-class POWER machines, we don't need the SWIOTLB unless we're a
> >> secure VM. Nevertheless, if CONFIG_SWIOTLB is enabled we unconditionally
> >> allocate it.
> >> 
> >> In most cases this is harmless, but on a few machine configurations (e.g.,
> >> POWER9 powernv systems with 4 GB area reserved for crashdump kernel) it can
> >> happen that memblock can't find a 64 MB chunk of memory for the SWIOTLB and
> >> fails with a scary-looking WARN_ONCE:
> >> 
> >>  ------------[ cut here ]------------
> >>  memblock: bottom-up allocation failed, memory hotremove may be affected
> >>  WARNING: CPU: 0 PID: 0 at mm/memblock.c:332 memblock_find_in_range_node+0x328/0x340
> >>  Modules linked in:
> >>  CPU: 0 PID: 0 Comm: swapper Not tainted 5.10.0-rc2-orig+ #6
> >>  NIP:  c000000000442f38 LR: c000000000442f34 CTR: c0000000001e0080
> >>  REGS: c000000001def900 TRAP: 0700   Not tainted  (5.10.0-rc2-orig+)
> >>  MSR:  9000000002021033 <SF,HV,VEC,ME,IR,DR,RI,LE>  CR: 28022222  XER: 20040000
> >>  CFAR: c00000000014b7b4 IRQMASK: 1
> >>  GPR00: c000000000442f34 c000000001defba0 c000000001deff00 0000000000000047
> >>  GPR04: 00000000ffff7fff c000000001def828 c000000001def820 0000000000000000
> >>  GPR08: 0000001ffc3e0000 c000000001b75478 c000000001b75478 0000000000000001
> >>  GPR12: 0000000000002000 c000000002030000 0000000000000000 0000000000000000
> >>  GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000002030000
> >>  GPR20: 0000000000000000 0000000000010000 0000000000010000 c000000001defc10
> >>  GPR24: c000000001defc08 c000000001c91868 c000000001defc18 c000000001c91890
> >>  GPR28: 0000000000000000 ffffffffffffffff 0000000004000000 00000000ffffffff
> >>  NIP [c000000000442f38] memblock_find_in_range_node+0x328/0x340
> >>  LR [c000000000442f34] memblock_find_in_range_node+0x324/0x340
> >>  Call Trace:
> >>  [c000000001defba0] [c000000000442f34] memblock_find_in_range_node+0x324/0x340 (unreliable)
> >>  [c000000001defc90] [c0000000015ac088] memblock_alloc_range_nid+0xec/0x1b0
> >>  [c000000001defd40] [c0000000015ac1f8] memblock_alloc_internal+0xac/0x110
> >>  [c000000001defda0] [c0000000015ac4d0] memblock_alloc_try_nid+0x94/0xcc
> >>  [c000000001defe30] [c00000000159c3c8] swiotlb_init+0x78/0x104
> >>  [c000000001defea0] [c00000000158378c] mem_init+0x4c/0x98
> >>  [c000000001defec0] [c00000000157457c] start_kernel+0x714/0xac8
> >>  [c000000001deff90] [c00000000000d244] start_here_common+0x1c/0x58
> >>  Instruction dump:
> >>  2c230000 4182ffd4 ea610088 ea810090 4bfffe84 39200001 3d42fff4 3c62ff60
> >>  3863c560 992a8bfc 4bd0881d 60000000 <0fe00000> ea610088 4bfffd94 60000000
> >>  random: get_random_bytes called from __warn+0x128/0x184 with crng_init=0
> >>  ---[ end trace 0000000000000000 ]---
> >>  software IO TLB: Cannot allocate buffer
> >> 
> >> Unless this is a secure VM the message can actually be ignored, because the
> >> SWIOTLB isn't needed. Therefore, let's avoid the SWIOTLB in those cases.
> >
> > The above warn_on is conveying a genuine warning. Should it be silenced?
> 
> Not sure I understand your point. This patch doesn't silence the
> warning, it avoids the problem it is warning about.

Sorry, I should have explained it better. My point is...  

	If CONFIG_SWIOTLB is enabled, the kernel is promising bounce
	buffering capability. I know that, today, no kernel subsystem
	uses bounce buffers on a non-secure pseries kernel or a powernv
	kernel. But that does not mean there never will be one. If a
	third-party module ever needs bounce buffering, it won't be
	able to operate because of the change proposed in your patch,
	as the sketch below illustrates.
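
	To make the concern concrete: think of a hypothetical
	out-of-tree driver for a device with a 32-bit DMA mask doing
	streaming DMA on a host with memory above 4GB, and assume the
	device is not behind an IOMMU that could translate for it.
	Everything below is made up for illustration (the function and
	its name are invented), but the DMA API calls are the real ones:

#include <linux/dma-mapping.h>

static int example_start_dma(struct device *dev, void *buf, size_t len)
{
	dma_addr_t handle;

	/* The device can only address the low 4GB. */
	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
		return -EIO;

	/*
	 * If buf sits above 4GB and no IOMMU can translate, the
	 * mapping must copy through a SWIOTLB bounce buffer. With
	 * no SWIOTLB allocated, this fails.
	 */
	handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;

	/* ... kick off the DMA; dma_unmap_single() when done ... */
	return 0;
}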

	Whether that is a good thing or a bad thing, I do not know. I
	will let the experts opine.
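
	And just so we are talking about the same thing: the shape of
	the change I am reacting to is, as I read it, roughly the
	following (only a sketch, assuming it keys off is_secure_guest()
	from asm/svm.h; the actual hunk in your patch may differ):

#include <linux/swiotlb.h>
#include <asm/svm.h>		/* is_secure_guest() */

void __init mem_init(void)
{
	/* ... */
#ifdef CONFIG_SWIOTLB
	/*
	 * Only pay for the 64MB bounce buffer pool when it can
	 * actually be needed, i.e. when running as a secure guest.
	 * is_secure_guest() compiles to false when CONFIG_PPC_SVM=n.
	 */
	if (is_secure_guest())
		swiotlb_init(0);
#endif
	/* ... */
}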

RP

