kernel BUG at fs/jffs2/gc.c:395!

Tao Ren taoren at fb.com
Wed Aug 21 10:20:03 AEST 2019


On 8/20/19 5:06 PM, Andrew Jeffery wrote:
> 
> 
> On Wed, 21 Aug 2019, at 08:42, Tao Ren wrote:
>> On 8/20/19 4:09 PM, Tao Ren wrote:
>>> Hi,
>>>
>>> I hit the following jffs2 bug while running Linux 5.0.3 on the CMM (ASPEED2500) BMC platform. Has anyone seen this issue before? Any suggestions?
>>>
>>> [   46.024017] ------------[ cut here ]------------
>>> [   46.079178] kernel BUG at /data/users/taoren/openbmc/build-cmm/tmp/work-shared/cmm/kernel-source/fs/jffs2/gc.c:395!
>>> [   46.204076] Internal error: Oops - BUG: 0 [#1] ARM
>>> [   46.261378] Modules linked in: ext4 mbcache jbd2 crypto_hash
>>> [   46.329093] CPU: 0 PID: 1181 Comm: jffs2_gcd_mtd3 Not tainted 5.0.3-cmm #1
>>> [   46.411343] Hardware name: Generic DT based system
>>> [   46.468685] PC is at jffs2_garbage_collect_pass+0x6f4/0x734
>>> [   46.535322] LR is at jffs2_garbage_collect_pass+0x6f4/0x734
>>> [   46.601977] pc : [<802c292c>]    lr : [<802c292c>]    psr: 60000013
>>> [   46.676959] sp : af3cded0  ip : b56a75c0  fp : af3cdf24
>>> [   46.739463] r10: b4061140  r9 : b57a3900  r8 : b555d4ac
>>> [   46.801968] r7 : b555d4ac  r6 : b403502c  r5 : 00000000  r4 : b4035000
>>> [   46.880073] r3 : b56a75c0  r2 : 00000000  r1 : 00000000  r0 : b403502c
>>> [   46.958177] Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment none
>>> [   47.043561] Control: 00c5387d  Table: b5774008  DAC: 00000051
>>> [   47.112319] Process jffs2_gcd_mtd3 (pid: 1181, stack limit = 0x54372ffe)
>>> [   47.192490] Stack: (0xaf3cded0 to 0xaf3ce000)
>>> [   47.244601] dec0:                                     00000000 80a07028 800ad6c9 0000ff2c
>>> [   47.342468] dee0: af3cdefc af3cdef0 80125cd4 8012313c af3cdf24 800ad6c9 8012614c b4035000 
>>> [   47.440331] df00: ffffe000 af3cc000 af3cc000 b4035000 802c509c b419dd18 af3cdf74 af3cdf28
>>> [   47.538196] df20: 802c5174 802c2244 ffffe000 00000001 00000000 ffffe000 b57b0940 00000000
>>> [   47.636058] df40: ffffe000 b4035000 802c509c b419dd18 af3cdf74 800ad6c9 b5753980 b5753980
>>> [   47.733923] df60: b57b0940 00000000 af3cdfac af3cdf78 80134d58 802c50a8 b5753998 b5753998
>>> [   47.831787] df80: af3cdfac b57b0940 80134c0c 00000000 00000000 00000000 00000000 00000000
>>> [   47.929648] dfa0: 00000000 af3cdfb0 801010e8 80134c18 00000000 00000000 00000000 00000000
>>> [   48.027512] dfc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
>>> [   48.125376] dfe0: 00000000 00000000 00000000 00000000 00000013 00000000 00000000 00000000
>>> [   48.223230] Backtrace:  
>>> [   48.252489] [<802c2238>] (jffs2_garbage_collect_pass) from [<802c5174>] (jffs2_garbage_collect_thread+0xd8/0x1ac)
>>> [   48.375294]  r10:b419dd18 r9:802c509c r8:b4035000 r7:af3cc000 r6:af3cc000 r5:ffffe000
>>> [   48.468985]  r4:b4035000
>>> [   48.499281] [<802c509c>] (jffs2_garbage_collect_thread) from [<80134d58>] (kthread+0x14c/0x164)
>>> [   48.603358]  r6:00000000 r5:b57b0940 r4:b5753980
>>> [   48.658590] [<80134c0c>] (kthread) from [<801010e8>] (ret_from_fork+0x14/0x2c)
>>> [   48.745001] Exception stack(0xaf3cdfb0 to 0xaf3cdff8)
>>> [   48.805428] dfa0:                                     00000000 00000000 00000000 00000000
>>> [   48.903296] dfc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
>>> [   49.001157] dfe0: 00000000 00000000 00000000 00000000 00000013 00000000
>>> [   49.080305]  r10:00000000 r9:00000000 r8:00000000 r7:00000000 r6:00000000 r5:80134c0c
>>> [   49.174000]  r4:b57b0940
>>> [   49.204275] Code: e59f0044 ebfa25cb e1a00006 eb0e888d (e7f001f2)
>>> [   49.277188] ---[ end trace 6baa7af0a90d15ab ]---
>>> [   49.332395] Kernel panic - not syncing: Fatal exception
>>
>> BTW, below are all the messages printed by jffs2 before the system crash:
>>
>> [   21.078433] jffs2: version 2.2. (SUMMARY)  © 2001-2006 Red Hat, Inc.
>> [   39.776207] jffs2: notice: (1180) jffs2_build_xattr_subsystem: complete building xattr subsystem, 0 of xdatum (0 unchecked, 0 orphan) and 0 of xref (0 dead, 0 orphan) found.
>> [   40.016574] jffs2: warning: (1181) jffs2_do_read_inode_internal: no data nodes found for ino #140
>> [   40.122964] jffs2: notice: (1181) jffs2_do_read_inode_internal: but it has children so we fake some modes for it
>> [   43.579361] jffs2: warning: (1181) jffs2_get_inode_nodes: Eep. No valid nodes for ino #141.
>> [   43.679404] jffs2: warning: (1181) jffs2_do_read_inode_internal: no data nodes found for ino #141
>> [   43.785661] jffs2: Returned error for crccheck of ino #141. Expect badness...
>> [   44.021825] jffs2: warning: (1181) jffs2_do_read_inode_internal: no data nodes found for ino #154
>> [   44.128191] jffs2: notice: (1181) jffs2_do_read_inode_internal: but it has children so we fake some modes for it
>> [   44.314862] jffs2: warning: (1181) jffs2_do_read_inode_internal: no data nodes found for ino #155
>> [   44.421152] jffs2: notice: (1181) jffs2_do_read_inode_internal: but it has children so we fake some modes for it
>> [   44.607378] jffs2: warning: (1181) jffs2_do_read_inode_internal: no data nodes found for ino #163
>> [   44.713692] jffs2: notice: (1181) jffs2_do_read_inode_internal: but it has children so we fake some modes for it
>> [   44.899991] jffs2: warning: (1181) jffs2_get_inode_nodes: Eep. No valid nodes for ino #164.
>> [   45.000107] jffs2: warning: (1181) jffs2_do_read_inode_internal: no data nodes found for ino #164
>> [   45.106370] jffs2: Returned error for crccheck of ino #164. Expect badness...
>> [   45.934282] jffs2: Inode #106 already in state 0 in jffs2_garbage_collect_pass()!
> 
> Looks like a lack of robustness to filesystem corruption to me. LWN
> published an article on the topic just yesterday!
> 
> https://lwn.net/Articles/796687/
> 
> Andrew

Thank you for sharing, Andrew. Let me check out the article.
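
If I'm reading fs/jffs2/gc.c right, the BUG at line 395 appears to be the inode-cache state check in jffs2_garbage_collect_pass(): GC assumes every inode has finished the CRC-check pass before collection starts, and the "Inode #106 already in state 0" message printed just before the crash looks like the pr_crit() on that path (state 0 should be INO_STATE_UNCHECKED). Below is a minimal userspace sketch, not the kernel source, approximating that check; the enum values mirror fs/jffs2/nodelist.h and the stand-in types and BUG() macro are only there to make it self-contained:

/*
 * Illustrative sketch only -- NOT the kernel code.  It approximates the
 * inode-cache state check in jffs2_garbage_collect_pass() that the oops
 * above seems to hit: if the GC thread finds an inode still awaiting its
 * CRC check once GC has started, it logs the "already in state" message
 * and calls BUG().
 */
#include <stdio.h>
#include <stdlib.h>

/* Approximation of the INO_STATE_* values from fs/jffs2/nodelist.h */
enum ino_state {
	INO_STATE_UNCHECKED = 0,	/* CRC checks not yet done */
	INO_STATE_CHECKING,		/* CRC checks in progress */
	INO_STATE_PRESENT,		/* in core */
	INO_STATE_CHECKEDABSENT,	/* checked, cleared again */
	INO_STATE_GC,			/* GCing a 'pristine' node */
	INO_STATE_READING,		/* in read_inode() */
	INO_STATE_CLEARING,		/* in clear_inode() */
};

/* Minimal stand-in for struct jffs2_inode_cache */
struct inode_cache {
	unsigned int ino;
	enum ino_state state;
};

/* Userspace stand-in for the kernel's BUG() so the sketch is runnable */
#define BUG() do { \
		fprintf(stderr, "BUG at %s:%d\n", __FILE__, __LINE__); \
		abort(); \
	} while (0)

static void gc_check_state(const struct inode_cache *ic)
{
	switch (ic->state) {
	case INO_STATE_PRESENT:
	case INO_STATE_CHECKEDABSENT:
		/* Normal cases: GC can proceed (details omitted here). */
		break;

	case INO_STATE_UNCHECKED:
	case INO_STATE_CHECKING:
	case INO_STATE_GC:
		/*
		 * Should never happen: checking is supposed to be finished
		 * before GC runs.  This is the message seen right before the
		 * crash in the report (inode #106, state 0), followed by BUG().
		 */
		printf("Inode #%u already in state %d in jffs2_garbage_collect_pass()!\n",
		       ic->ino, ic->state);
		BUG();

	default:
		/* READING/CLEARING are handled by waiting in the real code. */
		break;
	}
}

int main(void)
{
	struct inode_cache ic = { .ino = 106, .state = INO_STATE_UNCHECKED };

	gc_check_state(&ic);	/* reproduces the fatal path from the log */
	return 0;
}

In other words, the crash seems to be the GC thread refusing to continue once it finds an inode the CRC-check pass never completed for, which would fit your point about the corruption not being handled gracefully.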


Cheers,

Tao

