[PATCH] Add KDB Modules support
linas at austin.ibm.com
Fri Sep 26 06:01:21 EST 2003
Hi,
Could those of you who maintain KDB-enabled ppc64 trees add the
following to your configs?
--- arch/ppc64/config.in.orig 2003-09-23 14:51:00.000000000 -0500
+++ arch/ppc64/config.in 2003-09-23 14:56:58.000000000 -0500
@@ -258,6 +258,8 @@ bool 'Include xmon kernel debugger' CONF
 bool 'Include kdb kernel debugger' CONFIG_KDB
 bool 'Debug memory allocations' CONFIG_DEBUG_SLAB
 if [ "$CONFIG_KDB" = "y" ]; then
+
+   dep_tristate ' KDB additional modules' CONFIG_KDB_MODULES $CONFIG_KDB
    bool ' KDB off by default' CONFIG_KDB_OFF
    define_bool CONFIG_KALLSYMS y
    define_bool CONFIG_XMON n
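With dep_tristate the new option is a tristate (y, m, or n) and only
shows up when CONFIG_KDB is enabled; setting it to m should build the
extra commands as loadable modules.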
If one turns on CONFIG_KDB_MODULES, one gets a number of additional
KDB commands that print various kernel structs in a human-readable way.
Given that some of these structs keep their interesting fields at large
offsets (e.g. some interesting scsi values sit 1920 bytes in), these
pretty-printers are far, far better than reading hex dumps and counting
out 1920 bytes by hand. I wish I'd known about this some 2-3 weeks ago.
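To make the offset arithmetic concrete, here's a tiny userspace sketch;
the struct and field names are made up for illustration (they're not the
real scsi definitions), but it shows the counting that a struct-aware
printer spares you:

#include <stdio.h>
#include <stddef.h>

/* Stand-in for a struct whose interesting member sits 1920 bytes in,
 * the way some scsi struct members do.  A real kernel struct gets
 * there via many accumulated members, not one pad array. */
struct scsi_like {
	char		pad[1920];	/* everything before the field */
	unsigned long	interesting;	/* the value you actually want */
};

int main(void)
{
	/* In a raw hex dump you'd count these 1920 (0x780) bytes by
	 * hand; the kdbm pretty-printers do this arithmetic for you. */
	printf("interesting lives at offset %zu (0x%zx)\n",
	       offsetof(struct scsi_like, interesting),
	       offsetof(struct scsi_like, interesting));
	return 0;
}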
Here's what the additional stuff looks like:
[0]kdb> h
[... stuff deleted ... ]
vm <vaddr>                Display vm_area_struct
dentry <dentry>           Display interesting dentry stuff
filp <filp>               Display interesting filp stuff
sh <vaddr>                Show scsi_host
sd <vaddr>                Show scsi_device
sc <vaddr>                Show scsi_cmnd
kiobuf <vaddr>            Display kiobuf
page <vaddr>              Display page
inode <vaddr>             Display inode
bh <buffer head address>  Display buffer
inode_pages <inode *>     Display pages in an inode
req <vaddr>               dump request struct
rqueue <vaddr>            dump request queue
memmap                    page table summary
[0]kdb> vm 0xc000000004e30000
struct vm_area_struct at 0xc000000004e30000 for 136 bytes
vm_start = 0x0 vm_end = 0x0
page_prot = 0xc000000000404bf8
Flags:
[0]kdb> page 0xc000000004e30000
struct page at 0xc000000004e30000
next 0x0000000000000000 prev 0x0000000000000000 addr space 0x0000000000000000 count 0 flags
virtual 0x4ba2e8baf8d0b000
buffers 0xc00000000064a378
[0]kdb> sh 0xc000000004e30000
Scsi_Host at 0xc000000004e30000
next = 0x0000000000000000 host_queue = 0x0000000000000000
ehandler = 0x0000000000000000 eh_wait = 0x0000000000000000 en_notify = 0xc00000
eh_active = 0x0 host_wait = 0xc0000000003b95d0 hostt = 0xc000000000649aa8 host_0
host_failed = 0 extra_bytes = 0 host_no = 1 resetting = 0
max id/lun/channel = [1/0/-1073741824] this_id = 0
can_queue = 0 cmd_per_lun = -16384 sg_tablesize = 0 u_isa_dma = 1
host_blocked = 1 reverse_ordering = 0
[0]kdb> bh 0xc000000004e30000
buffer_head at 0xc000000004e30000
next 0x0000000000000000 bno 0 rsec 4294967296 size 0 dev 0x0 rdev 0x0
count 0 state 0xc000000000404bf8 [Req Mapped New Async Wait_IO Launder JBD Pr0
b_next_free 0x0000000000000000 b_prev_free 0xffffffff00000000 b_reqnext 0xc008
b_page 0x0000000000000000 b_this_page 0x0000008b0000008b b_private 0x000000000
[0]kdb> memmap
Total pages: 524288
Slab pages: 2908
Dirty pages: 441
Locked pages: 0
Buffer pages: 9035
0 page count: 503233
1 page count: 12019
2 page count: 7428
3 page count: 743
4 page count: 292
5 page count: 20
6 page count: 218
7 page count: 3
high page count: 332
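For scale: assuming the 4 KB base page size, Total pages works out to
524288 * 4096 bytes = 2 GB of RAM on this box, and the "N page count"
lines appear to bucket pages by their current use count.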
Warning: on my machine, asking for req and rqueue hung the box hard:
req <vaddr> dump request struct
rqueue <vaddr> dump request queue
I suspect this is because the ppc64 KDB is downlevel, and that the
problem is fixed in newer KDBs.
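For the curious, each of these commands is just a function registered
from a kdb module. Below is a minimal sketch of that wiring, patterned
on the 2.4-era KDB interface (kdb_register/kdb_printf); the exact
kdb_func_t argument list varies between KDB versions, so treat the
prototype as an assumption rather than the real kdbm_vm.c. Note also
that the real modules validate the address instead of dereferencing a
raw debugger-supplied pointer the way this sketch does:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/mm.h>
#include <linux/kdb.h>

/* "vmdemo <vaddr>" -- print a couple of vm_area_struct fields. */
static int
kdbm_vmdemo(int argc, const char **argv, const char **envp,
	    struct pt_regs *regs)
{
	struct vm_area_struct *vp;
	unsigned long addr;

	if (argc != 1)
		return KDB_ARGCOUNT;

	addr = simple_strtoul(argv[1], NULL, 0);
	/* Real kdbm modules copy the struct out via kdb_getarea()
	 * rather than dereferencing the raw address like this. */
	vp = (struct vm_area_struct *) addr;

	kdb_printf("struct vm_area_struct at 0x%lx for %d bytes\n",
		   addr, (int) sizeof(struct vm_area_struct));
	kdb_printf("vm_start = 0x%lx vm_end = 0x%lx\n",
		   vp->vm_start, vp->vm_end);
	return 0;
}

static int __init kdbm_demo_init(void)
{
	return kdb_register("vmdemo", kdbm_vmdemo, "<vaddr>",
			    "Display vm_area_struct (demo)", 0);
}

static void __exit kdbm_demo_exit(void)
{
	kdb_unregister("vmdemo");
}

module_init(kdbm_demo_init);
module_exit(kdbm_demo_exit);
MODULE_LICENSE("GPL");

After insmod'ing something like this, "vmdemo <vaddr>" would show up in
the kdb "h" listing just like the commands above.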
--linas
** Sent via the linuxppc64-dev mail list. See http://lists.linuxppc.org/