Quick Summary
-------------

Install ksymoops from
ftp://ftp.kernel.org/pub/linux/utils/kernel/ksymoops
Read the ksymoops man page.
ksymoops < the_oops.txt

and send the output to the maintainer of the kernel area that seems to be
involved with the problem, not to the ksymoops maintainer.  Don't worry
too much about getting the wrong person.  If you are unsure, send it to
the person responsible for the code relevant to what you were doing.
If it occurs repeatably, try to describe how to recreate it.  That's
worth even more than the oops itself.

If you are totally stumped as to whom to send the report, send it to
linux-kernel@vger.kernel.org.  Thanks for your help in making Linux as
stable as humanly possible.

Where is the_oops.txt?
----------------------

Normally the Oops text is read from the kernel buffers by klogd and
handed to syslogd, which writes it to a syslog file, typically
/var/log/messages (depends on /etc/syslog.conf).  Sometimes klogd dies,
in which case you can run dmesg > file to read the data from the kernel
buffers and save it.  Or you can cat /proc/kmsg > file; however, you
have to break in to stop the transfer, since kmsg is a "never ending
file".  If the machine has crashed so badly that you cannot enter
commands or the disk is not available then you have three options :-

(1) Hand copy the text from the screen and type it in after the machine
    has restarted.  Messy, but it is the only option if you have not
    planned for a crash.

(2) Boot with a serial console (see Documentation/serial-console.txt),
    run a null modem to a second machine and capture the output there
    using your favourite communication program.  Minicom works well.

(3) Patch the kernel with one of the crash dump patches.  These save
    data to a floppy disk or video rom or a swap partition.  None of
    these are standard kernel patches so you have to find and apply
    them yourself.  Search kernel archives for kmsgdump, lkcd and
    oops+smram.

No matter how you capture the log output, feed the resulting file to
ksymoops along with the /proc/ksyms and /proc/modules that applied at
the time of the crash.  /var/log/ksymoops can be useful to capture the
latter; man ksymoops for details.
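A minimal sketch of saving those snapshots while the machine is still healthy (the destination directory is an arbitrary choice, and /proc/ksyms exists only on 2.4-era kernels, so the copy is guarded):

```shell
# Sketch: save the /proc state that ksymoops wants, so a later crash can
# be decoded.  /proc/ksyms is 2.4-specific; the directory is arbitrary.
snapdir=${SNAPDIR:-/tmp/oops-snapshot}
mkdir -p "$snapdir"
for f in ksyms modules; do
    if [ -r "/proc/$f" ]; then
        cp "/proc/$f" "$snapdir/$f"
    fi
done
echo "snapshot in $snapdir"
```

Later the saved copies can be handed to ksymoops together with the oops
text, e.g. ksymoops -k $snapdir/ksyms -l $snapdir/modules < the_oops.txt
(check the flag spellings against your ksymoops man page).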


Full Information
----------------

From: Linus Torvalds

How to track down an Oops.. [originally a mail to linux-kernel]

The main trick is having 5 years of experience with those pesky oops
messages ;-)

Actually, there are things you can do that make this easier. I have two
separate approaches:

        gdb /usr/src/linux/vmlinux
        gdb> disassemble <offending_function>

That's the easy way to find the problem, at least if the bug-report is
well made (like this one was - run through ksymoops to get the
information of which function and the offset in the function that it
happened in).

Oh, it helps if the report happens on a kernel that is compiled with the
same compiler and similar setups.

The other thing to do is disassemble the "Code:" part of the bug report:
ksymoops will do this too with the correct tools, but if you don't have
the tools you can just do a silly program:

        char str[] = "\xXX\xXX\xXX...";
        main(){}

and compile it with gcc -g and then do "disassemble str" (where the "XX"
stuff are the values reported by the Oops - you can just cut-and-paste
and do a replace of spaces to "\x" - that's what I do, as I'm too lazy
to write a program to automate this all).
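The cut-and-paste step above can be automated with a one-line sed
substitution.  The hex bytes below are the start of the "Code:" line from
the example report later in this file, silly.c is an arbitrary file name,
and int main(void) stands in for the original's old-style main(){}:

```shell
# Turn the space-separated hex bytes of an Oops "Code:" line into the
# "silly program" by replacing each space with "\x".
code='c7 00 05 00 00 00 eb 08 90'
bytes="\\x$(printf '%s' "$code" | sed 's/ /\\x/g')"
printf 'char str[] = "%s";\nint main(void){return 0;}\n' "$bytes" > silly.c
cat silly.c
# Then: gcc -g silly.c -o silly; gdb silly; (gdb) disassemble str
```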

Finally, if you want to see where the code comes from, you can do

        cd /usr/src/linux
        make fs/buffer.s        # or whatever file the bug happened in

and then you get a better idea of what happens than with the gdb
disassembly.

Now, the trick is just then to combine all the data you have: the C
sources (and general knowledge of what it _should_ do), the assembly
listing and the code disassembly (and additionally the register dump you
also get from the "oops" message - that can be useful to see _what_ the
corrupted pointers were, and when you have the assembler listing you can
also match the other registers to whatever C expressions they were used
for).

Essentially, you just look at what doesn't match (in this case it was the
"Code" disassembly that didn't match with what the compiler generated).
Then you need to find out _why_ they don't match. Often it's simple - you
see that the code uses a NULL pointer and then you look at the code and
wonder how the NULL pointer got there, and if it's a valid thing to do
you just check against it..

Now, if somebody gets the idea that this is time-consuming and requires
some small amount of concentration, you're right. Which is why I will
mostly just ignore any panic reports that don't have the symbol table
info etc looked up: it simply gets too hard to look it up (I have some
programs to search for specific patterns in the kernel code segment, and
sometimes I have been able to look up those kinds of panics too, but
that really requires pretty good knowledge of the kernel just to be able
to pick out the right sequences etc..)

_Sometimes_ it happens that I just see the disassembled code sequence
from the panic, and I know immediately where it's coming from. That's when
I get worried that I've been doing this for too long ;-)

                Linus


---------------------------------------------------------------------------
Notes on Oops tracing with klogd:

In order to help Linus and the other kernel developers there has been
substantial support incorporated into klogd for processing protection
faults.  In order to have full support for address resolution at least
version 1.3-pl3 of the sysklogd package should be used.

When a protection fault occurs the klogd daemon automatically
translates important addresses in the kernel log messages to their
symbolic equivalents.  This translated kernel message is then
forwarded through whatever reporting mechanism klogd is using.  The
protection fault message can be simply cut out of the message files
and forwarded to the kernel developers.

Two types of address resolution are performed by klogd.  The first is
static translation and the second is dynamic translation.  Static
translation uses the System.map file in much the same manner that
ksymoops does.  In order to do static translation the klogd daemon
must be able to find a system map file at daemon initialization time.
See the klogd man page for information on how klogd searches for map
files.

Dynamic address translation is important when kernel loadable modules
are being used.  Since memory for kernel modules is allocated from the
kernel's dynamic memory pools there are no fixed locations for either
the start of the module or for functions and symbols in the module.

The kernel supports system calls which allow a program to determine
which modules are loaded and their location in memory.  Using these
system calls the klogd daemon builds a symbol table which can be used
to debug a protection fault which occurs in a loadable kernel module.

At the very minimum klogd will provide the name of the module which
generated the protection fault.  There may be additional symbolic
information available if the developer of the loadable module chose to
export symbol information from the module.

Since the kernel module environment can be dynamic there must be a
mechanism for notifying the klogd daemon when a change in module
environment occurs.  There are command line options available which
allow klogd to signal the currently executing daemon that symbol
information should be refreshed.  See the klogd manual page for more
information.
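As a sketch of that refresh step (the -i/-I flags are taken from
sysklogd's klogd(8) man page; verify them against your installed
version):

```shell
# Sketch: signal an already-running klogd to reload symbol information
# after a module is loaded or unloaded.  Per sysklogd's klogd(8):
#   -i  reload module symbol information
#   -I  reload both System.map and module symbol information
# Guarded so this is a no-op on systems without klogd.
if command -v klogd >/dev/null 2>&1; then
    klogd -i || true    # "|| true": harmless if no daemon is running
else
    echo "klogd not installed; nothing to refresh"
fi
```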

A patch is included with the sysklogd distribution which modifies the
modules-2.0.0 package to automatically signal klogd whenever a module
is loaded or unloaded.  Applying this patch provides essentially
seamless support for debugging protection faults which occur with
kernel loadable modules.

The following is an example of a protection fault in a loadable module
processed by klogd:
---------------------------------------------------------------------------
Aug 29 09:51:01 blizard kernel: Unable to handle kernel paging request at virtual address f15e97cc
Aug 29 09:51:01 blizard kernel: current->tss.cr3 = 0062d000, %cr3 = 0062d000
Aug 29 09:51:01 blizard kernel: *pde = 00000000
Aug 29 09:51:01 blizard kernel: Oops: 0002
Aug 29 09:51:01 blizard kernel: CPU:    0
Aug 29 09:51:01 blizard kernel: EIP:    0010:[oops:_oops+16/3868]
Aug 29 09:51:01 blizard kernel: EFLAGS: 00010212
Aug 29 09:51:01 blizard kernel: eax: 315e97cc   ebx: 003a6f80   ecx: 001be77b   edx: 00237c0c
Aug 29 09:51:01 blizard kernel: esi: 00000000   edi: bffffdb3   ebp: 00589f90   esp: 00589f8c
Aug 29 09:51:01 blizard kernel: ds: 0018   es: 0018   fs: 002b   gs: 002b   ss: 0018
Aug 29 09:51:01 blizard kernel: Process oops_test (pid: 3374, process nr: 21, stackpage=00589000)
Aug 29 09:51:01 blizard kernel: Stack: 315e97cc 00589f98 0100b0b4 bffffed4 0012e38e 00240c64 003a6f80 00000001
Aug 29 09:51:01 blizard kernel:        00000000 00237810 bfffff00 0010a7fa 00000003 00000001 00000000 bfffff00
Aug 29 09:51:01 blizard kernel:        bffffdb3 bffffed4 ffffffda 0000002b 0007002b 0000002b 0000002b 00000036
Aug 29 09:51:01 blizard kernel: Call Trace: [oops:_oops_ioctl+48/80] [_sys_ioctl+254/272] [_system_call+82/128]
Aug 29 09:51:01 blizard kernel: Code: c7 00 05 00 00 00 eb 08 90 90 90 90 90 90 90 90 89 ec 5d c3
---------------------------------------------------------------------------

Dr. G.W. Wettstein           Oncology Research Div. Computing Facility
Roger Maris Cancer Center    INTERNET: greg@wind.rmcc.com
820 4th St. N.
Fargo, ND  58122
Phone: 701-234-7556


---------------------------------------------------------------------------
Tainted kernels:

Some oops reports contain the string 'Tainted: ' after the program
counter; this indicates that the kernel has been tainted by some
mechanism.  The string is followed by a series of position-sensitive
characters, each representing a particular tainted value.

  1: 'G' if all modules loaded have a GPL or compatible license, 'P' if
     any proprietary module has been loaded.  Modules without a
     MODULE_LICENSE or with a MODULE_LICENSE that is not recognised by
     insmod as GPL compatible are assumed to be proprietary.

  2: 'F' if any module was force loaded by insmod -f, ' ' if all
     modules were loaded normally.

The primary reason for the 'Tainted: ' string is to tell kernel
debuggers if this is a clean kernel or if anything unusual has
occurred.  Tainting is permanent: even if an offending module is
unloaded, the tainted value remains to indicate that the kernel is not
trustworthy.
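The same taint state can also be read from a running kernel as a bitmask
sysctl.  A sketch, assuming the 2.4-era bit assignments (bit 0 set means
'P', bit 1 set means 'F', zero means a clean 'G' kernel), guarded for
kernels without the file:

```shell
# Sketch: report the running kernel's taint state from
# /proc/sys/kernel/tainted (a bitmask: 0 = clean, bit 0 = proprietary
# module loaded, bit 1 = module force loaded).
if [ -r /proc/sys/kernel/tainted ]; then
    cat /proc/sys/kernel/tainted
else
    echo "tainted flag not available on this kernel"
fi
```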
