Started by: Ingo Molnar <mingo@redhat.com>

Background
----------

What are robust futexes? To answer that, we first need to understand
what futexes are: normal futexes are special types of locks that in the
noncontended case can be acquired/released from userspace without having
to enter the kernel.

A futex is in essence a user-space address, e.g. a 32-bit lock variable
field. If userspace notices contention (the lock is already owned and
someone else wants to grab it too) then the lock is marked with a value
that says "there's a waiter pending", and the sys_futex(FUTEX_WAIT)
syscall is used to wait for the other party to release it. The kernel
creates a 'futex queue' internally, so that it can later on match up the
waiter with the waker - without them having to know about each other.
When the owner thread releases the futex, it notices (via the variable
value) that there were waiter(s) pending, and does the
sys_futex(FUTEX_WAKE) syscall to wake them up. Once all waiters have
taken and released the lock, the futex is again back to the
'uncontended' state, and there's no in-kernel state associated with it.
The kernel completely forgets that there ever was a futex at that
address. This method makes futexes very lightweight and scalable.
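
[ For illustration - this example is not part of the original text: a
  minimal futex-based mutex in C, following the classic scheme from
  Ulrich Drepper's "Futexes Are Tricky" paper (0 = free, 1 = locked,
  2 = locked with a waiter pending). The function names here are made
  up for the example: ]

        #include <linux/futex.h>
        #include <sys/syscall.h>
        #include <unistd.h>

        static long sys_futex(int *uaddr, int op, int val)
        {
                return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
        }

        static void lock(int *futex)
        {
                /* Fast path: uncontended 0 -> 1, no kernel entry. */
                int c = __sync_val_compare_and_swap(futex, 0, 1);
                if (c != 0) {
                        /* Contended: mark "waiter pending" (2), sleep. */
                        if (c != 2)
                                c = __sync_lock_test_and_set(futex, 2);
                        while (c != 0) {
                                sys_futex(futex, FUTEX_WAIT, 2);
                                c = __sync_lock_test_and_set(futex, 2);
                        }
                }
        }

        static void unlock(int *futex)
        {
                /* Only enter the kernel if a waiter may be pending. */
                if (__sync_fetch_and_sub(futex, 1) != 1) {
                        *futex = 0;
                        sys_futex(futex, FUTEX_WAKE, 1);
                }
        }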

"Robustness" is about dealing with crashes while holding a lock: if a
process exits prematurely while holding a pthread_mutex_t lock that is
also shared with some other process (e.g. yum segfaults while holding a
pthread_mutex_t, or yum is kill -9-ed), then waiters for that lock need
to be notified that the last owner of the lock exited in some irregular
way.

To solve such types of problems, "robust mutex" userspace APIs were
created: pthread_mutex_lock() returns an error value if the owner exits
prematurely - and the new owner can decide whether the data protected by
the lock can be recovered safely.

There is a big conceptual problem with futex based mutexes though: it is
the kernel that destroys the owner task (e.g. due to a SEGFAULT), but
the kernel cannot help with the cleanup: if there is no 'futex queue'
(and in most cases there is none, futexes being fast lightweight locks)
then the kernel has no information to clean up after the held lock!
Userspace has no chance to clean up after the lock either - userspace is
the one that crashes, so it has no opportunity to clean up. Catch-22.

In practice, when e.g. yum is kill -9-ed (or segfaults), a system reboot
is needed to release that futex based lock. This is one of the leading
bugreports against yum.

To solve this problem, the traditional approach was to extend the vma
(virtual memory area descriptor) concept to have a notion of 'pending
robust futexes attached to this area'. This approach requires 3 new
syscall variants to sys_futex(): FUTEX_REGISTER, FUTEX_DEREGISTER and
FUTEX_RECOVER. At do_exit() time, all vmas are searched to see whether
they have a robust_head set. This approach has two fundamental problems
left:

 - it has quite complex locking and race scenarios: the vma-based
   patches had been pending for years, but they were still not
   completely reliable.

 - it has to scan _every_ vma at sys_exit() time, per thread!

The second disadvantage is a real killer: pthread_exit() takes around 1
microsecond on Linux, but with thousands (or tens of thousands) of vmas
every pthread_exit() takes a millisecond or more, also totally
destroying the CPU's L1 and L2 caches!

This is very much noticeable even for normal process sys_exit_group()
calls: the kernel has to do the vma scanning unconditionally! (this is
because the kernel has no knowledge about how many robust futexes there
are to be cleaned up, because a robust futex might have been registered
in another task, and the futex variable might have been simply mmap()-ed
into this process's address space).

This huge overhead forced the creation of CONFIG_FUTEX_ROBUST so that
normal kernels can turn it off, but worse than that: the overhead makes
robust futexes impractical for any type of generic Linux distribution.

So something had to be done.

New approach to robust futexes
------------------------------

At the heart of this new approach there is a per-thread private list of
robust locks that userspace is holding (maintained by glibc) - this
userspace list is registered with the kernel via a new syscall [this
registration happens at most once per thread lifetime]. At do_exit()
time, the kernel checks this user-space list: are there any robust futex
locks to be cleaned up?
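
[ For reference - the userspace-visible structures, as they appear in
  the kernel's include/linux/futex.h; each lock's futex word lives at
  a fixed offset from its list entry: ]

        struct robust_list {
                struct robust_list __user *next;
        };

        struct robust_list_head {
                /* Circular list of robust locks this thread holds. */
                struct robust_list list;

                /* Offset from a list entry to its futex word. */
                long futex_offset;

                /* The entry being (un)locked right now - see the
                   'list_op_pending' discussion below. */
                struct robust_list __user *list_op_pending;
        };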

In the common case, at do_exit() time, there is no list registered, so
the cost of robust futexes is just a simple current->robust_list != NULL
comparison. If the thread has registered a list, then normally the list
is empty. If the thread/process crashed or terminated in some incorrect
way then the list might be non-empty: in this case the kernel carefully
walks the list [not trusting it], and marks all locks that are owned by
this thread with the FUTEX_OWNER_DIED bit, and wakes up one waiter (if
any).
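
[ A simplified sketch of that exit-time walk; the real code is
  exit_robust_list() and handle_futex_death() in kernel/futex.c, and
  the fault handling, PI and list_op_pending details are elided. The
  two helpers at the bottom are placeholders, not actual kernel
  functions: ]

        struct robust_list_head __user *head = curr->robust_list;
        struct robust_list __user *entry;
        long futex_offset;
        int limit = 2048;       /* ROBUST_LIST_LIMIT: bounded walk */

        if (get_user(entry, &head->list.next) ||
            get_user(futex_offset, &head->futex_offset))
                return;

        /* The list is userspace data - walk it carefully. */
        while (entry != &head->list && limit-- > 0) {
                u32 __user *uaddr = (void __user *)entry + futex_offset;
                u32 uval;

                /* Only touch locks actually owned by this thread. */
                if (get_user(uval, uaddr) == 0 &&
                    (uval & FUTEX_TID_MASK) == task_pid_vnr(curr)) {
                        set_owner_died_bit(uaddr, uval); /* placeholder */
                        wake_one_waiter(uaddr);          /* placeholder */
                }
                if (get_user(entry, &entry->next))
                        return;
        }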
100
 
101
The list is guaranteed to be private and per-thread at do_exit() time,
102
so it can be accessed by the kernel in a lockless way.
103
 
104
There is one race possible though: since adding to and removing from the
105
list is done after the futex is acquired by glibc, there is a few
106
instructions window for the thread (or process) to die there, leaving
107
the futex hung. To protect against this possibility, userspace (glibc)
108
also maintains a simple per-thread 'list_op_pending' field, to allow the
109
kernel to clean up if the thread dies after acquiring the lock, but just
110
before it could have added itself to the list. Glibc sets this
111
list_op_pending field before it tries to acquire the futex, and clears
112
it after the list-add (or list-remove) has finished.
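
[ Schematically, the lock-side ordering in userspace looks like this
  (a sketch - the names are illustrative, not the actual glibc
  internals): ]

        self->list_op_pending = &mutex->list;   /* 1: announce intent  */
        futex_lock(&mutex->futex);              /* 2: acquire the lock */
        list_add(&mutex->list, &self->head);    /* 3: add to the list  */
        self->list_op_pending = NULL;           /* 4: operation done   */

If the thread dies between steps 2 and 3, the kernel still finds the
just-acquired lock via list_op_pending and can clean it up.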

That's all that is needed - all the rest of robust-futex cleanup is done
in userspace [just like with the previous patches].

Ulrich Drepper has implemented the necessary glibc support for this new
mechanism, which fully enables robust mutexes.

Key differences of this userspace-list based approach, compared to the
vma-based method:

 - it's much, much faster: at thread exit time, there's no need to loop
   over every vma (!), which the VM-based method has to do. Only a very
   simple 'is the list empty' op is done.

 - no VM changes are needed - 'struct address_space' is left alone.

 - no registration of individual locks is needed: robust mutexes don't
   need any extra per-lock syscalls. Robust mutexes thus become a very
   lightweight primitive - so they don't force the application designer
   to make a hard choice between performance and robustness - robust
   mutexes are just as fast.

 - no per-lock kernel allocation happens.

 - no resource limits are needed.

 - no kernel-space recovery call (FUTEX_RECOVER) is needed.

 - the implementation and the locking are "obvious", and there are no
   interactions with the VM.

Performance
-----------

I have benchmarked the time needed for the kernel to process a list of 1
million (!) held locks, using the new method [on a 2GHz CPU]:

 - with FUTEX_WAIT set [contended mutex]: 130 msecs
 - without FUTEX_WAIT set [uncontended mutex]: 30 msecs

I have also measured an approach where glibc does the lock notification
[which it currently does for !pshared robust mutexes], and that took 256
msecs - clearly slower, due to the 1 million FUTEX_WAKE syscalls
userspace had to do.

(1 million held locks are unheard of - we expect at most a handful of
locks to be held at a time. Nevertheless it's nice to know that this
approach scales nicely.)

Implementation details
----------------------

The patch adds two new syscalls: one to register the userspace list, and
one to query the registered list pointer:

 asmlinkage long
 sys_set_robust_list(struct robust_list_head __user *head,
                     size_t len);

 asmlinkage long
 sys_get_robust_list(int pid, struct robust_list_head __user **head_ptr,
                     size_t __user *len_ptr);
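
[ For illustration, registering a list from userspace could look like
  this - a sketch that assumes the syscall numbers are wired up and
  that <linux/futex.h> exports the structures to userspace. Note that
  glibc normally performs this registration itself, once per thread: ]

        #include <linux/futex.h>        /* struct robust_list_head */
        #include <sys/syscall.h>
        #include <unistd.h>

        static struct robust_list_head head;

        static int register_robust_list(void)
        {
                /* An empty circular list: the head points to itself. */
                head.list.next = &head.list;
                head.futex_offset = 0;  /* depends on the lock layout */
                head.list_op_pending = NULL;

                return syscall(SYS_set_robust_list, &head, sizeof(head));
        }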

List registration is very fast: the pointer is simply stored in
current->robust_list. [Note that in the future, if robust futexes become
widespread, we could extend sys_clone() to register a robust-list head
for new threads, without the need of another syscall.]

So there is virtually zero overhead for tasks not using robust futexes,
and even for robust futex users, there is only one extra syscall per
thread lifetime, and the cleanup operation, if it happens, is fast and
straightforward. The kernel doesn't have any internal distinction between
robust and normal futexes.

If a futex is found to be held at exit time, the kernel sets the
following bit of the futex word:

        #define FUTEX_OWNER_DIED        0x40000000

and wakes up the next futex waiter (if any). User-space does the rest of
the cleanup.

Otherwise, robust futexes are acquired by glibc by putting the TID into
the futex field atomically. Waiters set the FUTEX_WAITERS bit:

        #define FUTEX_WAITERS           0x80000000

and the remaining bits are for the TID.
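
[ Putting the pieces together, a much-simplified sketch of acquiring
  such a lock; glibc's real code also maintains the robust list and
  handles more cases. sys_futex() is the wrapper from the earlier
  example, and FUTEX_TID_MASK is 0x3fffffff: ]

        #include <errno.h>

        static int robust_lock(int *futex, pid_t tid)
        {
                for (;;) {
                        /* Fast path: store our TID into a free lock. */
                        int old = __sync_val_compare_and_swap(futex, 0, tid);
                        if (old == 0)
                                return 0;

                        if (old & FUTEX_OWNER_DIED) {
                                /* The owner died: take the lock over.
                                   The new owner must recover the
                                   protected state - pthreads reports
                                   this as EOWNERDEAD. */
                                if (__sync_val_compare_and_swap(futex, old,
                                        tid | (old & FUTEX_WAITERS)) == old)
                                        return EOWNERDEAD;
                                continue;
                        }

                        /* Contended: set FUTEX_WAITERS, then sleep. */
                        if ((old & FUTEX_WAITERS) ||
                            __sync_val_compare_and_swap(futex, old,
                                old | FUTEX_WAITERS) == old)
                                sys_futex(futex, FUTEX_WAIT,
                                          old | FUTEX_WAITERS);
                }
        }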

Testing, architecture support
-----------------------------

I've tested the new syscalls on x86 and x86_64, and have made sure the
parsing of the userspace list is robust [ ;-) ] even if the list is
deliberately corrupted.

i386 and x86_64 syscalls are wired up at the moment, and Ulrich has
tested the new glibc code (on x86_64 and i386), and it works for his
robust-mutex testcases.

All other architectures should build just fine too - but they won't have
the new syscalls yet.

Architectures need to implement the new futex_atomic_cmpxchg_inatomic()
inline function before wiring up the syscalls (that function returns
-ENOSYS right now).
