Deadline IO scheduler tunables
==============================

This little file attempts to document how the deadline io scheduler works.
In particular, it will clarify the meaning of the exposed tunables that may be
of interest to power users.

Selecting IO schedulers
-----------------------
Refer to Documentation/block/switching-sched.txt for information on
selecting an io scheduler on a per-device basis.
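As a concrete illustration (a sketch, not taken from switching-sched.txt): on a running system the active scheduler for a device is exposed through sysfs, and writing a scheduler name to that file switches it at runtime. The device name "sda" and the helper names below are hypothetical.

```python
# Sketch of per-device scheduler selection via sysfs. Writing requires root
# and a real block device; the helper names here are illustrative only.

def scheduler_path(device):
    """Return the sysfs file that exposes a device's active IO scheduler."""
    return "/sys/block/%s/queue/scheduler" % device

def select_deadline(device):
    # Writing a scheduler name to this file switches the device at runtime.
    with open(scheduler_path(device), "w") as f:
        f.write("deadline")

print(scheduler_path("sda"))  # /sys/block/sda/queue/scheduler
```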

********************************************************************************

read_expire (in ms)
-----------

The goal of the deadline io scheduler is to attempt to guarantee a start
service time for a request. Since the scheduler focuses mainly on read
latencies, this deadline is tunable for reads. When a read request first
enters the io scheduler, it is assigned a deadline that is the current
time + the read_expire value in units of milliseconds.
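The deadline assignment can be sketched as follows. The Request type, the helper names, and the millisecond clock are illustrative, not kernel code; the 500 ms and 5000 ms values are only the commonly cited defaults for read_expire and write_expire, not something this document specifies.

```python
import collections

# Illustrative sketch: each request entering the scheduler is stamped with
# deadline = now + expire, where expire is read_expire for reads and
# write_expire for writes, both in milliseconds. All names are hypothetical.

READ_EXPIRE_MS = 500     # assumed default read_expire
WRITE_EXPIRE_MS = 5000   # assumed default write_expire

Request = collections.namedtuple("Request", ["sector", "is_read", "deadline"])

def enqueue(now_ms, sector, is_read):
    # A read gets the (shorter) read deadline, a write the write deadline.
    expire = READ_EXPIRE_MS if is_read else WRITE_EXPIRE_MS
    return Request(sector, is_read, now_ms + expire)

rq = enqueue(1000, sector=4096, is_read=True)
print(rq.deadline)  # 1500: current time + read_expire
```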

write_expire (in ms)
------------

Similar to read_expire mentioned above, but for writes.

fifo_batch
----------

When a read request's deadline expires, we must move some requests from
the sorted io scheduler list to the block device dispatch queue. fifo_batch
controls how many requests we move.
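A minimal sketch of this batching behaviour, with plain Python lists standing in for the sorted scheduler list and the dispatch queue. The function name and the batch size of 16 are illustrative assumptions, not kernel code.

```python
# Illustrative sketch: when a deadline has expired, up to fifo_batch
# requests are moved from the scheduler's list to the dispatch queue
# in one go, amortizing the cost of switching to FIFO service.

FIFO_BATCH = 16  # assumed batch size

def dispatch_batch(sorted_list, dispatch_queue, fifo_batch=FIFO_BATCH):
    """Move at most fifo_batch requests onto the dispatch queue."""
    moved = 0
    while sorted_list and moved < fifo_batch:
        dispatch_queue.append(sorted_list.pop(0))
        moved += 1
    return moved

pending = list(range(20))   # 20 queued requests, oldest first
dispatched = []
print(dispatch_batch(pending, dispatched))  # 16
print(len(pending))                         # 4 left behind
```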

writes_starved (number of dispatches)
--------------

When we have to move requests from the io scheduler queue to the block
device dispatch queue, we always give preference to reads. However, we
don't want to starve writes indefinitely either. So writes_starved controls
how many times we give preference to reads over writes. When that has been
done writes_starved number of times, we dispatch some writes based on the
same criteria as reads.
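The read-preference counter can be sketched like this. The class and method names are hypothetical, and writes_starved=2 is just an example value: reads win until they have been preferred over waiting writes that many times in a row, then writes get a turn.

```python
# Illustrative sketch (not kernel code) of the writes_starved counter.

WRITES_STARVED = 2  # example value

class Dispatcher:
    def __init__(self, writes_starved=WRITES_STARVED):
        self.writes_starved = writes_starved
        self.starved = 0  # times reads were preferred while writes waited

    def pick_direction(self, reads_waiting, writes_waiting):
        # Prefer reads until waiting writes have lost out writes_starved times.
        if reads_waiting and (not writes_waiting
                              or self.starved < self.writes_starved):
            if writes_waiting:
                self.starved += 1  # a waiting write just lost out to a read
            return "read"
        self.starved = 0  # writes get a turn; reset the starvation counter
        return "write" if writes_waiting else None

d = Dispatcher()
print([d.pick_direction(True, True) for _ in range(4)])
# ['read', 'read', 'write', 'read']
```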

front_merges (bool)
------------

Sometimes it happens that a request enters the io scheduler that is contiguous
with a request that is already on the queue. Either it fits at the back of that
request, or it fits at the front. That is called either a back merge candidate
or a front merge candidate. Due to the way files are typically laid out,
back merges are much more common than front merges. For some workloads, you
may even know that it is a waste of time to spend any time attempting to
front merge requests. Setting front_merges to 0 disables this functionality.
Front merges may still occur due to the cached last_merge hint, but since
that comes at basically zero cost we leave it on. We simply disable the
rbtree front sector lookup when the io scheduler merge function is called.
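Back and front merge candidacy can be sketched in terms of request start sectors and lengths; all names here are illustrative, not kernel code. A new request is a back merge candidate when it begins exactly where an existing request ends, and a front merge candidate when it ends exactly where an existing request begins. With front_merges off, only the back-merge check runs.

```python
# Illustrative sketch of merge-candidate classification.

def merge_candidate(existing_start, existing_len, new_start, new_len,
                    front_merges=True):
    """Classify a new request against one already queued request."""
    if new_start == existing_start + existing_len:
        return "back"    # new request continues the existing one
    if front_merges and new_start + new_len == existing_start:
        return "front"   # new request immediately precedes the existing one
    return None          # not contiguous, no merge

print(merge_candidate(100, 8, 108, 8))                     # back
print(merge_candidate(100, 8, 92, 8))                      # front
print(merge_candidate(100, 8, 92, 8, front_merges=False))  # None
```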

Nov 11 2002, Jens Axboe