%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% $Id$
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\chapter{Locking}

\section{Purpose}

Traditional Unix kernels were not designed to run on multiprocessor
systems. The Unix philosophy was one of simplicity --- the Multics
project, though influential, demonstrated that complexity makes systems
very difficult to build and program for. This simplicity also meant that
the Unix kernel was not pre-emptible: any process running in kernel mode
would continue to run in kernel mode until it finished or explicitly
initiated a context switch.

However, as Unix began to be used in more situations, its simple kernel
needed modifications. Using multiple processors in a system was a
cost-effective way of getting twice as much done for scientific
computing, and it helped other programs significantly.

Initial multiprocessor systems allowed only one processor to run kernel
code; it was relatively easy to port existing kernels to SMP machines
this way --- only the system call entry path needed to be modified, as
long as all interrupts were handled on a single processor.

This is the path that the Linux kernel took. The 2.0 release of the
Linux kernel was the first stable release that supported SMP systems;
that support allowed only one processor to be in kernel mode at a time,
a restriction enforced through the Big Kernel Lock. The 2.2 series
introduced finer-grained locking and allowed multiple processors to be
in kernel mode simultaneously, as long as they did not contend for the
same resources.

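As a rough sketch of the coarse-grained approach, the fragment below
shows how a system call might serialize all in-kernel execution behind
the Big Kernel Lock. The \texttt{lock\_kernel()} and
\texttt{unlock\_kernel()} primitives come from the 2.x kernels'
\texttt{<linux/smp\_lock.h>}; the system call and its helper are
hypothetical.

\begin{verbatim}
#include <linux/smp_lock.h>

extern long do_example_work(void); /* hypothetical helper */

/* Hypothetical system call, sketching BKL-style serialization. */
long sys_example(void)
{
        long ret;

        lock_kernel();           /* only one CPU runs kernel code now */
        ret = do_example_work();
        unlock_kernel();         /* other CPUs may enter the kernel */

        return ret;
}
\end{verbatim}
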
Kernel developers face a challenge with the 2.4 and 2.5 development
series. Locking unfortunately adds to the complexity of the kernel
source code, and it adds to the amount of CPU time spent on tasks that
are only incidental to the kernel's real role of performing actions on
behalf of users. A kernel could have locks for every individual object;
this would almost certainly be overkill, and it would lead to extensive
deadlock possibilities. However, it is also wasteful for one CPU to
block on a lock held by another CPU that is operating on a different
data object. There should be a balance between the two concerns, one
that allows the system to scale gracefully to machines with many CPUs
but does not sacrifice speed on systems with only one or two CPUs.
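
To make the trade-off concrete, the sketch below contrasts the two
extremes using the kernel's standard spinlock primitives
(\texttt{SPIN\_LOCK\_UNLOCKED} being the initializer of that era); the
\texttt{struct object} and the update functions are hypothetical. One
lock per object lets CPUs that touch different objects proceed in
parallel; a single lock covering every object is simpler, but it
serializes them all.

\begin{verbatim}
#include <linux/spinlock.h>

/* Hypothetical object with a fine-grained, per-object lock. */
struct object {
        spinlock_t lock;
        int data;
};

/* Fine-grained: CPUs updating different objects never contend. */
static void update_object(struct object *obj, int value)
{
        spin_lock(&obj->lock);
        obj->data = value;
        spin_unlock(&obj->lock);
}

/* Coarse-grained alternative: one lock serializes every update. */
static spinlock_t all_objects_lock = SPIN_LOCK_UNLOCKED;

static void update_object_coarse(struct object *obj, int value)
{
        spin_lock(&all_objects_lock);
        obj->data = value;
        spin_unlock(&all_objects_lock);
}
\end{verbatim}

On a uniprocessor build of that era these spinlock operations compile
away to essentially nothing, which is part of how the kernel avoids
penalizing small systems for scalability they cannot use.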