\chapter{Virtual Memory Management}

\begin{quote}
\emph{The mind and memory are more sharply exercised in comprehending
another man's things than our own.}

\begin{flushright}
\emph{Timber} or \emph{Discoveries} by Ben Jonson
\end{flushright}
\end{quote}


\section{Introduction}

The goal of an operating system is simply, perhaps reductively,
stated: manage the available resources.  In other words, it is the
operating system's job to dictate the policy for obtaining resources
and to provide the mechanisms to use them.  Most resources which the
operating system manages are scarce resources, for instance the CPUs,
the memory and the various peripherals including graphics cards and
hard drives.  Any given process, therefore, needs to compete with the
other processes in the system for some subset of the available
resources at any given time.  As can be imagined, the policy for
accessing these resources and the mechanisms for using them determine
many important characteristics of the system.

A simple single-user system may use a trivial first-come,
first-served policy for allocating resources, a device abstraction
layer and no protection domains.  Although this design may be very
light-weight and the thin access layer conducive to high speed, it
will only work on a system where all programs can be trusted: a
single malicious or buggy program can prevent all others from making
progress simply by refusing to yield the CPU or by allocating
resources and not releasing them in a timely fashion.

The Hurd, like Unix, aims to provide strong protection domains,
thereby preventing processes from accidentally or maliciously harming
the rest of the system.  Unix has shown that this can be done
efficiently.  But going beyond Unix, the Hurd aims to identify pieces
of the system which Unix placed in the kernel but which need not be
there, as they could live in user space and provide additional
flexibility to users.  Through our experience and analysis, we are
convinced that one such area is much of the virtual memory system:
tasks often allocate as much memory as they can without regard for
the rest of the system, because Unix provides them with no mechanism
to do otherwise.  It is not a merely cooperative model that we wish
to embrace, but one which holds the user of a resource responsible
for it: a task that is asked to release some of its memory either
complies or violates the social contract and faces exile.  Not only
will this empower users, but it will also force them to make smarter
decisions.

\subsection{Learning from Unix}

Unix was designed as a multiuser timesharing system with protection
domains, thereby permitting process separation, i.e. allowing
different users to concurrently run processes in the system and gain
access to resources in a controlled fashion such that any one process
cannot hurt or excessively starve any other.  Unix achieved this
through a monolithic kernel design wherein both policy and mechanism
are provided by the kernel.  Due to the limited hardware available at
the time and the state of Multics\footnote{Multics was seen as a
system which would never be realized due to its overly ambitious
feature set.}, Unix imposed a strong policy on how resources could be
used: a program could access files; however, lower-level mechanisms
such as the file system, the virtual file system, network protocol
stacks and device drivers all existed in the kernel proper.  This
approach made sense for the extremely limited hardware that Unix was
targeted at in the 1970s.  As hardware performance increased,
however, a separation between mechanism and policy never took place,
and today Unix-like operating systems are in a very similar state to
those available two decades ago; certainly, the implementations have
been vastly improved and tuned, but the fundamental design remains
the same.

One of the most important of the policy/mechanism couplings in the
kernel is the virtual memory subsystem: every component in the system
needs memory for a variety of reasons and with different priorities.
The system must attempt to meet a given set of allocation criteria.
However, as the kernel does not and cannot know how a task will use
its memory except through page fault statistics, it is bound to make
sub-optimal eviction decisions.  It is in part through years of fine
tuning that Unix is able to perform as well as it does for the
general applications which fit its assumed statistical model.

\subsection{Learning from Mach}

The faults of Unix became clear through the use of Mach.  The
designers of Mach observed that there was too much mechanism in the
kernel and attempted to export the file systems, network stack and
much of the system API into user space servers.  They left a very
powerful VMM in the kernel along with the device drivers and a novel
IPC system.  Our experience shows that the VMM, although very
flexible, is unable to make smart paging decisions: because Unix was
tied to so many subsystems, it had fair knowledge of how a lot of the
memory in the system was being used.  It could therefore make good
guesses about which memory could be evicted, as it would not be
needed in the near future.  Mach, however, did not have this
advantage and relied strictly on page fault statistics and access
pattern detection for its page eviction policy.

Based on this observation, it is imperative that the page eviction
scheme have good knowledge about how pages are being used, as it only
takes a few bad decisions to destroy performance.  Thus, a new design
can either return to the monolithic approach and add even more
knowledge to the kernel to increase performance, or the page eviction
scheme can be removed from the kernel completely and placed in user
space, making all tasks self-paged.

\subsection{Following the Hurd Philosophy}

As the Hurd aims, like Unix, to be a multiuser system for mutually
untrusted users, security is an absolute necessity.  But it is not
the object of the system to limit users excessively: as long as
operations can be done securely, they should be permitted.  It is
based on this philosophy that we have adopted a self-paging design
for the new Hurd VMM: who knows better how a task will use its memory
than the task itself?  This is clear from the problems that database
developers, language designers implementing garbage collectors and
soft real-time application developers, such as multimedia developers,
have encountered with LRU, the basic page eviction algorithm: they
all wrestle with the underlying operating system's page eviction
scheme.  By placing the responsibility for paging on tasks, we think
that tasks will be forced to make smart decisions, as they can only
hurt themselves.

\section{Memory Allocation}

If memory were infinite and the only problem were preventing one
program from accessing the memory of another, memory allocation would
be trivial.  This is not, however, the case: memory is decidedly
finite, and a well designed system will exploit all of it.  As memory
is a system resource, a system-wide memory allocation policy must be
established which maximizes memory usage according to a given set of
criteria.

In a typical Unix-like VMM, allocating memory (e.g. using
\function{sbrk} or \function{mmap}) does not allocate physical memory
but \keyword{virtual memory}.  In order to increase the amount of
memory available to users, the kernel uses a \keyword{backing store},
typically a hard disk, to temporarily free physical memory, thereby
allowing other processes to make progress; the sum of physical memory
and backing store is what is referred to as virtual memory.  The use
of backing store ensures data integrity when physical memory must be
freed and application transparency is required.  A variety of
criteria are used to determine which frames are \keyword{paged out};
most often, some form of a priority-based least recently used (LRU)
algorithm is applied.  Under \keyword{memory pressure}, the system
steals pages from low-priority processes which have not been used
recently or drains pages from an internal cache.
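
As an illustration of this distinction, consider the following
minimal sketch for a typical Unix system (it assumes the common
\function{MAP\_ANONYMOUS} extension): the \function{mmap} call merely
reserves virtual memory, and a physical frame is allocated only when
a page is first touched.

\begin{code}
#include <sys/mman.h>

int
main (void)
{
  /* Reserve 512 MB of virtual memory.  No physical frames are
     allocated yet; the kernel only records the mapping.  */
  size\_t size = 512 * 1024 * 1024;
  char *region = mmap (0, size, PROT\_READ | PROT\_WRITE,
                       MAP\_PRIVATE | MAP\_ANONYMOUS, -1, 0);
  if (region == MAP\_FAILED)
    return 1;

  /* Touching a page faults; only now does the kernel allocate a
     physical frame to back it.  */
  region[0] = 1;

  munmap (region, size);
  return 0;
}
\end{code}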

This design has a major problem: the kernel has to evict pages, but
only the applications know which pages they will really need in the
near term.  The kernel could ask the applications for this
information; however, it is unable to trust them, as they could, for
instance, simply not respond, and the kernel would have to forcefully
evict pages anyway.  As such, the kernel relies on page fault
statistics to make projections about how memory will be used, hence
the LRU eviction scheme.  An additional consequence of this scheme is
that, as applications never know whether mapped memory is in core,
they are unable to make guarantees about deadlines.

These problems are grounded in the way the Unix VMM allocates memory:
it does not allocate physical memory but virtual memory.  This is
illustrated by the following scenario: when a process starts and
begins to use memory, the allocator will happily give it all of the
memory in the system as long as no other process wants it.  When a
second, memory-hungry process starts, however, the kernel has no way
to take back the memory it allocated to the first process.  At this
point, it has two options: it can either return failure to the second
process, or it can steal memory from the first process and send it to
backing store.

One way to solve these problems is to have the VMM allocate physical
memory and make applications completely self-paged.  Thus, the burden
of paging lies with the applications themselves.  When applications
request memory, they no longer request virtual memory but physical
memory.  Once an application has exhausted its available frames, it
is its responsibility to multiplex them.  Thus, virtual memory is
implemented in the application itself.  It is important to note that
a standard manager or managers should be supplied by the operating
system; this is important for implementing something like a POSIX
personality.  This should not, however, be hard coded: certain
applications may greatly benefit from being able to control their own
eviction schemes.  At its most basic level, hints could be provided
to the manager by introducing extensions to basic function calls.
For instance, \function{malloc} could take an extra parameter
indicating the class of data being allocated.  These classes would
provide hints about the expected usage pattern and lifetime of the
data.
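
To make this concrete, such an extension might look like the
following sketch.  The \function{malloc\_with\_class} name and the
allocation classes are purely hypothetical; they are not part of any
defined interface.

\begin{code}
#include <stddef.h>

/* Hypothetical classes hinting at usage pattern and lifetime.  */
enum alloc\_class
  {
    ALLOC\_DEFAULT,     /* No particular pattern.  */
    ALLOC\_TRANSIENT,   /* Short-lived; cheap to evict.  */
    ALLOC\_RESIDENT,    /* Hot data; evict last.  */
    ALLOC\_SEQUENTIAL   /* Read once in order; evict early.  */
  };

/* Like malloc, but passes a hint to the task's memory manager so
   that its eviction scheme can treat the data appropriately.  */
void *malloc\_with\_class (size\_t size, enum alloc\_class class);
\end{code}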

\subsection{Bootstrap}

When the Hurd starts up, all physical memory is eventually
transferred to the physical memory server by the root server.  At
this point, the physical memory server controls all of the physical
pages in the system.

\subsection{Allocation Policy}

The physical memory server maintains a concept of \keyword{guaranteed
pages} and \keyword{extra pages}.  The former are pages that a given
task is guaranteed to be able to map in a very short amount of time.
Given this predicate, the total number of guaranteed pages can never
exceed the total number of frames in the system.  Extra pages are
pages which are given to clients who have reached their guaranteed
page allocation limit.  The physical memory server may request that a
client relinquish a number of extant extra pages at any time.  The
client must then return the pages to the physical memory server
(i.e. free them) in a short amount of time.  Should a task fail to do
this, it risks having all of its memory dropped (i.e. not swapped out
or saved in any way) and reclaimed by the physical memory server.
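
To make the contract concrete, a task might handle such a request
roughly as follows.  This is an illustrative sketch only: the
function and type names are hypothetical and do not correspond to a
defined interface.

\begin{code}
/* Hypothetical handler invoked when the physical memory server
   asks this task to relinquish COUNT extra pages.  */
void
on\_reclaim\_request (int count)
{
  while (count-- > 0)
    {
      /* Pick a victim among the extra pages, e.g. a clean cache
         page that can be regenerated later.  */
      page\_t page = choose\_victim\_extra\_page ();

      /* Save the contents ourselves if we still need them; the
         physical memory server will not swap them out for us.  */
      save\_if\_needed (page);

      /* Return the frame in time, or risk having all of our
         memory dropped and reclaimed.  */
      return\_page\_to\_physical\_memory\_server (page);
    }
}
\end{code}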
208    
209    Readers familiar with VMS will see a striking difference between these
210    two systems.  This is not without reason.  Yet, differences remains:
211    VMS does not have extra pages and the number of pages is fixed at task
212    creation time.  VMS than maintains a dirty list of pages thereby
213    having a very fast backing store and essentially allowing tasks to
214    have more than their quota of memory if there is no memory pressure.
215    One reason that this is copied in this design is that unlike in VMS,
216    the file systems and device drivers are in user space.  Thus, the
217    caching that was being done by VMS can not be done intelligently by
218    the physical memory server.
219    
220    The number of guaranteed pages that a given task has access to is not
221    determined by the physical memory server but by the \keyword{memory
222    policy server}.  This division allows the physical memory server to
223    only concern itself with the mechanisms and means that it must know
224    essentially nothing about how the underlying operating system
225    functions.  (The implication is that although tailored for Hurd
226    specific needs, the physical memory server is completely separate from
227    the Hurd and can be used by other operating systems running on the
228    microkernel.)  Thus, it is the memory policy server's responsibility
229    to determine who gets how much memory.  This may be determined as a
230    function of the user or looking in file on disk for e.g. quotas.  As
231    can be seen this type of data acquisition could add significant
232    complexity to the physical memory server and require blocking states
233    (e.g. waiting for a read operation on file i/o) and could create
234    circular dependencies.
235    
236    The physical memory server and the memory policy server will contain a
237    shared buffer of tupples indexed by task id containing the number of
238    allocated pages, the number of guaranteed page, and a boolean
239    indicating whether or not this task is eligible for guaranteed pages.
240    The guaranteed page field and the extra page predicate may only be
241    written to by the memory policy server.  The number of allocated pages
242    may only be written to by the physical memory server.  This scheme
243    means that no locking in required.  (On some architectures where a
244    read of a given field cannot be performed in a single operation, the
245    read may have to be done twice).
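
A minimal sketch of one record in this buffer follows; the layout and
field names are illustrative, not a specification.

\begin{code}
/* One entry in the shared policy buffer, indexed by task id.  */
struct policy\_entry
{
  /* Written only by the physical memory server.  */
  unsigned int allocated\_pages;

  /* Written only by the memory policy server.  */
  unsigned int guaranteed\_pages;
  unsigned int extra\_pages\_allowed;   /* Boolean predicate.  */
};
\end{code}

Because each field has exactly one writer, the two servers never race
on a write, which is what makes the lock-free scheme sound.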
246    
247    Until the memory policy server makes the intial contact with the
248    physical memory server, memory will be allocated on a first come first
249    serve basis.  The memory policy server shall use the following remote
250    procedure call to contact the physical memory server:

\begin{code}
error\_t physical\_memory\_server\_introduce (void)
\end{code}

\noindent
This function will succeed the first time it is called and fail all
subsequent times.  The physical memory server will record the sender
of this RPC as the memory policy server and begin allocating memory
according to the previously described protocol.
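
The bootstrap handshake in the memory policy server might then look
like the following sketch (the \function{panic} error handling is
purely illustrative):

\begin{code}
error\_t err = physical\_memory\_server\_introduce ();
if (err)
  /* Some other task has already registered itself as the memory
     policy server; we cannot safely proceed.  */
  panic ("unable to register as the memory policy server");
\end{code}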

The shared policy buffer may be obtained from the physical memory
server by the memory policy server by calling:

\begin{code}
error\_t physical\_memory\_server\_get\_policy\_buffer (out l4\_map\_t buffer)
\end{code}

\noindent
The returned buffer is mapped with read and write access into the
memory policy server's address space.  It may need to be resized.  If
this is the case, the physical memory server shall unmap the buffer
from the memory policy server's address space and copy the buffer
internally as required.  The memory policy server will fault on the
memory region upon its next access, at which point it may simply
repeat the call.  This call will succeed when the sender is the
memory policy server; it will fail otherwise.
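
Given these semantics, the memory policy server can treat a fault on
the buffer region as a signal to refetch the mapping.  A sketch of
this retry, assuming a hypothetical fault handler hook, might be:

\begin{code}
/* Hypothetical fault handler for the shared policy buffer region:
   the physical memory server has resized the buffer and unmapped
   the old copy, so simply request the new mapping.  */
void
policy\_buffer\_fault (void)
{
  l4\_map\_t buffer;
  error\_t err = physical\_memory\_server\_get\_policy\_buffer (&buffer);
  if (err)
    panic ("lost the shared policy buffer");
}
\end{code}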

\subsection{Allocation Mechanisms}

Applications are able to allocate memory by  Memory allocation will be


% Traditionally, monolithical kernels, but even kernels like Mach,
% provide a virtual memory management system in the kernel.  All paging
% decisions are made by the kernel itself.  This requires good
% heuristics.  Smart paging decisions are often not possible because the
% kernel lacks the information about how the data is used.
%
% In the Hurd, paging will be done locally in each task.  A physical
% memory server provides a number of guaranteed physical pages to tasks.
% It will also provide a number of excess pages (over-commit).  The task
% might have to return any number of excess pages on short notice.  If
% the task does not comply, all mappings are revoked (essentially
% killing the task).
%
% A problem arises when data has to be exchanged between a client and a
% server, and the server wants to have control over the content of the
% pages (for example, pass it on to other servers, like device drivers).
% The client can not map the pages directly into the servers address
% space, as it is not trusted.  Container objects created in the
% physical memory server and mapped into the client and/or the servers
% address space will provide the necessary security features to allow
% this.  This can be used for DMA and zero-copying in the data exchange
% between device drivers and (untrusted) user tasks.
%
%