\vspace{8pt}
}

\newcommand{\figesc}[3]{
\begin{figure}[h]
\centerline{\includegraphics{#1}}
\caption{#2}
\label{fig: #3}
\end{figure}}

\newcommand{\fig}[2]{\figesc{#1}{#2}{#2}}

% e.g. \function{\_\_alloc\_pages}{__alloc_pages}{mm/page_alloc.c}
\newcommand{\funcsection}{\subsection}
\newcommand{\function}[3]{
\funcsection{Function #1()}
\label{Sec: #2}
\index{#1}
\textit{File: } \url{#3} \\
\textit{Prototype: }}

\chapter{Slab Allocator}
\label{Sec: Slab Allocator}

The majority of memory allocation requests in the kernel are for small,
frequently used data structures. The physical page allocator only deals with
allocations in sizes of pages and makes no attempt to use the hardware as
cleanly as possible. The slab allocator exists to serve three purposes. It
provides a pool of small memory buffers packed into pages to reduce internal
fragmentation. These are called the \texttt{sizes caches}. It provides pools
of commonly used objects like mm\_struct's to avoid the overhead of creating
and destroying complex objects. Last, but not least, it tries to use the
hardware cache as efficiently as possible.

The slab allocator used by Linux is the same as the one outlined in
Bonwick's~\cite{slab} paper. Some terminology:

\begin{description}
\idn{cache} A store of recently used objects of the same type. In the slab
allocator, it is the highest logical unit of storage. It has a human
parse-able name such as dentry\_cache.
\end{description}

Slabs are organized into three types: full slabs, partial slabs and empty
ones. Partial slabs are used if available to avoid fragmentation. To see
information on all caches and slabs available in a system, type {\bf cat
/proc/slabinfo}. The fields correspond to:

\vspace{15pt}
\noindent \begin{tabular}{ll}
\end{tabular}
\vspace{15pt}

This refers to the per-CPU object caches. To improve hardware utilization
and to reduce the number of locks needed for an allocation, a small pool of
objects is stored for each CPU. This is described further in Section
\ref{Sec: Per-CPU Object Cache}.

\begin{figure}
\centerline{\includegraphics{graphics/cache_slab_layout.ps}}
\caption{Cache Structure for the Slab Allocator}
\label{Cache Structure for the Slab Allocator}
\end{figure}

\newpage
\section{Caches}
\label{Sec: Caches}

The structure of a cache is contained within a {\bf struct kmem\_cache\_s}
typedefed to {\bf kmem\_cache\_t}. Most of the struct is self-explanatory,

reaping. When the function does find a cache with this flag set, it skips
the cache and removes the flag.
\subsection{Cache Colouring}
\label{Sec: Cache Colouring}

To utilize the hardware cache better, the slab allocator will offset objects
in different slabs by different amounts depending on the amount of space
left over in the slab. The offset is in units of \texttt{BYTES\_PER\_WORD}
unless \texttt{SLAB\_HWCACHE\_ALIGN} is set, in which case it is aligned to
blocks of L1\_CACHE\_BYTES for alignment to the L1 hardware cache.

During cache creation, it is calculated how many objects can fit on a slab
(See Section \ref{Sec: Calculating the Number of Objects on a Slab}) and
how many bytes are wasted. Based on that, two figures are calculated for
the cache descriptor:

\mtablex{lX}{
colour & The number of different offsets that can be used \\
colour\_off & The amount to offset the objects by \\
}

With the objects offset, they will use different lines on the associative
hardware cache. Therefore, objects from different slabs are less likely to
flush each other from the CPU cache.

The result of this is easiest explained with an example. Let us say that
s\_mem (the address of the first object) on the slab is 0 for convenience,
that 100 bytes are wasted on the slab and alignment is to be at 32 bytes to
the L1 Hardware Cache on a Pentium II.

In this scenario, \texttt{colour} is $100/32 = 3$, so the first slab
created will have its objects start at offset 0, the second at 32, the
third at 64 and the fourth will start back at 0. With this, objects from
each of the slabs will not hit the same hardware cache line on the CPU.

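The cycle is easy to check outside the kernel. The following stand-alone C
program is a sketch that mirrors the colour logic of kmem\_cache\_grow()
(shown later) under the assumptions of the example above; it is
illustrative only, not kernel code.

\begin{verbatim}
#include <stdio.h>

int main(void)
{
    /* Assumptions from the example above: 100 wasted
     * bytes, 32 byte L1 cache lines */
    unsigned int left_over   = 100;
    unsigned int colour_off  = 32;
    unsigned int colour      = left_over / colour_off;  /* 3 */
    unsigned int colour_next = 0;
    int slab;

    for (slab = 1; slab <= 5; slab++) {
        /* Mirrors the offset calculation in kmem_cache_grow() */
        unsigned int offset = colour_next * colour_off;

        colour_next++;
        if (colour_next >= colour)
            colour_next = 0;
        printf("slab %d: first object at s_mem + %u\n",
               slab, offset);
    }
    return 0;
}
\end{verbatim}

Running it prints offsets 0, 32 and 64, then wraps back to 0 for the
fourth slab.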

\subsection{Creating a Cache}
\label{Sec: Creating a Cache}

The following tasks are performed by the function \texttt{kmem\_cache\_create()}
in order to create a cache.

\begin{itemize}
\item Perform basic sanity checks for bad usage
\item Perform debugging checks if \texttt{CONFIG\_SLAB\_DEBUG} is set
\item Allocate a kmem\_cache\_t from the \texttt{cache\_cache} slab cache
\item Align the object size to the word size
\item Calculate how many objects will fit on a slab
\item Align the slab size to the hardware cache
\item Calculate colour offsets
\item Initialise the remaining fields in the cache descriptor
\item Add the new cache to the cache chain
\end{itemize}

\begin{figure}
\centerline{\includegraphics{graphics/kmem_cache_create.ps}}
\caption{kmem\_cache\_create}
\label{fig: kmem_cache_create}
\end{figure}

\function{kmem\_cache\_create}{kmem_cache_create}{mm/slab.c}
\begin{verbatim}
kmem_cache_t *
kmem_cache_create(const char *name,
                  size_t size,
                  size_t offset,
                  unsigned long flags,
                  void (*ctor)(void*, kmem_cache_t *, unsigned long),
                  void (*dtor)(void*, kmem_cache_t *, unsigned long))
\end{verbatim}

This function is responsible for creating new caches and adding them to
the cache chain. For clarity, debugging information and sanity checks will
be ignored as they are only important during development and secondary to
the slab allocator itself. The only important check is the check of flags
against CREATE\_MASK, as the caller may request flags that are simply not
available.

The arguments to kmem\_cache\_create() are as follows:

\vspace{10pt} \noindent \begin{tabularx}{15cm}{lX}
const char *name & Human readable name of the cache \\
size\_t size & Size of each object in the cache \\
size\_t offset & Offset between each object (colour) \\
unsigned long flags & Flags to assign to the cache as described above \\
void (*ctor)() & Pointer to constructor function \\
void (*dtor)() & Pointer to destructor \\
\end{tabularx}

\vspace{10pt}

The whole beginning of the function is all debugging checks so we'll start
with the last sanity check:

\begin{verbatim}
    /*
     * Always checks flags, a caller might be
     * expecting debug support which isn't available.
     */
    BUG_ON(flags & ~CREATE_MASK);
\end{verbatim}

CREATE\_MASK is the full set of flags that are allowable. If debugging
flags are used when they are not available, BUG will be called.

\begin{verbatim}
    cachep = (kmem_cache_t *) kmem_cache_alloc(&cache_cache,
                                               SLAB_KERNEL);
    if (!cachep)
        goto opps;
    memset(cachep, 0, sizeof(kmem_cache_t));
\end{verbatim}

Request a kmem\_cache\_t from the cache\_cache. The cache\_cache is
statically initialised to avoid a chicken and egg problem, see Section
\ref{Sec: Slab Allocator Initialization}.

\begin{verbatim}
    /* Check that size is in terms of words.
     * This is needed to avoid unaligned accesses
     * for some archs when redzoning is used, and makes
     * sure any on-slab bufctl's are also correctly aligned.
     */
    if (size & (BYTES_PER_WORD-1)) {
        size += (BYTES_PER_WORD-1);
        size &= ~(BYTES_PER_WORD-1);
        printk("%sForcing size word alignment - %s\n",
               func_nm, name);
    }
\end{verbatim}

The comment says it all really. The next block is debugging code so is
skipped here.

\begin{verbatim}
    align = BYTES_PER_WORD;
    if (flags & SLAB_HWCACHE_ALIGN)
        align = L1_CACHE_BYTES;
\end{verbatim}

This will align the object size to the system word size for quicker
retrieval. If the wasted space is less important than good L1 cache
performance, the alignment will be made L1\_CACHE\_BYTES.

\begin{verbatim}
    if (size >= (PAGE_SIZE>>3))
        /*
         * Size is large, assume best to place
         * the slab management obj off-slab
         * (should allow better packing of objs).
         */
        flags |= CFLGS_OFF_SLAB;
\end{verbatim}

The comment says it all really.

\begin{verbatim}
    if (flags & SLAB_HWCACHE_ALIGN) {
        while (size < align/2)
            align /= 2;
        size = (size+align-1)&(~(align-1));
    }
\end{verbatim}

If the cache is SLAB\_HWCACHE\_ALIGN, it is aligning on the size of
L1\_CACHE\_BYTES, which is quite large at 32 bytes on an Intel. So, align
is halved while two or more objects would still fit in a cache line: if two
would fit, try four, and so on until as many objects as possible are packed
in. Size is then rounded up to the new alignment.

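The two rounding steps are easiest to see with real numbers. The user-space
sketch below reproduces the arithmetic, assuming a BYTES\_PER\_WORD of 4
and an L1\_CACHE\_BYTES of 32, as on a 32-bit x86.

\begin{verbatim}
#include <stdio.h>

#define BYTES_PER_WORD 4    /* assumption: 32-bit x86         */
#define L1_CACHE_BYTES 32   /* assumption: Pentium II L1 line */

int main(void)
{
    unsigned long size  = 6;  /* hypothetical object size */
    unsigned long align = L1_CACHE_BYTES;

    /* Round the object size up to a word boundary */
    if (size & (BYTES_PER_WORD-1)) {
        size += (BYTES_PER_WORD-1);
        size &= ~(BYTES_PER_WORD-1);
    }
    printf("word aligned size: %lu\n", size);      /* 8 */

    /* SLAB_HWCACHE_ALIGN: halve align while two objects fit */
    while (size < align/2)
        align /= 2;
    size = (size+align-1)&(~(align-1));
    printf("align: %lu, padded size: %lu\n", align, size);
    /* align 16, size 16: two objects per 32 byte line */
    return 0;
}
\end{verbatim}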

\begin{verbatim}
    /* Cal size (in pages) of slabs, and the num
     * of objs per slab. This could be made much more
     * intelligent. For now, try to avoid using high
     * page-orders for slabs. When the gfp() funcs
     * are more friendly towards high-order requests,
     * this should be changed.
     */
    do {
        unsigned int break_flag = 0;
cal_wastage:
        kmem_cache_estimate(cachep->gfporder, size, flags,
                            &left_over, &cachep->num);
\end{verbatim}

The comment says it all.

\begin{verbatim}
        if (break_flag)
            break;
        if (cachep->gfporder >= MAX_GFP_ORDER)
            break;
        if (!cachep->num)
            goto next;
        if (flags & CFLGS_OFF_SLAB &&
            cachep->num > offslab_limit) {
            /* Oops, this num of objs will cause problems. */
            cachep->gfporder--;
            break_flag++;
            goto cal_wastage;
        }
\end{verbatim}

The break\_flag is set so that the gfporder is reduced only once when
off-slab slab\_t's are in use. The second check is so the order does not
get higher than what is possible. If num is zero, it means the gfporder is
too low and needs to be increased. The last check is for when the slab\_t
is off-slab: there is a limit to how many objects can be managed off-slab
and, if it is hit, the order is reduced and kmem\_cache\_estimate is called
again.

\begin{verbatim}
        /*
         * The Buddy Allocator will suffer if it has to deal with
         * too many allocators of a large order. So while large
         * numbers of objects is good, large orders are not so
         * slab_break_gfp_order forces a balance
         */
        if (cachep->gfporder >= slab_break_gfp_order)
            break;
\end{verbatim}

The comment says it all.

\begin{verbatim}
        if ((left_over*8) <= (PAGE_SIZE<<cachep->gfporder))
            break; /* Acceptable internal fragmentation. */
\end{verbatim}

This is a rough check for internal fragmentation. If the wastage as a
fraction of the total size of the cache is less than one eighth, it is
acceptable. For example, with 4KiB pages and gfporder 0, up to 512 bytes
of wastage is tolerated.

\begin{verbatim}
next:
        cachep->gfporder++;
    } while (1);
\end{verbatim}

This will increase the order to see if it is worth using another page, to
balance how many objects can be in a slab against slab\_break\_gfp\_order
and internal fragmentation.

\begin{verbatim}
    if (!cachep->num) {
        printk("kmem_cache_create: couldn't create cache %s.\n",
               name);
        kmem_cache_free(&cache_cache, cachep);
        cachep = NULL;
        goto opps;
    }
\end{verbatim}

The objects must be too large to fit into the slab, so clean up and goto
opps, which just returns.

\begin{verbatim}
    slab_size = L1_CACHE_ALIGN(cachep->num *
                    sizeof(kmem_bufctl_t) + sizeof(slab_t));
\end{verbatim}

The size of a slab\_t is the number of objects multiplied by the size of
the kmem\_bufctl\_t for each of them, plus the size of the slab\_t struct
itself, presuming it is kept on-slab.

\begin{verbatim}
    if (flags & CFLGS_OFF_SLAB && left_over >= slab_size) {
        flags &= ~CFLGS_OFF_SLAB;
        left_over -= slab_size;
    }
\end{verbatim}

The calculation for slab\_size included slab\_t even if the slab\_t would
be off-slab. These checks see if it would fit on-slab and, if it would,
place it there.

\begin{verbatim}
    /* Offset must be a multiple of the alignment. */
    offset += (align-1);
    offset &= ~(align-1);
    if (!offset)
        offset = L1_CACHE_BYTES;
    cachep->colour_off = offset;
    cachep->colour = left_over/offset;
\end{verbatim}

offset is the gap placed between objects so that the slab is coloured and
each object falls on different cache lines.

\begin{verbatim}
    /* init remaining fields */
    if (!cachep->gfporder && !(flags & CFLGS_OFF_SLAB))
        flags |= CFLGS_OPTIMIZE;

    cachep->flags = flags;
    cachep->gfpflags = 0;
    if (flags & SLAB_CACHE_DMA)
        cachep->gfpflags |= GFP_DMA;

    spin_lock_init(&cachep->spinlock);
    cachep->objsize = size;
    INIT_LIST_HEAD(&cachep->slabs_full);
    INIT_LIST_HEAD(&cachep->slabs_partial);
    INIT_LIST_HEAD(&cachep->slabs_free);

    if (flags & CFLGS_OFF_SLAB)
        cachep->slabp_cache =
            kmem_find_general_cachep(slab_size,0);
    cachep->ctor = ctor;
    cachep->dtor = dtor;
    /* Copy name over so we don't have
     * problems with unloaded modules */
    strcpy(cachep->name, name);
\end{verbatim}

This just copies the information into the kmem\_cache\_t and initialises
its fields. \texttt{kmem\_find\_general\_cachep} finds the appropriately
sized sizes cache to allocate a slab descriptor from when the slab manager
is kept off-slab.

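For reference, kmem\_find\_general\_cachep() is a short walk along the
table of sizes caches. It looks approximately like the following in
mm/slab.c, where cs\_dmacachep is the DMA-capable twin of each sizes cache.

\begin{verbatim}
kmem_cache_t * kmem_find_general_cachep (size_t size, int gfpflags)
{
    cache_sizes_t *csizep = cache_sizes;

    /* Walk the table until a cache of sufficient
     * size is found */
    for ( ; csizep->cs_size; csizep++) {
        if (size > csizep->cs_size)
            continue;
        break;
    }
    return (gfpflags & GFP_DMA) ? csizep->cs_dmacachep :
                                  csizep->cs_cachep;
}
\end{verbatim}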

\begin{verbatim}
#ifdef CONFIG_SMP
    if (g_cpucache_up)
        enable_cpucache(cachep);
#endif
\end{verbatim}

If SMP is available, enable\_cpucache will create a per-CPU cache of
objects for this cache and set proper values for avail and limit based on
how large each object is. See Section \ref{Sec: Per-CPU Object Cache} for
more details.

\begin{verbatim}
    /*
     * Need the semaphore to access the chain.
     * Cycle through the chain to make sure there
     * isn't a cache of the same name available.
     */
    down(&cache_chain_sem);
    {
        struct list_head *p;

        list_for_each(p, &cache_chain) {
            kmem_cache_t *pc = list_entry(p, kmem_cache_t, next);

            /* The name field is constant - no lock needed. */
            if (!strcmp(pc->name, name))
                BUG();
        }
    }
\end{verbatim}

The comment covers it.

\begin{verbatim}
    /* There is no reason to lock our new cache before we
     * link it in - no one knows about it yet...
     */
    list_add(&cachep->next, &cache_chain);
    up(&cache_chain_sem);
opps:
    return cachep;
}
\end{verbatim}

\subsection{Calculating the Number of Objects on a Slab}
\label{Sec: Calculating the Number of Objects on a Slab}

During cache creation, it is determined how many objects can be stored in
a slab and how much wastage there will be. The following function
calculates how many objects may be stored, taking into account whether the
slab descriptor and bufctl's must be stored on-slab.

\function{kmem\_cache\_estimate}{kmem_cache_estimate}{mm/slab.c}

\begin{verbatim}
static void kmem_cache_estimate (unsigned long gfporder, size_t size,
                 int flags, size_t *left_over, unsigned int *num)
{
\end{verbatim}

\begin{description}
\idn{gfporder} 2$^{gfporder}$ pages are allocated for each slab
\idn{size} The size of each object
\idn{flags} The cache flags. See Section \ref{Sec: Cache Static Flags}
\idn{left\_over} The number of bytes left over in the slab. Returned to
caller
\idn{num} The number of objects that will fit in a slab. Returned to
caller
\end{description}

\begin{verbatim}
    int i;
    size_t wastage = PAGE_SIZE<<gfporder;
    size_t extra = 0;
    size_t base = 0;
\end{verbatim}

\texttt{wastage} is decremented through the function. It starts with the
maximum possible amount of wastage, the full size of the slab.

\begin{verbatim}
    if (!(flags & CFLGS_OFF_SLAB)) {
        base = sizeof(slab_t);
        extra = sizeof(kmem_bufctl_t);
    }
\end{verbatim}

\texttt{base} is where usable memory in the slab starts. If the slab
descriptor is kept on-slab, the base begins at the end of the slab\_t
struct. \texttt{extra} is the number of bytes needed to store a
kmem\_bufctl\_t per object.

\begin{verbatim}
    i = 0;
    while (i*size + L1_CACHE_ALIGN(base+i*extra) <= wastage)
        i++;
\end{verbatim}

This counts up the number of objects the slab can store; \texttt{i}
becomes that count. \texttt{i*size} is the amount of memory needed to
store the objects themselves.

L1\_CACHE\_ALIGN(base+i*extra) is slightly trickier. This is calculating
the amount of memory needed to store the kmem\_bufctl\_t, of which one
exists for every object in the slab. Because this area is at the beginning
of the slab, it is L1 cache aligned so that the first object in the slab
will be aligned to the hardware cache. \texttt{i*extra} is the amount of
space needed to hold a kmem\_bufctl\_t for each object. Because wastage
starts out as the full size of the slab, its use is overloaded here as the
slab size limit.

\begin{verbatim}
    if (i > 0)
        i--;

    if (i > SLAB_LIMIT)
        i = SLAB_LIMIT;
\end{verbatim}

Because the previous loop counts until the slab overflows, the number of
objects that can be stored is \texttt{i-1}.

SLAB\_LIMIT is the absolute largest number of objects a slab can store. It
is defined as 0xffffFFFE, as this is the largest number a
kmem\_bufctl\_t, which is an unsigned int, can hold.

\begin{verbatim}
    *num = i;
    wastage -= i*size;
    wastage -= L1_CACHE_ALIGN(base+i*extra);
    *left_over = wastage;
}
\end{verbatim}

\begin{itemize}
\item \texttt{num} is now the number of objects a slab can hold
\item Take away the space taken up by all the objects from wastage
\item Take away the space taken up by the kmem\_bufctl\_t's
\item Wastage has now been calculated as the left over space in the slab
and is returned to the caller (see the sketch after this list)
\end{itemize}

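The whole calculation can be checked with a small user-space
reimplementation. The constants below are assumptions for a 32-bit x86:
4KiB pages, 32 byte cache lines, a 24 byte slab\_t and a 4 byte
kmem\_bufctl\_t. It sketches the on-slab case only.

\begin{verbatim}
#include <stdio.h>

#define PAGE_SIZE         4096UL   /* assumption: 32-bit x86 */
#define L1_CACHE_BYTES    32UL
#define L1_CACHE_ALIGN(x) (((x) + L1_CACHE_BYTES-1) & \
                           ~(L1_CACHE_BYTES-1))

int main(void)
{
    unsigned long gfporder = 0;  /* one page per slab        */
    unsigned long size  = 256;   /* hypothetical object size */
    unsigned long base  = 24;    /* ~sizeof(slab_t), on-slab */
    unsigned long extra = 4;     /* sizeof(kmem_bufctl_t)    */
    unsigned long wastage = PAGE_SIZE << gfporder;
    unsigned long i = 0;

    /* Count objects until the slab would overflow */
    while (i*size + L1_CACHE_ALIGN(base + i*extra) <= wastage)
        i++;
    if (i > 0)
        i--;

    wastage -= i*size;
    wastage -= L1_CACHE_ALIGN(base + i*extra);

    printf("%lu objects per slab, %lu bytes left over\n",
           i, wastage);   /* 15 objects, 160 bytes over */
    return 0;
}
\end{verbatim}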

\subsection{Growing a Cache}
\label{Sec: Growing a Cache}

At this point, we have seen how a cache is created but, on creation, it is
an empty cache with empty lists for its \texttt{slabs\_full},
\texttt{slabs\_partial} and \texttt{slabs\_free}.

This section will show how a cache is grown when no objects are left in
the \texttt{slabs\_partial} list and there are no slabs in
\texttt{slabs\_free}. The principal function for this is
\id{kmem\_cache\_grow}. The tasks it takes are:

\begin{figure}[ht]
\centerline{\includegraphics{graphics/kmem_cache_grow.ps}}
\caption{kmem\_cache\_grow}
\label{kmem_cache_grow}
\end{figure}

\begin{itemize}
\item Perform basic sanity checks to guard against bad usage
\item Calculate the colour offset for objects in this slab
\item Allocate memory for the slab and acquire a slab descriptor
\item Link the pages used for the slab to the slab and cache descriptors
(See Section \ref{Sec: Slabs})
\item Initialise objects in the slab
\item Add the slab to the cache
\end{itemize}

\function{kmem\_cache\_grow}{kmem_cache_grow}{mm/slab.c}
\begin{verbatim}
int kmem_cache_grow (kmem_cache_t * cachep,
                     int flags)
\end{verbatim}

When there are no partial or free slabs left, the cache has to grow by
allocating a new slab and placing it on the free list. It is quite long
but not too complex.

\begin{verbatim}
    slab_t *slabp;
    struct page *page;
    void *objp;
    size_t offset;
    unsigned int i, local_flags;
    unsigned long ctor_flags;
    unsigned long save_flags;

    /* Be lazy and only check for valid flags here,
     * keeping it out of the critical path in kmem_cache_alloc().
     */
    if (flags & ~(SLAB_DMA|SLAB_LEVEL_MASK|SLAB_NO_GROW))
        BUG();
    if (flags & SLAB_NO_GROW)
        return 0;
\end{verbatim}

Straightforward. Make sure we are not trying to grow a slab that should
not be grown.

\begin{verbatim}
    if (in_interrupt() && (flags & SLAB_LEVEL_MASK)
        != SLAB_ATOMIC)
        BUG();
\end{verbatim}

Make sure that, if we are in an interrupt, the appropriate ATOMIC flags
are set so we do not accidentally sleep.

\begin{verbatim}
    ctor_flags = SLAB_CTOR_CONSTRUCTOR;
    local_flags = (flags & SLAB_LEVEL_MASK);
    if (local_flags == SLAB_ATOMIC)
        /*
         * Not allowed to sleep. Need to tell a
         * constructor about this - it might need
         * to know...
         */
        ctor_flags |= SLAB_CTOR_ATOMIC;
\end{verbatim}

Set the appropriate flags for growing a cache and set ATOMIC if necessary.
SLAB\_LEVEL\_MASK is the collection of GFP masks that determines how the
buddy allocator will behave.

\begin{verbatim}
    /* About to mess with non-constant members - lock. */
    spin_lock_irqsave(&cachep->spinlock, save_flags);
\end{verbatim}

An interrupt safe lock has to be acquired because it is possible for an
interrupt handler to affect the cache descriptor.

\begin{verbatim}
    /* Get colour for the slab, and cal the next value. */
    offset = cachep->colour_next;
    cachep->colour_next++;
    if (cachep->colour_next >= cachep->colour)
        cachep->colour_next = 0;
    offset *= cachep->colour_off;
\end{verbatim}

The colour will affect what cache line each object is assigned to on the
CPU cache (See Section \ref{Sec: Cache Colouring}). This block of code
selects the offset to use for this slab's objects and calculates what the
next offset will be. \texttt{colour} is the number of different offsets
that can be used, hence \texttt{colour\_next} wraps when it reaches
\texttt{colour}.

\begin{verbatim}
    cachep->dflags |= DFLGS_GROWN;

    cachep->growing++;
\end{verbatim}

These two lines will ensure that this cache won't be reaped for some time
(See Section \ref{Sec: Cache Reaping}). As the cache is being grown, it
does not make sense that the slab just allocated here would be deleted by
kswapd in a short space of time.

\begin{verbatim}
    spin_unlock_irqrestore(&cachep->spinlock, save_flags);
\end{verbatim}

Release the lock, restoring the saved interrupt state.

\begin{verbatim}
    /* Get mem for the objs. */
    if (!(objp = kmem_getpages(cachep, flags)))
        goto failed;
\end{verbatim}

kmem\_getpages is just a wrapper around \_\_alloc\_pages(). See Section
\ref{Sec: Interfacing with the Buddy Allocator}.

\begin{verbatim}
    /* Get slab management. */
    if (!(slabp = kmem_cache_slabmgmt(cachep,
                                      objp, offset,
                                      local_flags)))
        goto opps1;
\end{verbatim}

This will allocate a slab\_t struct to manage this slab. How this function
decides whether to place the slab\_t on or off the slab will be discussed
later.

\begin{verbatim}
    i = 1 << cachep->gfporder;
    page = virt_to_page(objp);
    do {
        SET_PAGE_CACHE(page, cachep);
        SET_PAGE_SLAB(page, slabp);
        PageSetSlab(page);
        page++;
    } while (--i);
\end{verbatim}

The struct page is used to keep track of both the cachep and the slabp
(See Section \ref{Sec: Slabs}). SET\_PAGE\_CACHE stores the cachep in the
forward pointer of the page's list element and SET\_PAGE\_SLAB stores the
slabp in the backward pointer. PageSetSlab is a macro which sets the
PG\_slab bit in the page flags. The while loop will do this for each page
that was allocated for this slab.

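The SET and GET macros simply overload the otherwise unused list pointers
of the slab's struct pages. Their definitions in mm/slab.c are
approximately:

\begin{verbatim}
/* Overload the list pointers of the pages backing a slab */
#define SET_PAGE_CACHE(pg,x) \
        ((pg)->list.next = (struct list_head *)(x))
#define GET_PAGE_CACHE(pg)   ((kmem_cache_t *)(pg)->list.next)
#define SET_PAGE_SLAB(pg,x)  \
        ((pg)->list.prev = (struct list_head *)(x))
#define GET_PAGE_SLAB(pg)    ((slab_t *)(pg)->list.prev)
\end{verbatim}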

\begin{verbatim}
    kmem_cache_init_objs(cachep, slabp, ctor_flags);
\end{verbatim}

This function initialises the objects on the new slab and is described in
Section \ref{Sec: Initializing Objects}.

\begin{verbatim}
    spin_lock_irqsave(&cachep->spinlock, save_flags);
    cachep->growing--;
\end{verbatim}

Lock the cache so the slab can be inserted on the list and mark the cache
as no longer growing, so that it will be considered for reaping again
later.

\begin{verbatim}
    /* Make slab active. */
    list_add_tail(&slabp->list, &cachep->slabs_free);
    STATS_INC_GROWN(cachep);
    cachep->failures = 0;
\end{verbatim}

Add the slab to the slabs\_free list and update statistics.

\begin{verbatim}
    spin_unlock_irqrestore(&cachep->spinlock, save_flags);
    return 1;
\end{verbatim}

Unlock and return success.

\begin{verbatim}
opps1:
    kmem_freepages(cachep, objp);
failed:
    spin_lock_irqsave(&cachep->spinlock, save_flags);
    cachep->growing--;
    spin_unlock_irqrestore(&cachep->spinlock, save_flags);
    return 0;
}
\end{verbatim}

opps1 is reached if a slab manager could not be allocated. failed is
reached if pages could not be allocated for the slab at all.

\subsection{Shrinking Caches}
\label{Sec: Shrinking Caches}

Periodically it is necessary to shrink a cache, for instance when kswapd
is woken as zones need to be balanced. Before a cache is shrunk, it is
checked to make sure it is not called from inside an interrupt. The code
behind \emph{kmem\_cache\_shrink()} looks a bit convoluted at first
glance. Its tasks are:

\begin{itemize}
\item Delete all objects in the per CPU caches
\item Delete all slabs from slabs\_free unless the growing flag gets set
\end{itemize}

\begin{figure}
\centerline{\includegraphics{graphics/kmem_cache_shrink.ps}}
\caption{kmem\_cache\_shrink}
\label{fig: kmem_cache_shrink}
\end{figure}

Two varieties of shrink functions are provided.
\texttt{kmem\_cache\_shrink} removes all slabs from slabs\_free and
returns the number of pages freed as a result.
\texttt{\_\_kmem\_cache\_shrink} frees all slabs from slabs\_free and then
verifies that slabs\_partial and slabs\_full are empty. This is important
during cache destruction when it doesn't matter how many pages are freed,
just that the cache is empty.

\function{kmem\_cache\_shrink}{kmem_cache_shrink}{mm/slab.c}
\begin{verbatim}
int kmem_cache_shrink(kmem_cache_t *cachep)
\end{verbatim}

\begin{verbatim}
    int ret;

    if (!cachep || in_interrupt() ||
        !is_chained_kmem_cache(cachep))
        BUG();

    drain_cpu_caches(cachep);
\end{verbatim}

drain\_cpu\_caches (Section \ref{Sec: drain_cpu_caches}) will try to
remove the objects kept available for a particular CPU that would have
been allocated earlier with kmem\_cache\_alloc\_batch.

\begin{verbatim}
    spin_lock_irq(&cachep->spinlock);
    ret = __kmem_cache_shrink_locked(cachep);
    spin_unlock_irq(&cachep->spinlock);
\end{verbatim}

Lock and shrink.

\begin{verbatim}
    return ret << cachep->gfporder;
\end{verbatim}

As the number of slabs freed is returned, bit shifting it by gfporder
gives the number of pages freed: freeing, say, three slabs of gfporder 1
frees six pages. There is a similar function called
\_\_kmem\_cache\_shrink. The only difference is that it returns a boolean
indicating whether the whole cache is free or not.

\function{\_\_kmem\_cache\_shrink\_locked}{kmem_cache_shrink_locked}{mm/slab.c}
\begin{verbatim}
int __kmem_cache_shrink_locked(kmem_cache_t *cachep)
\end{verbatim}

This function cycles through all the slabs in slabs\_free in the cache and
calls kmem\_slab\_destroy (described below) on each of them. The code is
very straightforward.

\begin{verbatim}
    slab_t *slabp;
    int ret = 0;

    /* If the cache is growing, stop shrinking. */
    while (!cachep->growing) {
        struct list_head *p;

        p = cachep->slabs_free.prev;
        if (p == &cachep->slabs_free)
            break;
\end{verbatim}

If the list \texttt{slabs\_free} is empty, then both
\textit{slabs\_free.prev} and \textit{slabs\_free.next} point to the list
head itself. The above code checks for this condition and quits, as there
are no empty slabs to free. A sketch of the equivalent generic helper is
shown below.

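This is the same test performed by the generic helper in
\texttt{<linux/list.h>}, which looks roughly like:

\begin{verbatim}
static inline int list_empty(struct list_head *head)
{
    /* An empty list's head points back at itself */
    return head->next == head;
}
\end{verbatim}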

\begin{verbatim}
        slabp = list_entry(cachep->slabs_free.prev, slab_t, list);
\end{verbatim}

There is an empty slab available, so get a pointer to it.

\begin{verbatim}
#if DEBUG
        if (slabp->inuse)
            BUG();
#endif
\end{verbatim}

A bug condition: a partially used slab should never be in the free slab
list.

\begin{verbatim}
        list_del(&slabp->list);
\end{verbatim}

Since we are going to free this slab, remove it from the
\textit{slabs\_free} list.

\begin{verbatim}
        spin_unlock_irq(&cachep->spinlock);
        kmem_slab_destroy(cachep, slabp);
        ret++;
        spin_lock_irq(&cachep->spinlock);
    }
    return ret;
\end{verbatim}

Call \texttt{kmem\_slab\_destroy()} (which is discussed below) to do the
formalities of freeing the slab and increment \textit{ret}, which counts
the number of slabs being freed.

\function{kmem\_slab\_destroy}{__kmem_slab_destroy}{mm/slab.c}
\begin{verbatim}
void kmem_slab_destroy (kmem_cache_t *cachep,
                        slab_t *slabp)
\end{verbatim}

This function cycles through all objects in a slab and does the required
cleanup. Before calling, the slab must have been unlinked from the cache.

\begin{verbatim}
    if (cachep->dtor
#if DEBUG
        || cachep->flags & (SLAB_POISON | SLAB_RED_ZONE)
#endif
    ) {
\end{verbatim}

If a destructor exists for this slab, or if DEBUG is enabled and the
necessary flags are present, continue.

\begin{verbatim}
        int i;
        for (i = 0; i < cachep->num; i++) {
            void* objp = slabp->s_mem+cachep->objsize*i;
\end{verbatim}

Cycle through all objects in the slab.

\begin{verbatim}
#if DEBUG
            if (cachep->flags & SLAB_RED_ZONE) {
                if (*((unsigned long*)(objp)) != RED_MAGIC1)
                    BUG();
                if (*((unsigned long*)(objp + cachep->objsize
                        - BYTES_PER_WORD)) != RED_MAGIC1)
                    BUG();
                objp += BYTES_PER_WORD;
            }
#endif

            if (cachep->dtor)
                (cachep->dtor)(objp, cachep, 0);
\end{verbatim}

If a destructor exists for this slab, then invoke it for the object.

\begin{verbatim}
#if DEBUG
            if (cachep->flags & SLAB_RED_ZONE) {
                objp -= BYTES_PER_WORD;
            }
            if ((cachep->flags & SLAB_POISON) &&
                kmem_check_poison_obj(cachep, objp))
                BUG();
#endif
        }
    }

    kmem_freepages(cachep, slabp->s_mem-slabp->colouroff);
\end{verbatim}

\texttt{kmem\_freepages()} will call the buddy allocator to free the pages
for the slab.

\begin{verbatim}
    if (OFF_SLAB(cachep))
        kmem_cache_free(cachep->slabp_cache, slabp);
\end{verbatim}

If the slab\_t is kept off-slab, its entry in the sizes cache must be
freed too.

\subsection{Destroying Caches}

Destroying a cache is yet another glorified list manager. It is called
when a module is unloading itself or is being destroyed, to prevent caches
with duplicate names being created if the module is unloaded and loaded
several times.

The steps taken to destroy a cache are:

\begin{itemize}
\item Delete the cache from the cache chain
\item Shrink the cache to delete all slabs (See Section \ref{Sec: Shrinking
Caches})
\item Free any per CPU caches (\texttt{kfree})
\item Delete the cache descriptor from the \texttt{cache\_cache} (See
Section \ref{Sec: Object Freeing})
\end{itemize}

Figure \ref{fig: kmem_cache_destroy} shows the call graph for this task.

\begin{figure}
\centerline{\includegraphics{graphics/kmem_cache_destroy.ps}}
\caption{kmem\_cache\_destroy}
\label{fig: kmem_cache_destroy}
\end{figure}

\function{kmem\_cache\_destroy}{kmem_cache_destroy}{mm/slab.c}

\begin{verbatim}
int kmem_cache_destroy (kmem_cache_t * cachep)
{
    if (!cachep || in_interrupt() || cachep->growing)
        BUG();
\end{verbatim}

Sanity check. Make sure the cachep is not null, that an interrupt is not
trying to do this and that the cache has not been marked as growing,
indicating it is in use.

\begin{verbatim}
    down(&cache_chain_sem);
\end{verbatim}

Acquire the semaphore for accessing the cache chain.

\begin{verbatim}
    if (clock_searchp == cachep)
        clock_searchp = list_entry(cachep->next.next,
                                   kmem_cache_t, next);
    list_del(&cachep->next);
    up(&cache_chain_sem);
\end{verbatim}

\begin{itemize}
\item If the cache reaper's \texttt{clock\_searchp} points at this cache,
move it to the next cache in the chain
\item Delete this cache from the cache chain
\item Release the cache chain semaphore
\end{itemize}

\begin{verbatim}
    if (__kmem_cache_shrink(cachep)) {
        printk(KERN_ERR
               "kmem_cache_destroy: Can't free all objects %p\n",
               cachep);
        down(&cache_chain_sem);
        list_add(&cachep->next,&cache_chain);
        up(&cache_chain_sem);
        return 1;
    }
\end{verbatim}

Shrink the cache to free all slabs (See Section \ref{Sec: Shrinking
Caches}). The shrink function returns true if there are still slabs in the
cache. If there are, the cache cannot be destroyed, so it is added back
into the cache chain and the error is reported.

\begin{verbatim}
#ifdef CONFIG_SMP
    {
        int i;
        for (i = 0; i < NR_CPUS; i++)
            kfree(cachep->cpudata[i]);
    }
#endif
\end{verbatim}

If SMP is enabled, the per CPU data for each CPU is freed with
\texttt{kfree}.

\begin{verbatim}
    kmem_cache_free(&cache_cache, cachep);

    return 0;
}
\end{verbatim}

Delete the cache descriptor from the cache\_cache.

\subsection{Cache Reaping}
\label{Sec: Cache Reaping}

When the page allocator notices that memory is getting tight, it wakes
\texttt{kswapd} to begin freeing up pages. One of the first ways it
accomplishes this task is telling the slab allocator to reap caches. It
has to be the slab allocator that selects the caches, as other subsystems
should not know anything about the cache internals.

\begin{figure}
\centerline{\includegraphics{graphics/kmem_cache_reap.ps}}
\caption{kmem\_cache\_reap}
\label{fig: kmem_cache_reap}
\end{figure}

The call graph in Figure \ref{fig: kmem_cache_reap} is deceptively simple.
The task of selecting the proper cache to reap is quite long. In case
there are many caches in the system, only \id{REAP\_SCANLEN} caches are
examined in each call. The last cache to be scanned is stored in the
variable \id{clock\_searchp} so as not to examine the same caches over and
over again. For each scanned cache, the reaper does the following:

\begin{itemize}
\item Check flags for SLAB\_NO\_REAP and skip if set
\item If the cache is growing, skip it
\item If the cache has grown recently (DFLGS\_GROWN is set in dflags),
skip it but clear the flag so it will be reaped the next time
\item Count the number of free slabs in slabs\_free and calculate how many
pages that would free in the variable \texttt{pages}
\item If the cache has constructors or large slabs, adjust \texttt{pages}
to make it less likely for the cache to be selected
\item If the number of pages that would be freed exceeds
\texttt{REAP\_PERFECT}, free half of the slabs in slabs\_free
\item Otherwise scan the rest of the caches and select the one that would
free the most pages for freeing half of its slabs in slabs\_free
\end{itemize}

\function{kmem\_cache\_reap}{kmem_cache_reap}{mm/slab.c}

There are three distinct sections to this function. The first is simple
function preamble. The second is the selection of a cache to reap and the
third is the freeing of the slabs.

\begin{verbatim}
int kmem_cache_reap (int gfp_mask)
{
    slab_t *slabp;
    kmem_cache_t *searchp;
    kmem_cache_t *best_cachep;
    unsigned int best_pages;
    unsigned int best_len;
    unsigned int scan;
    int ret = 0;
\end{verbatim}

The only parameter is the GFP flag. The only check made is against the
\_\_GFP\_WAIT flag. As \texttt{kswapd} can sleep, this flag is virtually
worthless.

\begin{verbatim}
    if (gfp_mask & __GFP_WAIT)
        down(&cache_chain_sem);
    else
        if (down_trylock(&cache_chain_sem))
            return 0;
\end{verbatim}

If the caller can sleep, acquire the semaphore; otherwise, try to acquire
it and, if it is not available, return.

\begin{verbatim}
    scan = REAP_SCANLEN;
    best_len = 0;
    best_pages = 0;
    best_cachep = NULL;
    searchp = clock_searchp;
\end{verbatim}

REAP\_SCANLEN is the number of caches to examine. searchp is set to the
last cache that was examined at the last reap.

The next do..while loop scans REAP\_SCANLEN caches and selects a cache to
reap slabs from.

\begin{verbatim}
    do {
        unsigned int pages;
        struct list_head* p;
        unsigned int full_free;

        if (searchp->flags & SLAB_NO_REAP)
            goto next;
\end{verbatim}

If SLAB\_NO\_REAP is set, skip the cache immediately.

\begin{verbatim}
        spin_lock_irq(&searchp->spinlock);
\end{verbatim}

Acquire an interrupt safe lock.

\begin{verbatim}
        if (searchp->growing)
            goto next_unlock;

        if (searchp->dflags & DFLGS_GROWN) {
            searchp->dflags &= ~DFLGS_GROWN;
            goto next_unlock;
        }
\end{verbatim}

If the cache is growing or has grown recently, skip it.

\begin{verbatim}
#ifdef CONFIG_SMP
        {
            cpucache_t *cc = cc_data(searchp);
            if (cc && cc->avail) {
                __free_block(searchp, cc_entry(cc),
                             cc->avail);
                cc->avail = 0;
            }
        }
#endif
\end{verbatim}

Free any per CPU objects to the global pool.

\begin{verbatim}
        full_free = 0;
        p = searchp->slabs_free.next;
        while (p != &searchp->slabs_free) {
            slabp = list_entry(p, slab_t, list);
#if DEBUG
            if (slabp->inuse)
                BUG();
#endif
            full_free++;
            p = p->next;
        }

        pages = full_free * (1<<searchp->gfporder);
\end{verbatim}

Count the number of slabs in the slabs\_free list and calculate the number
of pages all the slabs hold.

\begin{verbatim}
        if (searchp->ctor)
            pages = (pages*4+1)/5;
\end{verbatim}

If the objects have constructors, reduce the page count by one fifth to
make the cache less likely to be selected for reaping.

\begin{verbatim}
        if (searchp->gfporder)
            pages = (pages*4+1)/5;
\end{verbatim}

If the slabs consist of more than one page, reduce the page count by one
fifth again. This is because high order pages are hard to acquire. A small
worked example of this weighting is given below.

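As a worked example of the weighting, take a hypothetical cache with 10
free slabs of gfporder 1 and a constructor. The runnable sketch below
applies both penalties.

\begin{verbatim}
#include <stdio.h>

int main(void)
{
    unsigned int full_free = 10;  /* assumed free slab count */
    unsigned int gfporder  = 1;   /* two pages per slab      */
    unsigned int pages = full_free * (1 << gfporder); /* 20 */

    /* Constructor penalty: four fifths, rounded */
    pages = (pages*4+1)/5;                             /* 16 */
    /* High order penalty: four fifths again */
    if (gfporder)
        pages = (pages*4+1)/5;                         /* 13 */

    printf("weighted page count: %u\n", pages);
    return 0;
}
\end{verbatim}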

\begin{verbatim}
        if (pages > best_pages) {
            best_cachep = searchp;
            best_len = full_free;
            best_pages = pages;
            if (pages >= REAP_PERFECT) {
                clock_searchp =
                    list_entry(searchp->next.next,
                               kmem_cache_t,next);
                goto perfect;
            }
        }
\end{verbatim}

If this is the best candidate found for reaping so far, check if it is
perfect for reaping. If this cache is perfect, update
\texttt{clock\_searchp} and goto perfect, where half the slabs will be
freed. Otherwise record the new maximums. best\_len is recorded so that it
is easy to know how many slabs make up half of the free list.

\begin{verbatim}
next_unlock:
        spin_unlock_irq(&searchp->spinlock);
next:
        searchp =
            list_entry(searchp->next.next,kmem_cache_t,next);
    } while (--scan && searchp != clock_searchp);
\end{verbatim}

The next\_unlock label is reached if it was found that the cache was
growing after acquiring the lock, so the cache descriptor lock is
released. Move to the next entry in the cache chain and keep scanning
until REAP\_SCANLEN caches have been examined or until the whole chain has
been examined.

At this point a cache has been selected to reap from. The next block will
free half of the free slabs from the selected cache.

\begin{verbatim}
    clock_searchp = searchp;

    if (!best_cachep)
        goto out;
\end{verbatim}

Update clock\_searchp for the next cache reap. If a cache was not
selected, goto out to release the cache chain semaphore and exit.

\begin{verbatim}
    spin_lock_irq(&best_cachep->spinlock);
\end{verbatim}

Acquire the cache descriptor spinlock and disable interrupts.

\begin{verbatim}
perfect:
    best_len = (best_len + 1)/2;

    for (scan = 0; scan < best_len; scan++) {
\end{verbatim}

Adjust best\_len to be half the number of free slabs, rounded up, and free
that many slabs.

\begin{verbatim}
        struct list_head *p;

        if (best_cachep->growing)
            break;
\end{verbatim}

If the cache has started growing, stop reaping.

\begin{verbatim}
        p = best_cachep->slabs_free.prev;
        if (p == &best_cachep->slabs_free)
            break;
        slabp = list_entry(p,slab_t,list);
\end{verbatim}

Check there are still slabs left to free on the list before acquiring the
slab pointer.

\begin{verbatim}
#if DEBUG
        if (slabp->inuse)
            BUG();
#endif
        list_del(&slabp->list);
        STATS_INC_REAPED(best_cachep);
\end{verbatim}

A debugging check if enabled. Remove the slab from the list, as it is
about to be destroyed, and update statistics if enabled.

\begin{verbatim}
        spin_unlock_irq(&best_cachep->spinlock);
        kmem_slab_destroy(best_cachep, slabp);
        spin_lock_irq(&best_cachep->spinlock);
    }
\end{verbatim}

The cache descriptor lock is released while the slab is destroyed; this is
safe because the slab has already been removed from the list. The lock is
then reacquired and the loop moves on to the next slab to free.

\begin{verbatim}
    spin_unlock_irq(&best_cachep->spinlock);
    ret = scan * (1 << best_cachep->gfporder);
out:
    up(&cache_chain_sem);
    return ret;
}
\end{verbatim}

The requisite number of slabs has been freed, so record the number of
pages that were freed, release the cache descriptor lock, release the
cache chain semaphore and return the result.

\section{Slabs}
\label{Sec: Slabs}

As mentioned, a slab consists of one or more pages assigned to contain
objects. A struct called slab\_t manages the objects in the slab. The
struct to describe a slab is simple:

\begin{verbatim}
typedef struct slab_s {
    struct list_head list;
    unsigned long    colouroff;
    void            *s_mem;   /* including colour offset */
    unsigned int     inuse;   /* num of objs active in slab */
    kmem_bufctl_t    free;
} slab_t;
\end{verbatim}

\begin{figure}
\caption{Page to Cache and Slab Relationship}
\label{fig: Page to Cache and Slab Relationship}
\end{figure}

Caches are linked together with the \textit{next} field. Each cache
consists of one or more slabs, which are blocks of memory of one or more
pages. Each slab contains multiple objects, possibly with gaps between
them. The slab descriptor, slab\_t, may be kept either
on the slab or off it. If on the slab, it is at the beginning. If
off-slab, it is stored in an appropriately sized memory cache.
1505 |
\begin{figure} |
\vbox{ |
1506 |
\begin{verbatim} |
\begin{verbatim} |
|
|
|
1507 |
On-Slab |
On-Slab |
1508 |
|------------------slab---------------------| |
|------------------slab---------------------| |
1509 |
|--------Page-------||---------Page---------| |
|--------Page-------||---------Page---------| |
1520 |
---------------------------------------------- |
---------------------------------------------- |
1521 |
|
|
1522 |
\end{verbatim} |
\end{verbatim} |
1523 |
\end{figure} |
} |
1524 |
\begin{figure} |
|
1525 |
|
\vbox{ |
1526 |
\begin{verbatim} |
\begin{verbatim} |
1527 |
Off-Slab |
Off-Slab |
1528 |
kmem_cache_t |
kmem_cache_t |
1542 |
| obj | obj | obj | obj | obj | obj | obj | |
| obj | obj | obj | obj | obj | obj | obj | |
1543 |
| | | | | | | | |
| | | | | | | | |
1544 |
------------------------------------------- |
------------------------------------------- |
|
|
|
1545 |
\end{verbatim} |
\end{verbatim} |
1546 |
\end{figure} |
} |

\sloppypar The \texttt{struct page}'s \textit{list} element is used to
track where the kmem\_cache\_t and slab\_t are stored (see
kmem\_cache\_grow). The $list\rightarrow{next}$ pointer points to the
kmem\_cache\_t (the cache it belongs to) and $list\rightarrow{prev}$ points
to the slab\_t (the slab it is part of). So given an object, we can easily
find the associated cache and slab through these pointers.
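
For reference, the wrappers around this overloading of the \textit{list}
pointers are small macros. The following is a sketch of how they appear in
2.4's \url{mm/slab.c} (SET\_PAGE\_CACHE and SET\_PAGE\_SLAB are used later
by kmem\_cache\_grow); treat the exact spellings as illustrative:

\begin{verbatim}
/* record and look up the cache/slab a page belongs to */
#define SET_PAGE_CACHE(pg,x) ((pg)->list.next = (struct list_head *)(x))
#define GET_PAGE_CACHE(pg)   ((kmem_cache_t *)(pg)->list.next)
#define SET_PAGE_SLAB(pg,x)  ((pg)->list.prev = (struct list_head *)(x))
#define GET_PAGE_SLAB(pg)    ((slab_t *)(pg)->list.prev)

/* e.g. given an object, find its cache and slab */
kmem_cache_t *cachep = GET_PAGE_CACHE(virt_to_page(objp));
slab_t       *slabp  = GET_PAGE_SLAB(virt_to_page(objp));
\end{verbatim}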
|

\function{kmem\_cache\_slabmgmt}{kmem_cache_slabmgmt}{mm/slab.c}
\begin{verbatim}
slab_t * kmem_cache_slabmgmt (kmem_cache_t *cachep,
                              void *objp,
                              int colour_off,
                              int local_flags)
\end{verbatim}

This function allocates a new slab\_t and places it in the correct place.

\begin{verbatim}
    slab_t *slabp;

    if (OFF_SLAB(cachep)) {
        /* Slab management obj is off-slab. */
        slabp = kmem_cache_alloc(cachep->slabp_cache,
                                 local_flags);
        if (!slabp)
            return NULL;
\end{verbatim}

The first check is to see if the slab\_t is kept off the slab. If it is,
$cachep\rightarrow{slabp\_cache}$ will be pointing to the cache of memory
allocations large enough to contain the slab\_t. The different size caches
are the same ones used by kmalloc.

\begin{verbatim}
    } else {
        slabp = objp+colour_off;
        colour_off += L1_CACHE_ALIGN(cachep->num *
                          sizeof(kmem_bufctl_t)
                          + sizeof(slab_t));
    }
\end{verbatim}

Otherwise the slab\_t struct is contained on the slab itself at the beginning
of the slab.


\begin{verbatim}
    slabp->inuse = 0;
    slabp->colouroff = colour_off;
    slabp->s_mem = objp+colour_off;

    return slabp;
\end{verbatim}

The most important thing to note here is the value of s\_mem. It will be set
to the beginning of the slab if the slab manager is off-slab, but at the end
of the slab\_t if the manager is on-slab.
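
To make the two cases concrete, here is a small sketch (not a verbatim
extract) of where s\_mem lands, derived from the code blocks above:

\begin{verbatim}
/* objp is the first byte of the slab's pages and colour_off is this
 * slab's colour offset */
if (OFF_SLAB(cachep))
    /* descriptor lives in a sizes cache, so objects start at the
     * colour offset from the beginning of the slab */
    s_mem = objp + colour_off;
else
    /* descriptor and bufctl array sit at the beginning of the slab,
     * so objects begin after them, L1 aligned */
    s_mem = objp + colour_off
          + L1_CACHE_ALIGN(cachep->num * sizeof(kmem_bufctl_t)
                           + sizeof(slab_t));
\end{verbatim}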
|
|
\section{Objects}
\label{Sec: Objects}

This section will cover how objects are managed. At this point, most of the
real hard work has been completed by either the cache or slab managers.

\subsection{Initializing Objects}
\label{Sec: Initializing Objects}

When a slab is created, all the objects in it are put in an initialised
state. If a constructor is available, it is called for each object, and it
is expected that when an object is freed, it is left in its initialised
state. Conceptually this is very simple: cycle through all objects, call
the constructor and initialise the kmem\_bufctl for each one. The function
\texttt{kmem\_cache\_init\_objs} is responsible for initialising the
objects.

\function{kmem\_cache\_init\_objs}{kmem_cache_init_objs}{mm/slab.c}
\begin{verbatim}
void kmem_cache_init_objs (kmem_cache_t * cachep,
                           slab_t * slabp,
                           unsigned long ctor_flags)
\end{verbatim}

This steps through the number of objects that can be contained on-slab.
($cachep\rightarrow{objsize} * i$) gives the offset from s\_mem of the
\textit{i}th object. [note: s\_mem is used to point to the first object].

\begin{verbatim}
#if DEBUG
}
\end{verbatim}

This initialises the kmem\_bufctl\_t array. See Section
\ref{Sec: Tracking Free Objects}.

\begin{verbatim}
    slab_bufctl(slabp)[i-1] = BUFCTL_END;
\end{verbatim}

Mark the end of the kmem\_bufctl\_t array with BUFCTL\_END. free is set to
0 so that the first object allocated will be the 0th object on the slab.
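
A minimal sketch of the non-debug path through the loop, assuming the loop
variable and constructor flags from the elided lines above, might look
like:

\begin{verbatim}
int i;

for (i = 0; i < cachep->num; i++) {
    void* objp = slabp->s_mem + cachep->objsize*i;

    /* construct the object if a constructor exists */
    if (cachep->ctor)
        cachep->ctor(objp, cachep, ctor_flags);

    /* each entry points at the next free object */
    slab_bufctl(slabp)[i] = i+1;
}
slab_bufctl(slabp)[i-1] = BUFCTL_END;   /* terminate the chain */
slabp->free = 0;                        /* first alloc gets object 0 */
\end{verbatim}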
|
|
\subsection{Allocating Objects}
\label{Sec: Allocating Objects}

This section covers what is needed to allocate an object. The allocator
behaves slightly differently in the UP and SMP cases, so the two will be
treated separately in this section. Figure \ref{fig: kmem_cache_alloc UP}
shows the basic call graph that is used to allocate an object in the UP
case.

\begin{figure}[h]
\centerline{\includegraphics{graphics/kmem_cache_alloc-UP.ps}}
\caption{kmem\_cache\_alloc UP}
\label{fig: kmem_cache_alloc UP}
\end{figure}

As is clear, there are four basic steps. The first step (head) covers basic
checking to make sure the allocation is allowable. The second step is to
select which slabs list to allocate from. This is one of slabs\_partial or
slabs\_free. If there are no slabs in slabs\_free, the cache is grown (See
Section \ref{Sec: Growing a Cache}) to create a new slab in slabs\_free.
The final step is to allocate the object from the selected slab, as shown
in the sketch below.

The SMP case takes one further step. Before allocating one object, it will
check to see if one is available from the per-CPU cache and use it if there
is. If there is not, it will allocate \texttt{batchcount} objects in bulk
and place them in its per-CPU cache. See Section
\ref{Sec: Per-CPU Object Cache} for details.
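
The UP steps can be condensed into a hedged sketch (names as in
\url{mm/slab.c}; locking, debug checks and flag handling omitted):

\begin{verbatim}
static void* __kmem_cache_alloc_sketch(kmem_cache_t *cachep, int flags)
{
    struct list_head *entry;
    slab_t *slabp;

    /* select a slabs list: partial slabs first, then free slabs,
     * growing the cache if both are empty */
    if (!list_empty(&cachep->slabs_partial))
        entry = cachep->slabs_partial.next;
    else {
        if (list_empty(&cachep->slabs_free) &&
            !kmem_cache_grow(cachep, flags))
            return NULL;
        entry = cachep->slabs_free.next;
    }
    slabp = list_entry(entry, slab_t, list);

    /* take one object from the selected slab */
    return kmem_cache_alloc_one_tail(cachep, slabp);
}
\end{verbatim}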
|
|
\function{\_\_kmem\_cache\_alloc}{kmem_cache_alloc}{mm/slab.c}
\begin{verbatim}
void * __kmem_cache_alloc (kmem_cache_t *cachep,
                           int flags)
\end{verbatim}

\vspace{10pt}
\begin{supertabular}{lp{10cm}}

\id{SLAB\_NOFS} & This flag tells the page free logic to not make any
        calls to the file-system layer. This is important for the
        allocation of buffer heads, for instance, where it is important
        the file-system does not end up recursively calling itself \\

\id{SLAB\_NOIO} & Do not start any IO. For example, in
        \texttt{try\_to\_free\_buffers()}, no attempt to write out
        busy buffer pages will be made if this slab flag is used \\

\id{SLAB\_NOHIGHIO} & Treated the same as SLAB\_NOIO according to buffer.c \\

\end{itemize}

\function{kmem\_cache\_alloc\_one\_tail}{kmem_cache_alloc_one_tail}{mm/slab.c}
\begin{verbatim}
void * kmem_cache_alloc_one_tail (kmem_cache_t *cachep,
                                  slab_t *slabp)
\end{verbatim}

Return the object which has been allocated.

\function{kmem\_cache\_alloc\_batch}{kmem_cache_alloc_batch}{mm/slab.c}
\begin{verbatim}
void* kmem_cache_alloc_batch(kmem_cache_t* cachep,
                             cpucache_t* cc,
                             int flags)
\end{verbatim}

Free the spinlock and return an object if possible. Otherwise return NULL
so the cache can be grown.

\subsection{Object Freeing}
\label{Sec: Object Freeing}

This section covers what is needed to free an object. In many ways, it is
similar to how objects are allocated, with the SMP case again adding one
step to return the object to the per CPU cache. Figure
\ref{fig: kmem_cache_free} shows the very simple call graph used.

\begin{figure}
\centerline{\includegraphics{graphics/kmem_cache_free.ps}}
\caption{kmem\_cache\_free}
\label{fig: kmem_cache_free}
\end{figure}

\function{kmem\_cache\_free}{kmem_cache_free}{mm/slab.c}

}
\end{verbatim}

\section{Tracking Free Objects}
\label{Sec: Tracking Free Objects}

The slab allocator has to have a quick and simple way of tracking where
free objects are on the partially filled slabs. It achieves this via a
mechanism called \id{kmem\_bufctl\_t} that is associated with each slab
manager, as obviously it is up to the slab manager to know where its free
objects are.

Historically, and according to the paper describing the slab
allocator~\cite{slab}, \id{kmem\_bufctl\_t} was a linked list of objects.
In Linux 2.2.x, this struct was a union of three items: a pointer to the
next free object, a pointer to the slab manager and a pointer to the
object. Which it was depended on the state of the object.

Today, the slab and cache a page belongs to is determined by the list field
in \texttt{struct page}, illustrated in Figure
\ref{fig: Page to Cache and Slab Relationship} in Section \ref{Sec: Slabs}.

\subsection{kmem\_bufctl\_t}
\label{Sec: kmem_bufctl_t}

The kmem\_bufctl\_t is simply an unsigned integer and is treated as an
array stored after the slab manager (See Section \ref{Sec: Slabs}). The
number of elements in the array is the same as the number of objects on
the slab.

\begin{verbatim}
typedef unsigned int kmem_bufctl_t;
\end{verbatim}

As the array is kept after the slab descriptor and there is no pointer to
the first element directly, a helper macro \id{slab\_bufctl} is provided.

\begin{verbatim}
#define slab_bufctl(slabp) \
        ((kmem_bufctl_t *)(((slab_t*)slabp)+1))
\end{verbatim}

This seemingly cryptic macro is quite simple when broken down. The
parameter \texttt{slabp} is a pointer to the slab manager. The expression
\texttt{((slab\_t*)slabp)+1} casts slabp to a slab\_t struct and adds 1 to
it, giving a \texttt{slab\_t *} pointing to the beginning of the
kmem\_bufctl\_t array. \texttt{(kmem\_bufctl\_t *)} recasts that pointer
to the required type. This results in blocks of code that contain
\texttt{slab\_bufctl(slabp)[i]}. Translated, that says: take a pointer to
a slab descriptor, offset it with slab\_bufctl to the beginning of the
kmem\_bufctl\_t array and take the i$^{th}$ element of the array.
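
Unrolled, the pointer arithmetic is equivalent to stepping one full
slab\_t past the descriptor (a small illustrative snippet, not from the
source):

\begin{verbatim}
kmem_bufctl_t *bufctl = slab_bufctl(slabp);
/* ... is the same as ... */
kmem_bufctl_t *same = (kmem_bufctl_t *)((char *)slabp + sizeof(slab_t));
\end{verbatim}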

The index of the next free object in the slab is stored in
\texttt{slab\_t$\rightarrow$free}, eliminating the need for a linked list
to track free objects. When objects are allocated or freed, this pointer
is updated based on information in the kmem\_bufctl\_t array.
|

\subsection{Initialising the kmem\_bufctl\_t Array}

When a cache is grown, all the objects and the kmem\_bufctl\_t array on the
slab are initialised. The array is filled with the index of each object,
beginning with 1 and ending with the marker \texttt{BUFCTL\_END}.

The value 0 is stored in \texttt{slab\_t$\rightarrow$free}, as the 0$^{th}$
object is the first free object to be used. See Section
\ref{Sec: Initializing Objects} for the function which initialises the
array.

The idea is that for a given object \emph{n}, the index of the next free
object will be stored in kmem\_bufctl\_t[n]. Looking at the array above,
the next free object after 0 is 1, after 1 comes 2, and so on.

\subsection{Finding the Next Free Object}

\texttt{kmem\_cache\_alloc} is the function which allocates an object. It
uses the function \texttt{kmem\_cache\_alloc\_one\_tail} (See Section
\ref{Sec: kmem_cache_alloc_one_tail}) to allocate the object and update
the kmem\_bufctl\_t array.

\texttt{slab\_t$\rightarrow$free} has the index of the first free object.
The index of the next free object is at
kmem\_bufctl\_t[slab\_t$\rightarrow$free]. In code terms, this looks like

\begin{verbatim}
    objp = slabp->s_mem + slabp->free*cachep->objsize;
    slabp->free=slab_bufctl(slabp)[slabp->free];
\end{verbatim}

\texttt{slabp$\rightarrow$s\_mem} is the address of the first object on
the slab. \texttt{slabp$\rightarrow$free} is the index of the object to
allocate, which has to be multiplied by the size of an object to give its
offset.

The index of the next free object to allocate is stored at
kmem\_bufctl\_t[slabp$\rightarrow$free]. There is no pointer directly to
the array, hence the helper macro slab\_bufctl is used. Note that the
kmem\_bufctl\_t array is not changed during allocations, but the elements
that are unallocated become unreachable. For example, after two
allocations, index 0 and 1 of the kmem\_bufctl\_t array are not pointed to
by any other element.
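
As a worked example, assume a hypothetical slab holding five objects; the
chain walk for two allocations looks like this:

\begin{verbatim}
/* freshly initialised five-object slab:
 *   slabp->free = 0
 *   bufctl[]    = { 1, 2, 3, 4, BUFCTL_END }
 */
objp = slabp->s_mem + 0 * cachep->objsize;   /* first alloc: object 0 */
slabp->free = slab_bufctl(slabp)[0];         /* free becomes 1 */

objp = slabp->s_mem + 1 * cachep->objsize;   /* second alloc: object 1 */
slabp->free = slab_bufctl(slabp)[1];         /* free becomes 2 */
\end{verbatim}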
|
|
\subsection{Updating kmem\_bufctl\_t}

The kmem\_bufctl\_t list is only updated when an object is freed, in the
function \texttt{kmem\_cache\_free\_one}. The array is updated with this
block of code

\begin{verbatim}
    unsigned int objnr = (objp-slabp->s_mem)/cachep->objsize;

    slab_bufctl(slabp)[objnr] = slabp->free;
    slabp->free = objnr;
\end{verbatim}

\texttt{objp} is the object about to be freed and objnr is its index.
\texttt{kmem\_bufctl\_t[objnr]} is updated to point to the current value
of \texttt{slabp$\rightarrow$free}, effectively placing the object pointed
to by free on the pseudo linked list. slabp$\rightarrow$free is updated to
the object being freed so that it will be the next one allocated.
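
Continuing the five-object example from above, freeing object 0 after the
two allocations re-links it at the head of the chain:

\begin{verbatim}
/* state before: slabp->free = 2, bufctl = { 1, 2, 3, 4, BUFCTL_END } */
unsigned int objnr = 0;                    /* object being freed */

slab_bufctl(slabp)[objnr] = slabp->free;   /* bufctl[0] = 2 */
slabp->free = objnr;                       /* next alloc gets object 0 */
\end{verbatim}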
|

\section{Per-CPU Object Cache}
\label{Sec: Per-CPU Object Cache}

One of the tasks the slab allocator is dedicated to is improved hardware
cache utilization. An aim of high performance
computing\cite{high-performance} in general is to use data on the same CPU
for as long as possible. Linux achieves this by trying to keep objects in
the same CPU cache with a per-CPU object cache, called a \id{cpucache},
for each CPU in the system.

When allocating or freeing objects, they are placed in the cpucache. When
there are no objects free, a \texttt{batch} of objects is placed into the
pool. When the pool gets too large, half of them are removed and placed in
the global cache. This way the hardware cache will be used for as long as
possible on the same CPU.
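
For instance, assuming a cpucache with limit 124 and batchcount 62 (the
limit/2 default set by enable\_cpucache later in this section), the
behaviour just described plays out as:

\begin{verbatim}
alloc, cpucache empty  -> 62 objects pulled in from the slabs in bulk
frees accumulate       -> avail climbs towards the limit (124)
free with avail == 124 -> 62 objects handed back to the global cache
\end{verbatim}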
|
|
\subsection{Describing the Per-CPU Object Cache}
\label{Sec: Describing the Per-CPU Object Cache}

Each cache descriptor has a pointer to an array of cpucaches, described in
the cache descriptor as

\begin{verbatim}
    cpucache_t *cpudata[NR_CPUS];
\end{verbatim}

This structure is very simple:

\begin{verbatim}
typedef struct cpucache_s {
    unsigned int avail;
    unsigned int limit;
} cpucache_t;
\end{verbatim}

\begin{description}
\item{avail} is the number of free objects available on this cpucache
\item{limit} is the total number of free objects that can exist
\end{description}

A helper macro \id{cc\_data} is provided to give the cpucache for a given
cache and processor. It is defined as

\begin{verbatim}
#define cc_data(cachep) \
        ((cachep)->cpudata[smp_processor_id()])
\end{verbatim}

This will take a given cache descriptor (cachep) and return a pointer from
the cpucache array (cpudata). The index needed is the ID of the current
processor, smp\_processor\_id().

Pointers to objects on the cpucache are placed immediately after the
cpucache\_t struct. This is very similar to how objects are stored after
a slab descriptor, illustrated in Section \ref{Sec: Slab Structure}.

\subsection{Adding/Removing Objects from the Per-CPU Cache}

To prevent fragmentation, objects are always added or removed from the end
of the array. To add an object (\texttt{obj}) to the CPU cache
(\texttt{cc}), the following block of code is used

\begin{verbatim}
    cc_entry(cc)[cc->avail++] = obj;
\end{verbatim}

To remove an object

\begin{verbatim}
    obj = cc_entry(cc)[--cc->avail];
\end{verbatim}

\id{cc\_entry} is a helper macro which gives a pointer to the first object
in the cpucache. It is defined as

\begin{verbatim}
#define cc_entry(cpucache) \
        ((void **)(((cpucache_t*)(cpucache))+1))
\end{verbatim}

This takes a pointer to a cpucache and increments the value by the size of
the cpucache\_t descriptor, giving the first object in the cache.
2609 |
|
|
2610 |
\begin{verbatim} |
\subsection{Enabling Per-CPU Caches} |
|
static void kmem_cache_estimate (unsigned long gfporder, size_t size, |
|
|
int flags, size_t *left_over, unsigned int *num) |
|
|
{ |
|
|
\end{verbatim} |
|
2611 |
|
|
2612 |
\begin{description} |
When a cache is created, it's CPU cache has to be enabled and memory allocated |
2613 |
\idn{gfporder} The 2$^{gfporder}$ number of pages to allocate for each slab |
for it using kmalloc. The function \id{enable\_cpucache} is responsible for |
2614 |
\idn{size} The size of each object |
deciding what size to make the cache and calling \id{kmem\_tune\_cpucache} |
2615 |
\idn{flags} The cache flags. See Section \ref{Sec: Cache Static Flags} |
to allocate memory for it. |
|
\idn{left\_over} The number of bytes left over in the slab. Returned to |
|
|
caller |
|
|
\idn{num} The number of objects that will fit in a slab. Returned to |
|
|
caller |
|
|
\end{description} |
|

Obviously a CPU cache cannot exist until after the various sizes caches
have been enabled, so a global variable \id{g\_cpucache\_up} is used to
prevent cpucaches being enabled before it is possible. The function
\id{enable\_all\_cpucaches} cycles through all caches in the cache chain
and enables their cpucache.

Once a CPU cache has been set up, it can be accessed without locking; a
CPU will never access the wrong cpucache, so access to it is guaranteed
to be safe.

\function{enable\_all\_cpucaches}{enable_all_cpucaches}{mm/slab.c}

This function locks the cache chain and enables the cpucache for every
cache. This is important after the cache\_cache and sizes caches have been
enabled.

\begin{verbatim}
static void enable_all_cpucaches (void)
{
    struct list_head* p;

    down(&cache_chain_sem);

    p = &cache_cache.next;
\end{verbatim}

Obtain the semaphore to the cache chain and get the first cache on the
chain.

\begin{verbatim}
    do {
        kmem_cache_t* cachep = list_entry(p, kmem_cache_t, next);

        enable_cpucache(cachep);
        p = cachep->next.next;
    } while (p != &cache_cache.next);
\end{verbatim}

Cycle through the whole chain. For each cache on it, enable its cpucache.
Note that this will skip the first cache on the chain, but cache\_cache
doesn't need a cpucache as it is so rarely used.

\begin{verbatim}
    up(&cache_chain_sem);
}
\end{verbatim}

Release the semaphore.

\function{enable\_cpucache}{enable_cpucache}{mm/slab.c}

This function calculates what the size of a cpucache should be based on
the size of the objects the cache contains, before calling
\texttt{kmem\_tune\_cpucache} which does the actual allocation.

\begin{verbatim}
static void enable_cpucache (kmem_cache_t *cachep)
{
    int err;
    int limit;

    if (cachep->objsize > PAGE_SIZE)
        return;
    if (cachep->objsize > 1024)
        limit = 60;
    else if (cachep->objsize > 256)
        limit = 124;
    else
        limit = 252;
\end{verbatim}

If an object is larger than a page, don't create a per-CPU cache, as they
are too expensive. If an object is larger than 1KB, keep the CPU cache
below 3MB in size; the limit is set to 124 objects to take the size of the
cpucache descriptors into account. For smaller objects, just make sure the
cache doesn't go above 3MB in size.

\begin{verbatim}
    err = kmem_tune_cpucache(cachep, limit, limit/2);
\end{verbatim}

Allocate the memory for the cpucache.

\begin{verbatim}
    if (err)
        printk(KERN_ERR
               "enable_cpucache failed for %s, error %d.\n",
               cachep->name, -err);
}
\end{verbatim}

Print out an error message if the allocation failed.

\function{kmem\_tune\_cpucache}{kmem_tune_cpucache}{mm/slab.c}

This function is responsible for allocating memory for the cpucaches. For
each CPU on the system, kmalloc gives a block of memory large enough for
one cpucache and fills a ccupdate\_struct\_t struct. The function
\texttt{smp\_call\_function\_all\_cpus} then calls
\texttt{do\_ccupdate\_local}, which swaps the new information with the old
information in the cache descriptor.

\begin{verbatim}
static int kmem_tune_cpucache (kmem_cache_t* cachep, int limit, int
                               batchcount)
{
\end{verbatim}

The parameters of the function are

\begin{description}
\item{cachep} The cache this cpucache is being allocated for
\item{limit} The total number of objects that can exist in the cpucache
\item{batchcount} The number of objects to allocate in one batch when the
cpucache is empty
\end{description}

\begin{verbatim}
    ccupdate_struct_t new;
    int i;

    /*
     * These are admin-provided, so we are more graceful.
     */
    if (limit < 0)
        return -EINVAL;
    if (batchcount < 0)
        return -EINVAL;
    if (batchcount > limit)
        return -EINVAL;
    if (limit != 0 && !batchcount)
        return -EINVAL;
\end{verbatim}

Sanity checks. They have to be made because this function can be called as
a result of writing to /proc/slabinfo.

\begin{verbatim}
    memset(&new.new,0,sizeof(new.new));
    if (limit) {
        for (i = 0; i < smp_num_cpus; i++) {
            cpucache_t* ccnew;

            ccnew = kmalloc(sizeof(void*)*limit+
                            sizeof(cpucache_t), GFP_KERNEL);
            if (!ccnew)
                goto oom;
            ccnew->limit = limit;
            ccnew->avail = 0;
            new.new[cpu_logical_map(i)] = ccnew;
        }
    }
\end{verbatim}

Clear the ccupdate\_struct\_t struct. For every CPU on the system, allocate
memory for the cpucache. The size of it is the size of the descriptor plus
limit number of pointers to objects. The new cpucaches are stored in the
new array, where they will be swapped into the cache descriptor later by
do\_ccupdate\_local().

\begin{verbatim}
    new.cachep = cachep;
    spin_lock_irq(&cachep->spinlock);
    cachep->batchcount = batchcount;
    spin_unlock_irq(&cachep->spinlock);

    smp_call_function_all_cpus(do_ccupdate_local, (void *)&new);
\end{verbatim}

Fill in the rest of the struct and call smp\_call\_function\_all\_cpus,
which will make sure each CPU gets its new cpucache.

\begin{verbatim}
    for (i = 0; i < smp_num_cpus; i++) {
        cpucache_t* ccold = new.new[cpu_logical_map(i)];
        if (!ccold)
            continue;
        local_irq_disable();
        free_block(cachep, cc_entry(ccold), ccold->avail);
        local_irq_enable();
        kfree(ccold);
    }
\end{verbatim}
|
2807 |
Unlock and return success. |
The function do\_ccupdate\_local() swaps what is in the cache descriptor with |
2808 |
|
the new cpucaches. This block cycles through all the old cpucaches and frees |
2809 |
|
the memory. |
2810 |
|
|
2811 |
\begin{verbatim} |
\begin{verbatim} |
2812 |
opps1: |
return 0; |
2813 |
kmem_freepages(cachep, objp); |
oom: |
2814 |
failed: |
for (i--; i >= 0; i--) |
2815 |
spin_lock_irqsave(&cachep->spinlock, save_flags); |
kfree(new.new[cpu_logical_map(i)]); |
2816 |
cachep->growing--; |
return -ENOMEM; |
|
spin_unlock_irqrestore(&cachep->spinlock, save_flags); |
|
|
return 0; |
|
2817 |
} |
} |
2818 |
\end{verbatim} |
\end{verbatim} |
2819 |
|
|
2820 |
opps1 is reached if a slab manager could not be allocated. failed is reached |
\subsection{Updating Per-CPU Information} |
|
if pages could not be allocated for the slab at all. |
|
|
|
|
|
\subsection{Function kmem\_cache\_slabmgmt()} |
|
|
\textit{File: }\url{mm/slab.c}\\ |
|
|
\textit{Prototype: } |
|
|
\begin{verbatim} |
|
|
slab_t * kmem_cache_slabmgmt (kmem_cache_t *cachep, |
|
|
void *objp, |
|
|
int colour_off, |
|
|
int local_flags) |
|
|
\end{verbatim} |
|
|
This function allocates a new slab\_t and places it in the correct place. |
|
|
\begin{verbatim} |
|
|
slab_t *slabp; |
|
|
|
|
|
if (OFF_SLAB(cachep)) { |
|
|
/* Slab management obj is off-slab. */ |
|
|
slabp = kmem_cache_alloc(cachep->slabp_cache, |
|
|
local_flags); |
|
|
if (!slabp) |
|
|
return NULL; |
|
|
\end{verbatim} |
|
When the per-cpu caches have been created or changed, each CPU has to be
told about it. It is not sufficient to just change all the values in the
cache descriptor as that would lead to cache coherency issues and spinlocks
would have to be used to protect the cpucaches. Instead a
\id{ccupdate\_struct\_t} struct is populated with all the information each
CPU needs and each CPU swaps the new data with the old information in the
cache descriptor. The struct for storing the new cpucache information is
defined as follows:

\begin{verbatim}
typedef struct ccupdate_struct_s
{
        kmem_cache_t *cachep;
        cpucache_t *new[NR_CPUS];
} ccupdate_struct_t;
\end{verbatim}

The cachep is the cache being updated and the array \texttt{new} holds the
cpucache descriptors for each CPU on the system. The function
\texttt{smp\_call\_function\_all\_cpus()} is used to get each CPU to call
the \id{do\_ccupdate\_local} function which swaps the information from the
ccupdate\_struct\_t with the information in the cache descriptor. Once the
information has been swapped, the old data can be deleted.

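To see why a plain pointer swap is enough, consider this user-space
simulation of the exchange. The types and the \texttt{NR\_CPUS} value are
stand-ins for illustration and are not the kernel's definitions:

\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS 4

struct cpucache { int limit; };

/* Cache descriptor holding one cpucache pointer per CPU. */
struct cache { struct cpucache *cpudata[NR_CPUS]; };

struct ccupdate {
        struct cache *cachep;
        struct cpucache *new[NR_CPUS];
};

/* The equivalent of do_ccupdate_local(), run once per CPU:
 * swap the descriptor's pointer with the prepared one. The
 * old pointer lands in the ccupdate struct for the caller. */
static void ccupdate_local(struct ccupdate *info, int cpu)
{
        struct cpucache *old = info->cachep->cpudata[cpu];

        info->cachep->cpudata[cpu] = info->new[cpu];
        info->new[cpu] = old;
}

int main(void)
{
        struct cache cache = { { NULL } };
        struct ccupdate new = { &cache, { NULL } };
        int i;

        /* Prepare a new cpucache for every CPU. */
        for (i = 0; i < NR_CPUS; i++) {
                new.new[i] = malloc(sizeof(struct cpucache));
                new.new[i]->limit = 64;
        }

        /* "On each CPU", exchange new for old. */
        for (i = 0; i < NR_CPUS; i++)
                ccupdate_local(&new, i);

        /* new.new[] now holds the old (here NULL) cpucaches,
         * which the caller can free safely. */
        printf("cpu0 limit: %d\n", cache.cpudata[0]->limit);
        for (i = 0; i < NR_CPUS; i++)
                free(cache.cpudata[i]);
        return 0;
}
\end{verbatim}

Because each CPU only ever touches its own slot, no CPU reads another's
pointer mid-update and no spinlock is needed for the swap itself.
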
\function{smp\_call\_function\_all\_cpus}{smp_call_function_all_cpus}{mm/slab.c}

This calls the function \texttt{func} for all CPUs. In the context of the
slab allocator, the function is do\_ccupdate\_local() and the argument is
the ccupdate\_struct\_t.

\begin{verbatim}
static void smp_call_function_all_cpus(void (*func) (void *arg),
                                       void *arg)
{
        local_irq_disable();
        func(arg);
        local_irq_enable();

        if (smp_call_function(func, arg, 1, 1))
                BUG();
}
\end{verbatim}

This function is quite simple. First it disables interrupts locally and
calls the function for this CPU. It then calls smp\_call\_function() which
makes sure that every other CPU executes the function \texttt{func}. In the
context of the slab allocator, this will always be do\_ccupdate\_local().

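As a rough user-space analogue, the same ``run this function everywhere and
wait'' shape can be sketched with threads standing in for CPUs. This is only
an illustration of the calling pattern, not the kernel mechanism:

\begin{verbatim}
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4   /* stand-in for the number of CPUs */

struct call { void (*func)(void *); void *arg; };

static void *trampoline(void *p)
{
        struct call *c = p;
        c->func(c->arg);
        return NULL;
}

/* Run func(arg) once in every thread and wait for them all,
 * loosely mirroring smp_call_function_all_cpus(). */
static void call_on_all(void (*func)(void *), void *arg)
{
        pthread_t tids[NTHREADS];
        struct call c = { func, arg };
        int i;

        for (i = 0; i < NTHREADS; i++)
                pthread_create(&tids[i], NULL, trampoline, &c);
        for (i = 0; i < NTHREADS; i++)
                pthread_join(tids[i], NULL);
}

static void count_up(void *arg)
{
        /* GCC/Clang atomic builtin keeps the increment safe. */
        __atomic_fetch_add((int *)arg, 1, __ATOMIC_SEQ_CST);
}

int main(void)
{
        int counter = 0;

        call_on_all(count_up, &counter);
        printf("ran on %d \"cpus\"\n", counter);
        return 0;
}
\end{verbatim}

(Compile with \texttt{-pthread}.)
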
\function{do\_ccupdate\_local}{do_ccupdate_local}{mm/slab.c}

This function swaps the cpucache information in the cache descriptor with
the information in \texttt{info} for this CPU.

\begin{verbatim}
static void do_ccupdate_local(void *info)
{
        ccupdate_struct_t *new = (ccupdate_struct_t *)info;
        cpucache_t *old = cc_data(new->cachep);
\end{verbatim}

The parameter passed in is a pointer to the \texttt{ccupdate\_struct\_t}
passed to \texttt{smp\_call\_function\_all\_cpus()}. Part of the
\texttt{ccupdate\_struct\_t} is a pointer to the cache this cpucache belongs
to. \texttt{cc\_data()} returns the \texttt{cpucache\_t} for this processor.

\begin{verbatim}
        cc_data(new->cachep) = new->new[smp_processor_id()];
        new->new[smp_processor_id()] = old;
}
\end{verbatim}

Place the new cpucache in the cache descriptor. cc\_data() returns the
pointer to the cpucache for this CPU. Replace the pointer in new with the
old cpucache so it can be deleted later by the caller of
\texttt{smp\_call\_function\_all\_cpus()}, \texttt{kmem\_tune\_cpucache()}
for example.

\subsection{Draining a Per-CPU Cache}

When a cache is being shrunk, the first step is to drain the cpucaches of
any objects they might have. This gives the slab allocator a clearer view of
which slabs can be freed or not. It matters because if just one object in a
slab is placed in a Per-CPU cache, that whole slab cannot be freed. If the
system is tight on memory, saving a few milliseconds on allocations is the
least of its troubles.

\function{drain\_cpu\_caches}{drain_cpu_caches}{mm/slab.c}

\begin{verbatim}
static void drain_cpu_caches(kmem_cache_t *cachep)
{
        ccupdate_struct_t new;
        int i;

        memset(&new.new,0,sizeof(new.new));

        new.cachep = cachep;

        down(&cache_chain_sem);
        smp_call_function_all_cpus(do_ccupdate_local, (void *)&new);
\end{verbatim}

This block blanks out the ccupdate\_struct\_t, acquires the cache chain
semaphore and calls smp\_call\_function\_all\_cpus() so each CPU swaps its
cpucache into the blank array, handing back the cpucache information for
each CPU.

\begin{verbatim}
        for (i = 0; i < smp_num_cpus; i++) {
                cpucache_t* ccold = new.new[cpu_logical_map(i)];
                if (!ccold || (ccold->avail == 0))
                        continue;
                local_irq_disable();
                free_block(cachep, cc_entry(ccold), ccold->avail);
                local_irq_enable();
                ccold->avail = 0;
        }
\end{verbatim}

All the objects in each CPU are freed and the cpucache struct is updated to
show that there are no available objects in it.

\begin{verbatim}
        smp_call_function_all_cpus(do_ccupdate_local, (void *)&new);
        up(&cache_chain_sem);
}
\end{verbatim}

All the cpucaches have been emptied so call smp\_call\_function\_all\_cpus()
to place them all back in the cache descriptor again and release the cache
chain semaphore.

\section{Slab Allocator Initialization}
\label{Sec: Slab Allocator Initialization}

The first function called from \emph{start\_kernel} is
{\bf kmem\_cache\_init()}. This takes the following very simple steps:

\begin{itemize}
\item Initialize a mutex for access to the cache chain
\item Initialize the linked list for the cache chain
\item Initialize the cache\_cache
\item Set the cache\_cache colour
\end{itemize}

The term \emph{cache chain} is simply a fancy name for a circular linked
list of caches the slab allocator knows about. \texttt{kmem\_cache\_init()}
then goes on to initialize a cache of caches called {\bf kmem\_cache}. This
is a cache of objects of type {\bf kmem\_cache\_t} which describes
information about the cache itself.

\subsection{Initializing cache\_cache}

This cache is statically initialized as follows:

\begin{verbatim}
static kmem_cache_t cache_cache = {
        slabs_full:     LIST_HEAD_INIT(cache_cache.slabs_full),
        slabs_partial:  LIST_HEAD_INIT(cache_cache.slabs_partial),
        slabs_free:     LIST_HEAD_INIT(cache_cache.slabs_free),
        objsize:        sizeof(kmem_cache_t),
        flags:          SLAB_NO_REAP,
        spinlock:       SPIN_LOCK_UNLOCKED,
        colour_off:     L1_CACHE_BYTES,
        name:           "kmem_cache",
};
\end{verbatim}

\begin{tabularx}{15cm}{lX}
slabs\_full & Standard list init \\
slabs\_partial & Standard list init \\
slabs\_free & Standard list init \\
objsize & Size of the struct. See the kmem\_cache\_s struct \\
flags & Make sure this cache can't be reaped \\
spinlock & Initialize unlocked \\
colour\_off & Align the objects to the L1 Cache \\
name & Name of the cache \\
\end{tabularx}

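A side note on syntax: the \texttt{field: value} form above is the old GNU C
labelled-element extension used throughout the 2.4 kernel. C99 designated
initializers express the same thing portably; a self-contained illustration
with a made-up struct (not the kernel's):

\begin{verbatim}
#include <stddef.h>
#include <stdio.h>

struct cache_desc {
        size_t objsize;
        unsigned int flags;
        const char *name;
};

/* C99 designated initializers: any field not named here is
 * zeroed, just like the unnamed kmem_cache_t fields above. */
static struct cache_desc demo_cache = {
        .objsize = sizeof(struct cache_desc),
        .flags   = 0x1,
        .name    = "demo_cache",
};

int main(void)
{
        printf("%s: %zu bytes\n", demo_cache.name,
               demo_cache.objsize);
        return 0;
}
\end{verbatim}
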
\function{kmem\_cache\_init}{kmem_cache_init}{mm/slab.c}

\begin{verbatim}
void __init kmem_cache_init(void)
{
        size_t left_over;

        init_MUTEX(&cache_chain_sem);
        INIT_LIST_HEAD(&cache_chain);

        kmem_cache_estimate(0, cache_cache.objsize, 0,
                        &left_over, &cache_cache.num);
        if (!cache_cache.num)
                BUG();

        cache_cache.colour = left_over/cache_cache.colour_off;
        cache_cache.colour_next = 0;
}
\end{verbatim}

\begin{itemize}
\item Initialise the semaphore for access to the cache chain
\item Initialise the cache chain linked list
\item Estimate the number of objects per slab and the number of bytes
wasted. See Section \ref{Sec: kmem_cache_estimate}
\item Calculate the cache\_cache colour, as illustrated below
\end{itemize}

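To make the colour calculation concrete, here is a small sketch of the
arithmetic involved. The object size is only a plausible stand-in; the real
\texttt{sizeof(kmem\_cache\_t)} depends on the kernel configuration, and
kmem\_cache\_estimate() also accounts for the slab descriptor, which is
ignored here for simplicity:

\begin{verbatim}
#include <stdio.h>

#define PAGE_SIZE      4096
#define L1_CACHE_BYTES 32

int main(void)
{
        unsigned int objsize = 116; /* pretend descriptor size */

        /* How many objects fit in one page and what is left? */
        unsigned int num = PAGE_SIZE / objsize;
        unsigned int left_over = PAGE_SIZE - num * objsize;

        /* cache_cache.colour: the number of L1-sized offsets
         * the leftover bytes allow for cache colouring. */
        unsigned int colour = left_over / L1_CACHE_BYTES;

        printf("num=%u left_over=%u colour=%u\n",
               num, left_over, colour);
        return 0;
}
\end{verbatim}
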
\section{Interfacing with the Buddy Allocator}
\label{Sec: Interfacing with the Buddy Allocator}

The slab allocator doesn't come with pages attached; it must ask the
physical page allocator for its pages. For this, two interfaces are
provided, kmem\_getpages() and kmem\_freepages(). They are basically
wrappers around the buddy allocator's API so that slab flags will be taken
into account for allocations.

\function{kmem\_getpages}{kmem_getpages}{mm/slab.c}

This allocates pages for the slab allocator.

\begin{verbatim}
static inline void * kmem_getpages (kmem_cache_t *cachep,
                                    unsigned long flags)
{
        void *addr;
        flags |= cachep->gfpflags;
\end{verbatim}

Whatever flags were requested for the allocation, append the cache's own
flags to them. The only flag it may append is GFP\_DMA, if the cache
requires DMA memory.

\begin{verbatim}
        addr = (void*) __get_free_pages(flags, cachep->gfporder);
        return addr;
}
\end{verbatim}

Call the buddy allocator and return the pages, or NULL if it failed.

\function{kmem\_freepages}{kmem_freepages}{mm/slab.c}

This frees pages for the slab allocator. Before it calls the buddy
allocator API, it will remove the PG\_slab bit from the page flags.

\begin{verbatim}
static inline void kmem_freepages (kmem_cache_t *cachep, void *addr)
{
        unsigned long i = (1<<cachep->gfporder);
        struct page *page = virt_to_page(addr);
\end{verbatim}

The original order for the allocation is stored in the cache descriptor.
The physical page allocator expects a struct page, which virt\_to\_page()
provides.

\begin{verbatim}
        while (i--) {
                PageClearSlab(page);
                page++;
        }
\end{verbatim}

Clear the PG\_slab bit for each page.

\begin{verbatim}
        free_pages((unsigned long)addr, cachep->gfporder);
}
\end{verbatim}

Call the buddy allocator to free the pages.

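The wrapper pattern is worth noting: the slab layer never calls the buddy
allocator without first folding in per-cache state (\texttt{gfpflags},
\texttt{gfporder}). A user-space sketch of the same idea, with malloc
standing in for the buddy allocator and every name being illustrative:

\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>

#define BASE_SIZE 4096u        /* stand-in for PAGE_SIZE */

struct cache {
        unsigned int order;    /* like cachep->gfporder  */
        unsigned int flags;    /* like cachep->gfpflags  */
};

/* getpages analogue: fold the cache's own flags into the
 * request before calling the underlying allocator. */
static void *cache_getpages(struct cache *c, unsigned int flags)
{
        flags |= c->flags;     /* all that kmem_getpages adds */
        (void)flags;           /* malloc takes no flags       */
        return malloc(BASE_SIZE << c->order);
}

/* freepages analogue: kmem_freepages also clears PG_slab on
 * each page first; malloc needs no such bookkeeping. */
static void cache_freepages(struct cache *c, void *addr)
{
        (void)c;
        free(addr);
}

int main(void)
{
        struct cache c = { 1, 0 };
        void *pages = cache_getpages(&c, 0);

        if (pages)
                printf("got %u bytes\n", BASE_SIZE << c.order);
        cache_freepages(&c, pages);
        return 0;
}
\end{verbatim}
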
\section{Sizes Cache}
\label{Sec: Sizes Cache}

Linux keeps two sets of caches for small memory allocations, one suitable
for use with DMA and the other suitable for normal use. The human readable
names for these caches are the \id{size-N cache} and \id{size-N(DMA) cache},
viewable from \texttt{/proc/slabinfo}. Information for each sized cache is
stored in a \id{cache\_sizes\_t} struct defined in \emph{mm/slab.c}:

\begin{verbatim}
typedef struct cache_sizes {
        size_t           cs_size;
        kmem_cache_t    *cs_cachep;
        kmem_cache_t    *cs_dmacachep;
} cache_sizes_t;
\end{verbatim}

\begin{description}
\item[cs\_size] The size of the memory block
\item[cs\_cachep] The cache of blocks for normal memory use
\item[cs\_dmacachep] The cache of blocks for use with DMA
\end{description}

\emph{kmem\_cache\_sizes\_init()} is called to create a set of caches of
different sizes. On a system with a page size of 4096, the smallest chunk is
32 bytes, otherwise it is 64 bytes. Two caches will be created for every
size, both cacheline-aligned: one for normal use and one suitable for ISA
DMA. So the smallest caches of memory are called \emph{size-32} and
\emph{size-32(DMA)}. Caches for each subsequent power of two will be created
until two caches of size 131072 bytes are created. These will be used by
\emph{kmalloc()} later.

\begin{verbatim}
static cache_sizes_t cache_sizes[] = {
#if PAGE_SIZE == 4096
        {    32,    NULL, NULL},
#endif
        {    64,    NULL, NULL},
        {   128,    NULL, NULL},
        {   256,    NULL, NULL},
        {   512,    NULL, NULL},
        {  1024,    NULL, NULL},
        {  2048,    NULL, NULL},
        {  4096,    NULL, NULL},
        {  8192,    NULL, NULL},
        { 16384,    NULL, NULL},
        { 32768,    NULL, NULL},
        { 65536,    NULL, NULL},
        {131072,    NULL, NULL},
        {     0,    NULL, NULL}
};
\end{verbatim}

As is obvious, this is a static, zero-terminated array consisting of buffer
sizes in succeeding powers of 2 from $2^5$ to $2^{17}$. An array now exists
that describes each sized cache; the caches themselves must still be created
at system startup.

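Scanning this zero-terminated table is exactly how kmalloc() will pick a
cache: walk the array until an entry is at least as large as the request. A
self-contained sketch of the lookup, where the table values mirror the array
above but the function name is ours, not the kernel's:

\begin{verbatim}
#include <stdio.h>

static const unsigned int cache_size[] = {
        32, 64, 128, 256, 512, 1024, 2048, 4096,
        8192, 16384, 32768, 65536, 131072, 0 /* terminator */
};

/* Return the sizes-cache block size that would service a
 * request, or 0 if the request is too large for any cache. */
static unsigned int pick_size(unsigned int request)
{
        const unsigned int *s;

        for (s = cache_size; *s; s++)
                if (request <= *s)
                        return *s;
        return 0;       /* > 131072: the allocation fails */
}

int main(void)
{
        printf("100 -> %u\n", pick_size(100));     /* 128  */
        printf("4096 -> %u\n", pick_size(4096));   /* 4096 */
        return 0;
}
\end{verbatim}
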
\subsection{kmalloc}
\label{Sec: kmalloc}

With the existence of the sizes cache, the slab allocator is able to offer a
simple allocation function, \texttt{kmalloc()}, for small memory buffers. It
cycles through the sizes caches until it finds one large enough for this
allocation, then calls \_\_kmem\_cache\_alloc() to allocate from the cache
as normal.

\subsection{kfree}
\label{Sec: kfree}

Just as there is a \texttt{kmalloc} function to allocate small memory objects