%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% $Id$
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\newcommand{\mtablex}[2]{
  \noindent \begin{tabularx}{13.5cm}{#1}
  #2
  \end{tabularx}
  \vspace{8pt}
}

\newcommand{\figesc}[3]{
  \begin{figure}[h]
  \centerline{\includegraphics{#1}}
  \caption{#2}
  \label{fig: #3}
  \end{figure}}
\newcommand{\fig}[2]{\figesc{#1}{#2}{#2}}

% e.g. \function{\_\_alloc\_pages}{__alloc_pages}{mm/page_alloc.c}
\newcommand{\funcsection}{\subsection}
\newcommand{\function}[3]{
  \funcsection{Function #1}
  \label{Sec: #2}
  \index{#1}
  \textit{File: } \url{#3} \\
  \textit{Prototype: }}
\chapter{Slab Allocator}

The majority of memory allocation requests in the kernel are for small,
% ...

wrong flags, the bitmask is compared against CREATE\_MASK defined in
\textit{slab.c}.  CREATE\_MASK consists of all the legal flags that can be
used when creating a cache.  If an illegal flag is used, BUG() is invoked.

\subsection{Cache Static Flags}
\label{Sec: Cache Static Flags}

The cache \texttt{flags} field is intended to give extra information about
the slab. The following two flags are intended for use within the slab
allocator but are not used much.

\begin{description}

\idx{CFLGS\_OFF\_SLAB} Indicates that the slabs for this cache are kept
off-slab. This is discussed further in Section \ref{Sec: Storing the Slab
Descriptor}.

\idx{CFLGS\_OPTIMIZE} This flag is only ever set and never used.

\end{description}

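As a point of reference, the two flags and the test used for the first are
defined near the top of \textit{mm/slab.c}, roughly as follows (a sketch;
the exact values should be checked against the kernel version in use):

\begin{verbatim}
#define CFLGS_OFF_SLAB  0x010000UL  /* slab management kept off-slab */
#define CFLGS_OPTIMIZE  0x020000UL  /* set but never tested */

#define OFF_SLAB(x)     ((x)->flags & CFLGS_OFF_SLAB)
\end{verbatim}
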
Other flags are exposed in \emph{include/linux/slab.h}. These affect how
the allocator treats the slabs.

\mtablex{lX}{
SLAB\_HWCACHE\_ALIGN & Align the objects to the L1 CPU cache \\
SLAB\_NO\_REAP       & Never reap slabs in this cache \\
SLAB\_CACHE\_DMA     & Use memory from ZONE\_DMA \\
}

If CONFIG\_SLAB\_DEBUG is set at compile time, the following flags are
available.

\mtablex{lX}{
SLAB\_DEBUG\_FREE    & Perform expensive checks on free \\
SLAB\_DEBUG\_INITIAL & After an object is freed, call the constructor with
                       a flag set telling it to verify the object is still
                       correctly initialised \\
SLAB\_RED\_ZONE      & Place a marker at either end of objects to trap
                       overflows \\
SLAB\_POISON         & Poison objects with a known pattern to trap changes
                       made to objects not allocated or initialised \\
}

To prevent callers using the wrong flags, a \id{CREATE\_MASK} is defined
consisting of all the allowable flags.

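As a usage sketch, a cache might be created and used as follows during
module initialisation (the cache name, \texttt{struct foo} and the
variables are invented for illustration):

\begin{verbatim}
static kmem_cache_t *foo_cachep;

static int __init foo_init(void)
{
        struct foo *f;

        /* Create a cache of struct foo objects, aligned to the
         * L1 CPU cache. No constructor or destructor is given. */
        foo_cachep = kmem_cache_create("foo_cache",
                                sizeof(struct foo), 0,
                                SLAB_HWCACHE_ALIGN, NULL, NULL);
        if (!foo_cachep)
                return -ENOMEM;

        /* Allocate one object from the cache and free it again */
        f = kmem_cache_alloc(foo_cachep, SLAB_KERNEL);
        if (f)
                kmem_cache_free(foo_cachep, f);
        return 0;
}
\end{verbatim}

Passing a flag outside CREATE\_MASK in the \texttt{flags} argument would
trigger the BUG() check described above.
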
\subsection{Cache Dynamic Flags}
\label{Sec: Cache Dynamic Flags}

The \texttt{dflags} field has only one flag, \id{DFLGS\_GROWN}, but it is
an important one. The flag is set during \texttt{kmem\_cache\_grow} so
that \texttt{kmem\_cache\_reap} will be unlikely to choose the cache for
reaping. When that function does find a cache with the flag set, it skips
the cache and removes the flag.

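The interaction can be seen in the reap path, where the flag is tested and
cleared (this excerpt appears in full in Section \ref{Sec: Cache Reaping}):

\begin{verbatim}
if (searchp->dflags & DFLGS_GROWN) {
        searchp->dflags &= ~DFLGS_GROWN;
        goto next_unlock;
}
\end{verbatim}
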
\subsection{Slab structure}

As mentioned, a slab consists of one or more pages assigned to contain objects.
% ...

described in a later section dealing with kmalloc. They are caches which
store blocks of memory of sizes that are powers of two.

The reader will note that, given the slab manager or an object within the
slab, there does not appear to be a way to determine what slab or cache
they belong to. This is addressed by using the page$\rightarrow$list field
of the pages that make up the cache. \id{SET\_PAGE\_CACHE} and
\id{SET\_PAGE\_SLAB} use the next and prev pointers on the page list to
track what cache and slab an object belongs to. To get the descriptors back
from the page, the macros \id{GET\_PAGE\_CACHE} and \id{GET\_PAGE\_SLAB}
are available. This is illustrated as best as possible in Figure
\ref{fig: Page to Cache and Slab Relationship}.


\begin{figure}[h]
\centerline{\includegraphics{graphics/pageslabcache.ps}}
\caption{Page to Cache and Slab Relationship}
\label{fig: Page to Cache and Slab Relationship}
\end{figure}

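The macros themselves are simple casts on the two list pointers. A sketch
of their definitions (close to what \textit{mm/slab.c} does, though the
exact form should be checked against the source):

\begin{verbatim}
/* Store the cache and slab descriptors in the page's list pointers */
#define SET_PAGE_CACHE(pg,x) ((pg)->list.next = (struct list_head *)(x))
#define GET_PAGE_CACHE(pg)   ((kmem_cache_t *)(pg)->list.next)
#define SET_PAGE_SLAB(pg,x)  ((pg)->list.prev = (struct list_head *)(x))
#define GET_PAGE_SLAB(pg)    ((slab_t *)(pg)->list.prev)
\end{verbatim}
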
\subsection{Overall Structure}

\begin{figure}
% ...

The $list\rightarrow{next}$ pointer points to the cache the object belongs
to and $list\rightarrow{prev}$ points to slab\_t (the slab it is part of).
So given an object, we can easily find the associated cache and slab through
these pointers.

\subsection{Cache Colouring}
\label{Sec: Cache Colouring}

To utilise the hardware cache better, the slab allocator will offset
objects in different slabs by different amounts, depending on the amount
of space left over in the slab. The offset is in units of
\texttt{BYTES\_PER\_WORD} unless \texttt{SLAB\_HWCACHE\_ALIGN} is set, in
which case it is aligned to blocks of L1\_CACHE\_BYTES for alignment to
the L1 hardware cache.

During cache creation, it is calculated how many objects can fit on a slab
(See Section \ref{Sec: Calculating the Number of Objects on a Slab}) and
how many bytes are wasted. Based on that, two figures are calculated for
the cache descriptor.

\mtablex{lX}{
colour      & The number of different offsets that can be used \\
colour\_off & The multiple to offset each object by \\
}

With the objects offset, they will use different lines on the associative
hardware cache. Therefore, objects from different slabs are less likely to
flush each other out of the cache.

The result of this is easiest explained with an example. Let us say that
s\_mem (the address of the first object) on the slab is 0 for convenience,
that 100 bytes are wasted on the slab and alignment is to be at 32 bytes
to the L1 Hardware Cache on a Pentium II.

In this scenario, the first slab created will have its objects start at 0.
The second will start at 32, the third at 64, the fourth at 96 and the
fifth will start back at 0. With this, objects from each of the slabs will
not hit the same hardware cache line on the CPU.

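The offset for each new slab is selected in \texttt{kmem\_cache\_grow},
which cycles \texttt{colour\_next} through the available offsets (a lightly
abridged excerpt; see the function itself for the full context):

\begin{verbatim}
/* Pick this slab's colour and advance colour_next,
 * wrapping back to 0 when the offsets are exhausted */
offset = cachep->colour_next;
cachep->colour_next++;
if (cachep->colour_next >= cachep->colour)
        cachep->colour_next = 0;
offset *= cachep->colour_off;
\end{verbatim}
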
\section{Interfacing with the Buddy Allocator}
\label{Sec: Interfacing with the Buddy Allocator}

The slab allocator does not come with pages attached; it must ask the
physical page allocator (See Section \ref{Sec: Physical Page Management})
for its pages. Two interfaces are provided for this, kmem\_getpages and
kmem\_freepages. They are basically wrappers around the buddy allocator
API so that slab flags will be taken into account for allocations.

\function{kmem\_getpages}{kmem_getpages}{mm/slab.c}

This allocates pages for the slab allocator.

\begin{verbatim}
486 static inline void * kmem_getpages (kmem_cache_t *cachep, unsigned long flags)
487 {
488         void    *addr;
495         flags |= cachep->gfpflags;
496         addr = (void*) __get_free_pages(flags, cachep->gfporder);
503         return addr;
504 }
\end{verbatim}

\begin{itemize}
\item Whatever flags were requested for the allocation, append the cache
flags to it. The only flag it may append is GFP\_DMA, if the cache
requires DMA memory

\item Call the buddy allocator (See Section \ref{Sec: __get_free_pages})

\item Return the pages or NULL if the allocation failed
\end{itemize}

\function{kmem\_freepages}{kmem_freepages}{mm/slab.c}

This frees pages for the slab allocator. Before it calls the buddy
allocator API, it will clear the PG\_slab bit in the page flags.

\begin{verbatim}
507 static inline void kmem_freepages (kmem_cache_t *cachep, void *addr)
508 {
509         unsigned long i = (1<<cachep->gfporder);
510         struct page *page = virt_to_page(addr);
511
517         while (i--) {
518                 PageClearSlab(page);
519                 page++;
520         }
521         free_pages((unsigned long)addr, cachep->gfporder);
522 }
\end{verbatim}

\begin{itemize}
\item Retrieve the order used for the original allocation
\item Get the struct page for the address
\item Clear the PG\_slab bit on each page
\item Call the buddy allocator (See Section \ref{Sec: free_pages})
\end{itemize}

\section{Initialization}

The first function called from \emph{start\_kernel} is {\bf
% ...

colour\_off     & Align the objects to the L1 CPU cache \\
name            & Name of the cache \\
\end{tabularx}

\function{kmem\_cache\_init}{kmem_cache_init}{mm/slab.c}

This function will

\begin{itemize}
\item Initialise the cache chain linked list
\item Initialise a mutex for accessing the cache chain
\item Calculate the cache\_cache colour
\end{itemize}

\begin{verbatim}
void __init kmem_cache_init(void)
{
        size_t left_over;

        init_MUTEX(&cache_chain_sem);
        INIT_LIST_HEAD(&cache_chain);

        kmem_cache_estimate(0, cache_cache.objsize, 0,
                        &left_over, &cache_cache.num);
        if (!cache_cache.num)
                BUG();

        cache_cache.colour = left_over/cache_cache.colour_off;
        cache_cache.colour_next = 0;
}
\end{verbatim}

\begin{itemize}
\item Initialise the semaphore for accessing the cache chain

\item Initialise the cache chain linked list

\item Estimate the number of objects and the amount of bytes wasted. See
Section \ref{Sec: kmem_cache_estimate}

\item If even one kmem\_cache\_t cannot be stored in a page, there is
something seriously wrong

\item \texttt{colour} is the number of different cache lines that can be
used while still keeping L1 cache alignment

\item \texttt{colour\_next} indicates which line to use next. Start at 0

\end{itemize}

\subsection{Initializing cache\_sizes}

\emph{kmem\_cache\_sizes\_init()} is called to create a set of caches of

% ...

\section{Allocating Objects}

This section covers what is needed to allocate an object. The allocator
behaves slightly differently in the UP and SMP cases, so the two will be
treated separately in this section.  Figure \ref{fig: kmem_cache_alloc UP}
shows the basic call graph that is used to allocate an object in the UP
case.

\figesc{graphics/kmem_cache_alloc-UP.ps}{kmem\_cache\_alloc UP}{kmem_cache_alloc UP}

As is clear, there are four basic steps. The first step (head) covers basic
checking to make sure the allocation is allowable. The second step is to
select which slabs list to allocate from. This is one of slabs\_partial or
slabs\_free. If there are no slabs in slabs\_free, the cache is grown (See
Section \ref{Sec: Slab Creation}) to create a new slab in slabs\_free. The
final step is to allocate the object from the selected slab.

The SMP case takes one further step. Before allocating one object, it will
check to see if there is one available from the per-CPU cache and use it if
there is. If there is not, it will allocate \texttt{batchcount} objects in
bulk and place them in its per-CPU cache. See Section \ref{Sec: Per-CPU
Object Cache} for details.

639          \textit{File: }\url{mm/slab.c}\\          \textit{File: }\url{mm/slab.c}\\
# Line 465  They largely affect how the buddy alloca Line 718  They largely affect how the buddy alloca
718  \emph{kmem\_cache\_alloc} calls \emph{\_\_kmem\_cache\_alloc} directly.  \emph{kmem\_cache\_alloc} calls \emph{\_\_kmem\_cache\_alloc} directly.
719  It comes in two flavors, UP and SMP.  It comes in two flavors, UP and SMP.
720    
721  \subsubsection{Allocation on a UP}  \subsubsection{Allocation on UP}
722  With the \#defines removed, this is what the function looks like.  With the \#defines removed, this is what the function looks like.
723  \begin{verbatim}  \begin{verbatim}
724  void * __kmem_cache_alloc (kmem_cache_t *cachep, int flags)  void * __kmem_cache_alloc (kmem_cache_t *cachep, int flags)
{
        ...
\end{verbatim}

% ...

Note the label alloc\_new\_slab, which handles the case where there are no
partially free slabs available. So we grow the cache by one more slab and
try again.

\subsubsection{Allocation on SMP}

There are two principal differences between allocations on UP and on SMP. The

% ...

way a UP does it.
Release the spinlock and return an object if possible. Otherwise return
NULL so that the cache can be grown.

\section{Object Freeing}
\label{Sec: Object Freeing}

This section covers what is needed to free an object. In many ways, it is
similar to how objects are allocated and, just like allocation, there is a
UP and an SMP flavour. The principal difference is that the SMP version
frees the object to the per-CPU cache. Figure \ref{fig: kmem_cache_free}
shows the very simple call graph used.

\figesc{graphics/kmem_cache_free.ps}{kmem\_cache\_free}{kmem_cache_free}

\function{kmem\_cache\_free}{kmem_cache_free}{mm/slab.c}

\begin{verbatim}
void kmem_cache_free (kmem_cache_t *cachep, void *objp)
{
        unsigned long flags;
#if DEBUG
        CHECK_PAGE(virt_to_page(objp));
        if (cachep != GET_PAGE_CACHE(virt_to_page(objp)))
                BUG();
#endif
\end{verbatim}

If debugging is enabled, the page will first be checked with
\id{CHECK\_PAGE} to make sure it is a slab page. Secondly, the page list
will be examined to make sure it belongs to this cache (See Section
\ref{Sec: Slab Structure}).

\begin{verbatim}

        local_irq_save(flags);
        __kmem_cache_free(cachep, objp);
        local_irq_restore(flags);
}
\end{verbatim}

Interrupts are disabled to protect the path. \_\_kmem\_cache\_free will
free the object to the per-CPU cache in the SMP case and to the global
pool in the normal case. Interrupts are then re-enabled.


\function{\_\_kmem\_cache\_free}{__kmem_cache_free (UP)}{mm/slab.c}

This covers what the function does in the UP case. The object is simply
freed to the global pool. The SMP case will be dealt with in the next
section.

\begin{verbatim}
static inline void __kmem_cache_free (kmem_cache_t *cachep, void* objp)
{
        kmem_cache_free_one(cachep, objp);
}
\end{verbatim}

\function{\_\_kmem\_cache\_free}{__kmem_cache_free (SMP)}{mm/slab.c}

This case is slightly more interesting.

\begin{verbatim}
static inline void __kmem_cache_free (kmem_cache_t *cachep, void* objp)
{
        cpucache_t *cc = cc_data(cachep);
\end{verbatim}

Get the data for this per-CPU cache (See Section \ref{Sec: Per-CPU Object
Cache}).

\begin{verbatim}

        CHECK_PAGE(virt_to_page(objp));

        if (cc)
\end{verbatim}

Make sure the page is a slab page. If a per-CPU cache is available, try to
use it, but it is not always available. During cache destruction, for
instance, the per-CPU caches are already gone.

\begin{verbatim}

                int batchcount;
                if (cc->avail < cc->limit) {
                        STATS_INC_FREEHIT(cachep);
                        cc_entry(cc)[cc->avail++] = objp;
                        return;
                }
\end{verbatim}

If the number of objects available in the per-CPU cache is below the limit,
add the object to the free list and return. Update statistics if enabled.

\begin{verbatim}

                STATS_INC_FREEMISS(cachep);
                batchcount = cachep->batchcount;
                cc->avail -= batchcount;
                free_block(cachep,
                        &cc_entry(cc)[cc->avail],batchcount);
                cc_entry(cc)[cc->avail++] = objp;
                return;
\end{verbatim}

The pool has overflowed, so \texttt{batchcount} objects are going to be
freed to the global pool. Update the number of available (\texttt{avail})
objects, free a block of objects to the global cache, then free the
requested object by placing it on the per-CPU pool.


\begin{verbatim}
        } else {
                free_block(cachep, &objp, 1);
        }
}
\end{verbatim}

If the per-CPU cache is not available, then free this object to the global
pool.

\function{kmem\_cache\_free\_one}{kmem_cache_free_one}{mm/slab.c}

\begin{verbatim}
static inline void kmem_cache_free_one(kmem_cache_t *cachep, void *objp)
{
        slab_t* slabp;

        CHECK_PAGE(virt_to_page(objp));
        slabp = GET_PAGE_SLAB(virt_to_page(objp));

\end{verbatim}

Make sure the page is a slab page. Get the slab descriptor for the page.

\begin{verbatim}

#if DEBUG
        if (cachep->flags & SLAB_DEBUG_INITIAL)
                cachep->ctor(objp, cachep,
                        SLAB_CTOR_CONSTRUCTOR|SLAB_CTOR_VERIFY);
\end{verbatim}

If SLAB\_DEBUG\_INITIAL is set, the constructor is called to verify the
object is in an initialised state.

\begin{verbatim}
        if (cachep->flags & SLAB_RED_ZONE) {
                objp -= BYTES_PER_WORD;
                if (xchg((unsigned long *)objp, RED_MAGIC1) !=
                                                     RED_MAGIC2)
                        BUG();
                if (xchg((unsigned long *)(objp+cachep->objsize -
                                BYTES_PER_WORD), RED_MAGIC1) !=
                                                      RED_MAGIC2)
                        BUG();
        }
\end{verbatim}

Verify the red marks at either end of the object are still there. This will
check for writes beyond the boundaries of the object and for double frees.

\begin{verbatim}

        if (cachep->flags & SLAB_POISON)
                kmem_poison_obj(cachep, objp);
        if (kmem_extra_free_checks(cachep, slabp, objp))
                return;
#endif
\end{verbatim}

Poison the freed object with a known pattern.
\texttt{kmem\_extra\_free\_checks} will confirm the object is a part of
this slab and cache. It will then check the free list (bufctl) to make sure
this is not a double free (See Section \ref{Sec: kmem_extra_free_checks}).

\begin{verbatim}


        {
                unsigned int objnr = (objp-slabp->s_mem)/cachep->objsize;

                slab_bufctl(slabp)[objnr] = slabp->free;
                slabp->free = objnr;
        }
\end{verbatim}

Calculate the index of the object being freed. As this object is now free,
update the bufctl to reflect that (See Section \ref{Sec: Tracking Free
Objects}).

\begin{verbatim}

        STATS_DEC_ACTIVE(cachep);

        {
                int inuse = slabp->inuse;
                if (unlikely(!--slabp->inuse)) {
                        /* Was partial or full, now empty. */
                        list_del(&slabp->list);
                        list_add(&slabp->list, &cachep->slabs_free);

\end{verbatim}

If \texttt{inuse} reaches 0, the slab is now empty and is moved to the
slabs\_free list.

\begin{verbatim}

                } else if (unlikely(inuse == cachep->num)) {
                        /* Was full. */
                        list_del(&slabp->list);
                        list_add(&slabp->list, &cachep->slabs_partial);
                }
        }
}
\end{verbatim}

If the slab was previously full (\texttt{inuse} equalled the number of
objects in the slab), it is now partially free, so it is moved to the
slabs\_partial list.

\function{free\_block}{free_block}{mm/slab.c}

This function is only used in the SMP case when the per-CPU cache gets too
full. It is used to free a batch of objects in bulk.

\begin{verbatim}
static void free_block (kmem_cache_t* cachep, void** objpp, int len)
{
        spin_lock(&cachep->spinlock);
        __free_block(cachep, objpp, len);
        spin_unlock(&cachep->spinlock);
}
\end{verbatim}

The parameters are

\begin{description}
\idn{cachep} The cache that objects are being freed from
\idn{objpp} Pointer to the first object to free
\idn{len} The number of objects to free
\end{description}

The function is a locking wrapper. It will

\begin{itemize}
\item Acquire a lock on the cache descriptor
\item Call \_\_free\_block, discussed in the next section
\item Release the lock
\end{itemize}

\function{\_\_free\_block}{__free_block}{mm/slab.c}

This function is trivial. Starting with \texttt{objpp}, it will free
\texttt{len} objects.

\begin{verbatim}
static inline void __free_block (kmem_cache_t* cachep,
                                void** objpp, int len)
{
        for ( ; len > 0; len--, objpp++)
                kmem_cache_free_one(cachep, *objpp);
}
\end{verbatim}

\section{Creating a Cache}
\subsection{Function kmem\_cache\_create()}\index{kmem\_cache\_create()}
        \textit{File: }\url{mm/slab.c}\\

% ...

}
\end{verbatim}

\subsection{Calculating the Number of Objects on a Slab}
\label{Sec: Calculating the Number of Objects on a Slab}

During cache creation, it is determined how many objects can be stored in
a slab and how much wastage there will be. The following function
calculates how many objects may be stored, taking into account whether the
slab descriptor and bufctl's must be stored on-slab.

\function{kmem\_cache\_estimate}{kmem_cache_estimate}{mm/slab.c}

\begin{verbatim}
static void kmem_cache_estimate (unsigned long gfporder, size_t size,
                 int flags, size_t *left_over, unsigned int *num)
{
\end{verbatim}

\begin{description}
\idn{gfporder} The slab consists of 2$^{gfporder}$ pages
\idn{size}     The size of each object
\idn{flags}    The cache flags. See Section \ref{Sec: Cache Static Flags}
\idn{left\_over} The number of bytes left over in the slab. Returned to
caller
\idn{num}      The number of objects that will fit in a slab. Returned to
caller
\end{description}

\begin{verbatim}

        int i;
        size_t wastage = PAGE_SIZE<<gfporder;

        size_t extra = 0;
        size_t base = 0;

\end{verbatim}
\texttt{wastage} is decremented through the function. It starts with the
maximum possible amount of wastage, the full size of the slab.

\begin{verbatim}
        if (!(flags & CFLGS_OFF_SLAB)) {
                base = sizeof(slab_t);
                extra = sizeof(kmem_bufctl_t);
        }
\end{verbatim}

\texttt{base} is where usable memory in the slab starts. If the slab
descriptor is kept on-slab, the base begins at the end of the slab\_t
struct. \texttt{extra} is the number of bytes needed to store a
kmem\_bufctl\_t for each object.

\begin{verbatim}

        i = 0;
        while (i*size + L1_CACHE_ALIGN(base+i*extra) <= wastage)
                i++;
\end{verbatim}

This counts up the number of objects the slab can store, so \texttt{i}
becomes the number of objects the slab can hold. \texttt{i*size} is the
amount of memory needed to store the objects themselves.
\texttt{L1\_CACHE\_ALIGN(base+i*extra)} is slightly trickier. It is the
amount of memory needed to store the kmem\_bufctl\_t array, of which one
entry exists for every object in the slab. As the array is at the
beginning of the slab, it is L1 cache aligned so that the first object in
the slab will be aligned to the hardware cache. As \texttt{wastage}
started out as the full size of the slab, its use is overloaded here as
the capacity the objects must fit within.

\begin{verbatim}
        if (i > 0)
                i--;

        if (i > SLAB_LIMIT)
                i = SLAB_LIMIT;
\end{verbatim}

Because the previous loop counts until the slab overflows, the number of
objects that can be stored is \texttt{i-1}.

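As a worked example, assume a single 4096-byte page per slab, 100-byte
objects and on-slab management, and take $sizeof(slab\_t) = 24$,
$sizeof(kmem\_bufctl\_t) = 4$ and 32-byte L1 cache lines (the structure
sizes are illustrative). The loop condition still holds at $i = 39$ but
fails at $i = 40$:

\begin{eqnarray*}
39 \times 100 + \mathrm{L1\_CACHE\_ALIGN}(24 + 39 \times 4) & = & 3900 + 192 = 4092 \leq 4096 \\
40 \times 100 + \mathrm{L1\_CACHE\_ALIGN}(24 + 40 \times 4) & = & 4000 + 192 = 4192 > 4096
\end{eqnarray*}

After the decrement, \texttt{num} is 39 and the left over space is
$4096 - 3900 - 192 = 4$ bytes.
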
SLAB\_LIMIT is the absolute largest number of objects a slab can store. It
is defined as 0xffffFFFE because kmem\_bufctl\_t is an unsigned int and the
top value, 0xffffFFFF, is reserved as the BUFCTL\_END marker.

\begin{verbatim}
        *num = i;
        wastage -= i*size;
        wastage -= L1_CACHE_ALIGN(base+i*extra);
        *left_over = wastage;
}
\end{verbatim}

\begin{itemize}
\item \texttt{num} is now the number of objects a slab can hold
\item Take away the space taken up by all the objects from wastage
\item Take away the space taken up by the kmem\_bufctl\_t array
\item Wastage has now been calculated as the left over space in the slab
\end{itemize}

\section{Growing a Cache}

At this point, we have seen how the cache is created but, on creation, it
is an empty cache with empty \texttt{slabs\_full}, \texttt{slabs\_partial}
and \texttt{slabs\_free} lists. See Section \ref{Sec: Slab Allocator
Overview} for a description of these lists.

This section will show how a cache is grown when no objects are left in
the \texttt{slabs\_partial} list and there are no slabs in
\texttt{slabs\_free}. The principal function for this is
\id{kmem\_cache\_grow}. The tasks it fulfills are

\begin{itemize}
\item Perform basic sanity checks to guard against bad usage
\item Calculate the colour offset for objects in this slab
\item Allocate memory for the slab and acquire a slab descriptor
\item Link the pages used for the slab to the slab and cache descriptors
(See Section \ref{Sec: Slab Structure})
\item Initialise objects in the slab
\item Add the slab to the cache
\end{itemize}

\begin{figure}
\centerline{\includegraphics{graphics/kmem_cache_grow.ps}}
\caption{kmem\_cache\_grow}
\label{fig: kmem_cache_grow}
\end{figure}

\subsection{Function kmem\_cache\_grow()}
        \textit{File: }\url{mm/slab.c}\\
        \textit{Prototype: }

% ...

\end{verbatim}

The first check is to see if the slab\_t is kept off the slab. If it is,
$cachep\rightarrow{slabp\_cache}$ will be pointing to the cache of memory
allocations large enough to contain the slab\_t. The different size caches
are the same ones used by kmalloc.

\begin{verbatim}
        } else {

% ...

end of the slab\_t if it is on-slab.

Periodically it is necessary to shrink a cache, for instance when kswapd
is woken as zones need to be balanced.  Before a cache is shrunk, it is
checked to make sure it isn't called from inside an interrupt.  The code
behind \emph{kmem\_shrink\_cache()} looks a bit convoluted at first glance.
Its tasks are

\begin{itemize}
\item Delete all objects in the per-CPU caches
\item Delete all slabs from slabs\_free unless the growing flag gets set
\end{itemize}

Two varieties of shrink functions are provided. \texttt{kmem\_cache\_shrink}
removes all slabs from slabs\_free and returns the number of pages freed as
a result.  \texttt{\_\_kmem\_cache\_shrink} frees all slabs from slabs\_free
and then verifies that slabs\_partial and slabs\_full are empty. This is
important during cache destruction when it doesn't matter how many pages
are freed, just that the cache is empty.

\subsection{Function kmem\_cache\_shrink()}
        \textit{File: }\url{mm/slab.c}\\

% ...

caches with duplicate caches being created if the module is unloaded and
loaded several times.

The steps taken to destroy a cache are

\begin{itemize}
\item Delete the cache from the cache chain
\item Shrink the cache to delete all slabs (See Section
\ref{Sec: Cache Shrinking})
\item Free any per-CPU caches (\texttt{kfree})
\item Delete the cache descriptor from the \texttt{cache\_cache} (See
Section \ref{Sec: Object Freeing})
\end{itemize}

Figure \ref{fig: kmem_cache_destroy} shows the call graph for this task.

\begin{figure}
\centerline{\includegraphics{graphics/kmem_cache_destroy.ps}}
\caption{kmem\_cache\_destroy}
\label{fig: kmem_cache_destroy}
\end{figure}

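As a usage sketch (reusing the hypothetical \texttt{foo\_cachep} from
earlier), a module would typically destroy its cache in its exit handler
and check the return value:

\begin{verbatim}
static void __exit foo_exit(void)
{
        /* Returns non-zero if active objects prevent destruction */
        if (kmem_cache_destroy(foo_cachep))
                printk(KERN_ERR
                       "foo: could not destroy cache, objects in use\n");
}
\end{verbatim}
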
\function{kmem\_cache\_destroy}{kmem_cache_destroy}{mm/slab.c}

\begin{verbatim}
int kmem_cache_destroy (kmem_cache_t * cachep)
{
        if (!cachep || in_interrupt() || cachep->growing)
                 BUG();
\end{verbatim}

Sanity check. Make sure the cachep is not null, that an interrupt is not
trying to do this and that the cache has not been marked as growing, which
would indicate it is in use.

\begin{verbatim}

         down(&cache_chain_sem);

\end{verbatim}

Acquire the semaphore for accessing the cache chain.

\begin{verbatim}

         if (clock_searchp == cachep)
                 clock_searchp = list_entry(cachep->next.next,
                                                 kmem_cache_t, next);
         list_del(&cachep->next);
         up(&cache_chain_sem);

\end{verbatim}

\begin{itemize}
\item If \texttt{clock\_searchp}, used by the cache reaper, points to this
cache, move it to the next entry in the chain
\item Delete this cache from the cache chain
\item Release the cache chain semaphore
\end{itemize}

\begin{verbatim}

         if (__kmem_cache_shrink(cachep)) {
                 printk(KERN_ERR "kmem_cache_destroy: Can't free all objects %p\n",
                        cachep);
                 down(&cache_chain_sem);
                 list_add(&cachep->next,&cache_chain);
                 up(&cache_chain_sem);
                 return 1;
         }

\end{verbatim}

Shrink the cache to free all slabs (See Section
\ref{Sec: __kmem_cache_shrink}). The shrink function returns true if there
are still slabs in the cache. If there are, the cache cannot be destroyed,
so it is added back into the cache chain and the error reported.

\begin{verbatim}
 #ifdef CONFIG_SMP
         {
                 int i;
                 for (i = 0; i < NR_CPUS; i++)
                         kfree(cachep->cpudata[i]);
         }
 #endif
\end{verbatim}

If SMP is enabled, the per-CPU data for each CPU is freed with
\texttt{kfree}.

\begin{verbatim}


         kmem_cache_free(&cache_cache, cachep);

         return 0;
}
\end{verbatim}

Delete the cache descriptor from the cache\_cache.

\section{Cache Reaping}
\label{Sec: Cache Reaping}

When the page allocator notices that memory is getting tight, it wakes
\texttt{kswapd} to begin freeing up pages (See Section \ref{Sec:
__alloc_pages}). One of the first ways it accomplishes this task is by
telling the slab allocator to reap caches. It has to be the slab allocator
that selects the caches, as other subsystems should not know anything
about the cache internals.

\figesc{graphics/kmem_cache_reap.ps}{kmem\_cache\_reap}{kmem_cache_reap}

The call graph in Figure \ref{fig: kmem_cache_reap} is deceptively simple.
The task of selecting the proper cache to reap is quite long. As there may
be many caches in the system, only \id{REAP\_SCANLEN} caches are examined
in each call. The last cache to be scanned is stored in the variable
\id{clock\_searchp} so as not to examine the same caches over and over
again. For each scanned cache, the reaper does the following

\begin{itemize}
\item Check flags for SLAB\_NO\_REAP and skip if set
\item If the cache is growing, skip it
\item If the cache has grown recently (DFLGS\_GROWN is set in dflags),
skip it but clear the flag so it will be reaped the next time
\item Count the number of free slabs in slabs\_free and calculate how many
pages that would free in the variable \texttt{pages}
\item If the cache has constructors or large slabs, adjust \texttt{pages}
to make it less likely for the cache to be selected
\item If the number of pages that would be freed exceeds
\texttt{REAP\_PERFECT}, free half of the slabs in slabs\_free
\item Otherwise scan the rest of the caches and select the one that would
free the most pages for freeing half of its slabs in slabs\_free
\end{itemize}

\function{kmem\_cache\_reap}{kmem_cache_reap}{mm/slab.c}

Because of the size of this function, it will be broken up into three
separate sections. The first is the simple function preamble. The second
is the selection of a cache to reap and the third is the freeing of the
slabs.

\begin{verbatim}
int kmem_cache_reap (int gfp_mask)
{
        slab_t *slabp;
        kmem_cache_t *searchp;
        kmem_cache_t *best_cachep;
        unsigned int best_pages;
        unsigned int best_len;
        unsigned int scan;
        int ret = 0;

        if (gfp_mask & __GFP_WAIT)
                down(&cache_chain_sem);
        else
                if (down_trylock(&cache_chain_sem))
                        return 0;

        scan = REAP_SCANLEN;
        best_len = 0;
        best_pages = 0;
        best_cachep = NULL;
        searchp = clock_searchp;
\end{verbatim}

\begin{itemize}
\item The only parameter is the GFP flag. The only check made is against
the \_\_GFP\_WAIT flag. As \texttt{kswapd} can sleep, this flag is
virtually worthless

\item If the caller can sleep, acquire the semaphore

\item Else, try to acquire the semaphore and, if it is not available,
return

\item REAP\_SCANLEN (10) is the number of caches to examine

\item Set searchp to be the last cache that was examined at the last
reap
\end{itemize}

\begin{verbatim}
        do {
                unsigned int pages;
                struct list_head* p;
                unsigned int full_free;

                if (searchp->flags & SLAB_NO_REAP)
                        goto next;
                spin_lock_irq(&searchp->spinlock);
                if (searchp->growing)
                        goto next_unlock;
                if (searchp->dflags & DFLGS_GROWN) {
                        searchp->dflags &= ~DFLGS_GROWN;
                        goto next_unlock;
                }
#ifdef CONFIG_SMP
                {
                        cpucache_t *cc = cc_data(searchp);
                        if (cc && cc->avail) {
                                __free_block(searchp, cc_entry(cc),
                                cc->avail);
                                cc->avail = 0;
                        }
                }
#endif

                full_free = 0;
                p = searchp->slabs_free.next;
                while (p != &searchp->slabs_free) {
                        slabp = list_entry(p, slab_t, list);
#if DEBUG
                        if (slabp->inuse)
                                BUG();
#endif
                        full_free++;
                        p = p->next;
                }

                pages = full_free * (1<<searchp->gfporder);
                if (searchp->ctor)
                        pages = (pages*4+1)/5;
                if (searchp->gfporder)
                        pages = (pages*4+1)/5;
                if (pages > best_pages) {
                        best_cachep = searchp;
                        best_len = full_free;
                        best_pages = pages;
                        if (pages >= REAP_PERFECT) {
                           clock_searchp =
                                list_entry(searchp->next.next,
                                     kmem_cache_t,next);
                           goto perfect;
                        }
                }
next_unlock:
                spin_unlock_irq(&searchp->spinlock);
next:
                searchp =
                        list_entry(searchp->next.next,kmem_cache_t,next);
        } while (--scan && searchp != clock_searchp);
\end{verbatim}

This block examines REAP\_SCANLEN caches to select one to reap.

\begin{itemize}
\item If the cache is marked SLAB\_NO\_REAP, skip it
\item Acquire an interrupt safe lock on the cache descriptor
\item If the cache is growing, skip it
\item If the cache has grown recently, skip it and clear the flag
\item Free any per-CPU objects to the global pool
\item Count the number of slabs in the slabs\_free list
\item Calculate the number of pages all the slabs hold
\item If the objects have constructors, reduce the page count by one
fifth to make it less likely to be selected for reaping
\item If the slabs consist of more than one page, reduce the page count
by one fifth. This is because high order pages are hard to acquire
\item If this is the best candidate found for reaping so far, check if
it is perfect for reaping
\item Record the new maximums
\item best\_len is recorded so that it is easy to know how many slabs are
half of the slabs in the free list
\item If this cache is perfect for reaping, update \texttt{clock\_searchp}
and goto perfect, where half the slabs will be freed
\item The next\_unlock label is reached if it was found that the cache was
growing after acquiring the lock, so the cache descriptor lock is released
\item Move to the next entry in the cache chain
\item Scan while REAP\_SCANLEN has not been reached and we have not
cycled around the whole cache chain
\end{itemize}

\begin{verbatim}
        clock_searchp = searchp;

        if (!best_cachep)
                goto out;

        spin_lock_irq(&best_cachep->spinlock);
perfect:
        best_len = (best_len + 1)/2;
        for (scan = 0; scan < best_len; scan++) {
                struct list_head *p;

                if (best_cachep->growing)
                        break;
                p = best_cachep->slabs_free.prev;
                if (p == &best_cachep->slabs_free)
                        break;
                slabp = list_entry(p,slab_t,list);
#if DEBUG
                if (slabp->inuse)
                        BUG();
#endif
                list_del(&slabp->list);
                STATS_INC_REAPED(best_cachep);

                spin_unlock_irq(&best_cachep->spinlock);
                kmem_slab_destroy(best_cachep, slabp);
                spin_lock_irq(&best_cachep->spinlock);
        }
        spin_unlock_irq(&best_cachep->spinlock);
        ret = scan * (1 << best_cachep->gfporder);
out:
        up(&cache_chain_sem);
        return ret;
}
\end{verbatim}

This block will free half of the slabs from the selected cache

\begin{itemize}
\item Update clock\_searchp for the next cache reap
\item If a cache was not selected, goto out to release the cache chain
semaphore and exit
\item Acquire the cache descriptor spinlock and disable interrupts
\item Adjust best\_len to be the number of slabs to free
\item Free best\_len slabs
\item If the cache is growing, exit the loop
\item Get a slab from the end of the slabs\_free list
\item If there are no slabs left in the list, exit the loop
\item Get the slab pointer
\item If debugging is enabled, make sure there are no active objects
in the slab
\item Remove the slab from the slabs\_free list
\item Update statistics if enabled
\item Release the cache descriptor lock and re-enable interrupts
\item Destroy the slab (See Section \ref{Sec: Slab Destroying})
\item Reacquire the cache descriptor spinlock and disable interrupts
\item Release the cache descriptor lock and re-enable interrupts
\item \texttt{ret} is the number of pages that were freed
\item Release the cache chain semaphore and return the number of pages
freed
\end{itemize}

\section{kmalloc}
\label{Sec: kmalloc}

With the existence of the sizes caches, the slab allocator is able to
offer a new allocator function, \id{kmalloc}, for use when small memory
buffers are required. When a request is received, the appropriate sizes
cache is selected and an object assigned from it.  All the hard work is in
the object allocation path (See Section \ref{Sec: Object Allocation}).

\begin{verbatim}
void * kmalloc (size_t size, int flags)
{
\end{verbatim}

% ...

Go through all the available sizes until a cache with objects large enough
for this allocation is found, then call \_\_kmem\_cache\_alloc() to
allocate from the cache as normal.

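As a short usage sketch (\texttt{struct foo} is again invented for
illustration), allocating and freeing a small buffer looks like any other
allocator pair:

\begin{verbatim}
struct foo *f;

/* Served from the nearest sizes cache >= sizeof(struct foo) */
f = kmalloc(sizeof(struct foo), GFP_KERNEL);
if (!f)
        return -ENOMEM;

/* ... use the buffer ... */

kfree(f);
\end{verbatim}
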
\section{kfree}
\label{Sec: kfree}

Just as there is a \texttt{kmalloc} function to allocate small memory
objects for use, there is a \id{kfree} for freeing them. As with kmalloc,
the real work takes place during object freeing (See Section \ref{Sec:
Object Freeing}).

\begin{verbatim}
void kfree (const void *objp)
{
