We address the mobility of documents by block storage
and versioning, while we use xanalogical storage
to address the movement of content between documents (copy&paste).
Fig. [ref-storm_layers]_ provides an overview of Storm's components.

.. uml:: storm_layers
    :caption: Components of the Storm model

usable yet.

This paper is structured as follows. In the next section, we describe
related work. In Section 3, we give an overview of the xanalogical storage model.
In Section 4, we introduce the basic storage unit of our
system, i.e. file-like blocks identified by cryptographic hashes. In Section 5,
we discuss application-specific reverse indexing of blocks by their

at the cost of each peer maintaining one node in the overlay network
for each (key,value) pair it publishes.

The basic definition of a distributed hashtable does not indicate
how large the keys and values used may be. Intuitively, we expect keys
to be small, maybe a few hundred bytes at most; however, there are different

a *home-store* and the latter a *directory* scheme (they call the peer
responsible for a hashtable item its 'home node,' thus 'home-store').

CFS [ref] is a global peer-to-peer storage system. CFS is built upon the Chord DHT
peer-to-peer routing layer [ref]. CFS stores data as blocks. However, CFS *splits* data
(files) into several miniblocks and spreads these blocks over the available CFS servers.

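The general idea of content-hashed miniblocks can be sketched in Python
as follows (the block size, hash function and names are illustrative,
not CFS's actual parameters)::

    import hashlib

    def split_into_blocks(data, block_size=8192):
        """Split data into miniblocks, each identified by its content hash."""
        blocks = {}   # block id -> bytes; each entry could live on a different server
        ids = []      # ordered list of ids, needed to reassemble the original file
        for i in range(0, len(data), block_size):
            chunk = data[i:i + block_size]
            block_id = hashlib.sha1(chunk).hexdigest()
            blocks[block_id] = chunk
            ids.append(block_id)
        return ids, blocks
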
Through this mechanism, the system can show the user all documents
that share text with the current document.

To track links and transclusions, the system indexes documents by
the characters they contain, and links by the characters they refer to.
To find transclusions of a document, we search the index for other
documents containing any character from this document. To show links,
we first search for links referring to any character in this document.
However, when we have a link, we do not yet have the document it links to.
Therefore, in a second step, we search for documents containing
the characters the link targets.

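A minimal Python sketch of these lookups, using hypothetical in-memory
indexes in place of the real (eventually distributed) index;
``doc.char_ids``, ``doc_index``, ``link_index`` and ``link.other_endpoint``
are assumed, illustrative interfaces::

    def find_transclusions(doc, doc_index):
        """Documents sharing at least one character with `doc`."""
        result = set()
        for char_id in doc.char_ids:
            result |= doc_index.get(char_id, set())   # documents containing this character
        result.discard(doc)
        return result

    def find_link_targets(doc, link_index, doc_index):
        """Two-step link resolution: links touching `doc`, then the
        documents containing each link's other endpoint."""
        found = []
        for char_id in doc.char_ids:
            for link in link_index.get(char_id, ()):
                for target_char in link.other_endpoint(char_id):
                    for target_doc in doc_index.get(target_char, ()):
                        found.append((link, target_doc))
        return found
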
Of course, doing any expensive operation
(like an index lookup) for *every* character
in a document does not scale very well. In practice,
characters typed consecutively are given consecutive ids,
such as ``...:4``, ``...:5``, ``...:6`` and so on, and
operations are on *spans*, i.e. ranges of consecutive characters
(``...:4-6``).

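As an illustration (the tuple layout is ours, not Storm's actual encoding),
consecutive character ids can be coalesced into spans like this::

    def coalesce_spans(char_ids):
        """Turn e.g. ('b7', 4), ('b7', 5), ('b7', 6) into ('b7', 4, 3),
        i.e. (block id, offset of first character, number of characters)."""
        spans = []
        for block, index in char_ids:
            if spans and spans[-1][0] == block and index == spans[-1][1] + spans[-1][2]:
                blk, start, length = spans[-1]
                spans[-1] = (blk, start, length + 1)   # extend the current run
            else:
                spans.append((block, index, 1))        # start a new span
        return spans
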
Our current implementation shows only links between documents
that are in memory at the same time [screenshot of xupdf, perhaps ref too
(submitted) antont: was thinking the same, it would illustrate this well].
In the future, we will implement a global distributed index on top of
a distributed hashtable (Section 5).
To find the transclusions of a span, the system will retrieve
all transclusions of any span with the same prefix (``...:``), then
filter out those that do not overlap the span in question.

Since the problem is to search for overlapping ranges,
the spans themselves cannot be used as hashtable keys.
However, we keep the number of spans with the same prefix
relatively small (limited by the amount of text
the user enters between two saves of a document). Therefore, we hope
that this will not be a major scalability problem. Otherwise,
systems that allow range queries, such as skip graphs [AspnesS2003]_
or SkipNet [ref], may prove useful.

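A sketch of this lookup; ``dht.get`` and the span fields (``block_id``,
``offset``, ``length``) are assumed, illustrative interfaces, and the real
index will live on the distributed hashtable::

    def overlapping_transclusions(dht, span):
        """All indexed spans from the same scroll block that overlap `span`."""
        candidates = dht.get(span.block_id)   # (document, span) pairs under this prefix
        hits = []
        for doc, other in candidates:
            overlaps = (other.offset < span.offset + span.length and
                        span.offset < other.offset + other.length)
            if overlaps:
                hits.append((doc, other))
        return hits
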
Figure [ref-figdocmovement]_ illustrates how xanalogical storage addresses the issue of
movement of data between documents. Initially, there are documents D1 and
D2, with two links (directed arrows in the figure) from D1 to two different
elements in D2, A and B. The links actually are to the *spans* A and B that
are stored in the scroll, but shown as parts of D2, as illustrated with the
dashed lines. Then, when in the next step the document D2 is split in two --
becoming documents D2.1 and D2.2 -- with link target A in the first and B

version of a document whose identifier is hard-wired into
the software (mutable documents are described in Section 6.1).

4.2. Xanalogical storage on top of blocks
-----------------------------------------

In Storm, in each editor session we
create a block containing all characters entered in this session (the content
type being ``text/plain``). To designate a span of characters
from that session, we use the block's id, the offset of the first
character, and the number of characters in the span.
This technique was first introduced in [lukka02guids]_.

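As a minimal sketch (the field names are ours, not Storm's on-disk format),
such a span reference can be written as::

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SpanRef:
        block_id: str   # id of the scroll block holding the session's characters
        offset: int     # index of the first referenced character in that block
        length: int     # number of consecutive characters referenced
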
In Xanadu, characters are stored to append-only *scrolls*
when they are typed [ref?]. Because of this, in Storm, we call the
blocks containing the actual characters *scroll blocks*. The documents
do not actually contain the characters; instead, they are
*virtual files* containing span references as described above.
To show a document, the scroll blocks it references are loaded
and the characters retrieved from there [#]_.

.. [#] It is unclear whether this approach is efficient for text
   in the Storm framework; in the future, we may try storing
   the characters in the documents themselves, along with their
   permanent identifiers. For images or video, on the other hand,
   it is clearly beneficial if content appearing in different
   documents -- or different versions of a document -- is only
   stored once, in a block only referred to wherever
   the data is transcluded.

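Reassembling a document for display then amounts to loading each referenced
scroll block and copying out the referenced characters. A sketch, assuming
the span-reference fields above and a ``load_block`` function that returns
a scroll block's ``text/plain`` body as a string::

    def render_document(span_refs, load_block):
        """Concatenate the characters referenced by a virtual file."""
        parts = []
        for ref in span_refs:
            scroll = load_block(ref.block_id)    # may come from a cache or the network
            parts.append(scroll[ref.offset:ref.offset + ref.length])
        return "".join(parts)
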
5. Application-specific reverse indexing
========================================

Clearly, for block storage to be useful, there has to be a way to
efficiently update documents and maintain different versions of them.
We achieve this by a combination of two mechanisms. Firstly, a
*pointer* is an updatable reference to a block.
Secondly, similar to version control systems like CVS,
we do not store each version in full, but only the differences between versions.

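The interplay of the two mechanisms can be sketched as follows (the data
structures and names are illustrative; Sections 6.1 and 6.2 describe the
actual mechanisms)::

    import hashlib

    def block_id(content):
        """A block is named by a cryptographic hash of its content (bytes)."""
        return hashlib.sha1(content).hexdigest()

    def make_pointer_block(pointer_id, target_block_id):
        """A pointer block asserts: 'pointer P now targets block B'."""
        return {"pointer": pointer_id, "target": target_block_id}

    def reconstruct_version(previous, diff, apply_diff, expected_id):
        """Rebuild a version from a stored diff and verify it against the
        id (hash) that the full version's block would have."""
        version = apply_diff(previous, diff)
        if block_id(version) != expected_id:
            raise ValueError("reconstructed version does not match its id")
        return version
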
6.1. Pointers: implementing mutable resources
---------------------------------------------

In Storm, *pointers* are used to implement mutable resources.
A pointer is a globally unique identifier (usually created randomly)
that can refer to different blocks over time. A block that a pointer
points to is called the pointer's *target* (Fig. [ref-storm_pointers]_).

To assign a target to a pointer, we create a special kind of block,
a *pointer block*, representing an assertion like *pointer P targets

883  for off-line as well as on-line work.  for off-line as well as on-line work.
884  For long-term publishing, one-time signatures have been  For long-term publishing, one-time signatures have been
885  found useful [anderson98erl]_. For the time being, the pointer mechanism  found useful [anderson98erl]_. For the time being, the pointer mechanism
886  works only in trusted Storm zones (Section 3), e.g.  works only in trusted Storm zones (Section 4), e.g.
887  in a workgroup collaborating on a set of documents.  in a workgroup collaborating on a set of documents.
888    
889  .. [#] Without timestamps, digital signatures are only valid  .. [#] Without timestamps, digital signatures are only valid
6.2. Diffs: storing alternative versions efficiently
----------------------------------------------------

.. [Hm, should we move/remove 'Additionally, many versioning'
   paragraph into related work ? -Hermanni]

The pointer system suggests that for each version of a document,
we store an independent block containing this version. This
