/[gzz]/manuscripts/storm/article.rst

Diff of revision 1.117 (by benja, Sat Feb 8 22:08:01 2003 UTC) to revision 1.118 (by benja, Sun Feb 9 03:53:09 2003 UTC)

2.3. Peer-to-peer systems
-------------------------

During the last few years, there has been a lot of research related to
peer-to-peer (p2p) resource discovery, both in academia and in industry.
There are two main approaches: broadcasting [gnutella1, kazaa, limewire,
shareaza] and distributed hashtables (DHTs) [chord, can, tapestry, pastry,
kademlia, symphony, viceroy]. Broadcasting systems forward queries to all
peers reachable within a given number of hops (the time-to-live). DHTs
store (key, value) pairs which can be found given the key: a DHT assigns
each peer a subset of all possible keys and routes queries for a given key
to the peer responsible for it. Before a pair can be found, it must be
*inserted* into the DHT by sending it to the peer responsible for its key.
Both approaches use an application-level overlay network for routing.

While the performance of broadcasting systems is linear in the number of
peers, DHTs usually provide log-like bounds for *all* internal
operations [#]_. This scalability is what makes global searches feasible
in DHTs. In broadcasting approaches, on the other hand, scalability is
achieved by forwarding queries only to a limited subset of the peers
(bounded by the time-to-live), which means that searches in these systems
are not truly global.

.. [#] It's not clear whether *all* proposed DHT designs can preserve
   log-like properties when participants are heterogeneous and they
   join and leave the system in a dynamic manner.

A DHT has a *key space*, for example the points on a circle.
The keys in (key, value) pairs are mapped to points in the key space
through a hash function. Independently, each peer is assigned
a point in the space. The DHT defines a distance metric
between points in the key space (e.g. numeric or XOR); the peer
responsible for a hashtable key, then, is the one *closest*
to it in the key space according to the distance metric.
A peer is thus analogous to a hashtable bucket.
Queries are routed through the overlay network, each hop bringing
them closer to their destination in the key space, until they reach
the peer responsible for them.
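
To illustrate the idea, the following toy sketch (in Python, and not the
routing algorithm of any particular DHT; ``KEY_SPACE``, ``point()`` and the
clockwise metric are invented for this example) shows how keys and peers
can be hashed onto a circular key space and how the peer responsible for a
key can be determined::

    import hashlib

    KEY_SPACE = 2**16   # toy circular key space; real DHTs use e.g. 2**160

    def point(name):
        # Hash a key or a peer identifier onto the circle.
        return int(hashlib.sha1(name.encode()).hexdigest(), 16) % KEY_SPACE

    def distance(a, b):
        # Clockwise distance on the circle -- one possible metric.
        return (b - a) % KEY_SPACE

    def responsible_peer(key, peers):
        # The peer 'closest' to the key stores the (key, value) pair.
        k = point(key)
        return min(peers, key=lambda p: distance(k, point(p)))

A real DHT does not enumerate all peers like this; instead, each peer keeps
a small routing table and forwards queries hop by hop, which is what gives
the log-like bounds discussed above.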

.. http://sahara.cs.berkeley.edu/jan2003-retreat/ravenben_api_talk.pdf
   Full paper will appear in IPTPS 2003 -Hermanni

Recently, a few DHT-like systems have been developed which employ
a key space similar to a DHT's, but in which queries are routed
to (key, value) pairs [SWAN, skip graph]: a peer
occupies several positions in the key space, one for each
(key, value) pair it publishes. In such a system, the indirection of
placing close keys in the custody of a 'hashtable bucket' peer is removed,
at the cost of each peer maintaining one node in the overlay network
for each of its (key, value) pairs.

CFS [ref], which is built upon the Chord DHT peer-to-peer routing
layer [ref], stores data as blocks. However, CFS *splits* data (files)
into several miniblocks and

[...]

Mutable data structures are built on top of the immutable blocks
(see Section 6).

Storing data in immutable blocks may seem strange at first, but
has a number of advantages. First of all, it makes identifiers
self-certifying: no matter where we have downloaded a block from,

[...]

We have implemented the first three (using hexadecimal
representations of the block ids for file names).

Many existing peer-to-peer systems could be used to
find blocks on the network.
For example, Freenet [ref], recent Gnutella-based clients
(e.g. Shareaza [ref]), and Overnet/eDonkey2000 [ref]
also use SHA-1-based identifiers [e.g. ref: magnet uri].
Implementations on top of a DHT could use either the
directory or the home store approach as defined in [ref Squirrel].

Unfortunately, we have not put a p2p-based implementation
into use yet and can therefore only report on our design.
Currently, we are working on a prototype implementation
based on the GISP distributed hashtable [ref]
and the directory approach (using the DHT to find a peer
with a copy of the block, then using HTTP to download the block).
Many practical problems have to be overcome before this
implementation becomes usable (for example, seeding the
table of known peers, and issues with UDP and network
address translation [ref]).

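As a rough sketch of the directory approach (hypothetical code only: the
``dht.put()``/``dht.get()`` calls and the URL scheme below are invented for
illustration and do not reflect the GISP API), a peer could advertise the
blocks it stores by publishing (block id, peer URL) pairs, and a downloader
could then look up a block id and fetch the block over HTTP::

    import urllib.request

    def publish_blocks(dht, my_url, block_ids):
        # Advertise each locally stored block: key = block id,
        # value = a URL at which this peer serves the block.
        for block_id in block_ids:
            dht.put(block_id, my_url)

    def fetch_block(dht, block_id):
        # Ask the DHT which peers claim to have a copy, then
        # download the block itself over plain HTTP.
        for peer_url in dht.get(block_id):
            try:
                with urllib.request.urlopen(peer_url + "/" + block_id) as f:
                    return f.read()
            except OSError:
                continue    # try the next peer advertising this block
        raise KeyError(block_id)

Because block ids are based on a cryptographic hash of the block's
contents, the downloader can verify the received data against the id,
whichever peer it came from.
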
Sometimes it is useful to think about the *zones* blocks are in,
related to distribution policy: for example, a *public*
zone for blocks served to others in the network, a *private*

[...]

Storm provides a general API for indexing blocks in
application-specific ways. We have implemented indexing
on a local machine, but the interface is designed so that
implementation on top of a distributed hashtable
will be straightforward. (Again, our GISP-based implementation
is in a very early stage.)

In Storm, applications are not allowed to put arbitrary
items into the index. Instead, applications that want
to index blocks provide the following callback
to a Storm pool::

    getItems(block) ->
        set of (key, value) pairs

This callback processes a block and returns a set of
hashtable items (key/value pairs) to be placed into the index.
The Storm pool, in turn, provides
the following interface to the application::

    get(key) -> set of (block, value) pairs

This function finds all items created by this application
with a given key, indicating both the application-provided
value and the block for which the item was created.

We use the ``getItems()`` approach instead of
allowing applications to put arbitrary items into the database
because binding items to blocks makes it easy for pools
to e.g. remove associated items when deleting a block.

As an example, the ``getItems()`` method of our Xanalogical
storage implementation will, for a block containing a document,
collect all the spans in the document and return items
keyed by the spans' scroll blocks' IDs, with the spans and
their positions in the document as values. When we want to find
the transclusions of a span, we use ``get()`` to get the items
for the ID of that span's scroll block, and load the document
blocks referenced by the items.

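To make the callback contract concrete, the following sketch shows how an
indexing application and a simple in-memory pool index could fit together.
This is a schematic illustration only: the class, attribute and callback
names are invented here and do not correspond to Storm's actual
interfaces::

    class InMemoryIndex:
        """Toy stand-in for a Storm pool's per-application index."""

        def __init__(self, get_items):
            self.get_items = get_items   # the application's callback
            self.items = {}              # key -> set of (block id, value)

        def index_block(self, block):
            # Ask the application which items this block contributes.
            for key, value in self.get_items(block):
                self.items.setdefault(key, set()).add((block.id, value))

        def get(self, key):
            # All items created by this application under the given key.
            return self.items.get(key, set())

    # A hypothetical application callback: index each block under the
    # name of its author, with a timestamp as the associated value.
    def get_items(block):
        return {(block.author, block.timestamp)}
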
In a networked implementation, each peer is responsible
for indexing the blocks it stores. Since no peer can

[...]

that have already been indexed. When the Storm pool
implementation is initialized, it compares the list
of indexed blocks with the list of all available blocks,
and asks the application for unindexed blocks' items.

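A minimal sketch of this start-up pass, reusing the hypothetical
``InMemoryIndex`` from the earlier example (``indexed_ids`` is an assumed
persistent set of already-indexed block ids)::

    def reindex_on_startup(index, all_blocks, indexed_ids):
        # Index only the blocks that have not been indexed before.
        for block in all_blocks:
            if block.id not in indexed_ids:
                index.index_block(block)
                indexed_ids.add(block.id)
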
One indexing application that may seem obvious is keyword-based
full-text search. However, no research has been done
in this direction; it is not clear whether the current
interface is well suited to this, or whether current implementations
are able to scale to the load of storing an item for each word
occurring in a document.

.. [There are two refs about keywords in DHTs-- should we ref these? -Hermanni]
