
Diff of /gzz/Documentation/misc/hemppah-progradu/masterthesis.tex


revision 1.22 by hemppah, Tue Dec 10 12:52:20 2002 UTC -> revision 1.23 by hemppah, Tue Dec 10 13:05:08 2002 UTC

\subsection{Distributed hash table}
In the Distributed Hash Table (DHT) approach, each value is associated with a unique key (e.g. a
SHA-1 hash \cite{fips-sha-1}) in an $m$-bit virtual address space. The virtual address space is
partitioned into sections, which form adjoining regions of this address space. In general, either a
single computer or multiple computers are assigned to each section of the virtual address space.
Each computer is assigned one or more sections and maintains copies of those key-value bindings
whose keys lie within its assigned sections. This means, in general, that the computer hosting a
given key-value pair is not owned by the user who decided to provide the resource to the network.
Moreover, the allocation of the address space and the assignment of computers to sections is
dynamic. Therefore, every time a node joins or leaves the network, the address space is reallocated.
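
To make the partitioning concrete, the following is a minimal Python sketch (the names, the use of
SHA-1, and the three-node layout are illustrative assumptions, not taken from any particular system)
of how a key in the $m$-bit space is mapped to the node whose section contains it:

\begin{verbatim}
import hashlib
from bisect import bisect_right

M = 160  # bits in the virtual address space (SHA-1 output size)

def key_of(value):
    """Hash a value (bytes) to a point in the 2**M address space."""
    return int.from_bytes(hashlib.sha1(value).digest(), "big")

class ToyDHT:
    """Each node id marks the start of one adjoining section."""
    def __init__(self, node_ids):
        self.starts = sorted(node_ids)

    def responsible_node(self, key):
        # The section containing 'key' starts at the largest node
        # id <= key; keys before the first id wrap around to the
        # section owned by the last node.
        i = bisect_right(self.starts, key) - 1
        return self.starts[i]  # i == -1 selects the last node

dht = ToyDHT(node_ids=[2**20, 2**80, 2**140])
print(hex(dht.responsible_node(key_of(b"some resource"))))
\end{verbatim}

The sketch also shows why reallocation is local: adding a new id to \texttt{node\_ids} splits a
single section, so only the keys in that section change owner.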
\subsection{Hybrid architecture}

\subsection{Tree-based architecture}
[...]
\chapter{Summary of existing Peer-to-Peer systems}
   
This section briefly reviews algorithms used in existing Peer-to-Peer systems. It is not meant to
be an exhaustive survey of Peer-to-Peer systems; instead, it introduces a few systems from each
architectural perspective.

\section{Plaxton}
Plaxton \cite{plaxton97accessingnearby} developed the first routing algorithm that can be used with
DHTs. The algorithm is not designed to be used in dynamic distributed systems, because it assumes a
static node population. However, the algorithm provides very efficient routing for search
operations: a query is routed by matching (from left to right) incrementally the destination
identifier, one digit at a time. Plaxton's algorithm routes in $O(\log n)$ hops and requires a
routing table of size $O(\log n)$.
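
To illustrate this digit-by-digit routing, here is a small Python sketch (the function names and the
base-4, four-digit identifiers are illustrative assumptions; Plaxton's actual scheme also maintains
per-digit routing tables and takes network distance into account):

\begin{verbatim}
def shared_prefix_len(a, b):
    """Number of leading digits two identifiers share."""
    n = 0
    while n < len(a) and n < len(b) and a[n] == b[n]:
        n += 1
    return n

def next_hop(current, dest, neighbors):
    # Forward to any neighbor that matches the destination in at
    # least one more leading digit than the current node does.
    have = shared_prefix_len(current, dest)
    for neighbor in neighbors:
        if shared_prefix_len(neighbor, dest) > have:
            return neighbor
    return current  # no better neighbor: we are the closest node

# Identifiers as fixed-length digit strings (here base 4, 4 digits).
print(next_hop("1032", "1230", ["1200", "3013", "1023"]))  # -> 1200
\end{verbatim}

Because every hop fixes at least one more digit, a lookup needs at most as many hops as there are
digits in an identifier, which is where the $O(\log n)$ bound comes from.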
\section{Tapestry}
Tapestry \cite{zhao01tapestry} is an adaptation of Plaxton's algorithm \cite{plaxton97accessingnearby}.
Tapestry routes queries with path lengths of $O(\log n)$, and each node, in a system with $n$ nodes,
maintains a routing table of size $O(\log n)$. When a node leaves or joins the network,
$O(\log^2 n)$ messages are required.

\section{Pastry}
In Pastry \cite{rowston01pastry}, the key space is treated as a virtual circle. Each node is
responsible for the keys that are numerically closest to it. The neighbors consist of a leaf set,
which is the set of the $|L|$ closest nodes. In addition, Pastry has another set of neighbors
randomly spread out in the key space for more efficient routing. As in Plaxton's approach, Pastry
forwards a query to the neighbor that shares the longest prefix with the key. Pastry routes within
a path length of $O(\log n)$, each node has $O(\log n)$ neighbors, and the departure or joining of
a node requires $O(\log^2 n)$ messages.
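
The role of the leaf set in the final routing step can be sketched as follows (an illustrative
Python toy; the 16-bit key space and the function names are assumptions, not Pastry's actual
parameters):

\begin{verbatim}
BITS = 16  # illustrative circular key space of 2**BITS keys

def numeric_distance(a, b):
    """Distance between two points on the virtual circle."""
    d = abs(a - b)
    return min(d, 2**BITS - d)

def closest_in_leaf_set(key, leaf_set):
    # Final routing step: among the numerically closest known
    # nodes, deliver to the one nearest to the key on the circle.
    return min(leaf_set, key=lambda node: numeric_distance(node, key))

print(hex(closest_in_leaf_set(0x1234, [0x1200, 0x1300, 0xfff0])))
\end{verbatim}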
\section{CAN}
In the CAN model \cite{ratnasamy01can}, nodes are mapped into a virtual $d$-dimensional coordinate
key space. Each node is associated with a hypercubal block of this key space, and every block keeps
information about its immediate hypercubal neighbors. In CAN, nodes have $O(d)$ neighbors and the
expected path length is $O(d n^{1/d})$. Node insertion or deletion affects $O(d)$ existing nodes.
Setting $d = \log_2(n)/2$ gives $n^{1/d} = 4$, so the expected path length becomes $O(\log n)$ and
CAN provides scalability similar to Plaxton's approach.
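
As an illustration of routing in the coordinate space, the following Python sketch forwards a query
greedily toward the key's coordinates (an assumption-laden toy: real CAN nodes know the boundaries
of their neighbors' zones, whereas this sketch compares only representative points of each zone):

\begin{verbatim}
def torus_distance(p, q):
    """Euclidean distance with wrap-around in each dimension."""
    s = 0.0
    for a, b in zip(p, q):
        d = abs(a - b)
        s += min(d, 1.0 - d) ** 2
    return s ** 0.5

def next_hop(current, key_point, neighbors):
    # Forward toward whichever neighbor lies closest to the
    # key's coordinates; stop when no neighbor improves on us.
    best = min(neighbors, key=lambda n: torus_distance(n, key_point))
    if torus_distance(best, key_point) < torus_distance(current, key_point):
        return best
    return current  # our own zone contains the key

print(next_hop((0.1, 0.9), (0.8, 0.2),
               [(0.3, 0.9), (0.1, 0.7), (0.9, 0.1)]))  # -> (0.9, 0.1)
\end{verbatim}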
\section{Chord}
Chord \cite{stoica01chord} uses a virtual circle as the key space. Like Pastry, Chord treats a
node's neighbors as leaf sets. However, in Chord there are two sets of neighbors: each node has a
successor list of $k$ nodes that immediately follow the node in the key space. For better
efficiency, each node also has a finger list of $O(\log n)$ nodes placed around the key space.
In an $n$-node network, each node maintains information about $O(\log n)$ other nodes, and lookups
are resolved in $O(\log n)$ hops. Additionally, in Chord a node join or leave requires
$O(\log^2 n)$ messages.
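
The finger list can be illustrated with the following Python sketch (the 8-bit identifier space and
static membership are illustrative assumptions; real Chord builds and repairs its fingers
dynamically as nodes join and leave):

\begin{verbatim}
M = 8  # identifier space: 2**M positions on the virtual circle

def successor(key, node_ids):
    """First node at or clockwise after 'key' on the circle."""
    candidates = [n for n in sorted(node_ids) if n >= key]
    return candidates[0] if candidates else min(node_ids)

def finger_table(node, node_ids):
    # Finger i points to the successor of node + 2**i, giving
    # each node O(log n) shortcuts spread around the circle.
    return [successor((node + 2**i) % 2**M, node_ids) for i in range(M)]

nodes = [1, 34, 78, 120, 200, 233]
print(finger_table(34, nodes))  # -> [78, 78, 78, 78, 78, 78, 120, 200]
\end{verbatim}

Each finger at least halves the remaining clockwise distance to the key, which is why a lookup
finishes in $O(\log n)$ hops.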
\section{Kademlia}
Kademlia \cite{maymounkov02kademlia} is based on an XOR-based metric topology. In this approach,
every query (message) exchanged conveys useful contact information. Furthermore, Kademlia uses this
information to send parallel query messages. XOR metrics are used to calculate the distance between
points contained in the key space. Routing tables can be maintained more efficiently than in other
DHT approaches. For a system with $n$ nodes, Kademlia's algorithm routes in $O(\log n)$ hops and
requires a routing table of size $O(\log n)$.
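
The XOR metric itself is simple to state in code; the following Python sketch (tiny illustrative
identifiers instead of Kademlia's 160-bit keys, hypothetical function names) shows the distance and
how a lookup would rank known contacts:

\begin{verbatim}
def xor_distance(a, b):
    """Kademlia's metric: bitwise XOR of the two identifiers."""
    return a ^ b

def k_closest(target, known_nodes, k=3):
    # A lookup queries, in parallel, the k known contacts whose
    # ids are XOR-closest to the target key.
    return sorted(known_nodes, key=lambda n: xor_distance(n, target))[:k]

print(k_closest(0b1011, [0b0001, 0b1000, 0b1110, 0b0111]))  # -> [8, 14, 1]
\end{verbatim}

Because the metric is symmetric, the sender of any incoming query is itself a valid contact at a
known distance, which is why every exchanged message can convey useful routing information.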
\section{Coral}

Coral [NOTYETPUBLISHED] is based on a new abstraction called a distributed sloppy hash table (DSHT)
and is a layer on top of existing lookup systems, such as Chord, CAN, Kademlia, Pastry and Tapestry.
In contrast to original DHTs, Coral provides a lookup, which
