\section{Existing Peer-to-Peer systems}

\subsection{Search Methods}

\paragraph{Current discovery methods are not suitable for large decentralized networks}

Current centralized methods of discovery, which are acceptable for dedicated servers hosting relatively static content, break down when applied to large peer-based networks. Current decentralized methods lack the efficiency, flexibility and performance to be effective in large networks.

Searching the Internet and other large networks is currently a very centralized process. All of the major search engines, such as Google, rely on very large databases and servers to process queries. These servers and storage systems are very expensive to build and maintain, and often have problems keeping the information they contain current and relevant.

Search engines are also limited in the sites they can crawl to obtain the data stored in their databases. A typical peer-based network client is far beyond their grasp, which leaves the vast amount of data available within each peer unknown to this traditional method. Data stored in databases accessed via HTML forms and CGI queries is also outside the reach of traditional web crawlers.

Peer-based networks, such as Freenet and Gnutella, rely on a different approach to searching. In some cases this is a shared index or external indexing system. In other cases it may entail querying specific peers or groups of peers until the resource is located (or the searcher gives up).

All of these approaches lack the flexibility and performance needed for use in large peer-based networks.
\paragraph{Resource discovery in peer-based networks is critical to the value of the network as a whole}

The main benefit provided by peer-based networks is that they allow access to all kinds of information and resources that were previously unavailable. These may be files and documents of interest, or computing power for complex computational tasks.

An important feature of these decentralized peer networks is that their perceived value is directly related to the quantity and quality of the resources available within them. More resources can be added by increasing the number of peers in the network. Thus, the value of the network grows as its popularity increases, which further increases its growth, and so on.

There comes a point, however, at which more peers no longer increase the number of resources available to each peer, and may even cause the availability of resources to drop. If the network cannot locate resources among the large numbers of peers, or locating resources becomes exponentially more expensive as the size of the network grows, it will remain crippled at this threshold.

The ability to locate resources efficiently and effectively regardless of network size is therefore critical to the value and utility of the network as a whole.
\paragraph{Locating resources requires diverse information to be widely effective}

Effective discovery methods must rely on a large variety of information about the desired resources, typically in the form of metadata.

Metadata varies widely between the kinds of resources described. It can be as simple as a filename and SHA-1 hash value, or as detailed as a full cast and credits roster for a motion picture. How this metadata is interpreted can also vary widely between types of resources. A search for a given amount of processor time for a complex or grid computation may require checking system resources, such as scheduled jobs and system load, before a reply can be provided.
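
To make this range concrete, the following sketch contrasts a static metadata record with metadata that must be computed at query time. It is illustrative only; the field names and the load threshold are invented here and do not come from any particular system.

\begin{verbatim}
import os

# Illustrative only: a static metadata record for a shared file...
file_record = {"filename": "lecture.ogg",
               "sha1": "356a192b7913b04c54574d18c28d46e6395428ab"}

# ...versus metadata that must be computed when the query arrives,
# e.g. whether this peer can currently accept a compute job.
def can_accept_job(max_load=1.0):
    # one-minute load average; POSIX-only, used purely as an example
    return os.getloadavg()[0] < max_load
\end{verbatim}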

Metadata can vastly improve the accuracy and efficiency of a search, which directly affects the utility and popularity of the network.

Support for a wide variety of metadata and searching options is therefore critical to the value and utility of any peer-based network.
\paragraph{Any discovery mechanism for large peer-based networks must provide a minimum set of features}

To summarize, an effective discovery mechanism is critical to the value and utility of a peer-based network. To be effective, a discovery mechanism must support at least the following:

\begin{itemize}
\item Efficient operation in small or large networks
\item Efficient operation for small or large numbers of resources
\item Support for a wide variety of metadata and query processing
\item Accurate, relevant information for each query
\item Resistance to malicious attack or exploitation
\end{itemize}
\subsubsection{Existing Decentralized Discovery Methods}

A short description and assessment of existing decentralized discovery mechanisms is provided here for comparison with the new approach presented in this document.
\paragraph{All existing discovery methods fail to meet all the desired requirements for use in large networks}

There are a number of existing decentralized discovery methods in use today, built on a variety of designs and architectures. All of these methods have strengths which make them attractive in certain circumstances; however, none of them meet all the criteria desired for use in large peer-based networks.

The major types of discovery methods we will examine are:

\begin{itemize}
\item Flooding broadcast of queries
\item Selective forwarding/routing of queries
\item Decentralized hash table networks
\item Centralized indexes and repositories
\item Distributed indexes and repositories
\item Relevance-driven network crawlers
\end{itemize}

\paragraph{Flooding broadcast systems do not scale well}

The original Gnutella implementation is a prime example of a flooding broadcast discovery mechanism. This type of method has the advantage of flexibility in the processing of queries: each peer can determine how it will process the query and respond accordingly. Unfortunately, this type of method is efficient only for small networks.

Due to the broadcast nature of each query, the bandwidth consumed by a single query grows very rapidly, roughly exponentially in the number of hops the query is forwarded, as the network grows. Rising popularity will cause the network to quickly reach a bandwidth saturation point. This causes fragmentation of the network into smaller groups of peers, and consumes a large amount of bandwidth while in operation.
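
As a rough illustration of why flooding is expensive, the sketch below counts the messages generated by one flooded query under the simplifying assumptions that every peer has the same number of neighbours, each peer forwards to all neighbours except the sender, and no duplicate suppression takes place; these assumptions are mine, not part of the Gnutella protocol description above.

\begin{verbatim}
def flood_message_count(degree, ttl):
    """Upper bound on messages generated by one flooded query,
    assuming each peer forwards to (degree - 1) further neighbours
    and duplicates are never suppressed."""
    total = degree            # the originator contacts its neighbours
    frontier = degree
    for _ in range(ttl - 1):
        frontier *= (degree - 1)
        total += frontier
    return total

# e.g. 4 neighbours per peer and a TTL of 7 already yields
# several thousand messages for a single query
print(flood_message_count(degree=4, ttl=7))
\end{verbatim}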

Segmentation of the network reduces the number of peers visible and the quantity of resources available. Queries must be sent over and over again to compensate for their reduced range in a highly segmented network. It may take a long time for a suitable number of peers to be queried, which further reduces the effectiveness of this approach.

This type of discovery mechanism is very susceptible to malicious activity. Rogue peers can send out large numbers of bogus queries, which places a significant load on the network and disproportionately reduces its effectiveness.

False replies to queries can be formulated for spam or advertising purposes, which reduces the accuracy of query results.
\paragraph{Selective forwarding systems are susceptible to malicious activity}

Selective forwarding systems are much more scalable than flooding broadcast networks. Instead of sending a query to all peers, it is selectively forwarded to specific peers that are considered likely to be able to locate the resource. While this approach greatly reduces the bandwidth limitations on scalability, it still suffers from a number of shortcomings.
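
A minimal sketch of the idea follows: a query is sent only to the few neighbours judged most likely to be able to answer it. The scoring function and the peer interface are hypothetical placeholders; real systems derive this ranking from routing tables, content hints, or past results.

\begin{verbatim}
# Minimal sketch of selective forwarding; `score` and `peer.send`
# are hypothetical placeholders, not any particular protocol's API.
def forward_query(query, neighbours, score, fanout=3):
    """Send the query only to the `fanout` neighbours judged most
    likely to be able to answer it, instead of to everyone."""
    ranked = sorted(neighbours,
                    key=lambda peer: score(peer, query),
                    reverse=True)
    for peer in ranked[:fanout]:
        peer.send(query)
\end{verbatim}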

First and foremost is susceptibility to malicious activity. Because a much smaller number of peers receive the query, it is vastly more important that each of these peers be reputable for the operation to be effective.

A rogue peer can insert itself into the network at various points and misroute queries, or discard them altogether. Results can be falsified to degrade the accuracy and relevance of responses. Depending on the pervasiveness and behaviour of such peers, performance can be degraded significantly.

Any system that relies on trust in an open, decentralized network will inevitably run into problems from misuse and malicious activity.

Each peer must also maintain some amount of additional information used to route or direct the queries it receives. For small networks this overhead is negligible; in larger networks, however, it may grow to unsupportable levels.

While an improvement over flooding broadcast techniques, this approach is still not suitable for a large peer-based network.
\paragraph{Decentralized hash table networks do not support robust search}

Decentralized hash table networks further optimize the ability to locate a given piece of information. Every document or file stored within the system is given a unique ID, typically an SHA-1 hash of its contents, which is used to identify and locate the resource. The network and peers are designed in such a way that a given key can be located very quickly regardless of network size. This type of system does, however, have severe drawbacks which preclude its use as a robust searching and discovery method.

Since data is identified solely by ID, it is impossible to perform a fuzzy or keyword search within the network. Everything must be retrieved or inserted using an ID.
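
A minimal sketch of the put/get style of interface such networks expose makes the limitation visible: both operations take an exact key, so there is nothing to match a keyword against. The \texttt{dht} object here is a hypothetical hash-table client, not the API of any specific system.

\begin{verbatim}
import hashlib

def content_key(data: bytes) -> str:
    # the resource is identified solely by a hash of its contents
    return hashlib.sha1(data).hexdigest()

def publish(dht, data: bytes) -> str:
    key = content_key(data)
    dht.put(key, data)        # `dht` is a hypothetical hash-table client
    return key

def fetch(dht, key: str) -> bytes:
    # retrieval requires the exact key; a keyword or fuzzy query
    # cannot be expressed through this interface at all
    return dht.get(key)
\end{verbatim}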

These systems are also susceptible to malicious activity by rogue peers. A rogue peer may misdirect queries, insert large amounts of frivolous data to clutter the keyspace, or flood the network with queries to degrade performance. In such hierarchical or shared-index systems these attacks can inflict much more damage than the bandwidth and CPU resources required to initiate them (an amplifying effect on the attack).

While more resilient than flooding broadcast networks, and efficient at locating known pieces of information, these networks are still not able to perform robust discovery in large peer-based networks.
\paragraph{Centralized indexes are expensive and legally troublesome}

Centralized indexes have provided the best performance for resource discovery to date. However, they still entail a number of significant drawbacks which preclude their use in large peer-based networks.

The most serious issue is cost. The bandwidth and hardware required to support large networks of peers are prohibitively expensive. Scaling this kind of network requires substantial capital investment and may still reach limits that are unsupportable.

Recent court rulings cast serious doubt on the liability involved in using centralized servers to index resources in a peer-based network. It has been argued that recent legal precedents require any such system to monitor the usage and activity of the network closely enough to ensure that no copyright violations occur. Monitoring and enforcing this requirement is quite challenging, and may be too much of a risk.

Centralized index systems are therefore not suitable solutions for resource discovery in large peer-based networks.
\paragraph{Distributed indexes are difficult to maintain and susceptible to malicious activity}

Distributed indexes eliminate the need for expensive centralized servers by sharing the indexing burden among peers in the network. Legal vulnerability is greatly decreased by removing central control of indexing operations. When designed correctly, these types of networks provide the best performance and scalability of any solution, even more so than most centralized solutions.

The most difficult problem with these types of indexing systems is the cache coherence of all the indexed data. Peer networks are much more volatile, both in terms of peers joining and leaving the network and in terms of the resources contained within the index. The overhead of keeping everything up to date and efficiently distributed is a major detriment to scalability.

There have been a number of proposals and implementations of shared-index systems which address this problem. Unfortunately, distributed indexes encounter problems in the following situations:

\begin{itemize}
\item The number of peers supporting the index network is large
\item Many peers join and depart the network maintaining the index
\item The amount of data to be indexed is significant
\item The metadata for the indexed data is very diverse
\item Malicious peers exploit the trust implicit in a shared index
\end{itemize}

All large peer-based networks exhibit these features, making a distributed index system incredibly complicated.

Their susceptibility to malicious attack is also increased. Rogue peers may insert large amounts of frivolous data, which burdens the shared index and reduces the accuracy of searches within it. A much larger degree of trust is placed on each peer, because each peer must handle and search the indexed data correctly, and must also help maintain the shared index (in terms of bandwidth and physical storage) equally, or at least to the best of its ability given finite resources. This makes resilience in the face of rogue peers extremely difficult.

Supporting a wide range of metadata can also be difficult. An XML schema may be provided to contain this data; however, tracking the metadata in addition to keys or names significantly increases the indexing overhead, further reducing the scalability of the network. Since each peer must search its section of the index when asked, each peer must also be able to understand the metadata as it relates to the query being processed. This is a significant burden, as diverse peers may or may not understand the metadata and how to interpret it.

Distributed indexing systems as they currently exist cannot provide robust discovery in large networks. One can hope that this will change in the future, as such a system would otherwise be the best solution by far.

\paragraph{Relevance-driven network crawlers lack support for proactive queries and diverse data}

Relevance-driven network crawlers take a different approach to the resource discovery problem. Instead of performing a specific query at the peer's request, they use a database of information the peer has already accumulated to determine which of the resources they encounter may be relevant or interesting to the peer.

Over time a large amount of information is accrued and analyzed to determine what common elements the peer has found relevant. The crawler then traverses the network, which usually consists of HTML documents, looking for new information that matches the profile distilled from the peer's previous information.

The problem with this system is that it lacks support for proactive queries for specific information, as it is directed by past information. Support for a wide variety of resources is also missing, since the relevance engine expects a certain kind of data on which it can operate, usually HTML or other text documents.

Finally, this type of discovery can be too slow for most uses. The time required for the crawler to traverse a significant amount of content can be prohibitively long for users on modem or DSL connections.

Relevance-driven network crawlers are therefore not suitable for discovery in large networks.
\subsubsection{Optimizations to Existing Discovery Methods}

Many of the aforementioned discovery methods have been tweaked and tuned in various ways to increase the efficiency and accuracy of their operation. A few of these enhancements are described below.
\paragraph{Intelligence and hierarchy in flooding broadcast networks}

The Gnutella network has come a long way since its conception in April of 2000. The first new feature is increased intelligence in the peers of the network. The second is the use of hierarchy to differentiate high-bandwidth, dedicated peers from slower, less powerful peer clients.

The original Gnutella specification was very simple and intended for small groups of peers. This simple protocol lacked the forethought required for scaling to larger networks. Once the network gained popularity, it became obvious to all involved that additional features were required to avoid congestion in a larger, busy network.

One popular modification was denying access to Gnutella resources for web-based Gnutella clients. These web interfaces allowed a large number of users to search the network without participating, and thus placed a large load on the network with no return value. Many clients will no longer share files with peers who themselves do not share.

Other expensive protocol operations, such as unnecessary broadcast replies, were quickly replaced with intelligent forwarding to the intended destinations.

Connection profiles were implemented to favor higher-bandwidth connections over slower modem connections, so that slow users were pushed to the outer edges of the network and no longer presented a bottleneck to network communication.

Expanding on this theme, the Clip2 Reflector was introduced to allow high-bandwidth broadband users to act as proxies for slower modem users.

All in all, the Gnutella network and related systems have made vast progress. In many cases they may provide adequate performance despite their intrinsic weaknesses.
\paragraph{Catalogs and meta-indexes in distributed hash table networks}

The desire to allow flexible keyword and metadata searching in distributed hash table networks has resulted in various methods for cataloging the data contained within them.

A new project called Espra stores catalog documents within Freenet itself that describe the resources represented by their hash-key identifiers. Additions and searches can be performed on these catalogs to locate resources within the network quickly and efficiently.

Other networks use similar methods which keep the catalog or index in external web servers or documents.

The main drawback of this approach is that it requires the maintenance of these catalogs. Locating a given catalog or index in the first place may also be a problem.

These methods have provided a much needed ability to search for resources in distributed hash table networks; however, they still lack the robustness and flexibility desired in an optimal solution.
\paragraph{Keyword search for distributed hash table networks}

Another use of distributed hash tables is keyword searching, using an individual hash value for each keyword in a query. Each keyword produces a set of matches, which can then be combined for complex multi-word keyword searches.

This approach looks very promising, as it retains the attractive performance and scalability of distributed hash tables while providing the flexibility of keyword- and metadata-based searching. Some implementations of this approach are expected during 2002; however, none are in a stable, usable state as of this writing.
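
The following sketch illustrates the scheme just described under my own simplifying assumptions: each keyword is hashed to its own key, resource identifiers are appended under every keyword key, and a multi-word query intersects the resulting match sets. The \texttt{dht.append} and \texttt{dht.get} calls are hypothetical, standing in for whatever primitives a concrete hash-table network provides.

\begin{verbatim}
import hashlib

def keyword_key(word: str) -> str:
    # each keyword gets its own key in the hash table
    return hashlib.sha1(word.lower().encode()).hexdigest()

def index_resource(dht, resource_id: str, keywords):
    # append the resource's identifier under every keyword it matches
    for word in keywords:
        dht.append(keyword_key(word), resource_id)   # hypothetical call

def search(dht, query: str):
    # intersect the match sets of all keywords for a multi-word query
    sets = [set(dht.get(keyword_key(w)) or []) for w in query.split()]
    return set.intersection(*sets) if sets else set()
\end{verbatim}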

Implementations of searching over distributed hash tables need to solve two hard problems. The first is load distribution for hotspots: very popular hash keys. Some keywords are very popular, and these keywords could drive an unsupportable amount of traffic to a single node (or small set of nodes) in the distributed hash table network. There must be some mechanism for many nodes to share the load of popular keywords.

The second problem is the protection of the insert mechanism in the keyword indexes. It is hard to ensure that all entries returned as hits for a given keyword are legitimate, and false or malicious results stored or appended at a given keyword could severely impact the performance of the search.

Once these problems are solved or minimized, searching over distributed hash table networks could provide a very robust search mechanism for large peer networks.
\paragraph{Hybrid networks using super peers and self-organization}

A popular type of hybrid network has been implemented by FastTrack and used in the Morpheus and KaZaa media sharing applications. This approach has also been implemented in the now-defunct Clip2 Reflector and in the JXTA Search implementation.

This type of network replaces the dedicated central servers used for indexing content with a large number of super peers. These peers have above-average bandwidth and processing power, which allows them to take on this additional workload without greatly affecting their own performance. Every peer in the network contacts one or more of these super nodes to search for matches to a given query.

Super peers are selected automatically based on some kind of bandwidth and memory/CPU metric. Often there is some kind of collaboration between super peers to relay queries if no matches are found locally, and to provide super-peer nodes to new clients.
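
As a small illustration of such a promotion metric, the check below tests a peer against minimal bandwidth, memory and reachability requirements. The thresholds and field names are invented for this sketch; real networks combine several such metrics with uptime and connection history.

\begin{verbatim}
# Illustrative only: thresholds and field names are invented; real
# systems also weigh uptime, connection history and reachability.
def eligible_as_super_peer(peer):
    return (peer["bandwidth_kbps"] >= 512 and
            peer["free_memory_mb"] >= 128 and
            not peer["behind_firewall"])
\end{verbatim}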

This architecture provides the best solution to date. By avoiding fully centralized servers, these networks have been somewhat more resilient legally (although KaZaa and FastTrack are currently involved in legal maneuvers).

These types of networks appear to be the current sweet spot for searching networks. Napster was too centralized, and Gnutella not centralized enough; meeting in the middle with a hybrid super-peer network gives the best of both worlds.

There are still a number of problems with this architecture. Despite being less of a legal target than a true centralized server, super peers are still `mini' centralized servers in function. Given the recent court rulings, these nodes would have to monitor and filter content to avoid possible copyright infringement. Requiring each node to contain a list of all filter information would be nearly impossible to implement given the current size of the filters used by the RIAA alone. The now-defunct OpenNap server network was a distributed collection of smaller centralized servers, and they were threatened out of existence. It is likely that once the encryption used in FastTrack has been circumvented, the super peers will be a prime target for RIAA/MPAA nastygrams.

Support for robust metadata is also difficult to provide with this type of architecture. Each super node must support all of the metadata types used in matching queries for the resources it indexes. For a wide variety of metadata this would require a large amount of overhead in synchronizing support for the metadata across all super nodes, as well as adding the functionality for specific metadata types in each super node.

These super nodes are also prime targets for malicious attack. Since each connected peer provides them with index information as well as queries, it takes only a small amount of effort for a peer to send a large volume of false index information or large numbers of bogus queries. Depending on the specific implementation of the super peers, this may cause excessive memory usage, truncated indexes, and low performance.

Finally, this type of network relies on the generosity of peers in the network to provide these super peers. In current implementations this is an optional feature, and may or may not be feasible in a large network.
\subsubsection{An Adaptive Social Discovery Mechanism for Large Peer-Based Networks}

We now describe the architecture of an adaptive social discovery mechanism that is designed to work efficiently, effectively, and in a scalable manner in large peer-based networks.
\paragraph{Social discovery implies a direct, continued interaction between peers in the network}

One of the fundamental differences of this approach is that it requires a direct connection between each peer and the peers it communicates with. We will see that this affects a large number of the requirements for a robust discovery mechanism.

Each peer directly controls which peers it communicates with, how bandwidth is consumed, and how the network is used. This provides powerful abilities to resist abuse of the network, to allocate bandwidth according to the user's preferences, and, last but not least, to apply many optimizations of the discovery process which would not otherwise be available.

Each connection is also much longer-lived than a typical TCP connection. Connections can be re-established when a dialup user changes IP addresses or a NAT user changes ports; they persist as long as the peers agree to communicate.

This longevity of connections allows peers to maintain a history of their interaction with each of their peers, which in turn is used for reputation management and optimization of discovery operations within the network.
\paragraph{Simple, low-overhead messaging forms the foundation of peer communication}

At the base of this discovery implementation is the use of UDP for simple, low-overhead messaging via small data packets. All communication between peers is performed through a single UDP socket. An application-level multiplexing protocol supports the large number of direct connections with very little overhead, similar to the way TCP and UDP connections are multiplexed over IP using port numbers.
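
The sketch below shows one plausible way to multiplex many logical peer connections over a single UDP socket by prefixing every datagram with a small connection identifier, in analogy to port numbers. The header format and the port number are assumptions made for this sketch, not the protocol actually used by the system described here.

\begin{verbatim}
import socket
import struct

# Sketch only: many logical peer connections share one UDP socket,
# distinguished by a small connection-id header (format invented here).
HEADER = struct.Struct("!I")          # 4-byte connection id

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 7000))          # port chosen arbitrarily

def send(conn_id, payload: bytes, addr):
    sock.sendto(HEADER.pack(conn_id) + payload, addr)

def receive():
    datagram, addr = sock.recvfrom(65535)
    conn_id, = HEADER.unpack_from(datagram)
    return conn_id, datagram[HEADER.size:], addr
\end{verbatim}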

All discovery operations require a certain amount of communication between peers to locate a given resource. In large decentralized networks this often consumes the majority of the available bandwidth. By making the messaging protocol as compact and lightweight as possible, we reduce the overhead required for sending any given message.
\paragraph{Connection persistence allows profile and performance tracking of peers}

The base protocol also uses much longer connection lifetimes between peers. Connections can be re-established if the application is restarted, if a modem line disconnects, or if the ports change on a NAT firewall. As long as the peers wish to remain connected, they may do so.

The reason for this feature is to maintain a history for each peer. This history is used to build a profile of the peer, to determine how `valuable' it is for discovery operations and how many resources it has used.

Peers that are outright malicious can be identified by the fact that they provide no value, yet use large amounts of bandwidth or other resources. Their connections are then terminated.

Peers who consume but do not share resources will in turn be viewed as very low-quality peers and their connections terminated as well. This prevents abuse of the network, or the tragedy-of-the-commons effect, and encourages peers to provide resources and be good neighbors.
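
The kind of per-peer history that could support this policy might look like the sketch below. The fields and the drop rule are assumptions invented for illustration; the point is only that value provided is weighed against resources consumed.

\begin{verbatim}
from dataclasses import dataclass

# Illustrative only: the fields and the drop threshold are invented to
# show the idea of judging peers by value provided vs. resources used.
@dataclass
class PeerProfile:
    useful_replies: int = 0
    bytes_consumed: int = 0

    def should_drop(self) -> bool:
        # a peer that consumes heavily but has never provided value
        # is disconnected
        return self.bytes_consumed > 10_000_000 and self.useful_replies == 0
\end{verbatim}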

\paragraph{Past query responses are used to optimize resource discovery}

The actual search for resources within the network is accomplished by sending a single compact query packet to each peer in the group to be queried. This proceeds in a linear fashion until a sufficient number of resources are located, or the user terminates the query.

This would be a rather slow and inefficient operation if no further optimizations were made. To increase the efficiency of the discovery operation, the profile associated with each peer is used to determine the order in which each peer is sent a query packet.

Peers who have responded with relevant, high-quality resources in the past will have a higher quality value in their profile than peers who have not.

By querying the peers with the higher quality values first, the chances of finding a resource quickly are greatly increased. This in turn decreases the total amount of bandwidth and time required for a search.
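
A minimal sketch of this ordering follows: peers are queried in descending order of their historical quality score and the search stops once enough results have arrived. The \texttt{quality} attribute and \texttt{query()} method are hypothetical names standing in for the profile and messaging machinery described above.

\begin{verbatim}
def discover(peers, query, wanted=10):
    """Query peers in descending order of their historical quality
    score and stop as soon as enough resources have been located.
    `peer.quality` and `peer.query()` are hypothetical names."""
    results = []
    for peer in sorted(peers, key=lambda p: p.quality, reverse=True):
        results.extend(peer.query(query))
        if len(results) >= wanted:
            break
    return results
\end{verbatim}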

\paragraph{Social discovery and profiling encourage sharing and good behavior}

Most searching networks provide little incentive for peers to provide more resources. The `free loader' problem is raised quite often in discussions about peer networking. There have been some attempts to eliminate free loading and bad behavior using agorics or reputation systems; however, these methods have proven very difficult to apply.

In a social discovery network each peer must contribute or risk losing the peers it is connected to. Likewise, a peer that wants to connect to high-quality peers must strive to be a high-quality peer itself. This is all handled autonomously, given the adaptive nature of peer organization during queries and other operations.

As peers continually refine their peer groups, bad or low-quality peers will be dropped and replaced with new peers who may have better characteristics. In this way, good behavior and large numbers of quality resources are rewarded and encouraged.
\paragraph{Distinct groups of peers are supported for distinct types of discovery}

In many cases a user will search for various types of resources on the same network. While a peer may be very good for one type of query, it may be very poor for another. For this reason, groups of peers are supported so that peers can be queried when most appropriate.

This prevents high-quality peers from receiving poor ratings for queries they do not support, and increases the efficiency of the discovery operation by providing groups of peers tuned to the specific type of discovery operation.

For example, one set of peers may be used to locate classical recordings, while another may be used to locate small animation files. Each peer may be useful for one type of query and not the other, and groups ensure that peers are treated appropriately based on their performance for specific types of queries.
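
One way such grouping could be represented is sketched below: peers are kept in separate lists keyed by the kind of query they serve well, so a rating earned in one group never affects another. The structure and names are my own illustration, not the system's actual data model.

\begin{verbatim}
from collections import defaultdict

# Illustrative only: peers grouped by the kind of query they serve
# well, so a poor rating for animations never penalises a peer that
# is excellent for classical recordings.
peer_groups = defaultdict(list)       # query type -> list of peers

def add_peer(query_type, peer):
    peer_groups[query_type].append(peer)

def peers_for(query_type):
    return peer_groups.get(query_type, [])
\end{verbatim}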

\paragraph{Extensions are supported for a wide range of metadata and functionality}

Another core feature of this approach is the use of modular extensions to the discovery operations and application functionality. A protocol extension ID is specified within each query packet. Any third party can define a set of metadata or protocol extensions and assign it a unique extension ID. Any client which supports that extension can then process the metadata appropriately, for much greater flexibility and accuracy during the discovery operation.

Often there is additional processing required for a given set of protocol or metadata extensions. This is supported using dynamic modules which contain the code required to process this information. These modules can be loaded and unloaded at runtime according to the user's needs.
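
A minimal sketch of this dispatch mechanism is shown below: a registry maps extension IDs to handler functions, and a query carrying an unknown ID is simply left unanswered by that client. The registry shape and handler signature are assumptions made for illustration.

\begin{verbatim}
# Illustrative only: the extension ids and handler interface are invented.
extension_handlers = {}               # extension id -> handler function

def register_extension(ext_id, handler):
    extension_handlers[ext_id] = handler

def handle_query(ext_id, metadata, query):
    handler = extension_handlers.get(ext_id)
    if handler is None:
        return []                     # this client does not support it
    return handler(metadata, query)
\end{verbatim}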

This modular, extensible system provides the flexibility to support a wide range of metadata and protocol extensions, further increasing the quality and value of the responses received.

\paragraph{Adaptive social discovery relates directly to the interaction of a user with his or her peers}

Taken as a whole, this process maps closely to the actual interaction that occurs between a user and the peers he or she communicates with in the network.

Groups of peers with similar interests will organize spontaneously, as they would in the physical world, and can remain in continued interaction with each other as long as they find the relationship valuable.

Conversely, those peers which do not contribute to the group, or which attempt to attack other peers outright, will find themselves ostracized until they cease their undesirable behavior.

By taking advantage of this style of interaction, the quality, performance and flexibility required for decentralized resource discovery in large peer-based networks can be achieved.
\subsection{Business}
