Peculiarities of Cyberspace
Liberating saviour or plundering pirate
Exchanging files and information between computers is nearly as old as the computer itself. Until recently, however, the systems for sharing files and information between computers were extremely limited. They were mainly confined to Local Area Networks (LANs) and to the exchange of files with familiar individuals via the internet. File exchange in local networks was usually handled by the built-in network software, whereas file exchange via the internet went through an FTP (File Transfer Protocol) connection or one of the many commercial programmes such as Hotline or Carracho. The range of this peer-to-peer sharing was restricted to the circle of computer users someone knew and who agreed to exchange files. Users who wanted to communicate with new or unknown users could transfer files with the help of IRC (Internet Relay Chat) or similar bulletin boards. These methods never became very popular because they were rather difficult to use.
What makes p2p-networks special is that dedicated software is installed on individual computers, making direct communication possible with other computers running the same software. Once such a connection has been established, the p2p-programme enables the exchange of practically any digital file between the connected machines.
When the first p2p-programmes stormed the internet, it was not yet clear how this technological innovation should be judged. For some it was a dangerous ghost eating away at one of the basic values of civil society: the digital exchange marts were nests of pirates spreading illegal copies of software, music and films. For others these exchange marts were a legitimate instrument for giving the power over the web back to its users [Kees Vuik]. After all, from the beginning the internet was nothing but a system of connected computers communicating with each other as equals [History of the internet].
Over the years the internet has become nearly as stratified as the society it stems from. The exponential growth and far-reaching commercialization of the web have led to an ever-stronger manifestation of society's power structures in the virtual world. At present specialized computers channel the data traffic on the internet, and portals and search engines such as AOL, Google and Yahoo! dominate and exploit the market of internet dollars. Strongly concentrated hubs have arisen that play a crucial role in internet traffic: monster servers diverting their information to millions of ordinary web users.
The English word 'peer' refers to someone who is your equal with respect to control over relevant social resources such as education, social position or financial position. You could also say companion or mate. In a p2p-computer network every machine is equal. Such a network has no central servers that deal with clients' requests; a p2p-network is composed solely of equal 'fellow computers'.
What is peer-to-peer?
Sharing files by means of peer-to-peer programmes marks an important change in the way internet users find and exchange information. In the traditional client-server model access to information is realized by interaction between clients (users who request services) and servers (suppliers of services, usually websites or portals). The peer-to-peer model allows users who want to do so to interact with each other directly and to share information without the intervention of a server. A common characteristic of peer-to-peer programmes is that these virtual networks are constructed with their own mechanisms for addressing message traffic. Peer-to-peer networks can offer services directly and connect users with each other. A great number of powerful applications have meanwhile been built around this model, ranging from the SETI@home network (where users share their computers' processing power in the search for extraterrestrial life) to KaZaA, the popular file-sharing programme [summary by Zeropaid.com].
P2p-networks consist of individual computers running the same software, with which they can communicate directly via the internet. A computer connected to such a network is not connected to just one other computer, but to an extensive web of computers tied together by a common thread: the p2p-programme. Once the connection has been established, the programme makes free exchange between participants possible. There is no central communication hub, but many decentralized communication nodes. That is why p2p-activities are difficult to trace and stop.
Most producers of p2p-software believe that information should be free, that anyone should be able to surf anonymously, and that nobody has the right to check what someone else searches, reads or publishes on the internet. The technology enabling this is based on a form of file sharing.
Types of p2p-networks
There are two main variants of peer-to-peer networks.
- The centralized model, in which a central server or 'broker' controls the traffic between individually registered users. The best-known example of this model is Napster.
- The decentralized model, in which participants find each other directly and interact. The Gnutella network is an example of this model.
In centralized p2p-networks, central servers keep indexes of the shared files stored on the PCs of the network's registered users. Each time a user logs in or out, these indexes are updated. When a user of a centralized p2p-network requests a certain file, the central server compiles a list of the files that match the request. The request is checked against the server's database, which contains references to the files of the users connected to the network at that moment. The central server shows this list to the requesting user, who can select the desired file from it and open a direct HTTP connection with the individual computer that holds the file at that moment. The file is downloaded directly, from one network user to the other. The actual file is never stored on the central server or on any other intermediary point in the network; the server merely facilitates the interaction between equal computers.
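The role of such a central 'broker' can be illustrated with a short Python sketch. The class and method names (IndexServer, login, search) are illustrative assumptions for this model, not Napster's actual protocol.

```python
class IndexServer:
    """Central 'broker' that indexes which peer shares which file."""

    def __init__(self):
        self.index = {}                       # filename -> set of peer names

    def login(self, peer, files):
        # Each login updates the central index with the peer's shared files.
        for name in files:
            self.index.setdefault(name, set()).add(peer)

    def logout(self, peer):
        # On logout the peer's references are removed, keeping the index fresh.
        for peers in self.index.values():
            peers.discard(peer)

    def search(self, name):
        # The server returns only *references* to peers; the file itself is
        # then fetched directly from one of those peers over HTTP.
        return sorted(self.index.get(name, set()))


server = IndexServer()
server.login("alice", ["songA.mp3", "songB.mp3"])
server.login("bob", ["songB.mp3"])
print(server.search("songB.mp3"))   # ['alice', 'bob']
server.logout("alice")
print(server.search("songB.mp3"))   # ['bob']
```

Note how the index only ever holds references: the transfer itself happens peer to peer, which is exactly why the actual files never touch the server.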
A centralized p2p-network has a number of advantages. The most important one is the central index, which makes it possible to locate files quickly and efficiently. Because the central server keeps the index permanently up to date, the files users find there are immediately available for download. Another advantage is that all individual users (clients) have to be registered on the server's network. As a consequence, requests for files reach all logged-in visitors, so searches cover as much of the network as possible.
The problem with these p2p-networks is precisely their centralized architecture. A centralized architecture enables efficient and extensive searching, but it gives the system a single point of access. Networks of this kind can collapse completely when one or more servers are put out of order. Moreover, the server-client model can produce obsolete information or broken links, because the central server's database is only updated periodically.
Decentralized p2p-programmes are called servents. A servent is a server and a client at the same time. When you install a Gnutella clone on your computer, you receive both a search engine and a file server. The search engine serves to locate other Gnutellians on the internet and to see which files they have on offer. The file server makes your own files available to the Gnutella community. It is up to you to decide what you offer and what you prefer to keep private, by opening certain folders on your hard disc.
Searches through distributed networks aren't recorded anywhere, but spread like thousands of ripples across a pond, which makes them hard to trace back to their source. In these programmes the anonymity of the surfer is much greater than in centralized p2p-programmes such as Napster.
Because of this built-in anarchism, distributed p2p-networks are regarded by some as a subversive or even disruptive technology that might harm the roots of the internet. Others prefer to see p2p-networks as a return to the egalitarian internet of the early days, not yet commercialized and concentrated.
It is and remains an arms race between rebels and the establishment. The rebels defy the establishment by making revolutionary use of advanced technologies for communication and information exchange, and are brought to court and threatened by wealthy media giants. With the help of programmes such as Media Tracker and Copyright Agent, record companies spy on individual users in p2p-networks. Copyright holders who believe their rights have been violated on the internet or online services by unauthorized use of their protected works can appeal to the service provider concerned and demand that the illegal material be removed or that access to it be blocked.
Decentralized p2p-networks don't use a central server to keep track of all users' files. Users begin with a computer that is connected to the network and equipped with a servent, a programme that works as a combination of 'server' and 'client'. The first user, with computer A, contacts another computer B that is connected to the internet. A announces to B that it is 'alive' (social presence); B in turn announces to all computers it is connected to, C, D, E and F, that A is online. Subsequently the computers C, D, E and F announce to all computers they are connected to that A is online, and those computers repeat the pattern. Although the reach of this network is potentially infinite, in reality it is restricted by the 'time-to-live' (TTL): the number of computer layers that a request will reach. Servents discard network messages with an excessively high TTL.
When A has announced to the members of the network that it is online, it can search the contents of the shared directories (folders) of the network members. The request is sent to all members of the network, starting with B, then C, D, E and F, who in turn send it to the computers they are connected to, and so on. When one of the computers in the network has a file that meets the request, it returns the file information (name, size, type, etc.) via all computers in the path back to A. The list of matching files appears on the servent display of computer A. A can then make a direct connection with the computer on which the file is available and download it directly from that computer. The distributed p2p-model thus enables file sharing without the use of dedicated servers that do not themselves offer files.
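The flooding mechanism just described can be simulated in a few lines of Python. This is a toy model under stated assumptions: a static network, a simple breadth-first flood, and a visited-set standing in for the message identifiers real servents use to avoid forwarding the same query twice.

```python
from collections import deque

def flood_query(neighbours, shared, start, filename, ttl):
    """Return the peers that answer a query flooded from `start`.

    neighbours: dict peer -> list of connected peers
    shared:     dict peer -> set of files that peer offers
    ttl:        how many computer layers the request may travel
    """
    hits, visited = [], {start}
    queue = deque([(start, ttl)])
    while queue:
        peer, t = queue.popleft()
        if filename in shared.get(peer, ()):   # this peer can answer
            hits.append(peer)
        if t == 0:                             # time-to-live exhausted
            continue
        for n in neighbours.get(peer, ()):
            if n not in visited:               # don't forward a query twice
                visited.add(n)
                queue.append((n, t - 1))
    return hits

net = {"A": ["B"], "B": ["C", "D"], "C": [], "D": ["E"], "E": []}
files = {"D": {"song.mp3"}, "E": {"song.mp3"}}
print(flood_query(net, files, "A", "song.mp3", 2))   # ['D']
```

With a TTL of 2 only D is found; E also holds the file but lies one layer beyond the horizon, which illustrates how the time-to-live bounds each user's reach.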
The decentralized p2p-model has a number of advantages over other methods of file sharing. Since the network is decentralized, it is more robust than a centralized model: it eliminates the dependency on centralized servers that are potentially critical points of failure. The new generation of p2p-networks does not use a central database with the names of available files, but is completely decentralized. There is no spider in the web that can be eliminated, as was the case with the 140 servers of Napster. The only way to get these networks off the air is by taking the millions of individual users to court. Thus, the new generation of p2p-programmes is not only more robust from a technical point of view, but also much more difficult to fight from a legal point of view. Decentralized p2p-networks such as Gnutella have been designed for searching every type of digital file (ranging from recipes to pictures and Java libraries). In principle decentralized networks can reach every computer on the internet, whereas even the most extensive search engines only reach 20% of the available websites (larger range).
In decentralized p2p-networks the messages are sent to 'unknown friends'. Users send their request to their 'closest friends', who in their turn spread that request among their 'friends-of-friends'. When one or more users in the network break off the connection, requests are still passed on.
Risks of p2p-networks
Users of p2p-systems face two dangers. The most general, and therefore first, problem is safety. How do you know that a virus doesn't infect the files you are downloading from an anonymous private computer? You —the average user— don't. With their 'healthy craving for weaknesses', hackers have contributed to the growing robustness of p2p-systems. The advanced p2p-networks have built-in safety measures ('network firewalls') that prevent the insertion of viruses, worms or other undesired activities. The largest polluters of the digital exchange marts are, for that matter, not frustrated loners who take personal pleasure in disruption, but the well-advised defence armies of the very powerful media and entertainment industry.
The second danger facing users of p2p-systems is that they are personally prosecuted for copyright infringement. In 2000 the RIAA sent a letter to Fortune 1000 companies, warning them of potential liability for the use of p2p by employees. A number of years later the giants of the media industry announced that they would prosecute any individual user or distributor of copyright-protected material. The thought behind this is as simple as it is effective: most people don't run a red light when they expect to be caught and punished for it.
Rise and fall of Napster
Facilitating digital piracy?
Napster was the first p2p-programme used by internet users on a very large scale. Napster is a protocol for sharing files among users. With Napster the files remain on the client computers and are never distributed via the server. The server offers the possibility to search for specific files and to initiate a direct transfer between the clients.
Napster was written by the 19-year-old Shawn Fanning. He created a new name space that is independent of the official name space of the internet, the Domain Name System (DNS). When users register with Napster they give their computer a name. When another Napster user wants to communicate with such a user, the Napster server translates this name into the internet address of the user's computer. The Napster server thus functions as a name server and a search engine.
Every system that translates names into internet numbers is a name space. All computers on the internet have their own unique number, the IP-number (Internet Protocol). Since it is not very convenient to remember such a number, the owner of a computer can request a domain name and have it linked to the number. A domain name is a unique name with which places on the internet can be identified. The domain name of this website, for example, is 'sociosite.net'. This name is unique and is always associated with this website. The domain name is, so to speak, the given name of a website's IP-number.
Technically speaking, however, every computer on the internet only has an IP address. The domain name system makes it possible to give a descriptive name to an internet address. Domain names are recorded in the main DNS server, which is managed by the Network Solutions Registry. This links the domain name to a DNS server of one's own choice, which in turn makes the domain name refer to the website.
Users can register with a Napster server under any name whatsoever. Registration is prompt, free of charge and requires no contact or other personal information. Names are handed out on a first-come, first-served basis. People without a permanent IP-address can also register with Napster. Napster's search engine searches the name space and links to the files.
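A name space of this kind boils down to a table that maps self-chosen names to IP addresses on a first-come, first-served basis. The sketch below is a hypothetical illustration; NameSpace, register and resolve are invented names, not Napster's API.

```python
class NameSpace:
    """Minimal Napster-style name space: user name -> current IP address."""

    def __init__(self):
        self.names = {}

    def register(self, name, ip):
        # First come, first served: a name that is already taken is refused.
        if name in self.names:
            return False
        self.names[name] = ip
        return True

    def resolve(self, name):
        # Translate a user name into an internet address, the way
        # DNS translates a domain name into an IP-number.
        return self.names.get(name)


ns = NameSpace()
ns.register("musicfan", "203.0.113.7")
print(ns.register("musicfan", "198.51.100.2"))   # False: name already taken
print(ns.resolve("musicfan"))                    # 203.0.113.7
```

Because the mapping is refreshed at every login, it also works for users without a permanent IP-address: the name stays the same while the number behind it changes.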
Napster opened its digital doors in 1999. By July 2000 Napster was already one of the 50 most visited sites on the web [Media Metrix]. In that month 4.9 million users clicked on Napster, and about 28 million users had downloaded the Napster programme.
After some hesitation the international music industry launched a large-scale, hard-hitting attack on Napster. The music industry claimed it would lose billions of dollars in income as a consequence of p2p-piracy. The large record labels instituted legal proceedings against Napster, but the whole online music world felt involved. It became a sort of test case, which had to determine how far the new musical initiatives could stretch copyright and punch a hole in the established system of the music industry.
In July 2000 judgment was passed on Napster, and the judgment was devastating. Napster had to close the gateways of its online exchange service because it supported the distribution of copyright-protected songs and compositions. Napster could not guarantee that the users of its software would not commit piracy [the legal and moral argumentations in the Napster trial will be dealt with extensively elsewhere].
After this first major victory over internet piracy the number of Napster users declined sharply. Subsequently, Napster was taken over by the German media group Bertelsmann, which promised to release a legal (and paid) version of the p2p-programme.
Over 70 million Napster users had at least made one thing clear: many people wanted access to digital music via the internet.
Napster's lawyers argued that Napster was merely a technology enabling the personal sharing of music. It is legally permitted to make unlimited copies for personal use and to share these copies with friends and acquaintances, as long as no money is charged and no other commercial goals are pursued. The programme itself and its users do not by definition infringe on other people's copyrights; whoever does so is personally responsible. Napster itself is not responsible or liable for the deeds of users who download copyright-protected music without paying for it.
Failure points: robustness and vulnerability
The lawyers of the music industry countered that Napster was a company based on piracy. It facilitates the exchange of illegal copies that are distributed in public, and that is something completely different from copying or downloading music for personal use. Since Napster cannot firmly guarantee that its users do not infringe on copyrights, the programme must be designated as illegal.
Napster is a typical example of a centralized p2p-network. This has a few advantages, but also a drawback. As said before, the centralized architecture of a p2p-network enables efficient and extensive searching of the available files. The coordinating index with names and addresses of files resides on a central server and is directly accessible at a single point. In this strength, however, also lies Napster's point of failure.
- Vulnerable architecture: central administration
Napster was vulnerable because it filed the references to the songs its members wanted to exchange on its own servers. The lists of sharable music were kept on the hard discs of Napster's central servers. This made Napster not only technologically vulnerable, but also legally.
- Technologically: Centralized p2p-networks can collapse completely or partly when one or more central servers are put out of order. Because tens of millions of people made use of Napster's central indexes, its servers were heavily loaded. This not only required large investments in a gigantic server park (140 servers) and a large bandwidth; the whole network could also fall apart when the central servers were overloaded, whether or not by malicious attacks from outside. Technologically speaking, Napster's network was sensitive to both external and internal disturbances of balance. Neither its robustness (resistance to random damage) nor its attack tolerance (resistance to malicious attacks) was optimal.
- Legally: From a legal point of view, too, Napster proved rather vulnerable. By maintaining a central database with the names and addresses of available files, Napster could be held responsible for illegal transfers in the network. Napster could be blamed for publishing, on its own servers, information that enabled users to download illegal material. Whatever one thinks of this morally or legally, making this index of references public made Napster vulnerable in the trials that were conducted. When Napster was finally forced by the judge to pull the plug on its servers, the whole network collapsed.
- Insufficient support from famous artists
Also, and especially, in the music world fame and the income attached to it are unequally divided. Only a small proportion of musicians (around 2 percent) obtain contracts with large music companies. Without the marketing and promotion of these large labels, smaller artists never get the breakthrough they need. Napster prided itself on the exposure its service offered to less well-known artists. Within the first few weeks of its existence more than 5,000 musicians agreed to the distribution of their music by Napster. But among famous artists Napster found little support.
The combination of these failure points proved fatal to Napster. Many feared that the conviction of Napster would restrain the further development of p2p-technologies. That has not been the case, however. Very soon after Napster collapsed, decentralized networks were constructed with the help of newer technologies; they were much more robust and much less vulnerable. They allow much faster exchange of all kinds of files and make it almost impossible to trace the identity of the user. Such systems are inherently difficult to censor or control.
At the same time there was a search for systems that would do justice to the interests of both consumers and creative artists. Those interests do not necessarily coincide with those of the intermediaries: the music and film companies. But these industries had been given a thorough shake-up by the Napster affair. Finally they took their own initiatives to make their digital products available for money via the internet.
With a little technical knowledge, open-source programmers have found a way to run their own Napster servers. This way the Napster software can be used to exchange songs without using the Napster servers [OpenNap]. Via Napigator one can get access to these unaffiliated servers. Napigator functions as a guide to parallel Napster servers that have no connection with the software company.
Napster was merely the frontline of the digital download army. As soon as the plug was pulled on Napster's central servers, its users retreated into the jungle and shared music with the help of distributed information systems such as KaZaA, Grokster and Gnutella. Meanwhile newer p2p-programmes such as eDonkey and BitTorrent are defeating their older competitors in the battle for faster downloads and better search facilities. It is impossible to determine how many guerrilla downloaders there are now, but in any case it seems to be getting more difficult to trace or stop them.
Gnutella: the strength of the weakest link
Gnutella is a fully distributed technology for information sharing, based on the peer-to-peer model. This means that users make contact with each other directly via a piece of client software, generating a self-organizing virtual network. Gnutella doesn't rely on a central server. The structure is built in such a way that a continuous chain of all users is formed. Every user is simultaneously client and server: data transmission takes place in both directions. Queries are carried out faster than with Napster, and the speed of the downloads is also high.
Strictly speaking Gnutella is not a programme at all, but a freely available framework of protocols with which anyone can build their own version of a Gnutella system. Dozens of such programmes now exist, including BearShare, Gnotella, LimeWire and Hagelslag.
Gnutella is an efficient and useful programme, but it has problems with scalability. The cause is the way in which queries are treated. In Gnutella queries have a certain life span. When, for example, a query is sent with a life span of 10 and each site contacts 6 other sites, a million (10^6) messages can be exchanged. Such an exponential spread of queries limits the system's scalability as the network grows, and also makes it vulnerable to malicious denial-of-service attacks that overload the network with queries.
Due to the limited life span of queries, each user's horizon is restricted. Everyone can search a few hundred sites at a time, but will never find the files located just outside this horizon. With this 'subnetting' the designers of Gnutella have attempted to protect the network against attacks.
The weakness of Gnutella became apparent in the summer of 2000, when word got out that Napster might have to close its doors on a judge's orders. Music lovers ran wild and threw themselves massively on Gnutella, clogging the network so badly that it was unavailable for days. The vulnerability of Gnutella is a consequence of its search mechanism. Queries are sent to (usually 4) 'neighbours' in the network, who in turn send the query on to their neighbours. When one of those 'neighbours of neighbours' has the file, a message is sent back to the computer that brought the query into the network. This 'query flooding' by definition generates extremely busy search traffic in the Gnutella network, and under certain circumstances it may disrupt the whole network.
A structural weakness is and remains the fact that the speed of the network is determined by the speed of the weakest link. In normal situations an average Gnutella network consists of about ten thousand computers. The size of this 'horizon' depends, among other things, on the moment and the duration of the login. Queries are passed along all those machines, and on this search path there are usually also slow 24k telephone modems that may strongly slow down the process: the strength of the whole chain is determined by its weakest link.
As with other p2p-systems, safety remains an important problem with Gnutella. Users don't know whether the files taken from an anonymous private computer contain viruses, worms or Trojan horses. And since the participants in the network are anonymous, it remains difficult to assess the reliability of the information on offer (documents, software). Nobody has any idea where the files come from, unless users identify themselves.
The anonymity of users is weakly protected in Gnutella. Illustrative of this is the site that offered file names suggesting child-pornographic material. The initiator was able to keep a log of the IP addresses and domain names of everyone who requested a download. This kind of information is available because Gnutella makes use of HTTP: via Gnutella one obtains just as much information about a user as via any web browser.
In more recent versions of Gnutella-based p2p-systems all three of the problems mentioned have been tackled. The scalability of Morpheus has been substantially improved, allowing considerably larger numbers of participants to be handled at the same time. Built-in anti-virus scanners have enhanced safety. And finally, the privacy of users is better protected by new options that hide the IP-addresses involved in downloads. This is effected through a connection with a worldwide network of public proxy servers that act as intermediaries between internet users. Morpheus guarantees that no unwanted adware or spyware is smuggled in.
Freenet: distributed storage and guaranteed anonymity
Freenet is a creation of Ian Clarke, now developed further by volunteers according to the open-source principle. It is a much more radical sharing system than Napster: there is no central server, and in principle there is no way to trace the origin of a file, or to trace who downloads it or saves it on his hard disc. The main goal of Freenet is absolute anonymity. It promises a guaranteed anonymous and uncensored part of the internet within the larger world wide web.
The goals of Freenet are socially and politically coloured and, for many people, attractively subversive. It enables people to distribute and download material anonymously, makes it almost impossible to remove material from the network, and works without central control.
Espra - without a central point of failure
Espra is an open-source file-sharing client that aims to 'ruthlessly devour the hegemony of the music distributors'. Espra uses the protocols of Freenet to enable its users to share their files. The great advantage of Freenet over Gnutella or OpenNap is that file transfers are completely anonymous. Moreover, it has no central point of failure, the drawback of Napster and similar centralized systems.
Freenet is much harder to attack than Napster, but it is somewhat more complicated to operate than Napster or Gnutella. Meanwhile powerful and easy-to-operate clients are available for Freenet, of which Espra seems the most promising. Freenet is primarily used for the distribution of material that has no copyrights attached to it. It is also actively used by dissidents in countries such as China and Iran to distribute censored information. Freenet aims to be an instrument of freedom and liberation.
The files placed on Freenet are not stored with their maker but somewhere on a random other node of the network. As requests for a file from a certain spot in Freenet grow, that file is automatically copied to nodes in the neighbourhood. Thus popular information is automatically distributed to many sites. In this way a solution is sought for the problem of 'internet congestion'. The Freenet system is able to change its topology as the popularity of a digital file increases or decreases.
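This replication-on-demand can be captured in a toy model: a request walks a path of nodes, and the answer is cached by every node it passes on the way back. Everything here (the request function, the path as a plain list) is an illustrative assumption, not Freenet's actual routing.

```python
def request(stores, path, key):
    """Walk `path` until a node stores `key`; cache a copy on every
    node the reply passes on its way back to the requester."""
    for i, node in enumerate(path):
        if key in stores[node]:
            for earlier in path[:i]:      # the reply travels back along the path
                stores[earlier].add(key)  # ...and is cached en route
            return node
    return None                           # key not found within this path

stores = {"A": set(), "B": set(), "C": {"essay.txt"}}
print(request(stores, ["A", "B", "C"], "essay.txt"))   # C answers the first request
print(request(stores, ["A", "B", "C"], "essay.txt"))   # now A already holds a copy
```

After the first request the file has drifted towards the requester, so the second request is answered immediately by the nearest node: popular data moves to where the demand is.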
The files are immediately encrypted and provided with a digital signature, so that nobody can tamper with their contents. Not even the owner of a node knows which information is filed on his machine. The idea is that these owners cannot then be held responsible for it.
The traffic circulating within Freenet is more restricted than in Gnutella. When a Freenet client receives a request it cannot fulfil, it forwards the request to one other neighbour, not, as in Gnutella, to all its neighbours. When the client doesn't get a satisfying answer, it tries one of its other neighbours. The search runs depth-first rather than in parallel. Yet the search speed is fairly high.
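The contrast with Gnutella's flooding can be made concrete with a sketch of depth-first routing: each node tries one neighbour at a time and only falls back on the next neighbour if that branch fails. This is a simplified model under stated assumptions, not Freenet's actual key-based routing.

```python
def depth_first_search(neighbours, stored, node, key, ttl, visited=None):
    """Forward a request to ONE neighbour at a time, backtracking on
    failure; a hit propagates back along the single path it travelled."""
    visited = visited if visited is not None else set()
    if key in stored.get(node, ()):            # data found at this node
        return node
    if ttl == 0 or node in visited:            # give up on this branch
        return None
    visited.add(node)
    for nxt in neighbours.get(node, ()):       # try neighbours one by one
        found = depth_first_search(neighbours, stored, nxt, key, ttl - 1, visited)
        if found is not None:
            return found
    return None

net = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
data = {"D": {"report.pdf"}}
print(depth_first_search(net, data, "A", "report.pdf", 3))   # D
```

At any moment only one copy of the request is travelling through the network, which is why Freenet's traffic is so much lighter than the query flooding of Gnutella.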
The Freenet programme is written in Java and needs the Java Runtime Environment to function. It uses its own gateway and protocol: the programme doesn't run over HTTP, as Gnutella does.
Freenet has made a number of things possible:
- It guarantees the anonymity of the participants.
- It allows small sites to distribute large, popular documents without being stopped by bandwidth limits.
- It is efficient and saves in particular on the distribution and storage of digital files. When files are stored by users at multiple places in the network, closer to the requesting parties, the maker of the information has to spend less on server space and bandwidth. It brings information closer to those who ask for it. In commercial settings users could even ask a fair remuneration for this kind of storage and transfer service.
- It rewards popular material and lets unpopular material quietly disappear. The purpose of this is to prevent material that many people consider valuable from being taken out of the digital air.
The main weakness of Freenet is its search mechanism: Freenet is not suitable for random searching. Only when you know precisely what you are looking for do you have a chance of finding something, and even then you must first have discovered, via other channels, the exact name of the file.
KaZaA: speed via super nodes
The originally Dutch KaZaA uses peer-to-peer technology from FastTrack. Individual users are connected directly, without a central point of control. The only thing a user has to do is install the KaZaA Media Desktop (KMD), which connects him to other KMD users. KaZaA is mainly used to exchange media files (MP3, pictures, audio, video).
Grokster works with the FastTrack protocol as well. The super node technology is used to enable fast searches. The programme is a free download, but to get rid of all the annoying pop-ups and adware you have to buy the pro version.
KaZaA is a self-organizing network. Powerful computers are automatically promoted to 'super nodes' that take over server tasks. The selection of super nodes takes into account the performance of the processor, the bandwidth of the connection and the time the computer is available in the network. A super node holds a list of files made available by other KaZaA users and of their locations. A query first consults the closest super node; when the requested file isn't available via this super node, the query is passed on to other super nodes. Users themselves can indicate whether they are willing to let their computer function as a super node.
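The supernode scheme can be sketched roughly as follows (a hypothetical toy model in Python; the class, peer and file names are invented, and the real FastTrack protocol is far more elaborate):

```python
# Toy model of FastTrack-style lookup: super nodes hold an index of the
# files their ordinary peers offer; a query that misses locally is passed
# on to other super nodes instead of flooding every machine in the network.

class SuperNode:
    def __init__(self, name):
        self.name = name
        self.index = {}     # filename -> list of peer addresses offering it
        self.peers_of = []  # other super nodes to forward queries to

    def register(self, peer, filenames):
        # an ordinary peer announces which files it shares
        for f in filenames:
            self.index.setdefault(f, []).append(peer)

    def search(self, filename, visited=None):
        visited = visited or set()
        visited.add(self.name)
        if filename in self.index:
            return self.index[filename]
        for sn in self.peers_of:          # pass the query on
            if sn.name not in visited:
                hits = sn.search(filename, visited)
                if hits:
                    return hits
        return []

sn1, sn2 = SuperNode("sn1"), SuperNode("sn2")
sn1.peers_of = [sn2]
sn2.register("peer-42", ["holiday.jpg"])
print(sn1.search("holiday.jpg"))  # -> ['peer-42']
```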
The scalability (the capacity to grow with new users) is 50 times that of Gnutella. Search times are low for a decentralized p2p-network, varying between 2 and 3 seconds. The download speed, however, depends on the kind of connection of the person you download from.
KaZaA is one of the best and most user-friendly p2p-programmes. More than 160 million people use it, and at any time at least 3 million of them are online. By mid-2003 the KaZaA Media Desktop had been downloaded more than 230 million times worldwide.
KaZaA follows a two-track policy: the free file-sharing service is combined with a paid service. For the paid service arrangements have been made with Buma/Stemra about copyright payments, and a deal has been struck with record companies.
The distribution of viruses is and remains a problem in KaZaA. Although virus protection has been built into the latest version, it remains wise not to rely on it completely.
In January 2002 KaZaA.com was taken over by Sharman Networks Limited, an Australian company.
Mojo Nation: exchange with a monetary unit
Mojo Nation is a p2p marketplace for the exchange of digital files with its own monetary unit, the mojo. Users earn mojos by making bandwidth, hard-disc space or computing power available to the Mojo Nation.
The Mojo Nation agent allows users to exchange digital files, not just MP3 files but larger files as well. Unlike in Napster, the files in the Mojo network are split up into thousands of chunks that easily slip through the net.
eDonkey: decentralized searching and splitting files
The latest generation of p2p-software comprises eDonkey and BitTorrent. eDonkey has already overtaken Gnutella and is well on its way to passing FastTrack (the technology behind KaZaA and Grokster) as well. In May 2003 eDonkey was the most downloaded piece of software, taking over first position from ICQ; the programme was downloaded 299 million times. It was developed mainly by Jed McCaleb from New York. It differs from earlier file-sharing systems in two ways.
The first has to do with decentralized searching. When a file is shared in the network, the technology gives the file a 'hash' identification (an address based on the characteristics of the file itself). Every computer logged into the network is assigned a certain range of addresses, so that it can function as an index. Queries can be executed much more efficiently than in centralized systems. Suppose you are searching for your favourite singer or band. In Gnutella this query would be transmitted through the whole network, asking each node or neighbour for the file(s). In eDonkey the query is sent directly to the computer that is actually responsible for the location of files in this category, so you get a much quicker answer.
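The principle of hash-based addressing can be sketched like this (a strong simplification in Python; the hash function and the modulo scheme below are stand-ins for eDonkey's actual mechanism, which is considerably more refined):

```python
import hashlib

# Illustrative only: a file gets a 'hash' identity derived from its own
# content, and each logged-in computer is responsible for a slice of the
# hash space, so a query can go straight to the responsible node.

NUM_NODES = 8  # invented: number of computers currently logged in

def file_id(data: bytes) -> int:
    # MD5 is used here only because it is in the standard library;
    # the real network uses its own hashing scheme.
    return int(hashlib.md5(data).hexdigest(), 16)

def responsible_node(data: bytes) -> int:
    # The query goes directly to this one node -- no flooding required.
    return file_id(data) % NUM_NODES

song = b"my favourite band - track 01"
print(responsible_node(song))
```

The crucial property is that the same content always hashes to the same address, so every peer can compute, locally and instantly, which node to ask.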
The second, and most important, advantage is that each file is split into small parts that are distributed independently within the network. As soon as someone starts downloading these parts, his computer redistributes them to the network. This means you don't have to download a complete movie before you can offer (parts of) it to other people in the network. The transfer of larger files thus becomes much more efficient.
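The splitting of files into independently exchangeable parts can be sketched as follows (a toy Python model; the part size and the file are invented, real chunks are several megabytes):

```python
# Sketch: a file is split into fixed-size parts that can arrive from
# different peers in any order and be reassembled -- and any part you
# already hold can be re-shared before the download is complete.

PART_SIZE = 4  # bytes, for demonstration only

def split(data: bytes) -> dict:
    return {i: data[i * PART_SIZE:(i + 1) * PART_SIZE]
            for i in range((len(data) + PART_SIZE - 1) // PART_SIZE)}

def reassemble(parts: dict) -> bytes:
    return b"".join(parts[i] for i in sorted(parts))

movie = b"0123456789abcdef!"
parts = split(movie)

# Imagine the parts arriving out of order, each from a different peer:
arrived = {i: parts[i] for i in [3, 0, 4, 1, 2]}
assert reassemble(arrived) == movie
print(len(parts))  # 5 parts
```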
eDonkey is a completely distributed self-organizing network.
BitTorrent: a technological miracle
BitTorrent is a file-sharing programme for the distribution of very large files. Written by Bram Cohen (San Francisco), it is especially popular in the open-source community. Unlike Napster, KaZaA or eDonkey it concentrates on distribution rather than searching. What it lacks in searching power, it compensates for in speed.
BitTorrent operates in the background of a web browser and supports the up- and downloading of files. Users who want to distribute files have to set up a 'tracker': a low-level server that keeps track of requests for a certain file and directs those requests to the users offering the file. Distributors then post links to the tracker on a web site; these links initiate the BitTorrent downloads.
Like eDonkey, BitTorrent splits files into small parts. Its strength is the system of 'swarming', in which each user receives a piece of the file and shares it with others. "Once someone has started downloading a file, that person's computer immediately serves as an upload server for anyone else looking for the file. The technology automatically balances upload and download speeds, ensuring that people downloading give back to the network" [Bram Cohen]. So as the number of people searching for a single file increases, the downloads get faster, because the individual parts spread quickly around the network.
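The swarming effect, in which more demand actually makes distribution faster per peer, can be mimicked with a deliberately simplified toy calculation (one assumption, not BitTorrent's real scheduling: every peer that already holds the file uploads exactly one copy per round):

```python
# Toy swarming model: with one seed, the number of file holders doubles
# each round, so the time to serve a swarm grows only logarithmically
# with its size. A thousand-fold bigger swarm needs merely twice as long.

def rounds_to_serve(num_peers: int) -> int:
    holders, rounds = 1, 0          # one original seed
    while holders < num_peers + 1:  # until the seed plus all peers hold the file
        holders *= 2                # every holder uploads to one newcomer
        rounds += 1
    return rounds

print(rounds_to_serve(1000))       # 10 rounds
print(rounds_to_serve(1_000_000))  # 20 rounds, not a thousand times more
```

A classical FTP server, by contrast, would need on the order of one upload slot per requester, which is precisely the "punishment for popularity" described below.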
BitTorrent looks like a technological miracle: the more people use it, the faster the system as a whole operates.
The programme is thus radically different from file transfer via the well-known FTP protocol. FTP punishes sites for their popularity: because uploading is concentrated in one place, a popular site must have powerful computers and a large bandwidth. BitTorrent clients automatically create a mirror of the files they download, so the burden on the publisher becomes very light. The key to cheap file distribution is tapping the unused upload capacity of the clients: their contribution grows at the same speed as their demand. This creates unlimited scalability at fixed costs.
BitTorrent has not been designed to distribute MP3 files of a few MB, but films, programmes and complete CDs. It was no coincidence that the first copy of The Matrix Reloaded could be obtained via BitTorrent. Meanwhile a considerable number of downloads exist, such as the newest Red Hat Linux 9.
BitTorrent is an Open-Source project and free of charge.
As the big 'file swapper' BitTorrent soon attracted a large crowd of users, though the number of supporters later decreased due to problems with bandwidth, denial-of-service attacks and fear of the organized music industry. It is impossible to shoot BitTorrent itself out of the digital air, but this doesn't hold for the sites that offer links to the files. BitTorrent is a p2p-programme without a search function: you have to surf to 'tracking sites' that offer links to the downloadable files. The peers are tracked and connected via a tracker server, and this is the Achilles heel of BitTorrent. Web traffic sites form the basis of BitTorrent downloads: they link to small torrent files that localize the file to download and indicate where which data have to be up- or downloaded.
Web traffic sites like Donkax.com, Torrentse.cx and Bytemonsoon.com were forced to close their virtual doors after they were threatened by targeted denial-of-service attacks and 'cease-and-desist' letters from the Recording Industry Association of America (RIAA). The sites that went offline were mainly resource sites for illegal exchanges of music and movie files.
A weakness of BitTorrent is the load on the tracker sites. Maintaining tracker sites requires a lot of time and money, and because they are visited so often the costs of bandwidth also rise.
BitTorrent is badly equipped for the distribution of illegal files. "Distributing stuff that is clearly illegal with BitTorrent is a really dumb idea. BitTorrent doesn't have any anonymity features. There are things about it that make it very incompatible with anonymity" [Bram Cohen]. With a simple command you can see all the IP-addresses of online computer users involved in the illegal exchange of a protected file.
Links to BitTorrent sites can be found on Torrent Links and Suprnova.org. It isn't hard to find sites that provide BitTorrent links to movies, software and games. There are several sites that are specialized in specific kinds of content, such as tv series or anime.
Porno in P2P
The most diverse digital content can be distributed in p2p-networks. The music industry worries about millions of illegally copied MP3 files, the software industry tries to counteract the distribution of illegal versions of its software packages, the gaming industry comes down on kids who did not pay for their play, and the movie industry fights an uphill battle against the growing mass of illegal copies of the most recent movies.
In all these cases it concerns the illegal distribution of copyrighted material. However, p2p-networks are also used for the (in itself legal) distribution of pornography, a stumbling block for many decent citizens and conservative politicians.
On p2p-networks much erotic and pornographic material is exchanged, which, by the way, is not forbidden in most countries. According to the GAO report by Linda Koontz more than 40 percent of all files shared in p2p-networks are pornographic. Searching KaZaA with words like 'preteen', 'underage' and 'incest' yields relatively large amounts of child pornographic material, which is forbidden [see: Regulation of CyberPorno]. According to the GAO report about 42 percent of the files retrieved in such searches are associated with child pornographic pictures; a further 34 percent could be identified as adult pornography and 24 percent as non-pornographic. These results are comparable with those of other studies on child pornography in p2p-networks, such as the studies of the Customs Cybersmuggling Center (C3) and the National Center for Missing and Exploited Children.
In February 2001 Palisade Systems collected 22 million requests on a Gnutella network. (Gnutella is the underlying technology of p2p-services like Morpheus, LimeWire and BearShare.) Of all these requests 400,000 were selected for further analysis. It turned out that 42 percent of all requests concerned pornographic material, of which 5 percent was child pornographic; 56 percent of all requests concerned copyrighted material, of which 38 percent were MP3 audio files.
Palisade concluded "that file sharing applications have no legitimate value in the workplace" [source]. Overall, 97 percent of all activities carry a legal risk of copyright infringement, sexual harassment or felony-level offences.
"These new systems for file sharing bring problems into our homes that we didn't have before"
[Henry Waxman, Democratic member of Congress]
"It's a monster let loose on the internet" (...) "It gives our kids access to incredibly lewd, filthy ... the worst imaginable type of graphic violence and sex you can imagine"
Politicians fall over each other to condemn music and software piracy. As guardians of civil values they also aim their moral arrows at pornography, and especially child pornography. In July 2003 two members of the American Congress, the Republican Joe Pitts and the Democrat Chris John, proposed a law to protect children against Peer-to-Peer Pornography (P4). It laid down that p2p-networks should take action to prevent children from accidentally seeing porno.
Internet radio in p2p fashion
In the beginning there was AM, then FM. The next evolution in radio broadcast technology could very well be p2p: we are on the eve of the merging of radio broadcasting and p2p file sharing [source]. A new generation of programmes lets you stream audio files to other users in a p2p network without the need for an expensive dedicated server or bandwidth. It works in much the same way as other p2p file-sharing clients, except that instead of downloading files the users download streams, which are then exchanged in real time with other users. No data are stored locally on any machine connected to the network. In principle anyone can start his own radio station. Examples of such programmes are PeerCast and Streamer.
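The relay principle behind such p2p radio can be sketched as follows (an invented toy model in Python, not the actual PeerCast or Streamer protocol; names and chunks are made up):

```python
# Hypothetical relay tree for p2p streaming: each listener forwards the
# chunks it receives to its own downstream listeners, so the broadcaster
# only ever uploads to a handful of peers, however large the audience.

class Listener:
    def __init__(self, name):
        self.name = name
        self.downstream = []
        self.heard = []  # played chunks; a real client discards them, nothing is stored

    def receive(self, chunk):
        self.heard.append(chunk)
        for peer in self.downstream:  # relay in real time
            peer.receive(chunk)

station = Listener("broadcaster")
a, b, c = Listener("a"), Listener("b"), Listener("c")
station.downstream = [a]
a.downstream = [b, c]  # listener "a" relays the stream to two more listeners

for chunk in ["frame-1", "frame-2"]:
    station.receive(chunk)

print(c.heard)  # ['frame-1', 'frame-2']
```

Listener "c" hears the full stream even though the broadcaster never sent it anything directly; that is what makes a dedicated server and expensive bandwidth unnecessary.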
PeerCast originated from experiments in which Gnutella was used for things other than file sharing. The idea was to build an application that looked at query packets to compile a top-1,000 list of the artists whose music was shared most on the Gnutella network. "The idea was to give artists a gauge of how popular their music was, down to the individual tracks. Demographics like that can be very useful for artists and labels to get a feel for what their listeners like" [Giles Goddard]. In newer versions of PeerCast a 'tip jar' payment scheme is implemented, so that listeners can give artists money.
- PeerCast runs on both Linux and Windows and is free for anyone to use; it is expected that the programme code will be released as open source. PeerCast is being developed by a small group led by Giles Goddard, a contract game programmer for Nintendo. It forms a robust network because there is no central server: each user can be client, server or broadcaster of streams.
- Streamer is not as technically sophisticated as PeerCast, but it is open source. Alas, the programme only runs on Windows. Both programmes can stream MP3 files; PeerCast can also broadcast audio files encoded in the Ogg Vorbis format. Streamer was developed by Ian McLeod of Warrington, UK, a self-employed computer game creator. He describes his programme as "pirate radio for the digital age" and was inspired by Radio Caroline and the other ship-based stations that broadcast off the coast of Britain in defiance of that country's radio monopoly in the 1960s and 1970s. He created Streamer in response to the music industry's efforts to shut down internet radio stations over royalty payments.
'Radioware' allows less famous music artists to get their work heard outside traditional radio broadcasting. The next step in the evolution of p2p networks will be streaming video. Even before this new technology can stand on its own legs, it has already frightened Hollywood studios and TV networks.
The path for a new internet broadcasting medium has been paved. Howard Wen expects that the decline in the financial means for internet radio to operate could spur further technological advances and interest in p2p streaming audio and video. "The result could be numerous pirate radio and TV stations on P2P networks in the near future" [Howard Wen 2000].
Yet we have to be careful with these kinds of predictions. The music, video and movie industries will not sit by and watch while their golden calves are freely distributed to the public by p2p miscreants. The availability of copyright-protected material via p2p networks is a subject that will be struggled and litigated over in the years to come, and on the outcome of this struggle we can only speculate. The designer of Streamer, Ian McLeod, even fears that it might end up with "a filtered internet at the request of the media conglomerates, with 'police-ware' installed on all PCs by law, filtering not unlike the great firewall of China, 'police-ware' like a Big Brother Commie state would use".
PeerWare: Collaboration of Groups and Organizations
Collaboration in p2p-networks
The architecture of a peer-to-peer network goes far beyond the sharing of digital files. It can support enterprise processes and in particular the collaboration of mobile workers. Cooperative work is by definition peer-to-peer, without the interference of an intermediary: the members of a team usually interact directly with each other, each member being responsible for a specific set of documents. Until now, most instruments used to support collaboration have been based on the classical client-server architecture. This comes at the expense of flexibility in the staging of interactions, because all interactions are carried over the server, while people have to communicate and interact with each other even when they are on the move.
PeerWare is the use of the p2p-model in applications that support the collaboration of groups: peerware = groupware + p2p. It is the next logical step in the development of 'groupware'. An example is Groove (introduced by Ray Ozzie, creator of Lotus Notes). The programme is meant to facilitate people who are working together on projects, enabling them to share files, to chat and to work together on one document. PeerWare combines central and decentralized strategies of network building: within a home network or a small local area network (LAN) Groove runs without a central server, and the connected machines interact directly with each other on an equal basis.
It's just an illusion...
PeerWare users have access to a global information space that contains all the information stored in the local information spaces of the connected members. But it is also a merely virtual information space, because the information is not stored on one physical computer system.
PeerWare allows people to interact because they have access to an information space that is temporarily shared and dynamically constructed out of the information spaces that other participants have made available. The content of this information space is automatically and dynamically reconfigured according to changes in the network, which are mainly caused by modifications in the connections between peers.
The nodes of the network are organized in a rootless tree whose leaves are the documents, each connected to one or more nodes. Each member manages a local information space, usually content stored on the participant's own PC, and makes it available to other members. All members of the peerware-generated network have access to the local information spaces of the members who are connected at that time.
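The resulting virtual information space can be illustrated with a minimal sketch (member names and files are invented): the shared space is nothing but the union of the local spaces of whoever is connected at that moment, so it reconfigures itself whenever a peer joins or leaves.

```python
# Toy model of a peerware information space: each member manages a local
# space, and the 'global' space is computed from the currently connected
# members -- it is never stored on any single physical machine.

local_spaces = {
    "alice": {"report.doc", "budget.xls"},
    "bob":   {"design.pdf"},
    "carol": {"report.doc", "notes.txt"},
}

def global_space(connected):
    if not connected:
        return set()
    return set().union(*(local_spaces[m] for m in connected))

print(sorted(global_space({"alice", "bob"})))
# bob disconnects and carol joins: the space changes automatically
print(sorted(global_space({"alice", "carol"})))
```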
In a peer-to-peer architecture information and services are not gathered on one central node. They are spread over all the nodes of the distributed network. Users themselves have to manage the resources they want to share, and there is no need to publish them on a central server.
The configurations between peers are flexible and variable. They depend primarily on the mobility of the participants, that is, on their connection to and disconnection from the network. This evidently also changes the available content of the peerware-regulated network.
The problem with p2p-programmes is that users don't automatically know on which node a certain file is located. So there must be a mechanism that allows users to search the whole information space. Within the fully decentralized architecture of programmes like Gnutella and Freenet there is no guarantee that one can find information on all files. In an enterprise environment, however, this is a key requisite.
Another problem with common p2p-programmes is that the network is fully fluid: each user plays exactly the same role as the others and nobody contributes to the construction of a permanent infrastructure. In an organizational environment, however, there usually are crucial files or services that have to be available at all times, independent of their owner or maker. An organizational environment is simply much more structured than the internet environment as a whole.
In the most recent peerware these two problems have been solved by creating a hybrid p2p-model in which centralized (client/server) and decentralized (p2p) elements are combined with each other.
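Such a hybrid model can be sketched as follows (an illustrative Python toy with invented names: a central index guarantees that everything published can be found, which answers the enterprise requirement, while the transfer itself still runs directly between peers):

```python
# Hybrid p2p sketch: centralized lookup, decentralized transfer.

class IndexServer:
    def __init__(self):
        self.catalog = {}                  # filename -> the peer that owns it

    def publish(self, peer, filename):
        self.catalog[filename] = peer

    def locate(self, filename):
        return self.catalog.get(filename)  # lookup always succeeds if published

class Peer:
    def __init__(self, name, index):
        self.name, self.index, self.files = name, index, {}

    def share(self, filename, content):
        self.files[filename] = content
        self.index.publish(self, filename)  # only metadata goes to the server

    def fetch(self, filename):
        owner = self.index.locate(filename)
        # the content itself is copied directly peer-to-peer
        return owner.files[filename] if owner else None

index = IndexServer()
alice, bob = Peer("alice", index), Peer("bob", index)
alice.share("spec.doc", "v1 draft")
print(bob.fetch("spec.doc"))  # 'v1 draft'
```

Only the small catalogue lives on the server; the documents stay on the peers, which is where the storage and bandwidth savings of the next section come from.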
Advantages of peerware
More and more companies are using peerware, and the direct advantages are obvious. Making documents and services available via p2p-networks frees storage capacity on the central servers, reduces the load on those servers and facilitates network traffic to critical decentralized systems. This enhances the total storage capacity available to the company and diminishes the costs of maintaining servers and of bandwidth. Using p2p-systems allows knowledge sharing, enhances the efficiency of group work and optimizes communication in the network.
The biggest problem for the use of peerware in a corporate environment is security: corporations want to be sure that sensitive information does not fall into the wrong hands. The second problem is and remains trust. To give members of the organization access to the information of other members, the processes and culture of the corporation have to change. The management of corporations has to be convinced that by sharing this information they can operate much more efficiently and creatively in the long run. Peerware can give a new impulse to the 'learning organization' and to 'knowledge management'.
Organizational and corporate peerware
The mutual competition between electronic collaboration environments [ECE] and electronic enterprise environments [EEE] is extraordinarily dynamic and fierce. It has grown into a billion-dollar business in which huge amounts of capital are invested. The market is divided between companies that try to outdo each other with a combination of technological tours de force, the cultivation of relations with partners, buy-outs of elegant new technologies and small companies, aggressive marketing and, last but not least, the recruiting of financiers willing to invest millions of dollars in companies that have a chance of becoming number one.
Commerce One is a modular platform for process management, or in more precise terms "an enterprise-class suite of collaborative sourcing and procurement solutions". The programme connects internal company systems (such as purchasing, production, sales, distribution, marketing, payment systems and accounting) with each other and allows interactions between them. Organization members operate with an ever-growing series of programmes and a large diversity of technologies; Commerce One offers a platform on which these applications can collaborate. It offers no solution for a specific problem, but sells an "enterprise suite", an electronic enterprise environment [EEE].
The JXTA technology of Sun comprises a series of open protocols that allow any internet-connected device (from mobile phones and wireless PDAs to PCs and servers) to communicate and collaborate in a p2p-fashion. In the virtual network each participant can interact with other participants and resources.
Other P2P Applications
Distributed computing: sharing computing power
For many years the p2p-model has been applied to share the idle computing power of millions of PCs in such a way that a worldwide supercomputer emerges. In this way projects can be realized that are too complex for the present generation of supercomputers, as well as charity projects with limited computing resources.
Using idle computing power
The beating heart of each computer is its processor (CPU); a faster processor means a faster computer. Average computer users use only a small part of the available computational power. Surfing the internet, for instance, uses only slightly more than 10 percent of it.
Seti@Home is a project in which millions of internet-connected computers help search for extraterrestrial intelligence. Most SETI projects use radio telescopes to listen to radio signals from planets in distant star systems. Seti@Home uses the data of the Arecibo radio telescope in Puerto Rico, which is part of the SERENDIP project. The idea behind this project is to benefit from the idle processing cycles of PCs. People who want to join can download a small piece of software from Seti@Home and install it on their PC. When the computer is idle, 300 kilobytes of data are downloaded for analysis. The results of this analysis are eventually returned to the SERENDIP team, where they are combined with the processed data of all other participants. In this manner the search for extraterrestrial intelligence is supported, even though only a small part of the observed spectrum is analysed (a band of just 2.5 MHz). The initiators of Seti@Home expected to attract 100,000 volunteers; meanwhile there are more than 3 million.
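The work-unit scheme can be illustrated with a toy example (the 'analysis' below is a stand-in for the real signal processing, and all numbers are invented): the server cuts the data into small chunks, idle volunteer machines each analyse one chunk, and the results are recombined centrally.

```python
# Toy version of distributed computing with work units.

def make_work_units(samples, unit_size):
    # the server cuts the recorded data into small downloadable chunks
    return [samples[i:i + unit_size] for i in range(0, len(samples), unit_size)]

def analyse(unit):
    # stand-in for the real analysis: report the strongest signal in the unit
    return max(unit)

samples = [3, 9, 1, 4, 7, 8, 2, 6]       # pretend radio-telescope readings
units = make_work_units(samples, 2)      # each volunteer downloads one unit
results = [analyse(u) for u in units]    # computed independently, in parallel
print(max(results))                      # combined result: 9
```

Because the units are independent, a million PCs can work simultaneously without ever communicating with each other, only with the central server.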
Intel too is building new infrastructures that support p2p-applications. Intel is one of the founders of the Peer-to-Peer Working Group, which tries to draw up standards for these technologies. The p2p-programme Intel@Philanthropic demonstrates how powerful distributed computing (DC) can be: by connecting millions of PCs worldwide, one of the most powerful computing resources in history is created. The programme also has to stimulate the acceptance of 'distributed computing' in scientific research. It is used, for instance, by the Leland Stanford Junior University in research on protein folding and related diseases such as Alzheimer's, ALS and Parkinson's. A 'virtual supercomputer' that uses p2p-technology makes unprecedented amounts of processing power available to medical researchers, accelerating the development of improved treatments and drugs that could potentially cure diseases.
The Global Grid Forum [GGF] is a community of individual researchers and practitioners working on distributed computing, or 'grid', technologies. Wide-area grid technologies provide the foundation for a number of large-scale efforts that utilize the global internet to build distributed computing and communications infrastructures.
The Worldwide Lexicon project (WWL) is an open-source initiative to build a multilingual dictionary on the internet. A simple, standardized protocol has been written for the interaction with the dictionary, encyclopaedia and translation servers. While Seti@Home draws on the idle CPUs of millions of PCs, the Worldwide Lexicon calls on the help of internet users who are online but not doing anything; this is called 'distributed human computation'. The fantasy of this project looks like this: imagine that you can read good-quality translations of news sources, journals and short stories, produced not by an automated programme but by thousands of internet users all over the world. The Lexicon system informs users via instant messaging when a WWL server has a task. Users can translate complete sentences, paragraphs or short documents, as well as pieces of text from news sites, online journals and other sources. Each volunteer translates a small piece of the text. When there are enough participants willing to translate and check each other's work, the network of volunteer translators can produce an enormous stream of articles.
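The principle of distributed human computation can be sketched minimally (the sentences, the 'volunteer' and the translations below are all invented; the real WWL protocol is of course richer):

```python
# Toy model of distributed human translation: the system splits a document
# into sentences, hands each to a different idle volunteer, and stitches
# the returned translations back together in the original order.

translations = {  # what the (simulated) volunteers send back
    "Goedemorgen.": "Good morning.",
    "Het regent.": "It is raining.",
}

def translate_document(sentences, volunteer):
    # each sentence is a small, independent task for one volunteer
    return " ".join(volunteer(s) for s in sentences)

doc = ["Goedemorgen.", "Het regent."]
print(translate_document(doc, lambda s: translations[s]))
```

As with Seti@Home's work units, the key is that each small task is independent, so thousands of volunteers can contribute without coordinating with one another.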
Most electronic learning environments used in education, such as Blackboard and WebCT, don't use the p2p-model. Yet for efficient file exchange and the support of teamwork, p2p-networks can make an important contribution to internet-mediated learning. Learning environments that lean strongly on p2p-technology are:
- eduCommons is an open system for the creation, sharing and reuse of educational material and discourse that supports the learning process. The programme has been developed within the department of Instructional Technology of Utah State University.
- Edutella is a p2p-service for the exchange of educational metadata. Its distributed search mechanism uses the features of the semantic web.
The p2p-network technology is still very young, strictly speaking scarcely out of the egg. Yet in the meantime it has reached a level of quality and reliability with which it is conquering the field in practically all social domains. P2p-technology is obtaining a position in domain-specific programmes for communication, collaboration, and file and computer sharing. From the corporate world and authorities, via education and culture, to citizens' groups: they all have an interest in
- Using the computing powers of their PCs efficiently;
- Building a robust and invulnerable network;
- Sharing their own digital sources;
- Collaborating on shared documents;
- Finding and downloading files rapidly.
P2p-networks are extremely efficient and save mainly on distribution and storage of digital files. When users store the files in more places in the network, closer to the requesting parties, the producer of the information needs to spend less on server space and bandwidth.
Initially some people considered p2p-networks a subversive technology that could even harm the roots of the internet. Meanwhile even the settled elites have accepted p2p-technology as a constructive and economically efficient form of networking. But the technology remains explosive, especially in the hands of rebels who challenge established orders. On p2p-networks anyone who wants to can make any type of file, with any content, anonymously available to others. These others can be personal friends or vague acquaintances, but the participants in p2p-networks can also be completely 'unknown others'. Publication of copyright-protected material is forbidden; yet the participants of these networks appeal to (i) their right to make copies of digital material 'for their own use' and (ii) their right to privacy.
At any rate, the history of p2p-networks has shown what happens when the big players of the amusement and entertainment market don't realize in time what the introduction of a new network technology implies. The music industry was very slow on the uptake and wasted much of its already slight goodwill through its defensive-aggressive attitude towards those peculiar networks of equals in which copyright-protected tracks and albums were exchanged on a large scale.
As with the introduction of many other technologies, we can expect that in the course of time a number of agreements will be made about the moral and legal borders of the use of p2p-networks. How these borders will be defined precisely is not a matter of calculation or moral precision, but of the power relations between the actors in the specific domains and fields of application.
The main problem in this respect is not the use of p2p-networks for the exchange of child pornographic material and the support of nomadic networks of paedophiles: about this most countries have reached social consensus fairly quickly. Responsible p2p-software makers will take this into account and build in provisions that prevent certain types of files from being distributed via their programmes.
The real problem lies in the shift in the relation between private and public in the virtual spaces and networks in which people exchange information and communicate with each other. In societies that advertise themselves as constitutional states it is a good custom that the government or any other 'third party' does not poke its nose into the private communication between citizens. That is the famous right to privacy. With relatives and friends I can communicate, interact and exchange privately, without the control of any other authority or person, as long as I do not harm other interests protected by that same legal order. I can also talk undisturbed and share information or objects with friends of friends ('acquaintances' and other weaker ties). Then why shouldn't I be allowed to claim the right to privacy for communication or file sharing with 'friends of friends of friends', or with 'acquaintances of acquaintances of acquaintances'? According to the researchers of the 'small world', every human individual can be linked to any random other earthling in at most six steps. So where does the right to privacy and free exchange among citizens begin or end? The pragmatic answer is that it ends, legally, at the borders of the laws of a country. The moral answer is more complex because it varies: it ends at the standards and values held by the persons involved. Are there moral principles for operating in virtual networks of unfamiliar friends?
Developers and users of p2p-programmes have often invoked the famous slogan "Information wants to be free", popularized by John Perry Barlow (and originally coined by Stewart Brand). In p2p-networks information is distributed by anonymous sources. This anonymity can be valuable for many purposes, for instance for dissidents living in repressive states, or for support groups of alcoholics. But anonymity can also be a shield for the distribution of information whose reliability cannot be verified, or information whose explicit goal is deception. For people who are only looking for pornography or illegal CDs or films this is no problem. But for people who value the quality and reliability of information it remains important to be able to identify its source. In this respect anonymity is nearly always the enemy of reliability.
Characteristic of the evolution of these networks is that the virtual becomes increasingly detached from the physical or local. DNS already decoupled names from particular physical systems. In p2p-networks users can retrieve documents without any domain name at all: the one-to-one relation between names and systems is broken, and the location of files becomes irrelevant. The only 'location' is the query itself. This is put into practice by adding a new layer of addressing on top of the familiar IP-addressing system.
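A minimal sketch of such a location-independent addressing layer, assuming (as many p2p-systems do) that content is identified by a hash of its bytes rather than by a host name. The class and function names here are illustrative, not those of any particular p2p-programme.

```python
import hashlib

class PeerStore:
    """A peer's local store: content hash -> file bytes."""
    def __init__(self):
        self.files = {}

    def publish(self, data: bytes) -> str:
        # The "address" is derived from the content itself, not from the host.
        key = hashlib.sha256(data).hexdigest()
        self.files[key] = data
        return key

def query(peers, key):
    """A query names the content, not a machine; any peer holding it may answer."""
    for peer in peers:
        if key in peer.files:
            return peer.files[key]
    return None

alice, bob = PeerStore(), PeerStore()
key = bob.publish(b"some shared document")
print(query([alice, bob], key))  # found, no matter which peer stores it
```

Since the key is computed from the bytes themselves, the same document published by any peer anywhere gets the same address, which is what makes the physical location of a file irrelevant to the query.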
The present generation of p2p-programmes still displays a number of problems of efficiency and scalability. Nevertheless, the p2p-technology is revolutionary by nature. It is technically revolutionary because it offers an efficient and scalable way of storing and exchanging files, and because it effectively supports the collaboration of (mobile) members of teams and organizations. The p2p-technology is also socially revolutionary because it has enabled a new mode of human interaction on the internet. There are virtual 'networks of friends of friends' and 'networks of unknown friends' in which people share their resources (computer power, information, files, software, tips, news) with each other and collaborate in projects. They are distributed networks that operate in the twilight zone between the private and the public. Due to their completely decentralized architecture and the anonymity of the participants, p2p-networks are technically robust and politically hardly vulnerable. Because these networks are so difficult to control or destroy, they evoke anxiety among established elites and government bodies, such as the security and criminal investigation services. The attempts to rein in p2p-networks by legal means, or to forbid them altogether, have not been able to stop the rise of a new generation of p2p-technology that is even more efficient in the storage, distribution and sharing of files, and whose self-organizing networks are even more robust and nearly invulnerable. We have only seen the beginning of the revolution resulting from the p2p-technology.
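The robustness of such decentralized networks can be sketched with a toy Gnutella-style flooding search, in which a query travels from peer to peer without any central index. The topology, names and TTL value below are invented for illustration.

```python
class Peer:
    """A node in a decentralized overlay: no central server, only neighbours."""
    def __init__(self, name):
        self.name = name
        self.neighbours = []
        self.files = set()

    def search(self, filename, ttl=4, seen=None):
        """Flood the query to neighbours until the file is found or TTL expires."""
        seen = seen if seen is not None else set()
        if self.name in seen:
            return None
        seen.add(self.name)
        if filename in self.files:
            return self.name
        if ttl == 0:
            return None
        for n in self.neighbours:
            hit = n.search(filename, ttl - 1, seen)
            if hit:
                return hit
        return None

# A small overlay: a ring of 8 peers with extra "chord" links for redundancy.
peers = [Peer(f"p{i}") for i in range(8)]
for i, p in enumerate(peers):
    p.neighbours = [peers[(i + 1) % 8], peers[(i + 3) % 8]]
peers[5].files.add("song.ogg")

print(peers[0].search("song.ogg"))  # "p5"

# Knock one intermediate peer off the network: the query routes around it.
peers[3].neighbours = []
print(peers[0].search("song.ogg"))  # still "p5"
```

Because every peer has several independent paths to the rest of the overlay, removing a single node (or even several) rarely disconnects the network, which is precisely why such networks are so hard to shut down from a central point.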
- TeleTools - Groupware and CSCW
- Barkai, David 
Peer-to-Peer Computing. Technologies for Sharing and Collaborating on the Net.
- Borland, John 
Net radio raises a pirate flag
In: C|Net News.
- Cugola, Gianpaolo / Picco, Gian Pietro 
Peer-to-Peer for Collaborative Applications [pdf]
Information from the world of distributed computing.
A collection of thousands of users all over the world who form a gigantic worldwide computer network by sharing their computer power.
- Freenet, the free network project
- Good, Nathaniel S. / Krekelberg, Aaron
Usability and privacy: a study of Kazaa P2P file sharing
A study of the usability of KaZaA. One of the conclusions is that the majority of the users cannot tell which files they share. A large number of users share personal and private files without knowing it.
- Intel Corporation 
P2P File sharing at work in the Enterprise
A case study on the operation of the Intel Share and Learn Software (SLS). This study shows, among other things, that the load on central servers is reduced by 80% because files are intelligently distributed in the p2p-network. Intel uses SLS to distribute large quantities of multimedia training files to a worldwide audience. When a client PC requests a multimedia file, the SLS-programme determines which other client is nearest to the requesting client and has the latest version of the file. Only then is the request redirected by the programme.
- Jovanovic, Mihajlo A. / Annexstein, Fred S. / Berman, Kenneth A.
Scalability Issues in Large Peer-to-Peer Networks: A Case Study of Gnutella
University of Cincinnati Technical Report.
- Internet Underground Music Archive
- Koontz, Linda D. 
File-sharing Programmes: Child Pornography is Readily Accessible over Peer-to-Peer Networks [pdf]
United States General Accounting Office (GAO), Testimony before the Committee on Government Reform, House of Representatives.
- McCoy, Jim 
Mojo Nation Responds
Paper on the idea behind Mojo Nation. To prevent the P2P system from becoming congested, score-keeping, micropayments and load-balancing are introduced in Mojo Nation.
- Open P2P
- Peer-to-Peer Working Group (P2P WG)
- Peer-to-peer applications for research and education communities
- PeerWare in a Nutshell
- Ripeanu, Matei / Foster, Ian / Iamnitchi, Adriana
Mapping the Gnutella Network: Properties of Large Scale Peer-to-Peer systems and Implication for System Design [pdf]
In: IEEE Internet Computing 6(1), January-February 2002.
Reviews of popular programmes for file sharing on the internet.
- Waxman, Henry A. / Largent, Steve 
Children's Access to Pornography Through Internet File-Sharing Programmes [pdf]
U.S. House of Representatives.
- Zeinalipur-Yazti, Demetrios 
Information Retrieval in Peer-to-Peer Systems
PhD thesis on the possibilities and restrictions of efficient searching in the files of other participants.
A portal for file exchange. Provides a survey of nearly all peer-to-peer file sharing programmes.