
Introduction

The Internet is perhaps the most important achievement of the twentieth century. Growing dramatically since the advent of the World Wide Web in the 1990s [6], it has revolutionized the transfer of information and knowledge across the world. By now, the Internet has become a part of everyone's daily lives - businesses depend on it to perform mission-critical tasks worldwide, and individuals regularly ``surf the web'' and send e-mail without giving it a second thought.

However, the Internet was never meant to become the entity it is today; its initial purpose was solely to facilitate communication after a nuclear attack. To withstand significant damage, it was designed around two principles [1]. First, the network was decentralized: instead of relying on a central administrative unit to oversee its operation, it could be maintained by any of the individual computers connected to it. Second, it was built on the assumption that the network was unreliable; each computer assumed that its connection to another could be destroyed without warning. So long as both of these principles were upheld in its design, the Internet could continue to function even if part of the network were damaged in an attack, fulfilling its purpose as a nearly indestructible means of communication.

To achieve this, a robust network protocol (a set of rules governing the sending and receiving of data over a network) was created to provide reliable data transfer over this unreliable, decentralized network. While this protocol allowed for the Internet's explosive growth, it also laid the groundwork for the performance problems seen today. First, the protocol accommodates the decentralized nature of the Internet, allowing any computer that implements it to participate as an equal member; as personal computers implementing it became more popular, the number of nodes connected to the Internet grew accordingly. Second, because the network is assumed to be unreliable, the bandwidth (or data capacity) available between two particular computers at a given time is not known in advance; in fact, it varies greatly with the state of the network. Consequently, a sending computer probes for the available bandwidth by incrementally sending more data until some of it is lost. Eventually, the transmission stabilizes, using either as much bandwidth as is available or as much as the application requests - a technique known as ``keeping the pipe full'' [3]. Given the rapid growth of the Internet and the way its communication protocol consumes bandwidth, it is no surprise that bandwidth has become an important resource.
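Although the paper does not name the protocol, the probing behaviour described above resembles the additive-increase/multiplicative-decrease strategy used in TCP congestion control: grow the amount of data in flight a little at a time, and back off sharply when loss indicates the link is full. The following Python sketch is purely illustrative - the function and parameter names are invented for this example, and real implementations are considerably more involved.

# Illustrative sketch (not an actual protocol implementation) of the
# "probe until loss" behaviour described above, using additive increase
# and multiplicative decrease to adjust how much data a sender keeps in flight.

def adjust_window(cwnd, loss_detected, increase=1, decrease_factor=0.5):
    """Return the new congestion window (in packets) after one round trip."""
    if loss_detected:
        # Loss signals that the available bandwidth was exceeded: back off.
        return max(1, int(cwnd * decrease_factor))
    # No loss: probe for more bandwidth by sending slightly more next time.
    return cwnd + increase

# Example: the window grows until a loss forces it back down, then grows
# again, oscillating around the link's capacity ("keeping the pipe full").
cwnd = 1
for rtt, loss in enumerate([False, False, False, True, False, False]):
    cwnd = adjust_window(cwnd, loss)
    print(f"RTT {rtt}: window = {cwnd} packets")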

The allocation of bandwidth is even more of an issue in institutions such as universities and businesses, where a single link to the Internet is shared by all users. As the Internet has grown, its users have demanded more of it, seeking to exploit its full potential as a global communication network. Consequently, applications that use the Internet today are far more complex, and need far more bandwidth, than the simple communication tasks for which the Internet was originally designed. Such applications include multimedia programs that broadcast sound and video, games whose participants can be on opposite sides of the world, and file-sharing utilities that let users copy files to and from computers around the world. Because of this increased functionality, students at most universities are given a shared link to the Internet for ``research and using class-related applications. However, the surge in popularity of ... music downloads translates to a shortage of bandwidth and a corresponding decline in education-related application performance'' [2]. In short, as applications demand ever more bandwidth, their use degrades the performance of every other application on the shared link.

In light of this contention for bandwidth among computers that share a connection, a number of solutions have been developed over the past few years to ensure that enough bandwidth remains for high-priority educational and work-related applications. One such solution is bandwidth capping - placing an upper limit on the rate at which data flows through a particular link. This paper discusses the implementation of bandwidth caps, beginning with a brief overview of bandwidth itself. It then examines capping techniques, discusses the benefits and limitations of capping, and concludes with a discussion of alternative mechanisms for controlling bandwidth.
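As a preview of the mechanisms discussed later, the sketch below shows one common way an upper limit on a link's data rate can be enforced in software: a token bucket, in which data is forwarded only while ``tokens'' of allowance remain, and tokens refill at the capped rate. This is an illustrative example only - the class and parameter names are invented here, and it is not necessarily the specific technique examined in the rest of the paper.

import time

class TokenBucket:
    """Illustrative token-bucket limiter: data may pass only while tokens
    (bytes of allowance) remain; tokens refill at the capped rate."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, nbytes):
        now = time.monotonic()
        # Refill tokens according to the configured cap, up to the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True   # within the cap: forward the data
        return False      # over the cap: delay or drop the data

# Example: a 128 kB/s cap with a 32 kB burst allowance.
bucket = TokenBucket(rate_bytes_per_sec=128 * 1024, burst_bytes=32 * 1024)
print(bucket.allow(16 * 1024))   # True: fits within the initial burst
print(bucket.allow(64 * 1024))   # False: exceeds the remaining allowance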


Jonathan Choy 2001-05-08