Speeding Up with HTTP 1.1
Your pages take too long to download.
Unless you're lucky enough to be working on an internal network, where all your readers will have 100-megabit fast ethernet and pages will just BAM right up on the screen, chances are good that for many people on the Internet, your pages take too long to download. If you're like a whole lot of Web designers, you've probably spent at least a couple hours obsessively trying to decrease the size of your pages, to squeeze your images down just a few more bytes, to reach some kind of happy medium between quick download time and interesting design.
Not all of it is your fault, of course. The Web is increasingly congested as more and more users get on and look at more and more pages. That, at least, is obvious. But did you know that at least some of the bottleneck has been there since the beginning, and that inefficiencies in the very way the Web works contribute to a lot of the delay and congestion? And that if those inefficiencies were fixed, overall download times across the Web would decrease, particularly over slower connections?
It's true. And it's something that's been fairly well-known to us techie folks for a while now, but hasn't had a lot of attention. The World Wide Web Consortium recently released a report, called "Network Performance Effects of HTTP/1.1, CSS1, and PNG," the intent of which is to highlight the Web's inefficiency problems (primarily with HTTP), and to promote technical changes that will solve those problems. The problem with the report is that unless you already know what's going on, it's difficult to figure out just what the report is actually saying. And if you do know what's going on, then the report doesn't really tell you anything new.
So what's all the fuss about? Just what's wrong with the Web that needs fixing? The biggest problem lies with the design of HTTP. HTTP stands for HyperText Transfer Protocol, and it's the language that Web browsers and Web servers use to communicate with each other. When you see your browser saying "Contacting www.coolwebserver.com" and then "Waiting for reply...", there's HTTP going on. Currently, most Web browsers and Web servers talk HTTP version 1.0. HTTP 1.0 appears to work just fine, on the surface, but if you look under the hood and see how it really works, you'll begin to realize that it doesn't work very well at all.
Here's what actually goes on when you click on a link in your browser or type in a URL. First, the Web browser sets up an initial network connection with the Web server. To set up that connection, the two sides pass several messages back and forth, somewhat in the same way you exchange small talk with someone you haven't seen for a while: you'll say "hello, how are you," and the other person will reply "fine, how are you." Web browsers and servers do the same thing. (Technically, this is done over TCP, another protocol underneath HTTP that most Internet tools use to talk to each other.) After the browser and server have exchanged their greetings, they can get down to business.
To actually retrieve a file from a Web server, the browser sends an HTTP request to get a file from the server or to submit data to a CGI script. The server responds with an HTTP response, which usually includes the data for the actual file the browser requested, or the results of a CGI script. Then, once the server's done sending everything, the connection is closed.
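To make that concrete, here's a minimal sketch of what one of those HTTP 1.0 requests looks like on the wire. The hostname and path are made up for illustration; the format is just lines of text ending in a blank line.

```python
# Build the text of a simple HTTP 1.0 request, the way a browser would.
# (The Host header is optional in HTTP 1.0, but most browsers send it.)
def build_request(path, host):
    return (f"GET {path} HTTP/1.0\r\n"
            f"Host: {host}\r\n"
            f"\r\n")  # blank line marks the end of the request

print(build_request("/index.html", "www.coolwebserver.com"))
```

The server writes back a similar text response (a status line, some headers, a blank line, then the file itself) and then hangs up.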
Sounds fine, right? If you're only dealing with individual files, this process works fine. And back in the early days, when a Web page was nothing but HTML and Web traffic made up only a minuscule part of the Internet, there wasn't a problem with that mechanism. But these days the Web accounts for most of the Internet traffic, and a single Web page may contain dozens of files: the HTML file for the page itself, a separate file for each image, and more files for applets, ActiveX controls, or embedded media such as Shockwave or RealAudio.
For each and every file that makes up a Web page, the browser and server go through that same process: they set up the initial connection, the browser requests the file, the server sends it, and the connection is closed. Then a new connection is made for the next file, as if there had never been a connection the first time. For lots and lots of small files, there may be more time spent establishing connections than there is actually sending data.
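A little back-of-the-envelope arithmetic shows how that overhead adds up. The numbers here are hypothetical, chosen only for illustration: suppose each new connection costs one round trip of setup before any data moves, and each small file takes a fixed time to transfer.

```python
# Hypothetical timings, for illustration only.
SETUP_MS = 200     # one round trip of connection setup (say, over a modem)
TRANSFER_MS = 100  # time to actually send one small file
files = 12         # one HTML page plus eleven images

# HTTP 1.0 style: a fresh connection (and fresh setup cost) per file.
per_file_connections = files * (SETUP_MS + TRANSFER_MS)

# One persistent connection: pay the setup cost once.
one_persistent_connection = SETUP_MS + files * TRANSFER_MS

print(per_file_connections)       # 3600 ms
print(one_persistent_connection)  # 1400 ms
```

With these made-up numbers, two thirds of the HTTP 1.0 total is pure connection setup; the actual savings depend on file sizes and network latency, but the shape of the problem is the same.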
It's sort of like going to the store with a list of items to buy, going into the store, buying the first item, putting it in your car, going back and buying the second item, putting that item in your car, and so on until you're done with all the items in your list. There are very few people who would tell you that your shopping system is better than just going into the store, loading up on everything you need, and then buying it all and going home.
The proposed solution and the solution that the W3C's paper tests is HTTP version 1.1, which provides two features to solve the HTTP 1.0 problems: persistent connections and request pipelining.
A persistent connection is one that doesn't close once a file is done transmitting. Using a persistent connection, when the browser connected to the Web server there would be one connection, one exchange of greetings, and the connection would stay open until the browser told it to close or until things had been idle long enough that the server closed it itself. If you've ever used a dedicated FTP tool such as Fetch or FTP 2000, you've seen a persistent connection in use: you connect to the server, change directories, download as many files as you need, and then quit when you're done. Persistent connections have been in frequent use on the Internet almost as long as the Internet has been around; they just haven't been used for the Web.
Pipelining, the second feature of HTTP 1.1, is simply the ability to send lots of file requests over the same persistent connection. The combination of these two things means that the browser and server can avoid all that setup for each individual connection, automatically speeding up the time it takes to download pages. According to that W3C report, persistent connections and pipelining can reduce the amount of network traffic between a browser and server by up to a factor of ten, and can cut download time by as much as half — all without changing a single line of HTML.
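Here's a sketch of what pipelining looks like on the wire: several GET requests written back-to-back over one open connection, without waiting for each response before sending the next request. Again, the hostname and filenames are made up for illustration.

```python
# Build several HTTP 1.1 requests back-to-back, as a pipelining browser
# would write them over a single persistent connection.
def pipelined_requests(host, paths):
    return "".join(
        f"GET {p} HTTP/1.1\r\nHost: {h}\r\n\r\n"
        for p, h in ((p, host) for p in paths)
    )

wire = pipelined_requests("www.coolwebserver.com",
                          ["/index.html", "/logo.gif", "/photo.jpg"])
print(wire)
```

The server then sends the responses back in the same order, one after another, over that same connection.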
HTTP 1.1 was accepted as a proposed draft standard last August (the first step on the route to becoming a real Internet standard). Although many Web servers and browsers support parts of HTTP 1.1, few support all of it (Apache 1.2b7 being the notable exception). Other servers, notably Netscape's, support alternative mechanisms for improving HTTP, such as the Keep-Alive header, which allows persistent connections over plain old HTTP 1.0. Once HTTP 1.1 is finalized, expect to see more and more software using the new standard, and a slow increase in the overall efficiency of the Web.
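For the curious, the Keep-Alive mechanism mentioned above is just one extra header line in an otherwise ordinary HTTP 1.0 request, asking the server not to hang up after the response. The hostname here is made up for illustration.

```python
# An HTTP 1.0 request using the Keep-Alive extension: the extra
# "Connection: Keep-Alive" line asks the server to leave the
# connection open for further requests.
request = ("GET /index.html HTTP/1.0\r\n"
           "Host: www.coolwebserver.com\r\n"
           "Connection: Keep-Alive\r\n"
           "\r\n")
print(request)
```

In HTTP 1.1 this is unnecessary, because connections are persistent by default.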
All of which doesn't mean that your pages still won't take too long to download, but at least it'll help.