The Art & Science of Web Design | 6 | WebReference



The Art & Science of Web Design

All Structure, No Style

So let's review this progression. Historically, editors would add formatting instructions for the typesetters, who would lay out the physical pages of a publication based on those rules. As a shorthand, style rules would be developed for each piece of a publication, and editors would simply mark each section of a document with its semantic label. As publishing moved to computers, those codes were added electronically to text to describe how a computer would do the formatting. Eventually, SGML was created as a standard way of encoding this information, but it was too complicated for everyday use. Today's World Wide Web uses a small and very simple application of SGML dubbed the Hypertext Markup Language (or HTML), which defines only a limited number of codes that any computer can present.
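To make the idea concrete, here is a minimal sketch of structural markup in HTML (the text content is invented for illustration): each tag labels what a piece of text is, and the browser decides how it looks.

```html
<!-- Structural markup: the tags describe what each piece of text IS,
     not how it should be rendered. Presentation is the browser's job. -->
<h1>All Structure, No Style</h1>
<p>HTML gives authors a small set of <em>semantic</em> labels.</p>
<blockquote>
  A heading, a paragraph, a quotation: meaning, not appearance.
</blockquote>
```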

In the historical tradition of authoring, editing, and designing information, the Web browser became the automated typesetter for a standard set of general document codes. But you've probably already noticed two problems: HTML was designed only to encode structure, leaving the browser to interpret style, and HTML had only the most limited set of structural tags. What was needed was a way to include style rules, and a way to extend HTML to include any structural element while still maintaining a universal standard.

In an ideal world, the Web would have progressed in much the same way that GML did in the research labs of the early 1970s. Software engineers, publishers, editors, and graphic designers would have collaborated on the best possible method for advancing the state of Web technology. So, once the popularity of the Web was obvious, the next few steps could easily have been achieved: HTML could have been extended in a clean way to accommodate new and different types of documents. Then a powerful style language could have been added, giving designers the typographical and layout control to which they were accustomed. Finally, HTML could have taken a back seat to allow a simple framework to emerge, letting anyone develop any set of tags they deemed necessary, with browsing software smart enough to discover new tag sets, understand them, and display them in appropriate ways.

Actually, this has been happening behind the scenes of the Web over the course of the last few years. The World Wide Web Consortium, or W3C, is a group of industry experts representing the many disciplines of electronic publishing and distribution. And while the Web has been moving full speed ahead into the mainstream fabric of our world's culture, this group of researchers has been plotting its technological course.

But there is tragedy in this idyllic world of the Web. As the W3C worked through the mid-1990s to build a perfect group of compatible technologies, the Web itself spread like a California brushfire fanned by the winds of a new networked economy. Companies went public and quadrupled their value overnight based on the simple idea of passing HTML documents back and forth.

Look, for example, at the addition of images to the Web. Early browsers were simply text-based, and there was an immediate desire to display figures and icons inline on a page. In 1993, a debate exploded on the fledgling HTML mailing list, until finally a college student named Marc Andreessen added <img> to his Mosaic browser. People objected, saying it was too limited. They wanted <include> or <embed>, which would allow you to add any sort of medium to a Web page using the much-touted content negotiation on the client. That was too big a project, according to Andreessen; he needed to ship ASAP, and <img> was what shipped. It would be years before media could be included in a page using <embed>, <applet>, or <object> tags, and years before the topic would even resurface.

Andreessen packed up and headed west to the Silicon Valley, where he and a number of other talented developers created the Netscape Communications Corporation. Released in October of 1994, their software almost overnight became the most popular browser on the Web. With this popularity came a demanding audience. The Web was amazing, but it sure was limited. Why, even the simplest desktop publishing software 10 years ago allowed some typographical control. Yet Netscape's browser was limited to that simple handful of HTML tags developed by Tim Berners-Lee just a few years back. "Give us more control!" demanded the users. "Our pages are boring!"

Netscape responded, and did so quickly. Sure, the W3C was focusing its research on how best to add advanced stylistic control to the Web, but that could take forever. Netscape needed to innovate immediately, and did so by introducing a set of new tags that gave its users at least a little of the power they demanded, but without the learning curve of a whole new technology.

Thus was introduced the <FONT> tag, and with it the capability to control the appearance of an HTML document by setting typographical attributes like the font face, size, and color. Web sites, which were now becoming vehicles for corporate communication and even electronic commerce, could now give their pages a look and feel unique from the competition. "More!" demanded Web designers. And more they got. Netscape, and newly awakened corporate rival Microsoft, began adding as many proprietary tags and technologies to their browsers as they possibly could. Almost overnight, the Web was a rich landscape of new ideas, new looks, and experimentation.
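As a sketch of what that looked like in practice (the text and attribute values here are invented for illustration), the <FONT> tag put typography directly into the markup:

```html
<!-- Presentational markup, Netscape-style: typeface, size, and color
     are hard-coded into the document itself. -->
<FONT FACE="Helvetica, Arial" SIZE="4" COLOR="#CC0000">
  Welcome to our corporate home page!
</FONT>
```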

HTML continued to grow with new, powerful, and exciting tags. We got the background attribute, <frame>, <font>, and of course, <blink>. Microsoft parried with <marquee>, <iframe>, and <bgsound>, and started competing for room in the specification. And all this time, the W3C furiously debated something called HTML 3, a sprawling document outlining all sorts of neat new features that nobody supported (remember <banner> and <fig>?). It was now 1995, and things were an absolute mess.

Something needed to give. If things kept up the way they were going, Netscape and Microsoft would eventually have two completely proprietary versions of HTML, but with no way of supporting the utopian vision of content negotiation. Instead, people would be forced to choose one browser or the other, and surf content specifically created for that platform. Content providers would have to either choose between vendors or spend more resources creating multiple versions of their pages.

There are still vestiges of this lingering on today's Web, but not the nightmare scenario that was anticipated. The HTML arm of the W3C changed course and started collecting and recording current practice in shipping browsers, rather than designing a future, unattainable version of the language. The consortium began a shift from proclamation (developing standards and handing them down from on high) to consolidation (providing common ground from which the industry could grow). The history of HTML is a perfect example of this transition.

Version 2.0 of the Hypertext Markup Language was very much a statement from the W3C to the effect that, "This is how things are going to be." And, at the time, it made perfect sense. The Web didn't have nearly the reach it does now. Back then there were few Web browsers (and no commercial ones), and both the users of those browsers and the developers of content realized that this new medium was a moving target: things would change, and an investment in content could be wasted in six months. That was fine, for a while.

Then came HTML 3. Coinciding with the explosion of the Web as a commercial force, this version attempted a massive extension of the language. While this was being undertaken, a quickly growing company named Netscape was busily responding to its customers' demands by adding whatever it could to HTML, virtually ignoring the academic standards work happening at the W3C. Again, this is understandable (although very regrettable in hindsight). As a result, the HTML 3 specification never really made it past the draft stage.

Soon, the consortium realized that unless it began to document the current practices of the big commercial browser vendors, the Web would spin out of control into a world of proprietary, incompatible versions of HTML. Small, formal working groups (known as editorial review boards) formed, consisting of member companies and invited experts. These groups worked to find common ground among the popular browsers, and then to extend the specification in ways everyone could agree upon. Since the groups were made up of the people who would be shipping the browsers, the speed at which new specifications appeared began to fall in line with the releases of new software. HTML 3.2 and the subsequent version 4.0 are successful examples of this strategy at work.

But can you see the shift? It was subtle, but it did not go unnoticed by the true HTML purists of the day, especially those with roots reaching back into the depths of SGML. Suddenly, the simple and pure Hypertext Markup Language wasn't a markup language at all, but a collection of presentation hacks that only barely worked from browser to browser. Standardization was losing ground. But more importantly, the tags themselves were losing meaning. What did <FONT> say about the text it was marking up? Nothing about its meaning; just some presentational clues for the browser to use when rendering.
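The contrast is easy to see side by side. This is a sketch with invented content, not markup from any particular site:

```html
<!-- Presentational: the tag says nothing about what the text means. -->
<FONT FACE="Georgia" SIZE="5" COLOR="navy">Chapter One</FONT>

<!-- Structural: the tag carries the meaning, and a style rule
     (here, CSS) carries the appearance separately. -->
<style>h2 { font-family: Georgia; color: navy; }</style>
<h2>Chapter One</h2>
```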

[Figure: timeline]



Revised: March 20, 2001
Created: March 20, 2001

URL: https://webreference.com/authoring/design/artsci/chap1/1/