The Web Is A Mess

Back when Tim Berners-Lee invented the World Wide Web, it all seemed so simple. Gopher, a means of sequentially browsing through hierarchically structured, text-based information, would be replaced by a technology that offered random access to information, information that need no longer be purely textual in nature, but which could include images and sound. And so it came to pass that hypertext, as we know it today, was born, along with its transfer protocol.

That was then, but this is now.

I’m glad I learned to write HTML in those early days. It was so simple: learn a few tags, mark up the structure of your document, indicating the headings, the subheadings, the paragraphs, the words requiring emphasis, the program code and the already formatted sections of the document, and then this great piece of software called a browser would render your document accordingly. The results even looked good.

There weren’t too many tags to learn. With nothing more than a basic HTML tutorial, one could be marking up one’s CV or writing an article within the hour. Within a few days, one had pretty much gained familiarity with the entire scope of HTML as it then existed. Marking up a document was conceptually confined to indicating the inherent structure of the document. Rendering was the job of the browser, perhaps under the influence of the reader, who was free to express his preference for certain colours and fonts. There were few means for the document’s author to influence the rendering of his document, because that had nothing to do with the content. Even the <FONT> tag hadn’t been invented yet, and would cause a great amount of controversy when it was. Back then, people argued about the usage of <STRONG> vs. <B> and <EM> vs. <I>, the former content-based and therefore approved by the hypertext moralists, the latter presentational and therefore a threat to the logical purity of the Web.
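
By way of illustration, here is the kind of fragment that fuelled those arguments. The wording is my own invention, but the tags are the ones in question:

    <P>A structural author writes that a step is <EM>important</EM> and that
    skipping it will <STRONG>corrupt your data</STRONG>, leaving the browser
    to decide how emphasis and strong emphasis should be rendered.</P>

    <P>A presentational author writes the same sentence with <I>italics</I>,
    <B>bold</B> and perhaps even <FONT COLOR="red">red text</FONT>, dictating
    the appearance rather than describing the meaning.</P>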

If I were starting out authoring documents for the Web today, it would be an entirely different experience. On today’s Web, document authors — even the term itself seems quaint now — are as much, if not more, concerned with the presentation of their document as its content. In fact, one could even argue that the presentation has, to a large extent, become the content.

This is due in no small part to the mass commercialisation of the Web. You have to remember that, in the first years of the Web, commercial entities were unwelcome. People didn’t want big business polluting the Web. There was no Amazon, no eBay and no Google. AOL was still a separate, proprietary network, the term social networking hadn’t yet been coined and it was still hard to search for information. The Web then was still a niche product, offering little of interest, except to computer geeks and academics. Mathematicians and computer programmers are a lot less concerned with house style and brand recognition than multinational conglomerates, and the presentational technology of the Web reflected that.

These days, however, we have technologies like CSS and JavaScript to contend with. The View Source menu option of the browser used to yield insight into the structure of a document, providing hints on how to mark up one’s own. These days, it merely provides a keyhole entrance to the haystack in which to begin one’s search for a particular needle. Today’s documents have a <HEAD> section as large as the <BODY>, and frequently include multiple stylesheets and JavaScript libraries, used to stylise the presentation and effect dynamic, event-driven updates to the page. Attempting to locate the snippet of code or the particular style responsible for a certain effect can take minutes or even hours, rather than seconds. Styles cascade and are frequently layered on top of one another, making it sometimes hard to unravel which particular combination of properties is responsible for a given rendering.
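
To give a flavour of the problem, here is a contrived example of my own, with invented file names, in which the colour of a single heading is fought over by three stylesheets:

    /* style.css: the theme's main stylesheet */
    h2 { color: black; font-size: 1.4em; }

    /* layout.css: loaded after the first */
    .post h2 { color: #333333; }

    /* plugin.css: added later still by a plug-in */
    #content .post h2 { color: steelblue; }

The heading comes out steelblue, because the last rule carries the most specific selector, and the only way to discover that is to trace all three files back to their sources.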

Most of the time, I manage to remain blissfully ignorant of the complicated nature of modern Web authoring. I use comprehensive blogging software and a mark-up plug-in to make the experience of writing prose on a computer as close as possible to that of using an electric typewriter. I’m more concerned with getting words onto the page than I am with serving you distracting animation or polyphonic sound while you read. Sorry. Not that these things can’t enhance your reading experience; just that I’m not as concerned with them.

The recent move of this blog from Movable Type to WordPress brought me into close contact with the inner ugliness of the modern Web. Before I knew it, I was wrestling with page layout and legibility issues, when all I wanted to do was write. Visual themes take away a lot of the headaches, but they also add a few of their own. No theme suits me without some modification, but before one can set to work on this, one must learn the structure of the theme, the styles it uses and how it relies on JavaScript. That can be quite an investment of time and energy. Small modifications often don’t have the intended effect and it can take multiple attempts and a lot of hair-pulling to get things working exactly as you want them. Before you know it, you’ve got a dozen browser tabs open at various works of reference on CSS and JavaScript.

A good case in point is the automatically expanding box in which this paragraph is written.

You have no idea how long it took me to get this to work the way I wanted. I needed it for the in-line examples of code that I sometimes post, which wouldn’t otherwise have fit. Even the current behaviour is a compromise, because I couldn’t get it to do exactly what I wanted.
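
I won’t reproduce the theme’s actual styles here, but the kind of CSS involved looks roughly like the following sketch, in which the class name and values are invented purely for illustration:

    /* A sketch only, not the code this page actually uses */
    .expanding-box {
        border: 1px solid #cccccc;
        background: #f7f7f7;
        padding: 0.5em 1em;
        overflow-x: auto;    /* by default, over-wide code lines get a scrollbar */
    }
    .expanding-box:hover {
        width: 150%;         /* on hover, let the box grow out of its column */
        overflow-x: visible;
    }

Deciding whether such a box should scroll, grow or simply wrap is exactly the kind of detail that can eat an afternoon.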

If you’ve got a slower computer or you ever browse the Web on a phone, you will have felt the effects of the proliferation of Flash around the Web. Huge Flash movies that add little or nothing of value, often just mundane advertising, seem to crop up on the pages of just about every large site these days. I block these with a browser plug-in, but in the worst cases, the site is inoperable without them. All too often, the entire page consists of a single interactive piece of Flash. If one has a broadband Internet connection and a powerful computer, the effect can be impressive, but this should never come at the expense of accessibility. Flash, and other such fluff, is a nice way to enhance a page, but a site should never be reliant upon it. All too often, Web authors rely on it for core functionality.

Such authors are missing the point of the World Wide Web. They, along with those who advise us that their site is “best viewed with version Y of browser X”, are, perhaps unwittingly, actively undermining the pioneering work of those whose designs, based on open standards, make the Web possible in the first place.

So, we’re seeing a shrinkage in the universal accessibility of the Web. The Web may be world-wide in scope, but its usability is no longer global. We’ve strayed a long way from the ideal.

And yet, there’s a trend far more disturbing than the one towards form over content, packaging over product. It’s bad enough that presentation has, to a large degree, subjugated content, but far worse is the voluntary surrender of publishing liberty.

The Web was conceived as a universally accessible network of documents. Content was unencumbered, freely available to all. The blogging revolution seized upon this principle and a new paradigm of amateur journalism and publishing was born, in much the same way that desktop-publishing had caused a minor revolution in the off-line publishing world fifteen years earlier.

In recent years, however, we’ve seen a disturbing trend towards authors willingly choosing to publish their work in a limited-access forum. Take, for example, those authors who write exclusively on Facebook or one of the other so-called social networking sites.

Ten years ago, the Web was home to millions of highly individualistic home pages, the unique calling card of each individual author. All were created from scratch, so no two looked exactly the same.

Later came the blogs and, whilst the advent of easy-to-use blogging software and blog hosting sites was deleterious to the unique appearance of the personal home page, that appearance could still be modified at the will of the user and, much more importantly, access to the content was still universal.

Compare that with today’s voluntary retreat into the proprietary territory of services like Facebook. Users post not only instantly disposable one-line status updates, but also extended articles, photos and other content. Prior to the rise of social networking sites, such content would have been posted in a publicly accessible domain, but is now consigned to the proprietorship of a corporate entity, where it remains inaccessible to non-members. Not only that, it is also forced into the creative corset of the Facebook house style. The negative impact is both to the author, in the form of creative restriction, and to the reader, in the form of drastically reduced accessibility.

This voluntary surrender of free and uninhibited access on a massive scale is truly a lamentable development in the evolution of the Web and hearkens back to the bad old days before the mass adoption of the Internet, when companies like CompuServe, AOL and Microsoft tried to lure users away from the Internet with the promise of a superior, parallel network infrastructure containing higher-quality content produced by experts.

Similarly, Facebook is in the business of appropriating authors and their content on a massive scale, because the more people there are on Facebook, the more reason there is for those not on Facebook to get on Facebook. It’s a subtle form of coercion, with the users themselves as the catalyst. It’s harder to blame the company than it is the dimwits who have made it what it is today.

The poor bargain that is being made by users who surrender their freedom to the likes of Facebook is one of which most of them actually remain unaware. Those who consider the matter at all doubtless feel that the trade-off is a worthwhile one, and that the service has reached such critical mass that it can scarcely even be considered proprietary any more. After all, anyone is free to open an account and partake of the content. Whilst it’s true that anyone can become a member, most of the content will remain invisible to me until its author acknowledges me as a friend. Besides the accessibility issues, there are many other reasons to resist the Facebook hegemony, which I won’t go into here.

In removing from the public eye content that would otherwise have been readily digestible by anyone anywhere in the world, Facebook is building a compelling case for non-members to join. After all, how can we afford not to, when all of our friends, family members and colleagues have apparently already done so? If you can’t beat them, should you not join them?

I believe the only possible answer is a resounding No!

The Internet is the single most important invention in our lifetime. Its power flows directly from the universal nature of its content, content that is served up using technology based on open standards. Users who willingly donate their creative work to the exclusive pool of content that companies like Facebook use to justify their valuation as access providers are undermining a truly world-wide Web. That creative work is subsumed and becomes a fractional enlargement of the body of proprietary content that Facebook uses as a compelling argument for the rest of us to join.

You don’t need Facebook to stay in touch with your friends and you certainly don’t need it to enable you to reach people with the written word.

Make no mistake: Facebook is not free. There is a very real price that all of us pay when you use it.

As the man once said, freedom’s wasted on the free.
