Google+

Unless you have been living in a bunker for the last couple of weeks, deprived of all access to the media, you can’t have failed to catch wind of Google’s latest foray into the colourful world of social networking.

How many failed attempts at gaining traction on this greased pole lie in Google’s wake depends on how generous you are in determining failure and how broad a view you take of what precisely constitutes social networking. Products such as Orkut, Buzz and Wave immediately spring to mind. One could argue that Orkut and Buzz have been limited successes — after all, both still exist today — whereas it’s hard to argue that Wave succeeded in gaining a foothold for itself at any level.

No matter, our attention was assertively grabbed in the last couple of days of June by the appearance and media feeding frenzy surrounding the launch of Google+. Opinions vary widely from “Google’s Facebook killer” through “Twitter should be concerned” all the way to “fundamentally different from either Facebook or Twitter”.

To my mind, Google+ combines the best of Facebook and Twitter. It provides the former’s ability to track friends and share photos, videos and links, but it also learns from Twitter that not every relationship is symmetrical, so whilst I may want to read everything you produce, you might not want to see anything from me. Google+ embraces this not uncommon case, thereby allowing a publisher-like relationship: a single producer with multiple consumers.

Added to this is a serious attempt at providing a better virtual representation of the way we interact with people in real life. Whereas Facebook views all of your contacts as friends, thereby devaluing that currency, Google+ realises that not every relationship is a friendship. Some are with family members, people with whom we have little choice but to associate. Others are with colleagues, most of whom we would categorise as acquaintances, not friends. Other relationships are those we have with our (young) children, and yet others are completely one-sided, people who have no inkling of our existence, but whose on-line utterances we wish to follow for the same reasons we might read their blog.

It’s clear that some of these relationships are mutually exclusive. For example, I may not want my young children to see some of the photos and videos I might post to family and friends. Equally, I may not want my colleagues at the High Court to know about my fascination with bukkake or my cannabis growing hobby.

Even within the group of people I could accurately designate as true friends, it’s important to realise that this network of people is not a fully connected mesh. No one person knows all of the others. Here, the individual user forms the hub of the wheel and all of his friends are the spokes, some of whom have links to each other. So, I may wish to share a story with a bunch of people I have known since secondary school, without sharing it with colleagues from my last job whom I have since promoted from acquaintance to friend. After all, the story might be an old one and be utterly meaningless to anyone not involved in the events described.

Google+ more accurately maps the complex relationships of real life human interaction to the virtual world by introducing the concept of circles. This allows you to divide people into groups of friends, acquaintances, colleagues, family, Dutch speakers, obnoxious but entertaining, etc. The number of circles you can create is unlimited and you can file people in more than one.

Now, it becomes possible to share content with the union of multiple circles. It’s not yet possible to share with only the intersection of two circles, but I’m hoping this will be added at some point, thus enabling a link to an article in De Volkskrant to be shared only with people who are both friends and Dutch speakers.

Google+ has addressed many of my objections to social networking platforms. Not only is the awkward sharing model fixed, but the very fact that it is Google behind this product, not Facebook, immediately reassures me that privacy and security will be less of a concern on this new platform. Frankly, I trust Mark Zuckerberg about as far as I can throw him, which is much less far than I’d like to throw him. Google’s no angel, but I trust them with my data. I have the added advantage of having worked there for a number of years, so I have actual knowledge of the company’s intentions on which to base my trust.

That really leaves only my objection to the closed-door policy of the social networking market leader, Facebook, by which I mean that the company — or more accurately, its users — is responsible for the sequestration of vast quantities of interesting content that would otherwise have found its way onto a home page or blog. If you’re not a Facebook member, you can’t see what your friends are writing. I still have no idea what’s on my wife’s wall. Maybe it’s just as well.

Google have addressed this issue by allowing postings with a scope of public. These are placed on the user’s profile page, where they are visible to all and sundry, whether or not the reader is a registered user of Google+.

I favour open communication, even when I know I hold a contentious view, so all of my Google+ postings over the last couple of weeks are in plain view for all to see. Nevertheless, I intend to expand on a few of these in the next 24 hours to give people who are accustomed to reading the blog an update of what’s been happening in our lives over the last few weeks.

If you’re interested in trying Google+ but haven’t received an invitation from any of your friends, send me an e-mail to that effect and I’ll rectify that situation.

In the future, I can see more and more of our postings finding their way to Google+ as the primary means of distribution, as it’s so much easier there to put one’s thoughts before an audience. Whereas a blog requires people to visit your site regularly or to sign up for notification of new postings, typically by e-mail or by subscribing to an RSS feed, a service such as Google+ significantly increases the opportunity for people outside our closest circles to become aware of our postings, by suggesting to friends of friends that they, too, follow what we share on-line. Often, these friends of friends are people that we, ourselves, know; just not terribly well. However, there may still be enough interest there for them to consider our postings worth reading.

Until there’s a way to automatically propagate postings on this blog to Google+ and vice versa, you may wish to check both venues for postings by us. I’m more likely to post a sentence or two on Google+, whereas a full article would more likely find its way onto the blog.

WordPress 3.2

I’d been reluctant to upgrade the Web server to WordPress 3.x, fearing the loss of my many customisations to the EOS theme and the ensuing scramble to reintegrate them, which would probably fail due to fundamental changes in the structure of the new code.

Well, today I thought, ‘Bugger it, let’s do it, anyway and see how badly things turn out,’ so I installed WordPress 3.2 and quickly decided to switch over to the rather nice Twenty Eleven theme, which you’re now looking at.

I needed a photo to replace the default and the one you now see of our fireplace was the first to come into view, so for no other reason than this, it now graces the pages of our blog. You could have done a lot worse, I can tell you.

Popping Tablets

It’s no secret that I don’t like Apple; the technology company, not the Beatles’ record label. The restrictions they impose on what device owners can do with the hardware and software for which they have parted with sizeable sums are emphatically anti-consumer, as is their refusal to base their products on open standards, locking the user into inferior, proprietary technology. Apple furthermore stifle community development and continue to demonstrate lamentably poor business ethics, the sort of practices that have rightfully garnered Microsoft highly critical press for 20 years. Apple’s customers, on the other hand, display an alarming, sheeplike meekness in their willingness to participate in this shameful subjugation of the consumer.

Beyond the implications that this has for the development of technology as a whole, it has never tangibly affected me. The reason is that Apple, in my judgement, has rarely, if ever, produced a best-of-breed device in any of the product areas it has entered. The Apple II was not the best home computer of its day, today’s Macs are far from being the best value for money for what they can do, the iPod has some glaring omissions as a portable music player and each new generation of the iPhone has always lagged some way behind competing smartphones in feature set. Apple’s modern product range looks nice, I’ll grant you that, but it’s a victory of style over substance, more fashion accessory than functional tool.

That all changed with the iPad. For a year or more, I had to admit that the iPad was a much better tablet computer than any of its rivals. The Android community had dropped the ball. Whilst Android is a technically superior operating system to iOS, tablets running Android failed miserably to make the best of it. Most applications didn’t take advantage of the larger screen, for example.

A year is a long time in technology, however. In the meantime, the iPad 2 had been launched, but the Android community hadn’t been grabbing forty winks on the sofa. Android 3.0 is upon us and tablets running this newest release of the operating system have started to appear. Application authors, too, have been adapting their software to make full use of the hardware on which they now find themselves running.

When Motorola launched the Xoom earlier this year, the gap that Apple had opened on rival tablets was effectively closed and arguably even reopened in favour of Android. My path to tablet ownership was now clear. I no longer had to choose between suffering feelings of guilt and hypocrisy for buying the iPad and the knowledge that I had wasted money on an inferior device for non-technical reasons.

And so it is that a Motorola Xoom recently found its way into our lives. I primarily wanted a tablet for the children, because much of the educational software is, thanks to the touch-screen interface, so intuitive that very little forced learning is required to derive the full benefit from it. I have regularly watched other people’s children at the Little Gym, playing with their parents’ tablet, and wanted to give my children the same opportunity for computer-assisted learning. I had to reconcile that desire with my boycott of Apple, however, which I am now thankfully able to do.

I didn’t bother with a data subscription. Instead, I took the duo SIM out of the car and put that in the tablet. My phone’s subscription includes unlimited data, so there’s no way I’m going to pay extra for a second, limited subscription. The unlimited ones have all been withdrawn, because the telcos can’t make enough profit on them.

If you have any recommendations for Android software for young children, do let me know.

E-mail Feed Retiring

After careful consideration, I’ve decided to remove this blog’s e-mail feed. That means that those of you who read new postings via e-mail will no longer receive them that way.

There are a few reasons behind this decision.

Firstly, the feed has only seven subscribers, and two of those are Sarah and me.

Secondly, and most significantly, the feed has an unacceptable lag. Feedburner sends out daily digests, which are configured to go out between 21:00 and 23:00 CET. This effectively means that a new blog entry posted at 23:01 won’t be sent to e-mail subscribers until it is almost a day old, by which time it is often largely irrelevant. This is one of the primary reasons that newspapers are seeing their circulation figures plummet. On-line news is not only bang up-to-date, but gratis to boot. Add the ability to comment, and it becomes interactive, too.

To summarise the previous paragraph, daily e-mail digests are out of step with today’s information age, in which almost everyone has a laptop, a tablet or a smart-phone with them at all times. In a world accustomed to the reality of instantaneous updates, a day is an eternity.

Thirdly, e-mail digests present a read-only medium for blog consumption. One of the great things about blogs is their readers’ ability to comment on postings, which increases readers’ participation and thereby their sense of involvement. For the author of the original posting and for other readers, comments provide useful feedback on the article at hand, as well as enriching the subject matter. Reading by e-mail detracts from this and encourages passive consumption. By abandoning the e-mail feed, I hope to encourage participation.

Fourthly, the plain text (i.e. non-HTML) form of Feedburner e-mail strips almost all punctuation from the posting, rendering it unpleasant to read. It also strips out the URLs, reducing the usefulness of the posting (which, OK, is arguably non-existent to start with).

Finally, there are many better forms of notification and blog consumption these days than there used to be.

Firstly, there’s the blog’s RSS feed, which can be plugged into any reader, local or Web-based, such as Google’s excellent Reader.

Secondly, there are services like Twitter, which exist to aggregate brief messages of interest, posted in real time by people across the world, and present them to the reader in a unified feed. Twitter’s dubious social value notwithstanding, its short message medium is useful as a real-time notification service and Twitter clients exist for all modern computing platforms, including iOS (iPhone) and Android. As such, notifications of each new posting will now also be sent to Twitter, where they can be found in my feed. You can think of Twitter as a kind of on-line SMS aggregator.

Postings made by Sarah receive additional distribution in the form of automatic crossposting to the accursed Facebook, a compromise on my part to get her to post here instead of there, thereby assuring continued public access to her words.

In short, the retirement of the e-mail notification service shouldn’t see you return to the bad old days of hitting Refresh on the blog site. If you’re one of those people who permanently have the same dozen Web sites loaded in as many tabs and regularly hit Refresh to check for updates, you’re doing it wrong. It’s no longer 2005. You should find yourself heading to the actual blog only to either post a comment or when redirected by Twitter.

Anyway, now to test whether the automatic Twitter update actually works. Today’s e-mail digest will still be sent this evening, but the service will be switched off afterwards.

The Web Is A Mess

Back when Tim Berners-Lee invented the World Wide Web, it all seemed so simple. Gopher, a means of sequentially browsing through hierarchically structured, text-based information, would be replaced by a technology that offered random access to information, information that need no longer be purely textual in nature, but which could include images and sound. And so it came to pass that hypertext, as we know it today, was born, along with its transfer protocol.

That was then, but this is now.

I’m glad I learned to write HTML in those early days. It was so simple: learn a few tags, mark up the structure of your document, indicating the headings, the subheadings, the paragraphs, the words requiring emphasis, the program code and the already formatted sections of the document, and then this great piece of software called a browser would render your document accordingly. The results even looked good.

There weren’t too many tags to learn. With nothing more than a basic HTML tutorial, one could be marking up one’s CV or writing an article within the hour. Within a few days, one had pretty much gained familiarity with the entire scope of HTML as it then existed. Marking up a document was conceptually confined to indicating the inherent structure of the document. Rendering was the job of the browser, perhaps under influence of the reader, who was free to express his preference for certain colours and fonts. There were few means for the document’s author to influence the rendering of his document, because that had nothing to do with the content. Even the <FONT> tag hadn’t been invented yet, and would cause a great amount of controversy when it was. Back then, people argued about the usage of <STRONG> vs. <B> and <EM> vs. <I>, the former content-based and therefore approved by the hypertext moralists, the latter presentational and therefore a threat to the logical purity of the Web.
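
By way of illustration, here is the sort of thing I mean: a short, entirely hypothetical document of that era, marked up purely for structure. Nothing in it says anything about fonts, colours or layout; those decisions belonged to the browser and the reader.

    <HTML>
    <HEAD>
    <TITLE>My Curriculum Vitae</TITLE>
    </HEAD>
    <BODY>
    <!-- Structure only: headings, paragraphs, emphasis and
         pre-formatted text. The browser decides how it looks. -->
    <H1>Curriculum Vitae</H1>
    <H2>Experience</H2>
    <P>I have spent the last five years writing <EM>portable</EM> C
    for embedded systems, building it with nothing fancier than:</P>
    <PRE>
    make CC=gcc TARGET=sparc
    </PRE>
    <P>References available on <STRONG>request</STRONG>.</P>
    </BODY>
    </HTML>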

If I were starting out authoring documents for the Web today, it would be an entirely different experience. On today’s Web, document authors — even the term itself seems quaint now — are as much, if not more, concerned with the presentation of their document as its content. In fact, one could even argue that the presentation has, to a large extent, become the content.

This is due in no small part to the mass commercialisation of the Web. You have to remember that, in the first years of the Web, commercial entities were unwelcome. People didn’t want big business polluting the Web. There was no Amazon, no eBay and no Google. AOL was still a separate, proprietary network, the term social networking hadn’t yet been coined and it was still hard to search for information. The Web then was still a niche product, offering little of interest, except to computer geeks and academics. Mathematicians and computer programmers are a lot less concerned with house style and brand recognition than multinational conglomerates, and the presentational technology of the Web reflected that.

These days, however, we have technologies like CSS and JavaScript to contend with. The View Source menu option of the browser used to yield insight into the structure of a document, providing hints on how to mark up one’s own. These days, it merely provides a keyhole entrance to the haystack in which to begin one’s search for a particular needle. Today’s documents have a <HEAD> section as large as the <BODY>, and frequently include multiple stylesheets and JavaScript libraries, used to stylise the presentation and effect dynamic, event-driven updates to the page. Attempting to locate the snippet of code or the particular style responsible for a certain effect can take minutes or even hours, rather than seconds. Styles cascade and are frequently layered on top of one another, making it sometimes hard to unravel which particular combination of properties is responsible for a given rendering.
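
The flavour of the problem, in caricature. The fragment below is not taken from any real site (every filename and class name is invented for the example), but the shape will be familiar to anyone who has viewed the source of a large commercial page:

    <html>
    <head>
    <!-- Five stylesheets and four scripts before a word of content -->
    <link rel="stylesheet" href="/css/reset.css">
    <link rel="stylesheet" href="/css/grid.css">
    <link rel="stylesheet" href="/css/theme.css">
    <link rel="stylesheet" href="/css/overrides.css">
    <link rel="stylesheet" href="/css/print.css" media="print">
    <script src="/js/jquery.min.js"></script>
    <script src="/js/analytics.js"></script>
    <script src="/js/carousel.js"></script>
    <script src="/js/comments.js"></script>
    </head>
    <body>
    <!-- Which of the stylesheets above determines how this heading is
         rendered? View Source alone will not tell you. -->
    <div class="wrapper grid-12 theme-dark">
    <h1 class="masthead site-title">Welcome</h1>
    </div>
    </body>
    </html>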

Most of the time, I manage to remain blissfully ignorant of the complicated nature of modern Web authoring. I use comprehensive blogging software and a mark-up plug-in to make the experience of writing prose on a computer as close as possible to that of using an electric typewriter. I’m more concerned with getting words onto the page than I am with serving you distracting animation or polyphonic sound while you read. Sorry. Not that these things can’t enhance your reading experience; just that I’m not as concerned with them.

The recent move of this blog from Movable Type to WordPress brought me into close contact with the inner ugliness of the modern Web. Before I knew it, I was wrestling with page layout and legibility issues, when all I wanted to do was write. Visual themes take away a lot of the headaches, but they also add a few of their own. No theme suits me without some modification, but before one can set to work on this, one must learn the structure of the theme, the styles it uses and how it relies on JavaScript. That can be quite an investment of time and energy. Small modifications often don’t have the intended effect and it can take multiple attempts and a lot of hair-pulling to get things working exactly as you want them. Before you know it, you’ve got a dozen browser tabs open at various works of reference on CSS and JavaScript.

A good case in point is the automatically expanding box in which this paragraph is written.
 
You have no idea how long it took me to get this to work the way I wanted. I needed it for the in-line examples of code that I sometimes post, which wouldn’t otherwise have fit. Even the current behaviour is a compromise, because I couldn’t get it to do exactly what I wanted.
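
For the curious, the general idea is along these lines. This is a simplified sketch of the technique rather than the exact rules this theme ended up with, and the code-box class name is invented for the example: the box shrinks to fit short snippets, never grows wider than the text column, and falls back to a horizontal scroll bar when a line is too long.

    <style>
    /* Sketch of an expanding box for in-line code examples */
    pre.code-box {
      display: inline-block;   /* shrink to fit short examples       */
      max-width: 100%;         /* but never overflow the text column */
      overflow-x: auto;        /* scroll horizontally when necessary */
      padding: 0.5em 1em;
      border: 1px solid #ccc;
      background: #f7f7f7;
    }
    </style>

    <pre class="code-box">
    a line of example code wider than the column scrolls instead of breaking the page layout
    </pre>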

If you’ve got a slower computer or you ever browse the Web on a phone, you will have felt the effects of the proliferation of Flash around the Web. Huge embedded Flash objects that add little or nothing of value, often just mundane advertising, seem to crop up on the pages of just about every large site these days. I block these with a browser plug-in, but in the worst cases, the site is inoperable without them. All too often, the entire page consists of a single interactive piece of Flash. If one has a broadband Internet connection and a powerful computer, the effect can be impressive, but this should never come at the expense of accessibility. Flash, and other such fluff, is a nice way to enhance a page, but a site should never be reliant upon it. All too often, Web authors rely on it for core functionality.

Such authors are missing the point of the World Wide Web. They, along with those who advise us that their site is “best viewed with version Y of browser X”, are, perhaps unwittingly, actively undermining the pioneering work of those whose designs, based on open standards, made the Web possible in the first place.

So, we’re seeing a shrinkage in the universal accessibility of the Web. The Web may be world-wide in scope, but its usability is no longer global. We’ve strayed a long way from the ideal.

And yet, there’s a trend far more disturbing than the one towards form over content, packaging over product. It’s bad enough that presentation has, to a large degree, subjugated content, but far worse is the voluntary surrender of publishing liberty.

The Web was conceived as a universally accessible network of documents. Content was unencumbered, freely available to all. The blogging revolution seized upon this principle and a new paradigm of amateur journalism and publishing was born, in much the same way that desktop-publishing had caused a minor revolution in the off-line publishing world fifteen years earlier.

In recent years, however, we’ve seen a disturbing trend towards authors willingly choosing to publish their work in a limited access forum. Take, for example, those authors who write exclusively on Facebook or one of the other so-called social networking sites.

Ten years ago, the Web was home to millions of highly individualistic home pages, the unique calling card of each individual author. All were created from scratch, so no two looked exactly the same.

Later came the blogs and, whilst the advent of easy-to-use blogging software and blog hosting sites was deleterious to the unique appearance of the personal home page, that appearance could still be modified at the will of the user and, much more importantly, access to the content was still universal.

Compare that with today’s voluntary retreat into the proprietary territory of services like Facebook. Users post not only instantly disposable one-line status updates, but also extended articles, photos and other content. Prior to the rise of social networking sites, such content would have been posted in a publicly accessible domain, but is now consigned to the proprietorship of a corporate entity, where it remains inaccessible to non-members. Not only that, it is also forced into the creative corset of the Facebook house style. The negative impact is both to the author, in the form of creative restriction, and to the reader, in the form of drastically reduced accessibility.

This voluntary surrender of free and uninhibited access on a massive scale is truly a lamentable development in the evolution of the Web and hearkens back to the bad old days before the mass adoption of the Internet, when companies like CompuServe, AOL and Microsoft tried to lure users away from the Internet with the promise of a superior, parallel network infrastructure containing higher quality content produced by experts.

Similarly, Facebook is in the business of appropriating authors and their content on a massive scale, because the more people there are on Facebook, the more reason there is for those not on Facebook to get on Facebook. It’s a subtle form of coercion, with the users themselves as the catalyst. It’s harder to blame the company than it is the dimwits who have made it what it is today.

The poor bargain that is being made by users who surrender their freedom to the likes of Facebook is one of which most of them actually remain unaware. Those who consider the matter at all doubtless feel that the trade-off is a worthwhile one, and that the service has reached such critical mass that it can scarcely even be considered proprietary any more. After all, anyone is free to open an account and partake of the content. Whilst it’s true that anyone can become a member, most of the content will remain invisible to me until its author acknowledges me as a friend. Besides the accessibility issues, there are many other reasons to resist the Facebook hegemony, which I won’t go into here.

In removing from the public eye content that would otherwise have been readily digestible by anyone anywhere in the world, Facebook is building a compelling case for non-members to join. After all, how can we afford not to, when all of our friends, family members and colleagues have apparently already done so? If you can’t beat them, should you not join them?

I believe the only possible answer is a resounding No!

The Internet is the single most important invention in our lifetime. Its power flows directly from the universal nature of its content, content that is served up using technology based on open standards. Users who willingly donate their creative work to the exclusive pool of content on which companies like Facebook base their valuation as access providers are undermining a truly world-wide Web. That creative work is subsumed and becomes a fractional enlargement of the body of proprietary content that Facebook uses as a compelling argument for the rest of us to join.

You don’t need Facebook to stay in touch with your friends and you certainly don’t need it to enable you to reach people with the written word.

Make no mistake: Facebook is not free. There is a very real price that all of us pay when you use it.

As the man once said, freedom’s wasted on the free.