Monday 26 July 2010

The end of social networking or just the beginning?

Today the Guardian's Digital Content blog carries an article by Charles Arthur in which he waxes lyrical about the fact that social networking - as a technological and social phenomenon - has reached its apex. As Arthur writes:
"I don't think anyone is going to build a social network from scratch whose only purpose is to connect people. We've got Facebook (personal), LinkedIn (business) and Twitter (SMS-length for mobile)."
Huh. Maybe he's right? The monopolisation of the social networking market is rather unfortunate and, I suppose, rather unhealthy - but it is probably, and ultimately, necessary owing to the current business models of social media (i.e. you need a gargantuan user base to turn a profit). The 'big three' (above) have already trampled over the others to get to the top out of necessity.

However, Arthur's suggestion is that 'standalone' social networking websites are dead, rather than social networking itself. Social networking will, of course, continue; but it will be subsumed into other services as part of a package. How successful these will be is anyone's guess. This is contrary to what many commentators forecast several years ago: an array of competing social networks, some highly specialised and catering for niche interests. Some have already been and gone; some continue to limp on, slowly burning the cash of venture capitalists. Researchers also hoped - and continue to hope - for open applications making greater use of machine-readable data on foaf:persons using, erm, FOAF.

The bottom line is that it's simply too difficult to move between social networks. For a variety of reasons, Identi.ca is generally acknowledged to be an improvement on Twitter, offering greater functionality and open-source credentials (FOAF support, anyone?); but persuading people to move is almost impossible. Moving results in a loss of social capital and of users' labour, hence recent work on metadata standards for exporting your social networking capital. Yet it is not in the interests of most social networks to make users' data portable. Monopolies are therefore always bound to emerge.
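As an aside, it is worth making concrete what a portable, machine-readable profile actually looks like. The sketch below is purely illustrative - Python with the rdflib library and the FOAF vocabulary; the names and URIs are invented for the example and nothing here is tied to any particular network's export mechanism - but it is the sort of thing the data portability crowd have in mind:

```python
# Purely illustrative sketch: build a tiny FOAF profile and serialise it
# as Turtle. All names and URIs below are invented for the example.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import FOAF, RDF

g = Graph()
g.bind("foaf", FOAF)

me = URIRef("http://example.org/people/alice#me")      # hypothetical URI
friend = URIRef("http://example.org/people/bob#me")     # hypothetical URI

g.add((me, RDF.type, FOAF.Person))
g.add((me, FOAF.name, Literal("Alice Example")))
g.add((me, FOAF.knows, friend))
g.add((friend, RDF.type, FOAF.Person))
g.add((friend, FOAF.name, Literal("Bob Example")))

# A serialisation like this is, in principle, a portable export that any
# FOAF-aware service could re-import.
print(g.serialize(format="turtle"))
```

A few dozen triples like these are all it would take to carry your friendships from one service to another - which is precisely why incumbent networks have so little incentive to emit them.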

But is privacy the elephant in the room? Arthur's article omits the privacy furore which has engulfed Facebook in recent months. German data protection officials have launched a legal assault on Facebook for accessing and saving the personal data of people who don't even use the network, for example. And I would include myself in the group of people one step away from deleting their Facebook account. Enter diaspora (what a great name for a social network!): a "privacy aware, personally controlled, do-it-all, open source social network". The diaspora team's vision is very exciting and inspirational. These are, after all, a bunch of NYU graduates with an average age of 20.5 and ace computer hacking skills. Scheduled for a September 2010 launch, diaspora will be a piece of open-source personal web server software designed to enable a distributed and decentralised alternative to services such as Facebook. Nice. So, contrary to Arthur's article, there are new, innovative, standalone social networks emerging and being built from scratch. diaspora has immense momentum and taps into the increasing suspicion that users have of corporations like Facebook, Google and others.

Sadly, despite the exciting potential of diaspora, I fear they are too late. Users are concerned about privacy. It is a misconception to think that they aren't; but valuing privacy over social capital is a difficult choice for people who lead a virtual existence. Jettison five years of photos, comments, friendships, etc., or tolerate the privacy indiscretions of Facebook (or other social networks)? That's the question that users ask themselves. It again comes down to data portability and the transfer of social capital and/or user labour. diaspora will, I am sure, support many of the standards needed to make data portability possible, but will Facebook make it possible to output and export your data to diaspora? Probably not. I will nevertheless watch the progress of diaspora closely and I hope, just hope, that they can make it a success. Good luck, chaps!

Monday 19 July 2010

Google finally gets serious about the Semantic Web?

Google has been flirting with the Semantic Web recently, and we've talked about it occasionally on this blog. However, compared with other web search engines (e.g. Yahoo!) and the state of Semantic Web activity generally, Google has been slow to dive in completely. They have restricted themselves to rich snippets, using bits of RDFa and microformats, and making up their own too. Perhaps this was because their intention was always to purchase a prominent Semantic Web start-up company instead of putting in the spade work themselves? Perhaps so.

Google has this week announced the purchase of Metaweb Technologies. None the wiser?! Metaweb is perhaps best known for providing the Semantic Web community with Freebase. Freebase cropped up last year on this blog when we discussed the emergence of Common Tags. Freebase essentially represents a not insignificant hub in the rapidly expanding Linked Data cloud, providing RDF data on 12 million entities, with URIs linking to other Linked Data and Semantic Web datasets, e.g. DBpedia.
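To make the 'hub in the Linked Data cloud' idea a little more concrete, here is a minimal sketch (Python with rdflib; the particular DBpedia resource and URL pattern are my own choices for illustration, not anything announced by Google or Metaweb) showing how a Linked Data document can be fetched and its owl:sameAs links to other datasets - Freebase included, where present - listed:

```python
# Illustrative only: fetch the RDF document DBpedia publishes for a
# resource and print its owl:sameAs links to other datasets.
from rdflib import Graph
from rdflib.namespace import OWL

g = Graph()
# DBpedia exposes an RDF/XML document per resource under /data/;
# the resource chosen here is arbitrary.
g.parse("http://dbpedia.org/data/Tim_Berners-Lee.rdf", format="xml")

print(len(g), "triples retrieved")
for _, _, target in g.triples((None, OWL.sameAs, None)):
    print(target)   # equivalent URIs in other datasets, if any
```

It is exactly this web of equivalences between URIs that makes Freebase (and DBpedia) useful as hubs rather than silos.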

My comments are limited to the above; just thought this was probably an extremely important development and one to watch. A high level of social proof appears to be required before some tech firms or organisations will embrace the Semantic Web. But what greater social proof than Google? Google also appear committed to the Freebase ethos:
"[We] plan to maintain Freebase as a free and open database for the world. Better yet, we plan to contribute to and further develop Freebase and would be delighted if other web companies use and contribute to the data. We believe that by improving Freebase, it will be a tremendous resource to make the web richer for everyone. And to the extent the web becomes a better place, this is good for webmasters and good for users."
Very significant stuff indeed.

Tuesday 13 July 2010

iStrain?

Usability guru Jakob Nielsen published details of a brief (but interesting) usability study on his Alertbox website last week. Nielsen was interested in exploring how reading long-form text on tablets and e-readers differs from reading on other devices. To be clear, this wasn't about testing the usability of devices per se; more about 'readability'.

Nielsen's research motivation was clear: e-book readers and tablets are finally growing in popularity and they are likely to become an important means of engaging in long-form reading in the future. However, such devices will only succeed if they are better than reading from PC or laptop screens and - the mother of all reading devices - the printed book. Nielsen and his assistants therefore performed a readability study of such devices, including Apple's iPad and Amazon's Kindle, and compared these with printed books. You can read the article in full in your own time. It's a brief read at circa 1000 words. Essentially, Nielsen's key finding was that reading from a book is significantly quicker than reading from these devices: reading from the iPad was found to be 6.2% slower than from the printed book, and from the Kindle 10.7% slower.
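To put those percentages in perspective, here is a quick back-of-the-envelope calculation - mine, not Nielsen's - which treats the reported slowdowns as extra reading time over a print baseline, using the 17 minutes 20 seconds average session length Nielsen reports (quoted further down) purely as a rough yardstick:

```python
# Back-of-the-envelope only: apply the reported slowdowns to a rough
# print baseline of 17m20s to see what they mean in elapsed time.
baseline_seconds = 17 * 60 + 20          # ~17m20s average session

slowdowns = {"iPad": 0.062, "Kindle": 0.107}

for device, penalty in slowdowns.items():
    seconds = baseline_seconds * (1 + penalty)
    minutes, secs = divmod(round(seconds), 60)
    print(f"{device}: ~{minutes}m{secs:02d}s for the same text")
# iPad: ~18m24s; Kindle: ~19m11s - only a minute or two per session,
# but it adds up over a 300-page novel.
```

Small differences per session, then, but not trivial ones over a whole book.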

I recall ebook readers emerging in the late 1990s. At that time they were mysterious but exciting devices. After some above-average ebook sales for a Stephen King best seller in 1999/2000 (I think), it was predicted that ebook readers would take over the publishing industry. But they didn't. The reasons for this were/are complex but pertain to a variety of factors including conflicting technologies, lack of interoperability, poor usability and so forth. There were additional issues, many of which some of my ex-colleagues investigated in their EBONI project. One of the biggest factors inhibiting their proliferation was eye strain. The screens on early ebook readers lacked sufficient resolution and were simply small computer screens, with all the problems these bring for long-form reading, e.g. glare, sore eyes, headaches, etc. Long-form reading was simply too unpleasant; which is why the emergence of the Kindle, with its use of e-ink, was revelatory. The Kindle - and readers like it - have been able to simulate the printed word such that eye strain is no longer an issue.

Nielsen has already attracted criticism regarding flaws in his methodology; but in his defence he did not claim his study was rigorously scientific, nor has he sought publication of his research in the peer-reviewed literature. He wrote up his research in 1000 words for his website, for goodness' sake! In any case, his results were to be expected. Apple geeks will complain that he didn't use enough participants, although those familiar with the realities of academic research will know that user studies of 30-40 participants are par for the course. However, there is one assumption in Nielsen's article which is problematic and which has evaded discussion: iStrain. Yes - it's a dreadful pun, but it strikes at the heart of whether these devices are truly readable or not. Indeed, how conducive can a tablet or reader be to long-form reading if your retinas are bleeding after 50 minutes of reading? Participants in Nielsen's experiment were reading for around 17 minutes. Says Nielsen:
"On average, the stories took 17 minutes and 20 seconds to read. This is obviously less time than people might spend reading a novel or a college textbook, but it's much longer than the abrupt reading that characterizes Web browsing. Asking users to read 17 minutes or more is enough to get them immersed in the story. It's also representative for many other formats of interest, such as whitepapers and reports."
...All of which is true, sort of. But in order to assess long-form reading, participants need to be reading for a lot, lot longer than 17 minutes; and whilst the iPad enjoys a high screen resolution and high levels of user satisfaction, how conducive can it really be to reading at length? And herein lies the problem. The iPad was never really designed as an e-reader. It is a multi-purpose mobile device which technology commentators - contrary to all HCI usability and ergonomics research - seem to think is ideally suited to long-form reading. It may rejuvenate the newspaper industry, since that form of consumption is similar to the sessions Nielsen describes above, but for long-form reading the iPad is ultimately no different to the failed e-reading technologies of ten years ago. In fact, some might say it is worse. I mean, would you want to read P.G. Wodehouse through smudged fingerprints?! The results of Nielsen's study are therefore interesting, but they could have been more informative had participants been reading for longer. A follow-up study is the order of the day and would be ideally suited to an MSc dissertation. Any student takers?!

(Image (eye): Vernhart, Flickr, Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic)
(Image (beware): florian.b, Flickr, Creative Commons Attribution-NonCommercial 2.0 Generic)