
Wednesday, 3 August 2011

The fallacy of the self-organising network: the limits of cybernetic systems theory, or not...

Recently, the BBC broadcast a series of films by Adam Curtis, the eminent British documentarian. These films were broadcast under the title All Watched Over by Machines of Loving Grace and, broadly, explored how models of computation and systems have been applied to the world around us. Curtis is well known for his sharp journalism, particularly in areas pertaining to the politics of power. All watched over... differed from his previous documentaries owing to its focus on technology, computers and systems, but the theme of power was nevertheless omnipresent, as was the sharp journalism. I thought some of Curtis' ideas were worth further comment here.

In part two of All watched over..., entitled "The Use and Abuse of Vegetational Concepts", Curtis focused on cybernetics and systems theory. Since ex-members of our team were experts in cybernetics, I was particularly interested in what he had to say. Curtis examined how cybernetics and systems theory came to be applied to natural ecosystems, and how this gave rise to a distorted view of how nature worked in reality. The very fact that ecosystems are termed "eco-systems" suggests how deeply systems thinking has penetrated our view of nature. Indeed, Sir Arthur Tansley, the celebrated ecology pioneer, coined the term in the 1930s. Tansley was fascinated by Freud's theories of the human brain and, in particular, the idea that the brain was essentially an interconnected electrical machine, carrying bursts of energy through networks much like electrical circuits. As an ecologist, Tansley became convinced that such a model also applied to the whole of nature, believing that the natural world was governed by a network of machine-like systems which were inherently stable and self-correcting. These theories of ecosystems and cybernetics were to fuse together in the late 1960s.

Jay Forrester.
One of the earliest pioneers of cybernetic systems was Jay Forrester. Prof. Forrester (now Professor Emeritus at MIT) was a key figure in the development of US defence systems in the late 1940s and early 1950s, and with his colleagues he developed theories of feedback control systems and the role of feedback loops in regulating systems and keeping them in equilibrium. The ecology movement assimilated this idea and increasingly viewed the natural world as a set of complex natural systems, since it helped to explain how stability was reached in the natural world (i.e. via natural feedback loops). Forrester's experience of developing digital combat information systems, and of the role of cybernetic systems in resolving such problems, inspired him to explore systems difficulties in other domains, such as organisations. This would become known as System Dynamics. As the System Dynamics Society notes:
"Forrester's experiences as a manager led him to conclude that the biggest impediment to progress comes, not from the engineering side of industrial problems, but from the management side. This is because, he reasoned, social systems are much harder to understand and control than are physical systems. In 1956, Forrester accepted a professorship in the newly-formed MIT School of Management. His initial goal was to determine how his background in science and engineering could be brought to bear, in some useful way, on the core issues that determine the success or failure of corporations."
Forrester used computer simulations and cybernetic models to analyse social systems and predict the implications of different models. As Curtis notes in his film, Forrester and other cybernetic theorists increasingly viewed humans as nodes in networks; as machines which demonstrated predictable behaviour.
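To make the feedback-loop idea concrete, here is a minimal sketch of the kind of goal-seeking (balancing) loop that System Dynamics models are built from. It is purely illustrative: the variable names and parameters are my own invention, not Forrester's actual world model, but it shows how a gap between a target and the current state drives a flow that settles the system into equilibrium rather than letting it run away.

```python
# A minimal, illustrative stock-and-flow simulation with a negative
# (balancing) feedback loop -- the basic building block of System Dynamics.
# All names and parameters are invented for illustration; this is NOT
# Forrester's world model.

def simulate(stock=10.0, goal=100.0, adjustment_time=5.0, dt=1.0, steps=30):
    """Each step, the stock is adjusted in proportion to the gap between
    the goal and the current level: the feedback loop closes the gap and
    the system settles into equilibrium."""
    history = []
    for _ in range(steps):
        inflow = (goal - stock) / adjustment_time  # feedback: the gap drives the flow
        stock += inflow * dt
        history.append(stock)
    return history

if __name__ == "__main__":
    for t, level in enumerate(simulate()):
        print(f"t={t:2d}  stock={level:6.2f}")
```

Run it and the stock climbs quickly at first, then ever more slowly as the gap shrinks, approaching the goal without overshooting: the "inherently stable and self-correcting" behaviour that the ecologists and cyberneticians found so seductive.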

Curtis is rather unkind (and incorrect, IMHO) in his treatment of Forrester during the environmental crisis of the 1970s. Forrester's "world model" - created under the auspices of the Club of Rome and published in the seminal "Limits to Growth" - is portrayed in a negative light in Curtis' film, as is the computer simulation that implemented it. Yet systems theory is supposed to provide insight, not clairvoyance. This isn't reflected in Curtis' film. Forrester's model appears now to have been reasonably accurate and was later amended to take account of some of the criticisms Curtis highlights. And few could argue with the premise that the destiny of the world is zero growth; to maintain a "steady state stable equilibrium" within the capacity of the Earth.

Anyway, summarising the intricacies of Curtis' entire polemic in this brief blog posting is difficult; suffice to say the aforementioned intellectual trajectory (i.e. cybernetics, ecosystems, etc.) fostered a widespread belief that, because humans were part of a global system, they should exhibit self-organising and self-correcting properties, as demonstrated by feedback control systems and most potently exemplified in the natural world by ecosystems. In particular, these ideas were adopted by the computer utopians (the Californian Ideology) who dreamt of a global computer network in which all participants were equal and liberated from the old power hierarchies; a self-organising network, regulated by data and information feedback loops. Of course, the emergence of the Web was considered the epitome of this model (remember the utopian predictions of the Web in the mid-90s?) and continues to inspire utopian visions of a self-organising digital society.

The inherent contradiction of the self-organising system is that, despite rejecting hierarchical power structures, such systems in the end actually foster concentrations of power and hierarchy. Curtis cites the collapse of the hippie communes, such as Synergia, which implemented failed "ecotechnics", and the relative failure of revolutions in Georgia, Kyrgyzstan, Ukraine and Iran, all of which were coordinated via the Web. And, I suppose, we could extend this to examples in the so-called Arab Spring, where the desire for change during the revolution, often orchestrated via Facebook and Twitter, has not always been replicated afterwards.

On this count I feel Curtis is probably correct, and aspects of his conclusion extend further. Indeed, the utopian vision of egalitarian self-organising computer networks continues and has been rejuvenated most recently by social media and "new tech", or Web 2.0 as it is now unfashionably called. Even examples which epitomise the so-called self-organising principle, such as Wikipedia, have morphed into hierarchical systems of power. This is partly because not every contributor to Wikipedia can be trusted as much as the next; but it is more because groups of users with a particular world view coalesce to assert their views aggressively and religiously. Edit wars are commonplace and new article rating systems have been introduced, both of which are prone to John Stuart Mill's tyranny of the majority - all within a Wikipedia ecosystem. Contributions are increasingly governed by a hierarchy of Wikipedia Reviewers who wield their powers to scrutinise, edit, and delete flagged articles. (Ever tried creating a new article on Wikipedia? It's not as easy as you think. Within seconds you will be contacted by a reviewer. They relish their control over the type of knowledge Wikipedia publishes, and they make sure you know it…)
'Network of berries' - Quinn Dombrowski, Flickr - some rights reserved
But the same erosion of self-organisation can be seen in the disproportionate growth of particular topics within social bookmarking systems (which are supposed to provide a self-organising and egalitarian way of organising information), or in those who have come to dominate the blogosphere or Twittersphere. Even a social networking behemoth like Facebook is, in itself, a quintessential mechanism of control and power: hundreds of millions of users subjected to Facebook's power and the control over personal data that it implies. So while some users may feel liberated within the Facebook ecosystem, aspects of their identity and, perhaps, their economic and political freedom have been relinquished. I'm not sure this is an issue Clay Shirky addressed satisfactorily in his recent monograph, so perhaps he and Curtis should arrange a chat.

Yet, it is incredible how pervasive the ecosystem metaphor has become. Discussing the new tech bubble on BBC News recently, Julia Meyer rationalised it as "ecosystem economics". Says Meyer:
"...very distinct "ecosystems" have emerged during the past half-decade […] Each of these camps are deeply social - there is a network at its core.
"Companies like LinkedIn and Groupon have significant and growing revenues. While these may not entirely support their valuations, they clearly point to the fact that business models plus their understanding of the network-orientation of all business is on the right track. For those of us who finance entrepreneurship in Europe, what this means is we're mostly going to help build "digital Davids" - companies who understand how to re-organise the economics to create robust and sustainable businesses where everybody wins - customers, retailers and ultimately of course, investors.
"So why are firms like Groupon worth billions? How can something as simple as organising a group discount be so powerful? Because ecosystem economics is at play."
Huh. Ecosystems? Networks? Sustainability where "everyone wins"? Re-organising networks? I smell something dodgy – and I'm not referring to the men's lavatory in the John Foster Building.

Monday, 29 March 2010

Students' information literacy: three events collide with cosmic significance...

Three random – but related – events collided last week, as if part of some cosmic information literacy solar system...

Firstly, I completed marking student submissions for Business Information Management (LBSIS1036). This is a level one module which introduces web technologies to students; but it is also a module which introduces information literacy skills. These skills are tested in an in-lab assessment in which students demonstrate their ability to critically evaluate information, ascertain provenance, IPR, and so on. To assist them, students are introduced to evaluation methodologies in the sessions preceding the assessment, which they can use to test the provenance of information sources found on the 'surface web'.

Students' performance in the assessment was patchy. Those students that invested a small amount of time studying found themselves with marks within the 2:1 to First range; but sadly most didn't invest the preparation time and found themselves in the doldrums, or failing altogether. What was most revealing about their performance was the fact that – despite several taught sessions outlining appropriate information evaluation methodologies – a large proportion of students informed me in their manuscripts that they had selected a resource not because it fulfilled particular aspects of their evaluation criteria, but because it featured in the top five results within Google and must therefore be reliable. Indeed, the evaluation criteria were dismissed by many students in favour of the perceived reliability of Google's PageRank to provide a resource which is accurate, authoritative, objective, current, and with appropriate coverage. Said one student in response to 'Please describe the evaluation criteria used to assess the provenance of the resource selected': "The reason I selected this resource is that it features within the top five results on Google and therefore is a trustworthy source of information".
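The irony, of course, is that PageRank (at least in its textbook form) says nothing whatsoever about accuracy, authority, objectivity, currency or coverage: it scores pages purely by the structure of links pointing at them. A toy sketch makes the point; the link graph and names below are entirely invented for illustration, and this is the simplified textbook algorithm rather than whatever Google actually runs.

```python
# Toy PageRank over an invented four-page link graph, to illustrate that
# the (textbook) algorithm ranks pages purely by who links to whom --
# nothing in the calculation measures whether a page is accurate.

links = {                      # hypothetical link graph, not real pages
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],                # D links out, but nobody links to D
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)   # split this page's rank across its links
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

if __name__ == "__main__":
    for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")
```

Page C tops the ranking simply because everything links to it; whether C is accurate, objective or current never enters the calculation, which is precisely why "it's in the top five on Google" is not an evaluation criterion.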

Aside from the fact that these students completely missed the point of the assessment and clearly didn't learn anything from Chris Taylor or me, it strikes fear in the heart of a man that these students will continue their academic studies (and perhaps their post-university life) without the most basic information literacy skills. It's a depressing thought if one dwells on it for long enough. On the positive side, only one student used Wikipedia... which leads me to the next cosmic event...

Last week I was doing my periodic 'catch up' on some recent research literature. This normally entails scanning my RSS feeds for recently published papers and flicking through the pages of recent issues of the Journal of the American Society for Information Science and Technology (JASIST). A paper published in JASIST at the tail end of 2009 caught my eye: 'How and Why Do College Students Use Wikipedia?' by Sook Lim which, compared to hyper-scientific paper titles such as 'A relation between h-index and impact factor in the power-law model' or 'Exploiting corpus-related ontologies for conceptualizing document corpora' (another interesting paper), sounds quite magazine-like. Lim investigated and analysed data on students' perceptions, uses of, and motivations for using Wikipedia in order to better understand student information-seeking behaviour. She employed frameworks from social cognitive theory and the 'uses and gratifications' literature. Her findings are too detailed to summarise here. Suffice to say, Lim found that many students use Wikipedia for academic purposes, but not in their formal academic work; rather, students used Wikipedia to check facts and figures quickly, or to glean quick background information so that they could better direct their studying. In fact, although students found Wikipedia useful for fact checking and the like, their perceptions of its information quality were not high at all. Students knew it to be a suspect source and were sceptical when using it.

After the A&E experience of marking the LBSIS1036 submissions, Lim's results were fantastic news and my spirits were lifted immediately. Students are more discerning than we give them credit for, I thought to myself. Fantastic! 'Information Armageddon' doesn't await Generation Y after all. Imagine my disappointment the following morning when I boarded a train to Liverpool Central and found myself seated next to four students. It was here that I would experience my third cosmic event. Gazing out of the train window as the sun rose over Bootle docks and the majesty of its containerisation, I couldn't help but listen to the students as they discussed an assignment which they had all completed and were on their way to submit. The discussion followed the usual format, e.g. "What did you write in your essay?", "How did you structure yours?", etc. It then emerged that all four of them had used Wikipedia as the principal source for their essay and that they had simply copied and pasted passages verbatim. In fact, one student remarked, "The lecturer might get suspicious if you copy it directly, so all I do is change the order of any bullet points and paragraphs. I change some of the words used too". (!!!!!!!!!!)

My hope would be that these students get caught cheating because, even without using Turnitin, catching students cheating with sources such as Wikipedia is easy peasy. But a bigger question remains: is information literacy instruction a futile pursuit? Will instant gratification always prevail?

Image: Polaroidmemories (Flickr), CreativeCommons Attribution-Non-Commercial-Share Alike 2.0 Generic

Friday, 30 January 2009

Wikipedia: the new Knol?

Like many people I use Wikipedia quite regularly to check random facts. The strange aspect of this behaviour is that once I find the relevant fact, I have to immediately verify its provenance by conducting subsequent searches in order to find corroborative sources. It makes one wonder why one would use it in the first place.

Wikipedia continues to be plagued by a series of high-profile erroneous edits. Many of these edits aren't necessarily malicious; they are just wrong or inaccurate. There are probably hundreds of thousands of inaccurate Wikipedia articles, and perhaps just as many hosting malicious edits; but it takes high-profile gaffes to effect real change. On the day of Barack Obama's inauguration, Wikipedia reported the deaths of West Virginia's Robert Byrd and of Edward Kennedy, who had collapsed during the inaugural lunch. Both reports were false.

This event appears to have compelled Jimmy Wales to be more proactive in improving the accuracy and reliability of Wikipedia. Under his proposals, many future changes to articles would need to be approved by a group of vetted editors before being published. For me this news is interesting, particularly as it emerges barely two weeks after Google announced that the 100,000th knol had been created on Knol, their 'authoritative and credible' answer to Wikipedia. Six or seven months ago the discussions focused on how Knol was the new Wikipedia; now it appears as if Wikipedia might become the new Knol. How bizarre is that?!

Tightening the editing rules of Wikipedia has been on the agenda before: in 2007 this blog discussed how the German Wikipedia was conducting experiments which saw only trusted Wikipedians verifying changes to articles. So, will tightening the editing of Wikipedia make it the new Knol? The short answer is 'no'. Some existing Wikipedia editors can already exert authoritarian control over particular articles and can – in some cases – give the impression that they too have an axe to grind on particular topics. 'Wikipinochets', anyone? Moreover, Knol benefits from its "moderated collaboration" approach, with knols being created by subject experts whose credentials have been verified. Wikipedia isn't going anywhere near this. Guardian columnist Marcel Berlins is probably right about Wikipedia when he states:
"I don't think there's a way of telling what proportion of Wikipedia entries are deficient, whether because of the writer's bias, mischief or lack of knowledge. It's clear that a significant number are questionable, sufficient to lead us to suspect all entries. But to do the right thing - vetting all contributors or contributions - would be impractical and hugely expensive. There is no easy solution. We many just have to accept that Wikipedia's undoubted usefulness comes at the price of occasional - perhaps frequent - inaccuracy. That is a sad conclusion to reach about an encyclopedia."
Oh well, back to verifying random facts found on Wikipedia.

Monday, 8 December 2008

Wikipedia censorship: allusions to 'Smell the Glove'?

Another Wikipedia controversy rages, this time over censorship. Over the past two days, the Internet Watch Foundation has informed some ISPs that an article pertaining to an album by the 'classic' German heavy metal band, Scorpions, may be illegal. Leaving aside the fact that Scorpions is one of many groups to have similar imagery on their record sleeves (the eponymous 1969 debut album by Blind Faith, Eric Clapton's supergroup, being another obvious example), am I the only person to notice the similarities with the fictional rockumentary, This Is Spinal Tap?

Like metal, censorship is a heavy topic; but I thought this tenuous linkage with This Is Spinal Tap might be a welcome distraction from the usual blog postings, which are necessarily academic. Those of you familiar with said film might recall the controversy surrounding the proposed (tasteless) artwork for Spinal Tap's new album (Smell the Glove), which in the end gets mothballed owing to its indecent nature. Getting into trouble over sleeve art is part and parcel of being in a heavy metal band, it would seem! Enjoy the winter break, people!

Thursday, 24 July 2008

Knol: Wikipedia, but not as we know it...

A while ago I posted a blog about how Wikipedia in Germany was experimenting with new editing rules in an attempt to stem the rising number of malicious edits. In essence, these new editing rules would impose greater editorial controls by only allowing trustworthy and hardened Wikipedians to effect changes. The success of this policy remains unknown (perhaps I'll investigate it further after posting this blog); but the general ethos was about improving information quality, authority and reliability.

While Wikipedia wrestle with their editorial demons, Google have officially launched Knol. According to the website, a knol is a "unit of knowledge", or more specifically, "an authoritative article about a specific topic". Each knol has an author who has exclusive ownership of the topic associated with them. An author can allow visitors to comment on knols, or suggest changes; however, unlike Wikipedia, the author cannot be overruled. This is what Google refers to as "moderated collaboration".

Says Google:
"With Knol, we are introducing a new method for authors to work together that we call 'moderated collaboration'. With this feature, any reader can make suggested edits to a knol which the author may then choose to accept, reject, or modify before these contributions become visible to the public. This allows authors to accept suggestions from everyone in the world while remaining in control of their content. After all, their name is associated with it!"
A knol is supposed to be an authoritative and credible article, and Google have therefore placed a strong emphasis on author credentials. This is apparent from the moment you visit Knol. Medical knols are written by bona fide doctors; DIY advice is provided by a genuine handyman – and their identities are verified.
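For the programmatically minded, the workflow Google describes can be modelled very simply: suggestions queue up against an article and nothing reaches the public text until the owning author accepts (or modifies) them. The sketch below is purely illustrative; the class and method names are my own invention and bear no relation to any actual Knol API.

```python
# A minimal sketch of a "moderated collaboration" workflow of the kind
# Google describes for Knol: readers suggest edits, but nothing becomes
# publicly visible until the owning author accepts (or modifies) it.
# All class and method names are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Suggestion:
    reader: str
    new_text: str

@dataclass
class Knol:
    author: str
    public_text: str
    pending: list = field(default_factory=list)

    def suggest(self, reader, new_text):
        """Any reader may propose a change; it is queued, not published."""
        self.pending.append(Suggestion(reader, new_text))

    def review(self, accept, modified_text=None):
        """Only the author decides: accept, modify before accepting, or reject."""
        suggestion = self.pending.pop(0)
        if accept:
            self.public_text = modified_text or suggestion.new_text

# Usage: the reader's edit only reaches the public text after author review.
article = Knol(author="dr_smith", public_text="Aspirin is an analgesic.")
article.suggest("reader42", "Aspirin is an analgesic and antipyretic.")
article.review(accept=True)
print(article.public_text)
```

Contrast this with the classic wiki model, where the reader's edit would go live immediately and review, if it happened at all, would come afterwards.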

Knol is clearly a direct challenge to the supremacy of Wikipedia; yet it jettisons many of the aspects that made Wikipedia popular in the first place. And it does this to maintain information integrity. Am I sorry about this? 'Yes' and 'no'. For me Knol represents a useful halfway house: a balance between networked collaboration and information integrity. Is this elitist? No - it's just common sense.

What do you think? Register your vote on the poll!

Wednesday, 24 October 2007

Wikipedia closing the doors?

Is the wiki ethos under threat from new rules at Wikipedia? I've been catching up on recent news and apparently the German version of Wikipedia will be implementing new editing rules which could improve the quality of editing on the online encyclopaedia.

As always, anyone will be able to make article edits, but it will take someone who has been around Wikipedia for an extended period of time, and who has garnered trust, to make those edits live on the public site. Jimmy Wales (founder of Wikipedia) announced that, if successful, the German pilot project could be rolled out across all Wikipedia language sites. This appears to be in response to a series of high-profile editorial gaffes, as well as malicious edits to articles about high-profile celebrities.

I'm generally in favour of Wikipedia tightening the editorial process; I've lost count of the editorial inaccuracies and defaced articles that I have encountered. But is this the beginning of the end for Wikipedia as we know it? Is it an acknowledgement that the public wiki can't scale?