Tuesday 30 March 2010

Social media and the organic farmer

The latest Food Programme was broadcast yesterday on BBC Radio 4 and made for some interesting listening. (Listen again at iPlayer.) In it Sheila Dillon visited the Food and Drink Expo 2010 at the Birmingham NEC and, rather than discussing the food, focused on the theme encapsulated in the programme slogan, 'To Tweet or not to Tweet', which was also the name of a panel debate at the Expo. Twitter was not the principal focus, though: the programme explored social media generally and its use by small farmers and food producers to communicate with customers.

There were some interesting discussions about the fact that online grocery sales grew by 15% last year (three times the growth of 'traditional' grocery sales) and about the role of the web and social media in the disintermediation of supermarkets as the principal means of getting 'artisan foods' to market. Some great success stories were discussed, such as Rude Health and the recently launched Virtual Farmers' Market. However, the familiar problem which the programme highlighted – and the problem which has motivated this blog posting – is the issue of measuring the impact and effectiveness of social media as a marketing tool. Most small businesses had little idea how effective their social media 'strategies' had been and, by the sounds of it, many are randomly tweeting, blogging and setting up Facebook groups in a vain attempt to gain market traction. One commentator from Kantar Media Intelligence (Philip Lynch, Director of Media Evaluation) spoke about tracking the "text footprints" left by users on the social web, which can then be quantified to determine the level of support for a particular product or supplier. He didn't say much more than that, probably because Kantar's techniques for measuring impact are a form of intellectual property. It sounds interesting though and I would be keen to see it in action.
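
Kantar's measurement techniques are proprietary, so this is pure speculation on my part, but the basic idea of quantifying 'text footprints' can be sketched crudely: count mentions of a brand and score each mention with tiny sentiment word lists. Everything below – the word lists, the posts and the brand name – is invented for illustration:

```python
# Naive 'text footprint' tally: count brand mentions in social posts
# and crudely score each mention as positive and/or negative using
# small illustrative sentiment word lists.
POSITIVE = {"love", "great", "tasty", "recommend"}
NEGATIVE = {"awful", "bland", "overpriced", "avoid"}

def footprint(posts, brand):
    """Return (mentions, positive, negative) counts for a brand."""
    mentions = positive = negative = 0
    for post in posts:
        words = set(post.lower().split())
        if brand.lower() not in words:
            continue
        mentions += 1
        if POSITIVE & words:
            positive += 1
        if NEGATIVE & words:
            negative += 1
    return mentions, positive, negative

posts = [
    "Love the new RudeHealth muesli, great stuff",
    "Found RudeHealth a bit overpriced to be honest",
    "Porridge for breakfast again",
]
print(footprint(posts, "RudeHealth"))  # → (2, 1, 1)
```

A real system would obviously need tokenisation, negation handling and far better sentiment models than this, which is presumably where the intellectual property comes in.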

But the whole reason for the 'To Tweet or not to Tweet' discussion in the first place was to explore the opportunities to be gleaned by 'artisan food' producers using social media. These are traditionally small businesses with few capital resources, for whom social media presents a free opportunity to reach potential customers. Yet the underlying (but barely articulated) theme of many discussions on the Food Programme was that serious investment is required for a social media strategy to be effective. The technology is free to use, but developing a suitable strategy demands staff time, and staff with the necessary communications and technical knowledge. On top of all this, small businesses want to be able to observe the impact of their investment on sales and market penetration. Thus, in the end, it requires outfits like Kantar to orchestrate a halfway effective social media strategy, to maintain it, and to measure it. Anything short of this will not necessarily help drive sales and may be wholly ineffective. (I can see how social media aficionado Keith Thompson arrived at a name for his blog – any thoughts on this stuff, Keith?) The question therefore presents itself: are food artisans, or any small business for that matter, being suckered by the false promise of free social media?

Of course, most of the above is predicated upon the assumption that people like me will continue to use social media such as Facebook; but if it continues to chip away at its privacy policy, as its blog suggested this week it will, I will be leaving social media altogether. 'Opting in' for a basic level of privacy should not be necessary.

Monday 29 March 2010

Students' information literacy: three events collide with cosmic significance...

Three random – but related – events collided last week, as if part of some cosmic information literacy solar system...

Firstly, I completed marking student submissions for Business Information Management (LBSIS1036). This is a level one module which introduces web technologies to students; but it is also a module which introduces information literacy skills. These skills are tested in an in-lab assessment in which students can demonstrate their ability to critically evaluate information, ascertain provenance, IPR, etc. To assist them the students are introduced to evaluation methodologies in the sessions preceding the assessment which they can use to test the provenance of information sources found on the 'surface web'.

Students' performance in the assessment was patchy. Those students who invested a small amount of time studying found themselves with marks in the 2:1 to First range; but sadly most didn't invest the preparation time and found themselves in the doldrums, or failing altogether. What was most revealing about their performance was that – despite several taught sessions outlining appropriate information evaluation methodologies – a large proportion of students informed me in their manuscripts that they had selected a resource not because it fulfilled particular aspects of their evaluation criteria, but because the resource featured in the top five results within Google and therefore must be reliable. Indeed, the evaluation criteria were dismissed by many students in favour of the perceived reliability of Google's PageRank to provide a resource which is accurate, authoritative, objective, current, and with appropriate coverage. Said one student in response to 'Please describe the evaluation criteria used to assess the provenance of the resource selected': "The reason I selected this resource is that it features within the top five results on Google and therefore is a trustworthy source of information".

Aside from the fact these students completely missed the point of the assessment and clearly didn't learn anything from Chris Taylor or me, it strikes fear in the heart of a man that these students will continue their academic studies (and perhaps their post-university life) without the most basic information literacy skills. It's a depressing thought if one dwells on it for long enough. On the positive side, only one student used Wikipedia...which leads me to the next cosmic event...

Last week I was doing my periodic 'catch up' on some recent research literature. This normally entails scanning my RSS feeds for recently published papers in the journals and flicking through the pages of the recent issues of the Journal of the American Society for Information Science and Technology (JASIST). A paper published in JASIST at the tail end of 2009 caught my eye: 'How and Why Do College Students Use Wikipedia?' by Sook Lim which, compared to hyper-scientific paper titles such as 'A relation between h-index and impact factor in the power-law model' or 'Exploiting corpus-related ontologies for conceptualizing document corpora' (another interesting paper), sounds quite magazine-like. Lim investigated and analysed data on students' perceptions of, uses of, and motivations for using Wikipedia in order to better understand student information-seeking behaviour. She employed frameworks from social cognitive theory and the 'uses and gratifications' literature. Her findings are too detailed to summarise here. Suffice to say, Lim found that many students use Wikipedia for academic purposes, but not in their academic work; rather, students used Wikipedia to check facts and figures quickly, or to glean quick background information so that they could better direct their studying. In fact, although students found Wikipedia useful for fact checking, their perceptions of its information quality were not high at all. Students knew it to be a suspect source and were sceptical when using it.

After the A&E experience of marking the LBSIS1036 submissions, Lim's results were fantastic news and my spirits were lifted immediately. Students are more discerning than we give them credit for, I thought to myself. Fantastic! 'Information Armageddon' doesn't await Generation Y after all. Imagine my disappointment the following morning when I boarded a train to Liverpool Central to find myself seated next to four students. It was here that I would experience my third cosmic event. Gazing out the train window as the sun was rising over Bootle docks and the majesty of its containerisation, I couldn't help but listen to the students as they were discussing an assignment which they had all completed and were on their journey to submit. The discussion followed the usual format, e.g. "What did you write in your essay?" "How did you structure yours?", etc. It then emerged that all four of them had used Wikipedia as the principal source for their essay and that they simply copied and pasted passages verbatim. In fact, one student remarked, "The lecturer might get suspicious if you copy it directly, so all I do is change the order of any bullet points and paragraphs. I change some of the words used too". (!!!!!!!!!!)

My hope would be that these students get caught cheating because, even without using Turnitin, catching students cheating with sources such as Wikipedia is easy peasy. But a bigger question remains: is information literacy instruction a futile pursuit? Will instant gratification always prevail?

Image: Polaroidmemories (Flickr), Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Generic

Friday 26 March 2010

Here cometh the pay wall: the countdown begins...

So the Times Online will be charging for news content from June 2010... News International is just one of several news organisations (some big, e.g. the NY Times, ABC, etc.) which plan to erect pay walls and develop subscription-based models imminently. Love him or loathe him (well, most people loathe him), you have to respect Rupert Murdoch's pit-bull instincts in leading a growing number of content providers to erect – or at least consider erecting – pay walls. Murdoch knows that most of the content industry wants to see the proliferation of pay walls. In fact, pay walls are their only saviour. Certain death awaits them otherwise. James Harding of The Times said as much in an interview today: "It [charging for Times Online access] is less of a risk than continuing to do what we are currently doing".

The trouble is that few yet have the gumption to do it. Murdoch, I suspect, is one of several who think that once there is a critical mass of high-profile content providers implementing pay walls, there will be a deluge of others. And I think he is probably correct in this assumption. After all, subscription can actually work. The FT and Wall Street Journal have run successful subscription models for years (although they admittedly provide an indispensable service to readers in the financial and business sectors). An additional benefit of pay wall proliferation will be the simultaneous decline of news aggregators (which Murdoch has been particularly vexed about recently) and 'citizen journalists', both of which have contributed to the ineffectiveness of advertising as a business model for online newspapers. The truth is that the future of good journalism depends on the success of these subscription-based business models; and their success also has implications for other content providers or Internet services experiencing similar problems, social networking services being a prime example.

If you take the time to peruse the page created by BBC News to collect user comments on this story, you will find a depressingly long slew of comments making it clear that most users (not all, it should be noted) have huge difficulty with subscription models or simply do not understand what the business problem is. Largely this is down to the fact that most ordinary people think:
  1. That content providers of all types, not just newspapers, are a bunch of rip-off merchants who are dissatisfied with their lot in the digital sphere;
  2. That content providers generate abnormal profits from advertising revenue and that their businesses are based on robust business models, and
  3. That free content, aggregation and 'citizen journalists' fulfil their news or content needs admirably and that high quality journalism is therefore superfluous.
My opinion, for what it is worth, is that most users simply don't recognise that businesses require business models, nor do they realise that many of the services they use on the web day in and day out are either unprofitable or are losing large amounts of money. It's always good to receive something for free; however, someone somewhere has to pay. This reality is inescapable. Traditionally that someone has been advertisers, partly because Google have been good at it; but what do you do when advertising doesn't bring home the bacon? Many users dismiss the implementation of pay walls by telling us that they will get their news from free sources or blogs. The reliance on free or citizen journalism is disappointing, but more than that it is dangerous for democracy. Such sources simply do not have the resources (financial or intellectual) to deliver high-quality, reliable news. They don't have the training, or the news connections, or the international correspondents, or the access to the information required; nor do they operate within recognised ethical boundaries or present facts and stories objectively and with appropriate sources or evidence. Often they are motivated more by communicating with like-minded readers than by accuracy, perpetuating gossip or untruths.

The real truth of course is that newspapers are losing tremendous amounts of money. It seems to be unfashionable to say it – even the great but struggling Guardian chokes on these words – but free can no longer continue. Newspapers across the world have restructured and reinvented themselves to cope with the digital world, but there is only so much rearranging of the Titanic's deck chairs that can occur. The bottom line is that advertising as a business model isn't a business model. (See this, this and this for previous musings on this blog.) Facebook is set to be 'cash flow positive' for the first time this financial year. No-one knows how much profit it will generate, although economic analysts suspect it will be small. What does this say about the viability of advertising as a revenue stream when a service with over 500 million users can barely cover its costs? But what are Facebook to do? Chris Taylor conducted an unscientific survey with some International MBA students last week, all of whom reported positively on their continued use of Facebook to connect with family and friends at home and within Liverpool Business School. The question was: would you be willing to pay £3 per annum to access Facebook? The response was unanimous: 'No'.

Sigh.

Thursday 25 March 2010

How much software is there in Liverpool and is it enough to keep me interested?

I am worrying about 'S', which I'll define here as the quantity of commercial software source code under maintenance, and 'delta S', the rate at which this figure is changing – and, by implication, what we should be doing to make 'S' and 'delta S' bigger. I'd rather the government worried about this than about subsidising super-fast YouTube to cottage dwellers.

In particular I am thinking about my home market in Liverpool, where I am hoping to continue to carve out some kind of career in my day job. If there is not enough 'S' to keep me going for another 25 years I'm going to end up bored, poor, or working at McDonald's.

'S' maintenance is only a relatively minor destination for our Business Information Systems students (still the students with the highest exit salary in the business school, I am told – please send yourself and your children to the BIS Course), but it is important to me.

Back in the day job we are working on developing LabCom, a business-to-business tracking system for chemical samples and their results. One of the things that appeals to me is that we are building a machine that makes things happen; I like to see how many samples are processed on it in a year. Sad, I know. We are delivering various new modules that will hopefully allow it all to grow. However, thus far the project is not really big enough to fund the level of technical development and architecture expertise we would like to deploy – not enough 'S' on its own to maintain and fund high-level development capabilities. The alternative for software developers such as my team is to engage in shorter-term development consultancy forays; but for these to be of sustained technical interest they have probably got to add up to 100K or more, and alas we have not worked out how to regularly corner such jobs.

With few software companies in Liverpool I wonder how this translates into the bigger picture and whether we can measure it.

There is definitely some 'S' in Liverpool. I did some work a couple of years back looking at software architecture with my friends at New Mind, who are a national leader in destination management services and have a big, crunching bit of software behind it. Angel Solutions are another company with a national footprint, this time in the education sector, backed by source code controlled in Liverpool. I have come across only two or three others over the years, although no doubt there are some hiding. For someone trying to make a career out of having the skills to understand and develop big software, this scarcity might be a bit of a problem. A bit like trying to get an advanced mountaineering certificate in Norfolk (see NewsBiscuit).

With this in mind I wondered: is there more or less 'S' under management in Liverpool than elsewhere? Is this really the Norfolk of mountaineering?

How can we know? There are some publicly available records; we could dig up the finances of software companies and the like, although most of these companies (including my own) are principally guns for hire, engaging in consultancy and development services.

Liverpool's software/new media industry has a number of great companies, such as the Liverpool Business School graduate-led Mando Group, Trinity Mirror-owned Ripple Effect, and the one big player, Strategic System Solutions. However, as I understand it, these are service delivery and consultancy companies, not software product development companies: they make their money through expertise, so they contain a relatively low proportion of 'S'. Probably the largest block of software under management sits in corporate IT departments; no doubt they hold a fair bit of 'S'.

So how could we weigh this, here in Liverpool or elsewhere? How much software is there? Lots of small companies such as my own derive part of their income from owned source code IP – 'S'. There are the few larger ones. So we could try to compile a list and determine what proportion of income is generated from this IP, based on public records and a little inside knowledge. We could perhaps measure the number of software developers deployed, or use the traditional measure of SLOC (Source Lines Of Code). In the past I've looked at variants on Mark II Function Point Analysis; you can find out a little about this on the United Kingdom Software Metrics Association website. In my commercial world I'm interested in estimating cost, hence toying with these methods while we ponder what we can get away with charging and whether it is more than our estimated (guessed) cost. In this regional context I'm interested in whether we can measure how much value is lurking, to give a figure for 'S'.
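
For the SLOC end of this, a throwaway counter is easy enough to write. Everything here is a simplification for illustration – the file extensions, the comment markers, and the assumption that comments are single-line:

```python
import os

# Crude SLOC counter: walk a source tree and count non-blank,
# non-comment lines in files with the given extensions.
# Block comments, strings spanning lines, etc. are ignored entirely,
# so treat the result as a rough order-of-magnitude figure only.
def count_sloc(root, extensions=(".py", ".cs", ".java"),
               comment_prefixes=("#", "//")):
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(extensions):
                continue
            with open(os.path.join(dirpath, name), errors="ignore") as f:
                for line in f:
                    stripped = line.strip()
                    if stripped and not stripped.startswith(comment_prefixes):
                        total += 1
    return total
```

Run over each company's repositories, something like this would give the raw numbers from which a 'Liver-S' total – and next year's 'delta Liver-S' – could be tallied. SLOC is a blunt instrument, of course, which is part of why Function Point methods exist.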

Imagine we did measure the quantity of commercial code under management in Liverpool – call it 'Liver-S'. I then want to know how this compares to Manchester's 'Manc-S' (I suspect unfavourably) and, perhaps more importantly from a career and commercial point of view, how it compares to last year. Is 'Liver-S' getting bigger? What is 'delta Liver-S'?
The importance of 'S' and 'delta S' comes down to whether we are sustaining enough work to maintain – or indeed develop – a capacity to 'do big software' in the local economy. Without that, to be honest, I'm going to get bored.

Any masters or MBA students stuck for a bit of an assignment – or, even better, funding bodies wanting to help me answer this question – please drop me a line.

If there is not enough 'Liver-S' in the future at least I'll be able to sit at home with my super fast broadband.