First things first: this year’s Internet Librarian International was a much better affair than last year’s effort…in fact, it was actually worthwhile being there (sorry if this is negative, but I feel it needed saying).
OK, report time:
Cory Doctorow spoke about how copyright is not working on the Internet: large corporations push for ever stronger legislation in this area, and this leads to nothing but the criminalization of behaviour that growing numbers of the general public consider perfectly acceptable. He used the analogy of the early telephone network – a closed network of telephones of a single make that produced good sound quality and a reliable service – which people abandoned for cheaper, lower-quality alternatives because those made it possible to ring more people. Similarly, the popular services on the Internet are not the high-quality channels pushed by the corporate media companies, but lower-resolution, high-quantity channels such as YouTube. There is no way to compete in this market with pricing schemes adopted from the offline world.
Doctorow’s commentary on the facts of international copyright, and how these complicate delivery via channels such as the Internet, was rather saddening; the fact that search engines showing previews of content in search results are actually illegal in many countries demonstrates how out of touch some copyright law really is. At the same time, it has never been easier to break copyright law: sharing technologies make it possible to knock off a few thousand copies of a film in an afternoon. Is there any reason to continue using legislation designed for an era when copying was difficult?
Punishment for “suspected” infringement of copyright law includes cutting off internet connections, which according to Doctorow is an abuse of human rights in a digital age; he’s probably right here too, given that I would struggle without the Net.
Doctorow argued that since copyright law is generally created not in an open process but in a closed process controlled largely by industrial interests, it is important to lobby for change in the way such laws are created. How engaged people are with the way copyright law affects them was demonstrated by two Canadian politicians who lost their seats after misjudging the electorate’s disaffection with the way copyright law was being handled.
It was pointed out that the way industry has tried to combat copying culture has largely been based on encrypting content and selling it with a built-in decryption key, which is largely going to fail because the key and the encrypted content are provided together – making it not exactly difficult to decrypt said content. An interesting thought on the value of copying culture is that the industrial revolution was driven by copying – creating many cheap copies on looms, and creating copies of other people’s equipment.
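The structural flaw Doctorow describes can be sketched in a few lines. This toy example is my own illustration, not anything from the talk; it uses a trivial XOR cipher rather than real DRM, but the shape of the problem is identical: the player must hold the key to play the content, so anyone with the player effectively holds the key too.

```python
# Toy illustration of the DRM flaw: the "protected" file must ship
# with (or embed) the key so the player can decrypt it -- which means
# anyone who receives the file can decrypt it as well.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Trivial symmetric cipher: XOR each byte with the repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

content = b"Some copyrighted film data"
key = b"secret"

# The vendor encrypts the content...
protected = xor_cipher(content, key)

# ...but the player needs the key, so it is distributed alongside.
shipped = {"ciphertext": protected, "player_key": key}

# Any recipient can therefore recover the plaintext in full.
recovered = xor_cipher(shipped["ciphertext"], shipped["player_key"])
assert recovered == content
```

Real DRM schemes obfuscate where the key lives, but they cannot escape this structure, which is why every such scheme has eventually been broken.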
Regarding ebooks, Doctorow contends that part of what makes books so dear to people is the experience of owning them; ebooks have largely removed this feeling, leaving people with no affection for the medium. He points out that users do not own the content – in essence they lease it (note the Amazon book recall – this would never have happened with printed books).
A final, really relevant example of copyright silliness that was outlined is the obsession of UK universities with patenting research – a Thatcher invention. This intellectual property protection (IPP) means that for every £1 IPP brings in, universities pay £19 for the use of other universities’ intellectual property. This kind of rot has to be stopped – for many years, publicly funded research was not subject to patent protection because it was deemed to be in the public domain, for the benefit of society. Because the political attitude prevailing at the time assumed that educational institutions should be self-funding, it was deemed necessary to bring in money in this way – without realizing that with income come outgoings to other institutions doing the same. I wonder if this has any current parallels in Norway?
Doctorow concluded by encouraging librarians to work for better copyright law that has been produced in an open setting, and is adjusted for current contexts.
Tony Hirst provided a short talk on invisible services that give access to library information; these are the kind of thing that we used to develop at NTNU Library about two–three years ago. He is of the opinion that the library should stop buying books, and that members of faculty staff should use Library Thing to circulate the books in their offices instead.
Peter Bryant talked about next-generation library catalogues; he is of the opinion that library catalogues give users no clue about the quality of the information contained therein, because cataloguers lack the subject-related competence to provide such judgements. He talked about creating authentic knowledge about content in a library catalogue using Linked Data, and pointed out that citation counts in ISI and Google Scholar are supposed to provide information about academic quality, yet they do not – in fact they provide little help. The library catalogue needs to deliver authentic knowledge about subjects, typically by providing links to other sources of content and metadata.
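The idea of enriching a catalogue record with links to other sources can be sketched as a handful of subject–predicate–object triples. Everything below – the record URI, the authority identifiers, the external URIs – is invented for illustration; a real implementation would use an RDF library and the catalogue’s actual identifiers.

```python
# Minimal sketch of a Linked Data view of one catalogue record.
# All URIs and identifiers below are invented for illustration.

record = "http://catalogue.example.org/record/12345"

triples = [
    (record, "dc:title", "Some Book Title"),
    # Author resolved against a (hypothetical) authority file URI:
    (record, "dc:creator", "http://viaf.example.org/authority/678"),
    # Links out to other sources of content and metadata:
    (record, "owl:sameAs", "http://dbpedia.example.org/resource/Some_Book"),
    (record, "dc:subject", "http://subjects.example.org/heading/42"),
]

# A consumer can follow any object URI to fetch richer, external
# metadata -- this is the "links to other sources" Bryant describes.
external_links = [o for (_, _, o) in triples if o.startswith("http")]
print(external_links)
```

The point of the sketch is only that the links, not the local record, carry the “authentic knowledge”: quality signals live at the other end of the URIs.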
AK Sandberg told us about Pode, a Norwegian project. They want to create a library catalogue with a better user experience by using mashups based on open standards such as Z39.50, SRU, MARC, etc. They document everything and provide source code under an open license. (I have been involved in this project, so I can say that they are doing a lot of interesting work that runs in a different but related vein to that taking place on UBiT2010.)
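As a rough illustration of how little plumbing such mashups need: an SRU search is just an HTTP GET carrying a CQL query. The sketch below only builds such a request URL – the endpoint address is a made-up placeholder, not Pode’s actual service, and a real client would fetch and parse the returned MARCXML.

```python
from urllib.parse import urlencode

# Hypothetical SRU endpoint -- replace with a real catalogue's base URL.
SRU_BASE = "https://catalogue.example.org/sru"

def sru_search_url(query: str, max_records: int = 10) -> str:
    """Build an SRU searchRetrieve request URL for a CQL query."""
    params = {
        "version": "1.1",
        "operation": "searchRetrieve",
        "query": query,             # CQL, e.g. 'dc.title = "linked data"'
        "maximumRecords": max_records,
        "recordSchema": "marcxml",  # ask for MARCXML records back
    }
    return SRU_BASE + "?" + urlencode(params)

url = sru_search_url('dc.creator = "Doctorow"')
print(url)
```

Because the whole exchange is plain HTTP plus XML, results from several catalogues can be merged client-side – which is essentially what a catalogue mashup is.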
Behrens/Larsen provided an introduction to the work that has been taking place in Denmark on Summa, which is a system for “integrated search”. The system aggregates metadata from different sources and creates a list of hits. The hits are ranked and presented in a special interface. User testing revealed that the ranking was not good enough, and that users simply did not use the facets that had been provided as a way of narrowing the search. They had a number of ideas about how to work with the interface and improve the ranking, but found that changing the placement of facets had not improved their usage statistics.
Alan Oliver from Ex Libris told us about bX – a recommendation service based on clickthroughs in SFX – and was hacked to pieces by Peter Murray-Rust because of Ex Libris’ use of the word “open” in its marketing. Murray-Rust contended – correctly to my mind – that Ex Libris’ conception of open does not match up to the rest of the world’s conception.
Brian Kelly: “standards are like sausages”. This presentation was an interesting look at standards, open standards, and good things that are neither open nor standards. Standards were originally seen as a way of ensuring interoperability and accessibility and of avoiding vendor lock-in. Open standards such as OOXML are bad because they are in essence bound to one supplier, conflict with other standards, and are in truth ODF in an uglier wrapper (my prejudices might be coming through here too :D). Skype is a good non-open non-standard, as is Google. Brian said that the 00s are characterized by an understanding that standards need to be applied sensibly, with that all-important contact with reality – cf. the fact that the W3C’s main-page CSS does not validate properly because they want it to look right in all browsers. Standards should be written so human beings can understand them! Peter Murray-Rust: standards should be about rough consensus and running code.
In the second-day keynote, Peter Murray-Rust presented a set of challenges for libraries in the 21st century. I can really see a few ways of working the library into a key role at the university if we take up some of these challenges – especially as regards knocking the wind out of academic publishers: they are powerful, and power corrupts (Powerpoint corrupts absolutely).
I presented at a session on mobile libraries, and then had a long discussion with Patrick Danowski, who presented at the same session. Following this, I attended an unconference session on the Semantic Web. This session was extremely interesting, but it is somewhat difficult to relate, other than by stating that Linked Data is something that we should definitely be doing (but we know that already, yeah?)
All in all, this year’s ILI provided some food for thought, but I fear that the sessions that were most interesting for me are those that I have not really reported, such as the Semantic Web unconference and Peter Murray-Rust’s keynote. The reasons why these were so interesting were slightly different: Murray-Rust was inspirational, something that is nice to have now and again — it reminds you why you accept lower pay than you get offered by the commercial sector (oh, yes, that day is coming). The session on the Semantic Web reminded me personally why I work at NTNU Library: to be at the absolute cutting edge of academic library technology. I get the impression that we don’t generally understand this, and I get the impression that our web pages will never reflect this…as I said, that day is coming.