Archive for the ‘NTNU Library’ Category

BIBSYS modernization


[This is a draft, I’ll be revising it]

The stream of mail on the Biblioteknorge mailing list about BIBSYS’ modernization has been almost unstoppable — at least five mails 🙂 — and names like Knut Hegna, Hans Martin Fagerli, Kim Tallerås and Dagmar Langeggen have been connected to the topic.

What’s the fuss? Well, it seems that BIBSYS will continue developing its own library management system rather than buying off-the-shelf software. This contradicts the findings of “that there report” — you know, the one that recommended getting a new, off-the-shelf system. The report was full of wackiness, especially at points where it drew its information from the creative imagination rather than from commonly accepted reality, so it is easy to dismiss — but maybe it shouldn’t have been simply “ignored”.

My take on the whole thing: really very dull, I’m sorry.

Starting at the outside edge: the online public access catalogue (OPAC) is a relatively uninteresting concept today, and it becomes less interesting with each passing day. Here’s why:

  • Users aren’t finding their information there, they’re finding it on the web (report)
  • University libraries aren’t registering their information in an OPAC (at best they import a subset of the metadata for the e-journals and e-books they spend the majority of their budgets on)
  • The metadata a researcher needs is not registered in an OPAC, it is available only in research databases
  • Attempts at integrating/federating search in metadata for academic content have failed
  • Portals cost money, and this money is wasted when users don’t want or need to use them

The internals of a monolithic library management system are also past their sell-by date for the majority of academic institutions:

  • Packages and subscriptions increasingly account for the majority of spending
  • Metadata import for packages is typically limited
  • Acquisitions are increasingly expected to conform to norms applied in the rest of the institution
  • User systems are in place that register users’ role and access privileges

Given that the majority of spending goes on resources that are already findable by other means (the metadata we’re importing must come from somewhere, and Google is the preferred discovery tool), there is really very little need to register the majority of the objects we’re currently registering.

Academic institutions — especially publicly funded ones — have resource management systems that ensure every economic transaction is done by the book: things are put out to tender, and an economic overview is available to the various controllers. An economy module in a library management system is therefore not a good thing; it encourages practices that go outside normal routines, and it hinders the financial controllers in doing their job (you do not want or need more than one system for this kind of thing).

Another aspect of the library management system is loan data: a user-profile system provides the data needed about a borrower, and various objects are then attached to this. At a modern academic institution, integrating the institutional user-profile system with the library management system’s user-profile system is one big headache… so why do things that way around? Why not implement the necessary slots for library data in the existing institutional system? The framework for querying across several thousand records is already in place, so why not use it? There is time and money to be saved here too.
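To make the “slots in the institutional system” idea concrete, here is a minimal sketch of what attaching loan data to an existing institutional profile might look like. All field names and the `attachLoan` function are my own invention for illustration; the point is only that the library adds a small amount of data to a profile the institution already maintains, rather than keeping a separate borrower database.

```javascript
// Sketch: extending an existing institutional user profile with library
// loan data, instead of maintaining a parallel borrower database.
// Every field name here is invented for illustration.
function attachLoan(profile, item, dueDate) {
  // The institutional profile already knows who the user is and what
  // roles and privileges they have; the library only adds a loan list.
  if (!profile.libraryLoans) {
    profile.libraryLoans = [];
  }
  profile.libraryLoans.push({ itemId: item.id, title: item.title, due: dueDate });
  return profile;
}

var profile = { userId: "jane", department: "Physics", roles: ["staff"] };
attachLoan(profile, { id: "bibsys:12345", title: "Linked Data" }, "2009-07-01");
```

Queries like “which borrowers have overdue items” then run against the same record store the institution already queries for everything else.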

We’ve got rid of a few subsystems here, but we’re back to the sticky issues:

  • registration of items in a library’s collections
  • sharing of data

The simplest approach would be to require all third parties to supply Linked Data for their products (and yes, this is a realistic thing to ask), and then to register the remainder in a local Linked Data store: either as wholly unique metadata for items not registered anywhere else, or as links to items already registered in other Linked Data stores (including non-Norwegian sources). In this way, a massive web of data becomes available to the library’s users, with references not only to the local library’s holdings, but potentially to every item in any library that provides Linked Data.
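As a sketch of what “register locally, link to the rest” could mean in practice: a local record states only what is genuinely local, and links out (here via `owl:sameAs`) to a description that already exists elsewhere. The URIs below are invented for illustration; the vocabularies (Dublin Core terms, OWL) are real.

```javascript
// Sketch: registering a local item as Linked Data. Only genuinely local
// information is asserted; everything else is a link to an existing
// description in another store. All URIs below are invented examples.
function tripleN3(s, p, o) {
  // Serialize one triple in N-Triples form: URIs in angle brackets,
  // everything else as a quoted literal.
  var obj = o.indexOf("http") === 0 ? "<" + o + ">" : "\"" + o + "\"";
  return "<" + s + "> <" + p + "> " + obj + " .";
}

var item = "http://example.ntnu.no/item/42"; // hypothetical local URI
var triples = [
  tripleN3(item, "http://purl.org/dc/terms/title", "Semantic Marc"),
  // owl:sameAs links the local record to an existing external description
  tripleN3(item, "http://www.w3.org/2002/07/owl#sameAs",
           "http://example.org/remote/record/99")
];
```

Anyone dereferencing the local URI gets both the local statements and a pointer into the wider web of data.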

The OPAC can then be replaced either by whichever generic or domain-specific semantic browser a given user prefers, or by a user interface that wraps SPARQL queries and presents the data returned from the dereferenced URIs in the Linked Data. The latter could be created locally by the library’s IT staff, or by an enterprising librarian who knows a bit of JavaScript and can follow the instructions for a typical JavaScript library.
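A minimal sketch of the “wrapper for SPARQL queries plus a presentation format” idea — the kind of thing the enterprising librarian above might write. The endpoint, the choice of `dcterms:title`, and the function names are my own assumptions, not an existing system.

```javascript
// Sketch: a thin local wrapper around a SPARQL endpoint.
// Endpoint URL, property choices and function names are assumptions.
function titleQuery(keyword) {
  // Build a SPARQL query finding items whose dcterms:title matches a keyword.
  return [
    "PREFIX dcterms: <http://purl.org/dc/terms/>",
    "SELECT ?item ?title WHERE {",
    "  ?item dcterms:title ?title .",
    '  FILTER regex(?title, "' + keyword + '", "i")',
    "} LIMIT 25"
  ].join("\n");
}

function renderBindings(bindings) {
  // Turn SPARQL JSON result bindings into a simple HTML list for display.
  return "<ul>" + bindings.map(function (b) {
    return "<li><a href=\"" + b.item.value + "\">" + b.title.value + "</a></li>";
  }).join("") + "</ul>";
}

// In a browser, the query would be sent with something like
// jQuery.getJSON(endpoint + "?query=" + encodeURIComponent(titleQuery("marc")), ...)
// and renderBindings() applied to results.bindings in the response.
```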

BIBSYS can potentially be a provider of tools for a) creating Linked Data, and b) storing and retrieving it. Modules for lending and finance could be off-the-shelf software, supplied where deemed necessary.


Linked data [wikipedia]

Semantic Marc, MARC21 and the Semantic Web

BIBLIOTEKNORGE on the decision to modernize the BIBSYS library system


emtacl10: a website for the academic conference


I have updated the emtacl10: emerging technologies in academic libraries website; the changes add a lot of content (and some new dates!) to the information that was published previously.

For me, the interesting thing was combining a set of technologies:

  • Blueprint CSS
  • jQuery
  • eXtensible Metadata Platform (XMP)
  • RSS
  • AJAX

The really cool thing about these technologies is that they made everything quite easy: easy to create valid, accessible code, and easy to do it all quickly.

The “assets” list is created from metadata embedded in the files listed there; this, and the rest of the content, is updated using AJAX via the jQuery framework. Blueprint CSS is used for the layout.
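On the real site the records came from XMP metadata embedded in the files and were fetched with jQuery’s AJAX helpers; the sketch below shows only the rendering half of that pipeline, with the records inlined. The field names (`title`, `href`, `format`) and the function are my own illustration, not the site’s actual code.

```javascript
// Sketch: rendering an "assets" list from metadata records.
// Field names and structure are assumptions for illustration.
function renderAssets(assets) {
  return assets.map(function (a) {
    return "<li><a href=\"" + a.href + "\">" + a.title + "</a> (" + a.format + ")</li>";
  }).join("\n");
}

var html = renderAssets([
  { title: "Call for papers", href: "cfp.pdf", format: "PDF" },
  { title: "Programme", href: "programme.pdf", format: "PDF" }
]);
// In the page, something like jQuery("#assets ul").html(html) would
// inject the list after the metadata has been fetched via AJAX.
```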

Two days’ work, everything included! (And I really dislike creating webpages, but this verged on fun.)

Integrated search & news


I gave a presentation on what was termed “integrated search” and “news” at NTNU Library’s internal seminar last Thursday (2009-05-28); what I presented can be characterized as follows:

  • context-based services
  • RSS/Atom
  • Search

The last two aren’t really that interesting, but the first item really is.

I made the point that our users are dependent on profile-services (i.e. services where they have a username and password that links them to an account that contains some details about them). These profile services often contain information that is relevant to their research/study interests (remember that we’re talking about a university library here).

I created two mock-ups: one of the university intranet, where a member of staff or a student has their department listed as part of their profile data; and a second based on the learning management system in use at NTNU, Its:Learning. In the latter, it is possible to specify search sources, including recommended reading and resources linked directly to a programme of study.

Connecting the profile data on their interests with a set of resources/news feeds can hardly be described as difficult, so why is this not yet being done? To be honest, it is difficult for the library to get access to the necessary APIs to enable us to link our data to the data in these restricted, profile-based resources. We had extensive communication with Its:Learning, only to be told that they weren’t interested in working with us (they had other priorities). A shame really in this latter case, because we pretty much had everything ready from our side.

The idea of creating relevance by linking known research/study interests together with the library’s resources is not new, and in fact I have just taken receipt of the result of a customer-driven project by some students from NTNU’s computer science department. The students delivered a nice example of a feed aggregator that can also classify feeds according to “Norsk inndeling av vitenskapsdisipliner”, a Norwegian classification system for scientific disciplines. This kind of aggregator was designed to slip easily into a model where the data is fed into other systems, preferably using the APIs mentioned above. In fact, that was the whole point (the students did a good job, by the way :)).
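In the spirit of the students’ aggregator, here is a naive sketch of classifying feed items by discipline using keyword matching. The discipline labels gesture at “Norsk inndeling av vitenskapsdisipliner”, but the keyword lists and the matching approach are entirely my own invention; the students’ actual implementation was not this.

```javascript
// Sketch: a naive keyword classifier assigning feed-item titles to
// disciplines. Keyword lists are invented for illustration only.
var disciplines = {
  "Mathematics and natural science": ["physics", "chemistry", "algebra"],
  "Technology": ["engineering", "software", "materials"]
};

function classify(itemTitle) {
  var title = itemTitle.toLowerCase();
  var hits = [];
  for (var d in disciplines) {
    // An item may match several disciplines; collect them all.
    if (disciplines[d].some(function (kw) { return title.indexOf(kw) >= 0; })) {
      hits.push(d);
    }
  }
  return hits;
}
```

Classified items can then be routed to users whose profile data lists the matching discipline.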

It was pointed out to me that Google (Scholar) rules the roost when it comes to search, but I still reckon that providing quality information to academics based on relevance criteria will provide time-savings compared to the Google alternative. Google Scholar is a really good tool, but it just can’t beat the relevance of information in the small, commercial subject databases that the library provides its users. The problem is getting the users to look at these databases first — and it is here that the integration of search is a big issue.

The news approach provides links to the latest results from the databases and journals; it is really a supplement to searching, saving users time by giving them the newest research in their field. Google can’t really do this acceptably well yet (sort by latest is hit and miss at best), so we have a good reason to provide this kind of service.

iPhone, NTNU Library continued


As part of the last round of development, we continued adding content to the XML sources. We’ve supplied all of the details for each of the 11 branch libraries at NTNU Library.

A very dull, intensive job, but it is done now.

Take a look 🙂

Conference: emtacl10 — emerging technologies in academic libraries


I happen to know that the first announcement of this conference is just around the corner, so I thought that I’d give it a bit of press now: emtacl10: emerging technologies in academic libraries. [Update: dates 26-28 April 2010!]

In layman’s terms, this is a web 2.0 conference with a difference: it’s aimed fairly and squarely at the higher educational library sector. Sounds exciting? I hope so!

Head on over for more information.

iPhone — NTNU Library


We have pushed out a little iPhone web app for NTNU Library.

The web app can be viewed in any Safari browser, but is best viewed on an iPhone or iPod Touch.

The current prototype features a few non-functioning mockups, but it gives an impression of what we’re currently thinking. The project is in a review phase during the current development cycle, so any comments/feedback gratefully received! (Comment below!)

Is there anything special here? Well, in a way: the data you see is all gathered from a set of XML files that also form the basis for a few other related projects. More on these projects later.

Oh, and everything is available in both English and Norwegian (Bokmål), depending on whether Bokmål is the interface language on your iPhone/iPod.

The development track can go in many directions, and we’re looking at different ways of achieving the same results, including but not limited to XML/XSLT and a native iPhone app (as opposed to a web app).

Oslo, mashups and BIBSYS SRU


I’m in Oslo right now; I’ve been at EBSCO Open Day 2009 to talk about Web 2.0 & UBiT 2010, and today I’m at a workshop organized by the Pode project to talk about UBiT 2010, BIBSYS, openness and mashups.

I got a little inspired and hacked together a PHP class that works as an API for BIBSYS’ SRU service. It has a number of methods that simplify this kind of work 🙂
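The class itself was PHP, but the core of any SRU wrapper is building the searchRetrieve request URL, which looks the same in any language. Below is a JavaScript sketch of that step; the base URL is a placeholder and the parameter defaults are my own choices, following the SRU 1.1 request format.

```javascript
// Sketch: building an SRU searchRetrieve request URL (SRU 1.1 over
// HTTP GET). The base URL below is a placeholder, not BIBSYS' actual
// endpoint; maximumRecords is an arbitrary default.
function sruSearchUrl(base, query) {
  var params = {
    operation: "searchRetrieve",
    version: "1.1",
    query: query,          // a CQL query string
    maximumRecords: "10"
  };
  var pairs = [];
  for (var k in params) {
    pairs.push(k + "=" + encodeURIComponent(params[k]));
  }
  return base + "?" + pairs.join("&");
}

var url = sruSearchUrl("http://sru.example.org/bibsys", 'dc.title="linked data"');
```

A wrapper class then mostly adds convenience methods that build CQL queries and parse the XML response.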



FRIDA, or “Forskningsresultater, informasjon og dokumentasjon av vitenskapelige aktiviteter” (research results, information and documentation of scientific activities): is it decent, or poor … in other words, is it king, or does it BITE? You decide!

Seven great things about Zotero


Just a quick list (doncha love ’em?) about Zotero:

  1. It’s free and open source
  2. It does more or less everything other reference management tools do, as well or better
  3. It’s where I work (in the web browser)
  4. It works on my Linux machine, my Mac laptop and the Windows machine at the help desk
  5. I can work with the same references from multiple computers without having to think about it
  6. I can save webpages as is, and annotate them
  7. Getting references into my documents works, irrespective of whether I’m using LaTeX, Google Docs, OpenOffice, Microsoft Word…



I’m at a seminar today on BIBSYS Ask 2 (or II, if you prefer). Interesting.

The Ask 2 interface is fairly AJAX-heavy, and it is stitched together from several web services provided by BIBSYS.

The exciting thing here is not the interface of BIBSYS Ask 2 itself, which is fairly beta-ish at the moment, but the possibility of stitching together yourself the services BIBSYS uses to build Ask 2. This means that individual libraries can adapt the library catalogue to their own users’ needs.

Hooray (for BIBSYS)!