“Ask the users what they want!” seems to be the default response to the question of how library services can be improved. But it isn’t a good response: it’s a trite, unthinking non-answer that absolves the library practitioner of the responsibility of knowing their trade.
There is a lot of talk about what library users want, and much of it assumes a clearly defined scale with “asking library users what they want” at one extreme and “librarians knowing best” at the other. The result of this kind of thinking is that the library asks what users want and then acts on the results: “now we know the answers, let’s get on with it”. This is even worse than not asking at all. Acting on a questionnaire in which users are explicitly asked “what do you want?”, or on the comments from a user survey such as LibQual+®, yields only one answer: “I want the Moon on a stick”.
The flaw in this kind of approach lies in the emphasis: the responses reflect not what everyone needs, but what one particular user wants, and these opinions will be as varied as the number of respondents. Even when a user responds in a seemingly tangible way (“the library staff are not helpful and the website is difficult to navigate”), remember that this is seen through the eyes of an individual. That individual may ask “do you have a photocopying service?” and equate a negative answer with unhelpfulness, or find the website difficult to navigate because a broken computer display doesn’t show the colours of the links properly. (A real example: a student complained that the library OPAC was bad, but didn’t attend the courses offered; when I finally met the student, it turned out that they had not been using the OPAC at all, but a third-party interface based on the LMS’ Z39.50 interface.) Making service decisions by taking this kind of information at face value is not merely worthless; it is damaging.
Knee-jerk reactions to user dissatisfaction expressed in generalized questionnaires will always backfire, because useful feedback must answer a specific question, not questions of the kind “what do you think of the website?” The questions need to focus on specific aspects of the services the library provides. One of the major findings from LibQual+® surveys is a discrepancy between the expected level of service for holdings and the actual service. To my mind, this reflects the “me” aspect of respondents. Interpretation of survey results needs to be tempered by the understanding that they come from a multitude of users with differing needs, expectations and contexts. No individual user actually wants “the Moon on a stick”, but that is the most obvious interpretation when reading generalized feedback.
When a commercial enterprise asks what its users want, it asks about a specific product, and elicits responses about specific aspects of that product’s functionality. A major rethink of the product and its viability may be the result. No one approaches a potential market without some idea of what their service entails; no one, that is, except libraries.
The next time you hear someone say that we should pay more attention to what users want, ask yourself the following questions:
- why do we want feedback?
- what do we want feedback on?
- will the feedback be usable?
If the answers to these questions happen to be along the lines of “we want to know if users like eBooks”, “eBooks” and “yes, of course”, then it’s the same old story. The answers should instead resemble “we want to know what we can do to make service X work better”, “the eBook service we provide” and “hopefully, but we need continuous feedback to make sure that we’re doing the right thing”.
The final point here is that feedback should be a strategic concern for libraries, not a one-off or occasional hit-and-miss affair. Nor should strategic planning of this kind be left to individuals; it is the responsibility of management to ensure that projects, strategic areas and goals are followed up systematically through targeted user feedback. This feedback should also take different forms, and should preferably be interpreted and re-interpreted in light of new data.
Website traffic analysis is a good example: to get anything out of the statistics, you need to know what you want to measure. Take the question “do people know how to find the OPAC?”; to answer it, a particular kind of report can be generated. But the goals need to be identified before the reports are generated — knowing what goals and success indicators you have ensures that you know what to measure and how to change your service in order to achieve those goals. Typically, statistics are “gathered” and then dropped as raw data, often as graphs, into the laps of the various parties at the library. The problem with this approach is that, while it’s nice to know which pages are most visited, it is difficult to read any patterns or derive meaningful goals from data presented in this way.
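To make the contrast concrete, here is a minimal sketch of what a goal-driven report might look like, as opposed to a raw page-view graph. Everything in it is an assumption for illustration: the simplified “referrer -> target” log format, the /opac path, and the page names are invented, not any real library’s setup. The point is only that the code starts from a question (“which pages do people click through to the OPAC from?”) rather than from an undifferentiated pile of statistics.

```python
from collections import Counter

# Hypothetical, simplified click log: "page the user was on -> page they went to".
# A real access log would need parsing of referrer fields instead.
SAMPLE_LOG = [
    "/ -> /opac/search",
    "/ -> /opening-hours",
    "/help -> /opac/search",
    "/ -> /opac/search",
    "/news -> /contact",
]

def opac_entry_points(log_lines):
    """Count which pages users were on when they clicked through to the OPAC.

    Answers a specific question ("do people find the OPAC, and from where?")
    instead of reporting overall page popularity.
    """
    referrers = Counter()
    for line in log_lines:
        referrer, _, target = line.partition(" -> ")
        if target.startswith("/opac"):
            referrers[referrer] += 1
    return referrers

counts = opac_entry_points(SAMPLE_LOG)
print(counts.most_common())
```

If most OPAC visits turn out to come from the front page and almost none from the help pages, that is a finding you can act on; a graph of total page views for the same period would not have told you this.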
In the end, what the library should strive towards is “the Moon on a stick, with reservations”: providing everything the user wants, but within a framework that is feasible. One part of this is ensuring that expectations of the level of service do not outstrip the perceived level of service; clear terms of service are a good start. If a library cannot support large volumes of acquisitions, it should not attempt to, but should instead focus on providing a better ILL service and making that service more available to users.
When we’ve achieved these things, we can start asking the real question: what do users need?
Please note: registered trademarks presented in this text are used for informational purposes only and represent neither endorsement nor recommendation of these products.