There’s a great blog post from last week by Kelly Lothbrook-Smith (@kayelesss on twitter), ‘In defence of UX’, that sums up just how difficult it can be to get UX and usability approaches embedded into digital development practices. She describes the role of the UX researcher as a messenger, getting users to convey their experiences with the product and then interpreting and sharing that as insight with product management. It’s a great description. It resonated with me for many reasons: for the sense that all too often you come up against the view that ‘we know what users want’, and for the tendency to treat usability (and particularly accessibility testing) as a process on a QA checklist that has to be ‘ticked off’. ‘We’ve done usability testing, or we’ve done our UX research at the start of the project, we’ve had our focus groups, we’ve commissioned someone to do this – now we’re going to build x.’ Even in an agile environment I still get a sense that usability is perceived as something that happens ‘after’.

And then this today from Michael Schofield, ‘The Mountain beyond the Molehill’, identifying the challenge of turning those insights into decisions and referencing the CMMI model for UX from Coral Sheldon-Hess. Even once you have managed to get the UX role embedded into practice, how do you make sure that it is sustained, and that the evidence that gets collected is used in decision-making and leads to service improvements? I’m not so sure that I entirely agree with the suggestion that “at best, failure to turn our investment in user experience design into practical return lowers the esteem of UX at work; at worst, it’s grounds to dissolve the practice entirely.”

Libraries typically collect vast amounts of user feedback. Surveys, comment cards, forums, whiteboards in libraries, polls and comments from help desk enquiries are just some of the sources. But we struggle to make much sense of it, as much of it just doesn’t give you the context to answer ‘what would make things better for the user?’ You rarely get actionable feedback – this thing is broken, please fix it. Much of it is opinion: it can give us a clue that there is something wrong with x feature, but no real idea of what is actually wrong, what the user was trying to do, what they think it should do, or where it fits in their workflow. Even with traditional usability testing you can observe what the user is doing on your website, but you’ve designed the task and shaped the experiment, and that limits what you can uncover. One of the attractions of many of the ethnographic UX techniques is that they lead you to a better understanding of your users and the challenges they face in using your product – for example cognitive mapping, directed storytelling or love-letters/break-up letters (see this example from Massachusetts Libraries). Embedding that approach into the team gives a much richer picture and I’d argue actually saves the organisation time by focusing your activity on the things that make the most difference to users.

So an example, from accessibility rather than UX, but I’d suggest that there are parallels. In our team we’ve managed to build up some accessibility expertise, as we recognised that we needed skills in that area, and we’ve also got some UX capability in the team. Where we started with accessibility was to audit sites and then plan a programme of work to fix the issues that were identified. But we’ve now started to try a different approach and embed accessibility expertise into the development sprints. It means that decisions about design can take accessibility perspectives into account at an early stage, leading to development that builds a more accessible feature from the outset. It’s an approach that saves time, as there are fewer things that might fail an accessibility test and have to be redone before going live, but it also gets a development team used to thinking about accessibility, so the options that get brought forward are more accessible.

There are parallels with UX practice and thinking I believe.

Many academic libraries have invested in web-scale discovery systems such as EBSCO Discovery or Primo (to name just two) and many will also have built lists of library resources for library users, whether in LibGuides or in other bespoke systems. Often these products will be combined with IP authentication systems (such as EZProxy) to connect directly to library resources. But for that approach to work, library users have to be on campus, logged into the campus network, or have found their way to (and through) the relevant library system that can give them the direct link to that resource. That approach essentially forces users through a library gateway, which seems to me to replicate the print-based concept of a library, where the user has to be physically present to make use of the resources. And it doesn’t really gel with the multi-device, network-scale, digital world that our users inhabit, with their need to access what they need from wherever they are.

If your users aren’t starting their search in the library, but are finding resources via Google or from references, how do they get access to the resource? We’ve seen often enough in almost any of our discovery system testing that what users want is to find the thing they need and straight away get a link to the PDF. How do libraries get closer to that? There is the federated access approach, where users log in at the point of access to the resource. But users can often struggle to notice the relevant login link on the publisher’s ‘paywall’ page and then have to tackle the ‘where are you from’ federated access management game. Feedback suggests that users are pretty frustrated even to see the paywall page asking for an amount to view the article, and don’t always realise that there might be a route to the article without paying. The publisher-led RA21 initiative is piloting improvements to this experience, with some proof-of-concept work looking at ways of making the experience better for users. It’s an approach that has raised some concerns, particularly around privacy implications.

For a while now there have been some other approaches. A number of libraries (including the OU) have offered tools (typically bookmarklets that plug into a browser) to help users find the right link by rewriting the publisher URL as an ‘ezproxied’ alternative. Such tools have had a small take-up and require some maintenance to cope with continued updates to browsers. Utrecht, one of the pioneers of alternative approaches, offers such a tool with their Get Access button. Arising from the Utrecht work, the LEAN Library Access browser extension has been developed as a commercial product and has already been taken up by Manchester and others. As well as connecting users to the ezproxied version of the resource, the browser extension also offers Library Assist, providing customised support messages tailored to different resources, and Library Alternatives, linking to open access versions. One of the advantages of the LEAN approach is that maintaining the tool to cope with browser changes doesn’t have to be done by the library.
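The core trick behind these bookmarklets is simple: take the publisher URL the user is looking at and prefix it with the library’s EZProxy login URL. As a rough illustration (in Python rather than the JavaScript a real bookmarklet would use, and with a made-up proxy hostname and article URL), the rewriting amounts to something like this:

```python
from urllib.parse import quote

# Hypothetical EZProxy prefix for an institution; the real value is library-specific.
EZPROXY_PREFIX = "https://ezproxy.example.ac.uk/login?url="

def ezproxify(publisher_url: str) -> str:
    """Rewrite a publisher URL into its 'ezproxied' equivalent."""
    # EZProxy's standard 'login?url=' form takes the target URL as a parameter.
    return EZPROXY_PREFIX + quote(publisher_url, safe=":/?&=")

print(ezproxify("https://www.sciencedirect.com/science/article/pii/S0000000000000000"))
# -> https://ezproxy.example.ac.uk/login?url=https://www.sciencedirect.com/...
```

The maintenance burden the post mentions comes from everything around this one-liner: keeping it working as browsers tighten what bookmarklets and extensions are allowed to do.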

Kopernio is another approach. It has been around in beta for a little while and is another browser extension. It offers integration with Google Scholar and PubMed and will, for example, add a PDF link into Google Scholar results. It also offers a place to store your PDFs, ‘MyLocker’. You can associate it with an institution, and once you log in it looks like it stores login details in the browser. Kopernio also searches for open access material, stating that it indexes ‘a range of additional data sources from which PDFs can be retrieved: Open access publishers, Institutional repositories, Pre-print servers, Google Scholar and your Kopernio search history’. It’s a freemium model, so there are limits on the free version (storage limits, for example) and there’s a premium version coming soon, aimed at both researchers and institutions. It has been developed by the original creators of Mendeley, so it comes from a different perspective to the library-derived approaches. It has certainly picked up on the researcher need for one-click access to PDFs, and it offers a Library Guides feature that gives a customised guide to using Kopernio for your institution. Kopernio seems to be available for Chrome at the moment.

It will be interesting to see what the take-up of these types of browser tools might be, particularly with there being two different models: LEAN targeting libraries to buy into a subscription, while Kopernio offers a freemium route to drive adoption. What I think is particularly fascinating is the way that open access content is embedded into these tools and therefore into the workflow of users. We are seeing it to an extent with discovery systems, in that they are adding more open access content into their knowledge bases, in some cases by harvesting open access aggregators such as CORE. With open access increasing in importance, it is good to see innovations appearing that pull open access and subscription material together.

It’s intriguing how long it takes for a concept to rise and fall, and how persistent some ideas are in the face of evidence that contradicts them. Digital natives, the idea (suggested by Marc Prensky) that younger people are intrinsically able to function effectively in a digital world by dint of being born at a time of digital abundance, has spread out from the academic world and now seems established in the minds of many, quoted for example in this article from the BBC, and in this piece from Goldman Sachs taking data from Pew Research. Yet within academic research this concept has been shown to be a myth. A new paper by Kirschner and De Bruyckere, ‘The myths of the digital native and the multitasker’ (abstract available at https://doi.org/10.1016/j.tate.2017.06.001), reviews much of the recent research and concludes that there isn’t evidence that younger people are digital natives. In their words:

“though learners in this generation have only experienced a digital connected world, they are not capable of dealing with modern technologies in the way which is often ascribed to them (i.e., that they can navigate that world for effective and efficient learning and knowledge construction).” (Kirschner & De Bruyckere 2017)

So Digital Natives – it’s not a thing.  It’s more complicated.

I wonder whether part of this might be a misunderstanding by non-academics when taking concepts from the academic world. The ‘scientific method’, where researchers create a hypothesis that they test and then refine or change as a result of testing, seems to confuse lay people into thinking that academics are always changing their minds, when it’s actually a process of enquiry where knowledge moves forward by theorising, testing and refining.

So it makes me wonder about typology, the process of categorising things into types. Another recent example suggested that there’s a linguistic way of distinguishing between Baby Boomers and Millennials by noting how they respond when someone says thank you. Baby Boomers (defined as people born 1946-1964) are likely to say ‘You’re welcome’, while Millennials (1982-2002) are likely to say ‘No problem’, and there’s the suggestion that giving the ‘wrong’ response could be seen as annoying. It interested me because I’m likely to respond with ‘No problem’ yet theoretically sit in the earlier category, though I’m conscious that I probably wouldn’t have used ‘no problem’ when I was younger.

Typology is particularly prevalent in work around personality types, and you see it most frequently in psychometric testing. Much like digital natives it has become quite pervasive and established, with tests like Myers-Briggs being regularly used. Yet psychology researchers have moved away from this approach in favour of thinking about personality traits, such as the Big Five. Although practitioners seem convinced of the value of these psychometric tests, the research pretty consistently casts doubt on their validity, describing them, alongside learning styles, as neuromyths (e.g. Dekker et al, ‘Neuromyths in education: Prevalence and predictors of misconceptions among teachers’, Frontiers in Psychology, 2012).

But it is fascinating how these theories get embedded and adopted and then become difficult to shake off once the academic world has moved on, having abandoned the theory because it doesn’t fit the evidence. The attractiveness of typology is also interesting. I can see the convenience factor at work in grouping people into types, and I see it in the tendency in web analytics towards ‘segmentation’ and in the use of personas in UX work to stand as a representation of a ‘user type’. But this all increasingly suggests to me that when you are looking at categorisation you are looking at something much more fluid, where users might move from category to category depending on numerous factors – what they are doing, maybe – and we’re using the categories as much as a use case to test how a product might work for that scenario.

Libraries have long been contemplating the implications of a shift from print to digital and underlying that thinking is the perception of print and digital being very much a binary choice.  But is that necessarily the case?   A research project into ‘next generation paper’ reported by the University of Surrey and the Open University envisages some form of hybrid between print and digital, where links or buttons exist within the physical paper to connect to digital materials.

The concept of interactive paper has been around for a while, as this article from the New Scientist from ten years ago shows. So does this type of technology fundamentally change the way libraries need to think about print? Does it provide print with a new purpose and greater longevity, combining the convenience of a portable format with a means to link directly to digital content? Is it anything better than a smarter QR code? Does it just replicate the inflexibility of printed material that can’t be updated with new links or changed with new information? Or could it be a route to maintaining the relevance of the printed document by linking to the latest information in digital form?

For libraries it potentially makes a stronger connection between print and digital content, with maybe a need to describe the relationship between the materials in a different way: they are related to each other and also depend on each other. An interesting development, and it will be interesting to see how and if that technology starts to appear in the mainstream.

 

The announcement yesterday of Oslo public library’s new open-source linked data discovery interface comes just a few days after JSTOR unveiled their new prototype text analyzer. JSTOR’s text analyzer takes a piece of text or an image and extracts meaning from it, before finding recommendations from JSTOR for resources that might be relevant. It’s a simple process and there’s a neat interface showing the results on one side, with an easy route directly to the PDF of the article. The left-hand side of the screen picks out the entities analysed from your text and gives a slider feature to let you adjust your results by giving more weight to some concepts than others.

I’ve not been able to find any detailed description of how it works, but it looks very much like some form of semantic search feature, with processes in place to analyse the concepts in the submitted text and match them against the index of concepts from the JSTOR database. In a lot of ways it isn’t dissimilar to the DiscOU tool we used in the Stellar project (and that development made use of semantic technologies, with entity recognition, semantic indexes and a triple store).
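I can only guess at the detail, but the general pattern of ‘extract weighted concepts from the submitted text, then rank documents against them’ can be sketched very crudely. This is a toy illustration of that pattern only – it has nothing to do with JSTOR’s actual implementation, and simple term matching stands in for real entity recognition:

```python
# Toy concept-matching sketch: weighted term overlap between a query text and an index.
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "in", "to", "for", "with"}

def extract_concepts(text):
    """Crude 'concept' extraction: term frequencies with stopwords removed."""
    words = [w.strip(".,;:()").lower() for w in text.split()]
    return Counter(w for w in words if w and w not in STOPWORDS)

def score(query_concepts, doc_concepts, weights=None):
    """Weighted overlap - the slider idea corresponds to adjusting these weights."""
    weights = weights or {}
    return sum(min(count, doc_concepts.get(term, 0)) * weights.get(term, 1.0)
               for term, count in query_concepts.items())

documents = {  # hypothetical mini-index
    "doc1": extract_concepts("Usability testing of library discovery systems"),
    "doc2": extract_concepts("Linked data approaches to library catalogues"),
}
query = extract_concepts("discovery and usability in academic libraries")
ranked = sorted(documents, key=lambda d: score(query, documents[d], {"usability": 2.0}),
                reverse=True)
print(ranked)  # doc1 ranks first once 'usability' is weighted up
```

A real system would sit entity recognition, a semantic index and a triple store behind the same basic shape, which is why it reminds me of the DiscOU work.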

Oslo’s system is a different approach, but again with linked data at the heart of the product. It’s open source and looks to be making use of Koha, the open source library management system, so essentially acts as an add-on to that product. It has much the same clean look and feel as some of the latest commercial discovery products, with facets on the left and the main results on the right. It will be interesting to see how it develops.

It’s particularly interesting to see this new development, and it’s a contrast to the approach from FOLIO, who are building their open source library management system but currently seem not to be aiming to include a discovery interface. It makes me wonder whether this development has potential as the discovery interface for FOLIO.

Interesting news this week that Elsevier have bought Plum Analytics from EBSCO. It seems to be part of a trend for the big content companies to expand their reach by acquiring other companies in associated fields. There’s a fascinating blog post from Roger Schonfeld of Ithaka, ‘The strategic investments of content providers’, that discusses what this might mean for the library sector and why these companies might be looking to diversify.

I’d probably reflect that library sector technology companies have a long history of mergers and acquisitions. A glance at Marshall Breeding’s chart on how companies have evolved over the years quickly shows that companies change ownership or merge with great regularity; it doesn’t seem to be an especially stable marketplace. Yet libraries typically keep library management systems for quite long periods of time (ten years doesn’t seem unusual) and often upgrade with the same vendor. Maybe that slow turnover of systems is related to the mergers and acquisitions, as parent companies realise that their investment in a library systems supplier doesn’t provide quite the level of return they wanted? But recently there has been a large number of systems procurements, particularly in academic libraries. A look at HElibtech’s procurements page shows a lot of recent activity.

With EBSCO’s involvement with the FOLIO open source product and Proquest’s acquisition of ExLibris, I wonder if that means Elsevier is looking for a suitable library systems or discovery product?  Or does the acquisition of Plum Analytics mean that they are looking more at the world of citation systems, altmetrics and bibliometrics?


I think it was the quiet concentration that made the first impression on me.  Going into a room where a group of library staff were busy carrying out a cognitive mapping exercise.  Everyone was engrossed in the activity, heads down, trying out a technique that was new to most of them.

This was part of a different type of staff meeting – with @mirya and @d_r_jenkins running a great session on UX and ethnography and introducing people to three different UX techniques: cognitive mapping, directed storytelling and love-letters/break-up letters. With only an hour to run the introduction and try out the techniques, time was short, but it was enough to give people a flavour of the power of these approaches.

It’s been a bit of a journey to get to this point.  About eighteen months ago we identified ethnographic techniques as being potentially immensely valuable and something we needed to know more about, experiment with and use as part of our UX practice.  The UXLibs conferences and the presentations and blogs about the topic got us up to speed enough to see the potential and to start to talk to people here about it.  Primarily we’ve been looking at the approaches from the perspective of how they can be used in our digital service development work around websites but the wider potential is clear.  The Futurelib initiative at Cambridge has been really useful to demonstrate the potential of the techniques.  So when the chance came to send some people to a UX day organised by a neighbouring institution with Andy Priestner (@andytraining) that was a great opportunity to spread knowledge about the techniques across the library.

We’re already using these techniques in online sessions with students looking at the future direction of our library websites as part of our digital services work. Our Research Support team are using them with research students in face-to-face sessions. And the session with library staff quickly brought up other areas where they could be used – in work with tutors, maybe.

It was great to see such engagement and enthusiasm with the approach, and really interesting to see the different maps that people drew in the cognitive mapping exercise. Given that we are a group of staff using a standard set of equipment and tools (PCs and iPads, for example), it was remarkable how much variation there was in the maps. That gives a lot of food for thought for the digital capabilities project that is just getting started.


The news, reported in an article by Marshall Breeding in American Libraries, that EBSCO has decided to support a new open source library services platform is a fascinating development.  To join with Kuali OLE but to develop what will essentially be a different open source product is a big development for the library technology sector.     It’s particularly interesting that EBSCO has gone the route of providing financial support to an open source system, rather than buying a library systems company.  The scope and timescales are ambitious, to have something ready for 2018.

Open source library management systems haven’t had the impact that systems like Moodle have had in the virtual learning environment sector, and in some ways it is odd that academic libraries haven’t been willing to adopt such a system, given that universities do seem to have an appetite for open source software. Maybe open source library systems haven’t been developed sufficiently to compete with commercial providers. Software as a Service (SaaS) is now coming to be accepted by corporate IT departments as a standard method of service provision, something that I think a couple of the commercial providers realised at quite an early stage, so it is good to see this initiative recognising that reality. It will be interesting to see how this develops.

Analytics seems to be a major theme at a lot of conferences at the moment. I’ve been following a couple of library sector conferences on twitter this week (Talis Insight http://go.talis.com/talis-insight-europe-2016-live #talisinsight and the 17th Distance Library Services Conference http://libguides.cmich.edu/dls2016 #dls16) and analytics has come up again and again.

A colleague at the DLS conference tweeted a picture about the impact of a particular piece of practice, and that set us off thinking: did we have that data? Did we have examples of where we’d done something similar? The good thing now is that rather than thinking ‘it would be good if we could do something like that’, we’ve a bit more confidence – if we get the examples and the data, we know we can do the analyses, but we also know we ‘should’ be doing the analyses as a matter of course.

It was also good to see other colleagues (@DrBartRienties) at the university presenting some of the University’s learning analytics work at Talis Insight. Being at a university that is undertaking a lot of academic work on learning analytics is really helpful when you’re trying to look at library analytics, and it also provides a valuable source of advice and guidance in some of our explorations.

[As an aside, and having spent much of my library career in public libraries, I’m not sure how much academic librarians realise the value of being able to talk to academics in universities, to hear their talks, discuss their research or get their advice.  In a lot of cases you’re able to talk with world-class researchers doing ground-breaking work and shaping the world around us.]

 

[Image: wooden chart tool created for a programme on data featuring Hans Rosling]

One of the great things about new projects is that they offer the opportunity to learn new skills as well as build on existing knowledge. So our new library data project is giving us plenty of opportunities to learn new things, including new tools to help with data extraction and data analysis.

MySQL Workbench
After a bit of experimentation with the best method of getting extracts of library data (including trying to do it through Access), we settled on using MySQL Workbench version 6.3 with read-only access to the database tables storing the library data. It’s been a bit of a learning curve to understand the tool, the SQL syntax and the structure of our data, but direct access to the data means that the team can extract the data needed and quickly test out different options or extracts. In the past I’ve mainly used tools such as Cognos or Oracle Business Intelligence, which essentially hide the raw SQL queries behind a WYSIWYG interface, so it’s been interesting to use this approach. It’s been really useful to be learning the tool with the project team, because it means that I can get SQL queries checked to make sure they are doing what I think they are doing, and share queries across the team.

In the main I’m running the SQL query, checking that I’ve got the data I want, and then exporting the data as .csv to do further tidying and cleaning in MS Excel. But I have learnt a few useful things, including how to add an anonymised ID as part of the query (useful if you don’t need the real ID but just need to know which users are unique, and much easier to do in SQL than in Excel).
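For anyone curious, the anonymisation trick is just to hash the user ID inside the SELECT so the raw identifier never leaves the database. Here’s a rough sketch of the sort of extract script that could sit alongside the Workbench queries – the table and column names are hypothetical and the connection details are placeholders:

```python
# A minimal sketch only: table/column names are hypothetical, and you'd supply your
# own read-only credentials. The point is hashing the user ID in the query itself
# so the extract never contains the real identifier.
import csv
import mysql.connector  # pip install mysql-connector-python

query = """
    SELECT MD5(CONCAT(l.user_id, 'local-salt')) AS anon_id,  -- anonymised, stable per user
           l.resource_id,
           DATE(l.access_time) AS access_date
    FROM   ezproxy_log l
    WHERE  l.access_time >= '2016-01-01'
"""

conn = mysql.connector.connect(host="db.example.ac.uk", user="readonly",
                               password="********", database="library")
cur = conn.cursor()
cur.execute(query)

with open("library_use_extract.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cur.description])  # header row
    writer.writerows(cur)                                 # data rows

conn.close()
```

Adding a salt to the hash keeps the anonymised ID consistent across extracts while making it harder to reverse.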

Excel
I’ve certainly learnt a lot more about Excel.  It’s been the tool that I’ve used to process the data extracts, to join data together from other sources and (for the time being at least) to present tables and visualisations of the data.  Filtering and pivot tables have been the main techniques, with frequent use of pivot tables to filter data and provide counts.  Features such as Excel 2013’s pivot table ‘distinct count’ have been useful.

One of the tasks I’ve been doing in Excel is joining two data sources together, e.g. joining counts of library use via EZProxy and via Athens, or joining library use with data on student results. I’d started out mainly using VLOOKUP in Excel but have switched (on the recommendation of a colleague) to using INDEX/MATCH, as it seems to work much better (if you can get the syntax exactly right).
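As a point of comparison (and a possible escape route if the Excel formulas get unwieldy), the same join and the pivot-table ‘distinct count’ can be done in a few lines of pandas. The file and column names below are made up for illustration:

```python
# For comparison: the Excel join and 'distinct count' expressed in pandas.
# File names and column names here are hypothetical.
import pandas as pd

ezproxy = pd.read_csv("ezproxy_use.csv")      # columns: anon_id, ezproxy_sessions
results = pd.read_csv("student_results.csv")  # columns: anon_id, module, result

# Equivalent of the INDEX/MATCH (or VLOOKUP) lookup: match rows on the shared key.
joined = results.merge(ezproxy, on="anon_id", how="left")

# Equivalent of a pivot table with 'distinct count': unique users per module.
distinct_users = joined.groupby("module")["anon_id"].nunique()
print(distinct_users)
```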

The project team is starting to think that, as we learn more about SQL, we should try to do more of the data manipulation and counts directly in the SQL queries, as doing them in Excel can be really time-consuming.

SPSS
SPSS has been a completely new tool to me. We’re using IBM SPSS Statistics version 21 to carry out the statistical analyses. Again it’s got a steep learning curve and I’m finding I need frequent recourse to some of the walk-throughs on sites such as Laerd Statistics, e.g. https://statistics.laerd.com/spss-tutorials/one-way-anova-using-spss-statistics.php. But I’m slowly getting to grips with it, and as I get more familiar with it I can start to see more of its value. Once you’ve got the data into the data table and organised properly it’s really quick to run correlation or variance tests, although that quickly starts to raise questions about which test to use and why, and what the results mean. I particularly like the output window that tracks all the actions and shows any charts you’ve created or analyses you’ve run on the data.
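Just to make the test concrete, here’s the same kind of one-way ANOVA run outside SPSS, using Python’s scipy with some made-up numbers – the point being the test itself rather than the tool:

```python
# One-way ANOVA with made-up data (e.g. library logins per student across three
# hypothetical faculties), for comparison with the SPSS walk-through.
from scipy import stats

faculty_a = [12, 15, 14, 10, 13]
faculty_b = [22, 25, 19, 24, 21]
faculty_c = [11, 9, 14, 12, 10]

f_stat, p_value = stats.f_oneway(faculty_a, faculty_b, faculty_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests at least one group mean differs; which group, and why,
# still needs post-hoc tests and careful interpretation.
```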

What’s next?
The team is in the early stages of exploring the SAS system that is used for our institutional data warehouse.  Ultimately we’d want to get library use data into the institutional data warehouse and then query it alongside other institutional data directly from the warehouse.  SAS apparently has statistical analysis capabilities but the learning curve seems to be fairly high.  We’ve also thought about whether tools such as Open Refine might be useful for cleaning up data but haven’t been able to explore that yet.  Similarly I know we have a need for tools to present and visualise the data findings – ultimately that might be met by an institutional SAS Visual Analytics tool.
