Friday, December 18, 2009

JCU is being Summon'd

It's official, we've signed up for Summon and I'm really looking forward to the end product, if not the work that has to be done to get there.

I'm quite proud that a smallish regional university library like ours has seen the potential of the product even in its early days, and this is one of those rare times where the benefits have made early adoption the right choice.

I found this small blog post today from Jonathan Miller, the director of Olin Library, and it filled my heart with good cheer heading into the festive season, and I quote:

"Half way through the meeting a professor says, "By the way, that R-Search is great. My students love it. I love it!"

This doesn't often happen with other library services ...."

Friday, October 16, 2009

Discovery layers article

From the Chronicle of Higher Education, 'After Losing Users in Catalogs, Libraries Find Better Search Software' is an overview of 'the problem with catalogues' - and Summon eventually gets a mention.

Great quote from the article:

"a graduate student specializing in early American history, once had such a hard time finding materials that she titled a bibliography "Meager Fruits of an Ongoing Fight With Virgo [the U of Virginia's catalogue]."

Monday, October 5, 2009

Summon to replace X Search?

First up, apologies for the lack of posts lately - I've been busy reading a book about improving your blog.

Because I have more formal ways of telling you what's happening, I'm going to concentrate on using this blog to scan the horizon for those who want to see what might be coming over the hill, and on things that are making me ponder, in a brave attempt to get you to ponder with me too.

Currently I'm looking at Summon, the latest offering from Serials Solutions. I urge you to look at it on the early adopter sites and tell me if it excites you as much as it excites me:
Basically, Summon is a search engine and metadata harvester. Serials Solutions call it a 'web scale discovery tool', and in library tech talk it fits in the spectrum of 'Resource Discovery Layers'. Sluicing off the jargon, it's like having a Google that searches your library collections. All your collections: Catalogue, institutional repository (eprints) AND your subscription econtent.

It is not a federated search engine like X Search (360 Search). It does not translate your search into other formats and submit them to lots of search engines, then wait for all the results to come back, process them and display them to you. Because it has its own index it's quick (almost Google quick). You search it like Google too: one search box, and the same syntax for phrases, ANDs and NOTs.
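For the technically curious, here's a toy Python sketch of the difference. It is nothing like the real internals of Summon or 360 Search - the databases, records and latency below are made up - but it shows why one lookup against a pre-built local index beats fanning a query out to many remote databases and waiting for the slowest one to reply:

```python
import time

# Hypothetical pre-harvested index: term -> records, built ahead of time
# by harvesting metadata from the catalogue, repository and e-content.
harvested_index = {
    "coral": [
        "Catalogue: Coral reefs of the GBR",
        "ePrints: Coral bleaching thesis",
        "Article: Coral larval dispersal",
    ],
}

def single_index_search(term):
    """One lookup against the local index - no waiting on remote databases."""
    return harvested_index.get(term.lower(), [])

def federated_search(term, remote_databases):
    """Submit the query to each remote database in turn and wait for every
    response to come back before the results can be merged and displayed."""
    results = []
    for db in remote_databases:
        time.sleep(0.5)  # stand-in for network latency and the slowest connector
        results.extend(db.get(term.lower(), []))
    return results

remote_databases = [
    {"coral": ["Proquest: Coral reef management"]},
    {"coral": ["Expanded Academic: Coral symbiosis"]},
]

print(single_index_search("coral"))                 # effectively instant
print(federated_search("coral", remote_databases))  # pays the latency for every source
```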

All that is cool enough but don't forget what it's searching. Everything* you've got. Regardless of format or location. In one search. No more 'why can't I search for articles in the catalogue?' or 'what's the difference between Proquest 5000 and Expanded Academic Index?' No more scrolling through hundreds of databases with meaningless titles (ASABE Technical Library, Project Muse) wondering which one will return anything.

And that's only point one. I will circulate my proposal/information document to the Library Management Committee and through that forum to anyone interested. Please try it out on at least one of the example sites (you can do everything except view subscription content without logging in - yet another cool addition to the list of reasons why Summon is grabbing my attention).

Watch the promotional video


*OK, not quite everything: to search subscription eresources, SS negotiates licensing agreements with publishers, and while they have hundreds of these agreements with major publishers, and the number is growing, there are some titles not yet included.

Thursday, June 25, 2009

Jakob Nielsen says "Stop Password Masking"

Usability advocate Jakob Nielsen's latest Alertbox recommends that you "Stop Password Masking" when creating systems that require passwords.

The Summary:

Usability suffers when users type in passwords and the only feedback they get is a row of bullets. Typically, masking passwords doesn't even increase security, but it does cost you business due to login failures.

Jakob's stuff is always food for thought, and it's hard to argue with his points about feedback, but in an education or library environment I can't imagine it would be a good thing. Simple situations, like showing a web application in a training room, would be like broadcasting your login details without some fancy data projector footwork, especially in our environment, where we are the antithesis of single sign-on.

He suggests that for situations like the one above there be a checkbox offering to mask the password, which I think would decrease usability with clutter.

His call to abandon legacy design is, to me, a case of carefully chosen words to slant meaning. If he had said 'abandon convention' it would have been much less convincing. Masked passwords aren't just a web thing: ATMs and EFTPOS terminals use them, as have computers since before ARPANET.

We do have students who get their passwords wrong, and with increasingly stringent rules about passwords containing upper and lower case, punctuation and numeric characters, this isn't going to get better unless we change our whole approach to authentication.

One option is card readers with PINs. Many government departments use this method (one I know of allows staff to travel to any other office in the country, place their card in the reader of any computer, enter their PIN, and get their Windows profile AND their phone number).

If banks (and customers) think the Card/PIN method is secure enough for financial transactions that suggests it's secure enough for our needs.

Friday, February 13, 2009

Horizon unConference part 2

Day 3, and we start with an extremely interesting chat with John Dalton who, though based in Hobart, is a DBA for LibraryThing. He talked a little about LibraryThing 4 Libraries, but stressed he wasn't a salesman, and a lot about user tags and the power of folksonomies - and coffee table books.

Next, DA's GM talked about the future of publishing from his perspective - probably the highlight was the video of the Espresso Book Machine in action. Yet again I got a sense that an industry intertwined with tertiary education was on the brink of major change. Publishers are getting closer to accepting the primacy of E over P, and considering ideas like selling micro content (e.g. a chapter out of a textbook).

The next ILS vendor was Civica, showing Spydus. I got all nostalgic, having administered URICA back in the mid 90s. The product has come a long way - in fact it's not recognisable as related to URICA at all, which was written in Pick and sat on a UniData DBMS on a proprietary UNIX box (how did I get so old?). Spydus is firmly in the Windows camp, sitting on SQL Server, with client modules that act very much in the manner of the Microsoft Office suite. Civica was the only Australian provider.

We then had Innovative Interfaces present, tag-team, their core product and a couple of add-ons. They worked hard to dispel the 'black box' myths about Millennium, saying that the data is fully accessible through web services. The ability to access functions associated with other modules without having to leave the module you were in was very appealing (e.g. creating a temp bib record while in circulation). Another high point was the fuzzy search that, when queried for 'Harry Potter and the Magician's Rock', asked if we meant 'Harry Potter and the Sorcerer's Stone'.

The day wrapped up with a group session led by Anne Scott of the University of Canterbury. We got a sense of who was happy to sit with Horizon and tweak HIP, and who was looking at a discovery layer over Horizon (which would make a move from Horizon transparent to searchers). We agreed to take HES up on their offer of groupware to give us a space to continue the conversation.

We discussed the possibility of collective bargaining, and while it makes sense to combine our economic 'might' there is little chance we would have identical aims for our resource discovery systems.

A draining but thought-provoking three days, well organised, and a much-needed crash course in the major ILS options open to us. I look forward to talking more about the things it's making me ponder.

Thursday, February 12, 2009

Horizon Unconference: where to from here

I'm at the end of day 2 of a three-day 'unconference' at the University of Tasmania in chilly Hobart, where a group of Horizon-using libraries have sent representatives to talk about what the next ILS will be, now that it's clear SirsiDynix will not reverse their decision to euthanise Horizon 8.

About 26 libraries are here. The University of Tasmania are the organisers and drivers. They had three staff in the USA last year conducting an environmental scan (I'll make Rod Foley's reports available on the intranet - very interesting reading). One thing that strikes you immediately is the difference in resourcing between US and Australian libraries. One university with around the same number of EFTSUs as JCU has 185 employees (with 11 dedicated to library IT).

As Di Worth said in opening the unconference, 'we won't come away with the answer' but I already feel that I have a much clearer handle on what the question is. Linda Luther asked the question 'What is an ILS really for?'

The proprietary ILS vendors are surprisingly few in number (at least those in the university sector), and even more surprising is how old their offerings are: Endeavour is the youngest, first released in 1995. All the products have evolved, but you wonder about the core assumptions in their initial design when the most recent was released the same year as Netscape 1.0.

Warwick Cathro outlined the NLA's experience with OSS, but focused mostly on the directions they are taking as they evaluate the multitude of systems they have and think about Service Oriented Architectures.

Ex Libris then presented their ILS and product suite and answered questions about migration paths. They stressed their position as an innovator, and Unified Resource Management (URM) was discussed as their next-generation discovery layer.

Then it was off to the library staff room for welcome drinks and nibbles - nice couches, a well laid out kitchen and servery, facing an atrium full of barwangs - better laid out than Cairns, but it doesn't have our view!

Open Source software has been bandied around a bit. Anthony Hornby gave a very pragmatic overview of how you should approach Open Source - the same way you approach proprietary software: evaluate for best fit. He dispelled a bunch of myths about OSS, from the wildly evangelical to the demonising (and showed us the little funny below).



Anthony pointed out that even proprietary vendors relied on OSS (e.g. Java), and that we all use it every day (think blogs like this one, or IM).

We were given overviews of Koha and Evergreen by brave souls who've installed and played with them in test environments (common theme: follow the installation instructions meticulously).

HES then presented, offering the group tools for future collaboration and collective action. JCU participates in HES for a number of systems e.g. our HR system, and we benefit from the negotiated agreement with Oracle.

Next was Serials Solutions, which held little new for me until Summon got a mention - it's sort of a competitor to URM, but much closer to implementation. Basically it operates more like Google Scholar, in that databases are harvested for metadata, removing federated search's Achilles heel: connection files and retrieval times. SS benefit from being part of the Cambridge Information Group, which includes Proquest, Bowker and Ulrich's. Apparently they have an agreement with Thomson Gale to allow harvesting of its databases, and many other large publishers are either on board or soon will be.

SirsiDynix then made their presentation, revolving around Symphony, which looked quite cool. I admire SD staff: they've had so much flak from customers since the Horizon 8 death notice, but they are still cheerfully plugging away. The word going around is that SD are focusing on the international market, and rumour says that's because the customer response to H8's death in the US was so vitriolic - but it might also have something to do with its ability to handle Unicode, making it usable in Asia.

Softlink (Liberty) then presented - they chiefly work in the special and small public library sectors and don't have any university customers in Australia.

The breaks are full of conversations between people asking each other 'what are you going to do?'

Friday we have LibraryThing, DA, Civica and Innovative Interfaces, before another all-in, library-only session to finish up.

Most of the powerpoints will be made available, and I will be writing a more detailed report after cogitating a bit. I think this is a good time to think about our ILS, as we are thinking a lot right now about how we do resource discovery, with the CMS and 360 Search being launched in the last week and a bit of January (my blonde tips are getting bigger).

Another thing Anthony Hornby had in his powerpoint really tickled me - I think I'll make it my family crest:




Friday, January 9, 2009

Introduction to Link Resolvers

What is a link resolver?

Jargon words italicised and explained below.

A link resolver is a piece of server software that translates an OpenURL request from a source into a URL that will retrieve an item from a specific target.

Jargon

OpenURL

A standard way of representing citation-type information as a URL (see the OpenURL entry on Wikipedia), e.g.
http://resolver.example.edu/cgi?genre=book&isbn=0836218310&title=The+Far+Side+Gallery+3

The first part (http://resolver.example.edu/cgi?) is the link resolver's base URL, and the rest conforms to the OpenURL standard (version 0.1 in this case; there is also a version 1.0, which is much more sophisticated).
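If it helps to see that in code, here's a minimal Python sketch that builds the example URL above. The base URL and the genre/isbn/title fields come straight from the example; nothing here is specific to any real resolver:

```python
from urllib.parse import urlencode

# Base URL of the (fictional) link resolver from the example above.
base_url = "http://resolver.example.edu/cgi?"

# Citation metadata expressed as OpenURL 0.1 key/value pairs.
citation = {
    "genre": "book",
    "isbn": "0836218310",
    "title": "The Far Side Gallery 3",
}

# urlencode turns the pairs into "genre=book&isbn=...&title=The+Far+Side+Gallery+3"
openurl = base_url + urlencode(citation)
print(openurl)
```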

Source

A source has two features: it contains citation information (i.e. metadata about a bibliographic item), and it has the ability to create an OpenURL by appending that citation information (in OpenURL form) to the base URL of a link resolver.

URL

Uniform Resource Locator, or as we commonly say, 'a web address'.

Target

A target is a web server that stores bibliographic items (preferably full text). Good targets allow you to 'deep link' to specific items like journal articles or conference papers. Bad targets only allow you to deep link to the journal's home page or conference proceedings' home page - some targets are so bad they don't even allow that level of linking (Westlaw springs to mind).

OK, So What's a Link Resolver do again?

The Link Resolver gets an OpenURL for an item from a source, checks which of the targets we have access to can provide that item, then creates a URL that deep links to it. So effectively, if you find a citation for an article in a source, you can click on the OpenURL link and the link resolver will check all your e-subscriptions and display the article, even if the source is purely an indexing and abstracting database with no full-text component.
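As a very rough sketch of that decision in Python - the target names, holdings and deep-link patterns below are entirely made up, and a real resolver consults a maintained knowledge base of our subscriptions rather than a hard-coded dictionary - it looks something like this:

```python
from urllib.parse import parse_qs, urlparse

# Hypothetical knowledge base: which targets we subscribe to, which ISSNs
# they cover, and how each one likes its deep links built.
holdings = {
    "Publisher A": {
        "issns": {"1234-5678"},
        "deep_link": lambda c: (
            f"https://publisher-a.example.com/article"
            f"?issn={c['issn']}&vol={c['volume']}&page={c['spage']}"
        ),
    },
    "Aggregator B": {
        "issns": {"1234-5678", "8765-4321"},
        "deep_link": lambda c: (
            f"https://aggregator-b.example.com/fulltext/{c['issn']}/{c['volume']}/{c['spage']}"
        ),
    },
}

def resolve(openurl):
    """Parse the citation out of an OpenURL, then return a deep link for
    every target that can supply the item."""
    citation = {k: v[0] for k, v in parse_qs(urlparse(openurl).query).items()}
    return [
        (target, info["deep_link"](citation))
        for target, info in holdings.items()
        if citation.get("issn") in info["issns"]
    ]

example = ("http://resolver.example.edu/cgi?"
           "genre=article&issn=1234-5678&volume=12&spage=34")
for target, url in resolve(example):
    print(target, "->", url)
```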

Sounds Cool, can we get one?

Actually, we've had one since 2004 - you might know it as the 'Find It' button. For four years our Link Resolver software has been SFX. We are now transferring to new software called 360 Link. Fundamentally nothing changes (except we expect the new software to be much more accurate in searching our e-subscriptions).

Is a Link Resolver good for anything else?

I'm glad you asked. It can accept a request from anything that can generate an OpenURL. Apart from our databases, Citation Linker can create an OpenURL from the citation details you provide.

EndNote (thanks Nicole) can also pass an OpenURL to our Link Resolver, so that clicking on the OpenURL for a citation will search our e-subs and display the item. Zotero and other bib/ref tools can also pass OpenURLs to your Link Resolver.

There's also COinS, a way of marking up citations on a web page so that browsers with COinS-enabled plugins can determine the address of the user's link resolver and create an OpenURL for it.
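As a rough illustration (assuming I've remembered the convention correctly - a key/value ContextObject hidden in the title attribute of an empty span with class "Z3988"; check the COinS spec before relying on it), a page could embed a citation something like this:

```python
from urllib.parse import urlencode
from html import escape

# Citation expressed as ContextObject key/value pairs (journal article format).
context_object = {
    "ctx_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.genre": "article",
    "rft.atitle": "An example article title",
    "rft.jtitle": "Journal of Examples",
    "rft.issn": "1234-5678",
    "rft.volume": "12",
    "rft.spage": "34",
}

# The ContextObject goes into the title attribute of an empty span; a
# COinS-aware plugin finds it and prepends the reader's own resolver base URL.
coins_span = f'<span class="Z3988" title="{escape(urlencode(context_object))}"></span>'
print(coins_span)
```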

What's really cool about OpenURLs is that they bypass broken links caused by deep linking via a static URL. For example, consider that in Reserve Online we deep link to course readings. What happens when we transfer the subscription to another publisher, or the publisher updates their site, or they are swallowed by another publisher (like Wiley's acquisition of Blackwell's journals last year)? The links break, and Document Services staff have to detect the broken links, find new ones, and change the data in Masterfile. If we used OpenURLs instead, it wouldn't matter if the links changed - and if we had multiple subscriptions and the prime one didn't work, there would be options to try the others.

And lecturers could use it to create persistent links from citations in reading lists in Blackboard directly to the item. I even created a simple OpenURL creation tool to build an OpenURL from a citation on JustUs; it currently uses SFX, but I will change it to 360 Link soon. In any case, 360 Link has a function on its 'more information' screen that will show you the relevant OpenURL for the item you requested.


Let me know if this was useful to you, or if it was too simple or too complex.