Since I’m late to the game, I will steal, er, borrow a couple of trends that my esteemed colleagues have already noted and throw in one of my own.
- New Catalog Possibilities – Starting with NCSU’s Endeca-powered catalog, there has been a definite trend of moving to systems not marketed by the typical (and now smaller) set of library vendors. Another option that large libraries and consortia in particular are exploring is using some form of WorldCat from OCLC as their catalog. Still more options can be found in the open source world, with Koha and now Evergreen. In fact, I believe that Sept. 5, 2006 will long be remembered as the day when the ILS world irrevocably changed. That was the day when over 250 Georgia libraries began using an open source ILS they wrote themselves from scratch. The potential significance of this is hard to overstate.
- Open source goes mainstream – See #1 above. The fact that a large library consortium could bet the farm on an open source solution (and win) is a dramatic event that will serve to highlight for others that open source does not mean unsupported or unsupportable. In fact, the Evergreen team has launched their own support vendor (Equinox Software) to support others who wish to replace their ILS with Evergreen.
- Massive digitization means massive opportunities and massive challenges – The massive digitization projects of Google and OCA are pouring thousands (and soon millions) of digitized books onto the Internet. What does this mean for libraries and the users we serve? Sorry, this is Top Tech Trends, not Top Tech Solutions. I don’t know all of the implications of this yet, but I do know that we need to be thinking about this issue long and hard.
Sorry I can’t be at Midwinter, but I’ll be at Annual, so I’ll see you all there!
Metasearching is hard. Metasearching well is darn near impossible. Meanwhile, hapless libraries are left trying to sort out the good from the not-so-good vendor offerings. Wouldn’t it be nice if LITA put together a “bake-off” of metasearch applications? Like picking half a dozen or so databases that every vendor must set up, then running some test queries and comparing the results? Other issues, such as the amount of work required to customize each application, would be useful to examine as well, although potentially more time consuming to determine. So is it just me, or would such a thing be interesting to others as well?
I’ve mounted all my presentations on the web at http://www.cdlib.org/inside/news/presentations/rtennant/2005lita/, including the “mini-movie” that preceded my keynote talk. The music is not included, however, so you will need to get your own copy of R.E.M.’s tune.
A suite of technologies is being swept under the broad label of “Web 2.0” to signify a “second generation” Web. These technologies include things like Web Services (e.g., Simple Object Access Protocol or SOAP, and Representational State Transfer or REST) and Ajax.
The point is that these technologies provide a way for information to be pulled out of multiple, separate, and distant systems and rendered in the user’s browser as a rich, highly interactive interface. One example of Web 2.0 technologies at work is how various people have been combining their own data with Google Maps to create an entirely new service. For example, at ChicagoCrime.org, you can pick a particular crime and have the locations of those crimes mapped on a Google Map.
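To make the pattern concrete, here is a minimal sketch of the REST-plus-mashup idea in Python. Everything in it is illustrative: the endpoint URL, the parameter names, and the sample JSON payload are all hypothetical stand-ins for whatever a service like ChicagoCrime.org might actually expose, and the markers are the kind of structure you would hand to a mapping API such as Google Maps.

```python
import json
from urllib.parse import urlencode

# Hypothetical REST endpoint -- not a real API, just an illustration.
BASE_URL = "https://example.org/api/crimes"

def build_request_url(crime_type, limit=50):
    """Build a REST-style request: the query lives in the URL and is
    fetched with a plain HTTP GET -- no SOAP envelope required."""
    return BASE_URL + "?" + urlencode({"type": crime_type, "limit": limit})

# A sample JSON payload such an endpoint might return (made up here,
# so the sketch runs without any network access).
sample_response = json.dumps([
    {"type": "theft", "lat": 41.8781, "lng": -87.6298},
    {"type": "theft", "lat": 41.8500, "lng": -87.6500},
])

def to_map_markers(payload):
    """Reshape the JSON records into the marker dicts a browser-side
    mapping API (e.g., Google Maps) could plot."""
    return [{"lat": rec["lat"], "lng": rec["lng"], "label": rec["type"]}
            for rec in json.loads(payload)]

url = build_request_url("theft")
markers = to_map_markers(sample_response)
```

The design point is the simplicity: because the service speaks plain HTTP and JSON, anyone can fetch its data and recombine it with someone else’s map, which is exactly what makes these mashups possible.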
This all starts to hit home when you read a paper like the Talis White Paper: Project Silkworm, available at the Silkworm web site, or when you read about the JISC Framework effort, or the similar Digital Library Federation digital library framework effort (not yet public). Why, the Silkworm Project posits, should we not create an infrastructure that allows anyone, anywhere, to write a review of a book we own that can become immediately available to any other library that wants it? Why indeed.