Top Technology Trends

"Sum" Top Tech Trends for the Summer of 2007

Listed here are “sum” trends I see in Library Land. They are presented in no particular order:

1. Gaming and Second Life – I hear a lot of noise about gaming, Second Life, and libraries. Hmmm…

I consider librarianship to be about a number of processes surrounding data, information, and knowledge, specifically: 1) collection, 2) organization, 3) preservation, 4) dissemination, and 5) sometimes evaluation. I also consider the intended audiences for these processes in my definition. Oftentimes these audiences determine the types of libraries where the processes are carried out: academic, special, public, school, etc. Notice that I did not outline “how” these processes get accomplished. Since the “how” of these processes changes over time and with changes in technology, I do not think any “how” defines the core of librarianship. (Librarianship is not about books, MARC records, or even Web pages, because these are merely tools of the profession.)

That being said, I can see how gaming and Second Life can play a role in some libraries. Gaming and Second Life are digital virtual worlds. There are norms of behavior there. Things “exist” in these environments. People “live” there. Commercial transactions take place. Many gamers and Second Lifers are younger rather than older. We are increasingly told to put our content where the users are. Think MySpace and Facebook. Why would we suppose that there are no needs for data, information, and knowledge in Second Life? While I do not think gaming and Second Life are the next big thing for all libraries, I do believe they are things to keep in mind.

2. “Next Generation” Library Catalogs – As the owner/moderator of the NGC4Lib mailing list, it is my responsibility to listen to its traffic. After doing so, I am able to outline a few of the things I see going on there. Again, they are listed in no particular order.

First, there seems to be a growing consensus that MARC, as a data structure, is not apropos for any “next generation” library system. This is true for a number of reasons, the most important being that, if libraries are partially about disseminating data, information, and knowledge, then we need to speak the language of the intended audience. In computing terms, this language is XML. Ask yourself, “How many communities know how to read and write MARC records?” Now ask yourself, “How many communities know how to read and write XML?” MARC was cool (and even “kewl”) in its day. It has outlived its true usefulness, and it is now an impediment to innovation.
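
To make the point concrete, consider how little special knowledge is needed to read bibliographic data once it is expressed as XML. Below is a minimal sketch in Python that parses a made-up MARCXML record using nothing but the standard library; the record itself is invented for illustration, but the same handful of lines would work against any MARCXML a system chose to expose.

  import xml.etree.ElementTree as ET

  # a tiny, made-up MARCXML record (title in field 245, author in field 100)
  MARCXML = """
  <record xmlns="http://www.loc.gov/MARC21/slim">
    <datafield tag="245" ind1="1" ind2="0">
      <subfield code="a">A hypothetical title :</subfield>
      <subfield code="b">an example record</subfield>
    </datafield>
    <datafield tag="100" ind1="1" ind2=" ">
      <subfield code="a">Doe, Jane</subfield>
    </datafield>
  </record>
  """

  NS = {"marc": "http://www.loc.gov/MARC21/slim"}
  record = ET.fromstring(MARCXML)

  # pull the title and author with plain, garden-variety XPath expressions
  title = " ".join(
      subfield.text
      for subfield in record.findall('marc:datafield[@tag="245"]/marc:subfield', NS)
  )
  author = record.findtext(
      'marc:datafield[@tag="100"]/marc:subfield[@code="a"]', namespaces=NS
  )

  print(title)   # A hypothetical title : an example record
  print(author)  # Doe, Jane

Doing the same thing against a binary MARC record requires a special-purpose library and a working knowledge of leaders, directories, and field terminators, things very few communities outside of libraries possess.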

Second, there seems to be an increasing understanding that any future “catalog” is not really a “catalog” but more like a tool enabling the user to get their primary work done more quickly and easily. More and more, it seems the “next generation” library catalog will enable people not only to find things but to acquire things and apply services against those things. Send it to me. Save it. Annotate. Review. Rank. Compare & contrast it to other items. Share it with my friends and colleagues. Everybody has access to content. The next level of librarianship is to provide more services against the content.

Faceted browsing generates a lot of noise, but I’d be careful. It is not a silver bullet, nor is it magic. Faceted browsing simply turns search results inside out. Define a set of “facets”. Subjects. Authors. Formats. Genres. Etc. Do a search. Extract the facet terms. Create a browsable list from them. Link them to canned searches. Done. Don’t get me wrong. Faceted browsing is definitely a step in the right direction, but it is only one small step toward making library content more accessible and easier to find. (Maybe I’m just jealous because MyLibrary has supported faceted classification for more than four years and nobody noticed.)
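
For what it is worth, the recipe outlined above fits in a few lines of code. The sketch below, written in Python against a made-up set of search results and a hypothetical /search URL, simply tallies the values of a few pre-defined facets and turns each tally into a link to a canned search; a real implementation would read its facets from an index rather than a list of dictionaries.

  from collections import Counter
  from urllib.parse import urlencode

  # pretend these are the records returned by a search for "open source"
  results = [
      {"author": "Doe, Jane",  "format": "book",    "subject": "Software"},
      {"author": "Smith, Tom", "format": "article", "subject": "Libraries"},
      {"author": "Doe, Jane",  "format": "article", "subject": "Software"},
  ]

  FACETS = ("author", "format", "subject")

  def build_facets(records, query):
      """Turn a result set 'inside out': one browsable, linked list per facet."""
      facets = {}
      for facet in FACETS:
          counts = Counter(record[facet] for record in records)
          facets[facet] = [
              {
                  "value": value,
                  "count": count,
                  # each entry links to a canned search that narrows the query
                  "url": "/search?" + urlencode({"q": query, facet: value}),
              }
              for value, count in counts.most_common()
          ]
      return facets

  for facet, entries in build_facets(results, "open source").items():
      print(facet)
      for entry in entries:
          print("  {value} ({count}) -> {url}".format(**entry))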

Dis-integration. This is a word increasingly used when talking about library systems, and I agree with the sentiment. What is really desirable is not one large system like Microsoft Office but a set of smaller systems adhering to a set of common, interoperable standards. These smaller systems might include one for creating and editing (XML) metadata — cataloging. Another might be for lending materials — circulation. Another is for indexing content and providing (SRU, OpenSearch, and/or Z39.50) search services against it — the OPAC. In such an environment a library can swap out its existing metadata editor and choose another. It can swap out one indexer for another. If the library community insists on the use of standard protocols rather than turnkey solutions, then the dis-integrated library system will become a reality. It will also open the field to a greater number of technical solutions and vendors, instead of the dwindling pool of choices we have today.
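
SRU is a good illustration of why standard protocols make this swapping possible. A searchRetrieve request is nothing more than an HTTP GET with a handful of well-known parameters, so the same few lines of client code work against any SRU-speaking index. The sketch below assumes a hypothetical endpoint (catalog.example.edu) and SRU version 1.1; if the library swapped out its indexer, only the endpoint would change.

  from urllib.parse import urlencode
  from urllib.request import urlopen
  import xml.etree.ElementTree as ET

  SRU_ENDPOINT = "http://catalog.example.edu/sru"   # a placeholder, not a real service

  def sru_search(query, maximum_records=10):
      """Send an SRU 1.1 searchRetrieve request and return the hit count."""
      params = {
          "version": "1.1",
          "operation": "searchRetrieve",
          "query": query,                    # a CQL query
          "maximumRecords": maximum_records,
          "recordSchema": "dc",              # ask for Dublin Core records
      }
      with urlopen(SRU_ENDPOINT + "?" + urlencode(params)) as response:
          tree = ET.parse(response)
      return tree.findtext(".//{http://www.loc.gov/zing/srw/}numberOfRecords")

  # print(sru_search('dc.title = "open source"'))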

Finally, one technical solution for creating the “next generation” library catalog seems to be a multi-staged process: 1) create metadata using your existing system, 2) export or expose your data in a common format such as MARC or XML, or via a protocol such as OAI-PMH, 3) harvest the data into a central store, 4) supplement the data with article-level bibliographic data, 5) index all the data, and 6) provide services against the index, beginning with search and browse. This seems to be the way of OCLC’s WorldCat Local, Ex Libris’s Primo, and the University of Rochester’s XC. Compared to metasearch interfaces, it makes a lot of computing sense because it offers much better possibilities for relevancy ranking, and it does not rely on a multitude of remotely located machines and diverse protocols for search results.
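
Step #3 of this process is less exotic than it may sound. The sketch below harvests metadata from a hypothetical OAI-PMH repository (repository.example.edu), following resumption tokens until the repository has nothing more to say; a production harvester would also handle errors, deleted records, and incremental (from/until) harvesting before handing the records off to stages #4 through #6.

  from urllib.parse import urlencode
  from urllib.request import urlopen
  import xml.etree.ElementTree as ET

  OAI_REPOSITORY = "http://repository.example.edu/oai"   # hypothetical repository
  OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

  def harvest(metadata_prefix="oai_dc"):
      """Yield every <record> element exposed by the repository."""
      params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
      while True:
          with urlopen(OAI_REPOSITORY + "?" + urlencode(params)) as response:
              tree = ET.parse(response)
          for record in tree.iter(OAI_NS + "record"):
              yield record
          # OAI-PMH pages its responses; follow the resumption token, if any
          token = tree.findtext(".//" + OAI_NS + "resumptionToken")
          if not token:
              break
          params = {"verb": "ListRecords", "resumptionToken": token}

  # for record in harvest():
  #     save_to_central_store(record)   # indexing and services come in later stages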

3. Operating system agnostic computing, or “the network is the computer” – Have you noticed that it does not really matter what type of computer you are using in order to get your work done? I edit text and image files, send email, update my calendar, re-calculate spreadsheets, update databases, browse the Web, etc. More and more of this work is done over the Internet. Google offers many of these tools for free, and they allow for cross-platform computing, even between home and office. It sort of feels like we are beginning to see the re-birth of the “thin client”. Like dis-integration, very little of this is possible without a set of common (non-proprietary) standards.

4. Things not going away – There are a number of other things that have been mentioned in this venue previously, and they are still apropos. For example, open source software is here to stay and continues to influence the whole of computing as well as Library Land. Open access publishing is growing and continues to exist beside traditional publishing. Successful open access publishing activities require substantial investments of time. While the content from open access is “free,” it is really only as free as a “free” kitten. In this regard open access is very much like open source software. Neither one comes without costs. Microformats such as COinS and unAPI are truly useful. I think we will see a growing number of these things being developed inside and outside libraries, but until tools for exploiting them become ubiquitous, none of the microformats will predominate.
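
To illustrate, a COinS is nothing more than an empty HTML span whose title attribute carries an OpenURL ContextObject describing the thing being cited. The sketch below, with an invented book as its example, shows how little is involved in generating one.

  from html import escape
  from urllib.parse import urlencode

  def coins_span(title, author, date):
      """Return a COinS <span> describing a book with a minimal set of key/value pairs."""
      context_object = urlencode({
          "ctx_ver": "Z39.88-2004",
          "rft_val_fmt": "info:ofi/fmt:kev:mtx:book",
          "rft.genre": "book",
          "rft.btitle": title,
          "rft.au": author,
          "rft.date": date,
      })
      # the span itself is empty; all of the metadata lives in the title attribute
      return '<span class="Z3988" title="%s"></span>' % escape(context_object)

  print(coins_span("A Hypothetical Book", "Doe, Jane", "2007"))

A link resolver or browser plug-in that understands COinS can then turn this otherwise invisible span into an “is it in my library?” service, which is exactly the sort of exploiting tool that needs to become ubiquitous.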

Whew! How’s that?


Eric Lease Morgan
University Libraries of Notre Dame

June 15, 2007