Top Technology Trends

Eric Lease Morgan's Top Tech Trends for ALA 2006; "Sum" pontifications

This is a list of top technology trends in libraries that my very small and cloudy crystal ball shows me.

The increasing availability of Voice over IP (VoIP) is making it easier to communicate with people all over the world in real time.

Email is nice, and it has a number of advantages over the telephone. For example, because email is essentially a form of the written word, it allows you to communicate with many people across great distances of time and space. It is good for sharing detailed information. It is sort of permanent because it is fixed in writing. On the other hand, real-time voice communication can often be more efficient and can communicate things through tonal inflections that get lost in writing. With the increasing availability of VoIP technology (such as through the use of Skype) we might see increasing collaboration across nations because communication processes will be enhanced.

Web pages in the form of blogs and wikis are becoming the norm as opposed to the exception.

Increasingly I see the technology of blogs and wikis being used to form the home pages and supporting pages of websites. These technologies, while each having their own particular syntax for content creation, require only a Web browser to use. This eliminates operating system and file transfer technicalities and at the same time eliminates many of the issues surrounding look & feel. Blogs and wikis are relatively easy to use. Unfortunately, much like database-driven websites, these things make preservation and archiving difficult because the content is literally joined with underlying database applications. At the same time, things like RSS/RDF used by blogs and wikis make syndication easier, as the sketch below suggests. There is no perfect, real-world solution to anything.
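To make the syndication point a little more concrete, here is a minimal Python sketch that reads a blog's RSS 2.0 feed using nothing but the standard library. The feed address is hypothetical, and RSS 1.0/RDF feeds use different, namespaced element names, so treat this as a sketch rather than a universal recipe.

    from urllib.request import urlopen
    import xml.etree.ElementTree as ET

    FEED_URL = "http://blog.example.org/rss.xml"  # hypothetical feed address

    with urlopen(FEED_URL) as response:
        tree = ET.parse(response)

    # RSS 2.0 wraps each posting in a channel/item element
    for item in tree.findall("./channel/item"):
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="")
        print(title, "-", link)

Because feeds like this are plain XML delivered over HTTP, aggregating or re-publishing content from many blogs and wikis becomes a few dozen lines of code rather than a screen-scraping project.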

Social networking sites like MySpace and Facebook are forms of communication, identity formation, and benign exhibitionism.

Most of us like to see our names in print. (Just think of the results of the scholarly communications process!) Late adolescence is a time when people increasingly ask themselves “Who am I and how do other people see me?” Social networking sites like MySpace and Facebook make it easy to address these issues on a global scale. They are real communities fraught with the same advantages and disadvantages as any other community, just magnified to a greater degree. Libraries are always a part of communities, whether the community be a family, a neighborhood, a municipality, a business, a college or university, a state, or a nation. Being a part of communities such as MySpace and Facebook might be something libraries should consider.

The ideas forming the core of open source software are slowly leaking into other domains including science and government.

I draw your attention to the June and upcoming July issues of First Monday (firstmonday.org), where participants from all over the world examined and discussed all things open. I was particularly impressed with the way the concepts of open source software were being applied to areas of science. Along with the sharing of articles describing the outcomes of research, the data used to do the analysis are being shared as well. This sort of openness makes for more transparency and better science. Transparent government is generally seen as a good thing too, and because of the Internet more things regarding government are being easily shared.

Meta-search is not living up to expectations, and I assert this is because it is based on poor technological assumptions.

The users of DIALOG (remember?) wanted metasearch, and the best DIALOG could come up with was a metasearch of its “blue pages.” Z39.50 was originally designed as a protocol for search. The ability to search multiple databases/indexes simultaneously was an add-on feature that never really came to fruition. These technologies failed to live up to their expectations because each of the underlying databases/indexes embodied its own particular information schema designed to meet the needs of its own particular domain. When brought together these databases/indexes lost their richness. Metasearch fails for the same reasons. Not everybody implements Z39.50 in the same way. SRW/U do not dictate particular element names in their returned data sets. Screen scraping is just plain ol’ dumb. Just as there is no over-arching metadata schema that satisfies every domain (Dublin Core and/or MODS come closest), metasearch fails because it is not able to amalgamate such a wide variety of content. Metasearch is a Holy Grail.
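To illustrate the SRW/U point, here is a hedged Python sketch that sends the same CQL query to two hypothetical SRU servers. The endpoints are invented; the parameters (operation, version, query, maximumRecords) are the standard SRU 1.1 ones. Both servers will answer with well-formed SRU XML, but the records inside may follow entirely different metadata schemas, and reconciling those is exactly where metasearch founders.

    from urllib.parse import urlencode
    from urllib.request import urlopen

    ENDPOINTS = [
        "http://catalog.example.edu/sru",   # hypothetical library catalog
        "http://index.example.org/sru",     # hypothetical article index
    ]

    params = {
        "operation": "searchRetrieve",
        "version": "1.1",
        "query": 'dc.title = "origin of species"',
        "maximumRecords": "5",
    }

    for endpoint in ENDPOINTS:
        with urlopen(endpoint + "?" + urlencode(params)) as response:
            xml = response.read()
        # each response is valid SRU, but the element names inside the
        # records are whatever schema that particular server chose to return
        print(endpoint, len(xml), "bytes of XML")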

Mass digitization will change aspects of librarianship.

As the content of libraries is mass digitized there will be a greater shift towards services and away from collections. When content is mass digitized and freely available on the ‘Net the questions regarding libraries will be less about “Do you have this, that, or the other thing?” and more about “What can you do for me to make this content more meaningful and useful?” While it might seem efficient in terms of disk space and authenticity to centrally store data and information, the local ownership of data and information facilitates preservation and manipulation. “Lots of copies keep stuff safe.” As people create their own collections, and as these collections become increasingly malleable because they are digital, people will expect to use the content in new and different ways. Find similar content. Find content contradicting this idea. Trace this idea forwards and backwards. Pretty-print this document. Index and provide search against all my content. Etc. If everybody has the content, then what is the role of the library? The provision of services against library content is an area of great opportunity.
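As one small, hypothetical example of a service against locally owned content, the Python sketch below builds an in-memory inverted index over a couple of texts and answers the question “which of my documents mention this word?” The documents are stand-ins for a personal, digitized collection.

    import re
    from collections import defaultdict

    documents = {
        "walden.txt": "I went to the woods because I wished to live deliberately",
        "leaves.txt": "I celebrate myself and sing myself",
    }

    # map each word to the set of documents containing it
    index = defaultdict(set)
    for name, text in documents.items():
        for word in re.findall(r"[a-z]+", text.lower()):
            index[word].add(name)

    print(index["myself"])   # {'leaves.txt'}
    print(index["woods"])    # {'walden.txt'}

Services such as “find similar content” or “trace this idea” are of course much harder, but they all begin with the same premise: the library, or the reader, holds the full text and can compute against it.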

Licensed content and digital rights management (DRM) schemes are not going away, but neither is open access.

Copyright is designed to encourage and spread the fruits of the useful arts. Instead it has been exploited by corporations (middlemen) who aggregate these fruits and redistribute them to the masses. People want to be recognized for their accomplishments. This is true for the artist as well as the scientist. At the same time, I believe the more authentic, passionate, and productive artists and scientists would produce the same fruits even if copyright did not exist. What are they going to say? “No, I’m not going to paint this painting or write that song because someone might steal it.” “No, I’m not going to invent that cure for AIDS because I might not be rewarded accordingly.” Furthermore, there are a whole lot of great musicians and brilliant scientists out there who simply have not benefited from the marketing and hype bestowed upon them by companies whose goal is to make money, not necessarily to improve the human condition. Licensed content and DRM systems are manifestations of such an environment. These will co-exist with open access content because of the accessibility of the ‘Net. The ‘Net will provide its own sort of peer review and short-circuit some of the issues surrounding copyright and DRM. As the folks at Google say, “On the ‘Net democracy works.”

There is a growing discontent with library catalogs.

From a user’s point of view, library catalogs are notoriously difficult to use. Too many options. Too time consuming to get the materials. They only contain books. Similarly, the overwhelming multitude of available bibliographic indexes makes it difficult to find “just a few good articles.” In this environment Google and Google Scholar come to the rescue pretty quickly. As I have read in numerous OCLC reports, and as I have heard from Google executives, people know libraries contain large quantities of authoritative information, but ease of access and convenience trump such qualities in a heartbeat. The venerable library catalog needs to evolve. Yes, there is a need to know what is physically available from a library, but more importantly people need to acquire and use content that goes beyond library collections for their learning, teaching, and research activities. In a networked environment where physical location is not as much of a limitation, library catalogs need to evolve from inventory control systems for librarians into information tools for students, instructors, and scholars. The content of the catalog needs to expand. It needs to be designed more for users and less for librarians. It needs to become an empowering and enabling technology instead of an impediment.

The cataloging process is moving from complete and full to good enough because full-text indexing and automatic classification are less expensive.

Traditional cataloging is expensive. The amount of content that needs to be cataloged is increasing. The number of times cataloging records need to be edited because of the more dynamic nature of digital content is also increasing. There is an increasing amount of full text available to libraries. Automatic, machine-generated metadata creation techniques are much less expensive and require far fewer people. All of these things lend themselves to creating “good enough” records as opposed to high-quality, very detailed records. By no means is anyone saying that the cataloging process is useless. Researchers in the application of digital libraries acknowledge the need for things like MARC(XML) and MODS, ontologies (a.k.a. controlled vocabularies), and metadata in general. On the other hand, the creation and maintenance of these things without exploiting the use of computers is too expensive and not scalable. “How big is your backlog of original cataloging that needs to be done?”
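For illustration only, here is a tiny Python sketch of “good enough” machine-generated metadata: it suggests candidate subject keywords from a full text by simple term frequency against a small stop list. Real systems use richer techniques (TF-IDF weighting, trained classifiers), but the economics are the same: no human cataloger in the loop.

    import re
    from collections import Counter

    STOP_WORDS = {"the", "and", "of", "to", "a", "in", "is", "that", "it", "as", "or", "by", "on", "for"}

    def suggest_keywords(text, how_many=5):
        """Return the most frequent non-trivial words as candidate subjects."""
        words = re.findall(r"[a-z]+", text.lower())
        counts = Counter(w for w in words if w not in STOP_WORDS and len(w) > 3)
        return [word for word, _ in counts.most_common(how_many)]

    sample = ("On the Origin of Species by means of natural selection, "
              "or the preservation of favoured races in the struggle for life. "
              "Natural selection acts on variation within a species.")
    print(suggest_keywords(sample))   # e.g. ['species', 'natural', 'selection', ...]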

OCLC is continuing to expand and redefine itself.

Mr. Fred Kilgour’s legacy in OCLC is alive and well. I had the opportunity to share an office with him for a number of months while I was a clinical instructor in the library school at the University of North Carolina at Chapel Hill. At that time I was going to give Mr. Kilgour all sorts of grief about weird searching syntax (“4221”, etc.). Instead I discovered a man who was combining information science with traditional bibliography. He told me he was sad because he thought all the exciting things were still to come, and he said this ten years ago even though he didn’t use email. As an individual Mr. Kilgour has slowed down, but the institution he created continues to grow. It supports traditional librarianship while investigating new ways to provide data and information services. Lorcan Dempsey, who leads OCLC’s research efforts, and his team always have fun and interesting things to explore. OCLC’s acquisition of RLG and Openly Informatics will bring additional strengths to the system. I just can’t figure out why OCLC doesn’t try to provide open source software library application support for a fee.


Eric Lease Morgan
June 18, 2006 (Father’s Day)