International Visitor to the 2006 LITA Forum

From October 26–29, 2006, a fairly large group of librarians and information technology professionals from various institutions across the United States descended upon Nashville, Tennessee to attend the LITA Forum. The 2006 Forum had as its theme ‘Web Services as Library Services.’

I was fortunate to be there, as I had been selected by LITA-IRC to be the recipient of the International Visitor Grant in memory of the late Professor Errol Hill. My children, living in London, could not picture their father in Nashville, home to the Grand Ole Opry and the legends of country music. I come from Trinidad and Tobago, the land of calypso and steel pan music. Nonetheless, Nashville is where I was, and I found the city and the LITA Forum a wonderful experience.

For me, it was an honour and a privilege to have been selected for the award, particularly as it was granted in memory of a great son of the soil of Trinidad and Tobago, Professor Errol Hill. I had been exposed to the work of Professor Hill early during my years as a high school student at Naparima College in San Fernando, Trinidad, when we studied his folk play/musical “Man Better Man”. I therefore felt that in attending the LITA Forum through this sponsorship, I was somehow reconnecting to this aspect of my past.

The underlying thread of this conference was that library services could be developed through the interoperability of web services, along a continuum ranging from very simple programs to complex, sophisticated applications that engage the library user in highly interactive ways. Throughout the conference venue, there seemed to be a sense of excitement about a future limited only by our own imagination and creativity as librarians. There was also a sense that there was so much that the profession had to think about in order to meet the growing expectations of our ‘new’ users that it simply boggled the mind.

As usually happens at any good conference, attendees were spoilt for choice when it came to the concurrent sessions. We wished that we could have attended all the sessions as they all had something important to provide in terms of expanding our understanding of how libraries could effectively use the Web to deliver added-value services.

The LITA Forum keynote speakers all provided serious food for thought. The opening session featured Alan Stoker, Steve Maer and John Rumber of the Country Music Hall of Fame and Museum. Although their presentation was not in the main about web services, they did provide some useful insights into the preservation and archiving of acetate and vinyl recordings. The conference was also privileged to hear digitized audio recordings of excerpts from some of these cultural heritage pieces. Incidentally the archive is using DSpace to manage its digital repository, a fact that I found particularly pertinent as we are also experimenting with DSpace at the library of The University of the West Indies where I currently work.

The second day’s keynote session was very interesting. It featured Thom Gillespie, creator and designer of the MIME program in interactive communication in the Department of Telecommunications at Indiana University. He maintains that in the twenty-first century libraries should be about supporting very active learners in ways in which schools rarely support them, namely with books and video, software, training, and potentially even a venue for publication. Gillespie is enthusiastic about computer game design and sees gaming as a tool for creating user interfaces that libraries could tap into to teach information literacy and generally engage users in ways that are relevant to their cultural and learning background in this new millennium.

Stephen Abram, the man with the really intriguing job title, VP Innovation, SirsiDynix, was the keynote speaker on the final day of the Forum. He stressed that there is a global conversation presently engaging many across the world about the next generation of the web. It’s happening under the name of Web 2.0, a concept that refers to a perceived or proposed second generation of Web-based services—such as social networking sites, wikis, communication tools, and folksonomies—that emphasize online collaboration and sharing among users. Abram spent considerable time challenging the librarians in the audience to be introspective and to question which skills and competencies they will need in the environment of Web 2.0, and perhaps Web 3.0, in the not too distant future.

The concurrent sessions were as interesting as the keynote addresses. Among those that I managed to attend were presentations on the following:

• innovative library instruction modules developed using web-based applications,
• improving library services with AJAX and RSS,
• exploring the use of podcasting and blogs in libraries, and
• developing database driven web sites using Cold Fusion.

All these sessions pointed to the fact that libraries continue to experience technological innovations as they strive to be creative in the delivery of content to their users and to create more dynamic, robust and interesting websites. It was also clear that librarians will need to continue to grapple with the question of how best to use the web’s communicative advantage to help library users find the most useful, relevant and authoritative information available. Should we as librarians distrust the web, or should we embrace its best characteristics and technologies in order to better treat with our users?

One of the highlights of the LITA Forum is the opportunity for professional networking during the unofficial meetings. Everybody likes to socialize during the breaks and the lunch sessions. It was really a pleasure to be able to meet and discuss issues with colleagues from all over the US and some from Europe as well! This international and multicultural aspect makes the conference so much more interesting and rewarding.

In conclusion, I feel that the 2006 LITA Forum concretized in my mind some of the ideas and concepts associated with the leading-edge information technologies that are fast being adopted under the promise and potential of Web 2.0. Perhaps more importantly, it served as a timely reminder to us librarians of the tremendous responsibility to adapt to this new scenario. I now feel confident that I can help my librarian colleagues at the University of the West Indies to apply some of the knowledge of leading-edge technologies that I imbibed at the conference to ensure that our students and faculty have a research and learning environment that is on a par with international standards and best practice. It would be great if more librarians from the Caribbean and other developing regions were exposed to such ideas through this type of conference.

I think that the conference was a success and, for all of us who were able to attend, an unforgettable event. I would like to thank my sponsor, Mrs. Grace Hope-Hill for the grant which enabled my travel and participation in the LITA Forum, 2006.


Frank Soodeen
Librarian III (Systems)
The University of the West Indies
St. Augustine
Trinidad and Tobago

Forum 06 poster sessions

Sadly, I only had an hour between meetings, so I didn’t get to every poster session, but here at last are the notes I do have. A PDF of the session descriptions is available on the LITA web site. There was a good range of topics and library types represented.

Instructional Media and Library Online Tutorials

Li Zhang – Mississippi State University

  • Online tutorials require far more than just duplicating print materials to the web. They currently have a large project to develop tutorials for both distance students and on-campus students. They’re trying to develop a single set of online tutorials that works for all of their audiences.
  • Too many bells and whistles distract rather than inform. Their web committee found that including audio or video for too many pieces of a tutorial makes it unusable for people using older computers or dialup Internet access.

Integrating Library Services: An application proposal to enable federation of information and user services

Erik Mitchell – Wake Forest University
Article to be published in Internet Reference Services Quarterly, February 2007

  • A “point-of-need approach is contrasted with the point-of-service approach utilized in traditional library systems.” Instead of creating a federated search, Erik is working on a sort of federated service. It will combine multiple data sources, and will also provide useful user services like item renewal — without the user having to step out of the interface into a separate OPAC/ILS interface. To this end, he is using an OpenURL link resolver + web services.
  • RSS, relevance ranking, renewals: He’d like to reindex using, e.g., Google, for relevance ranking. He’d also like people to be able to subscribe, say, to an RSS feed of what they have out, and be able to one-click renew. Reindexing is the easy part; adding the circulation data is harder; and enabling the system to update live circulation data is the really hard part. He wanted to use NCIP for this, but it wasn’t supported yet by the ILS vendor (he may have to resort to a little screen scraping).
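For readers unfamiliar with the plumbing, the OpenURL piece just means packaging citation metadata into a standardized query string that a link resolver can act on. A minimal Python sketch of building an OpenURL 1.0 (KEV) query for an article citation (this is not Erik's code; the resolver address and metadata keys are hypothetical):

```python
from urllib.parse import urlencode

def build_openurl(resolver_base, metadata):
    """Build a simple OpenURL 1.0 (KEV) query for a journal article.

    The resolver address and metadata keys used here are hypothetical;
    a real resolver may expect additional context-object fields.
    """
    params = {
        "url_ver": "Z39.88-2004",  # OpenURL 1.0 version identifier
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
        "rft.genre": "article",
    }
    for key, value in metadata.items():  # map metadata onto "rft." referent keys
        params["rft." + key] = value
    return resolver_base + "?" + urlencode(params)

url = build_openurl(
    "http://resolver.example.edu/openurl",  # hypothetical resolver
    {"jtitle": "Internet Reference Services Quarterly", "date": "2007"},
)
```

A service like the one described would hand a URL of this shape to the resolver and layer renewal and other account actions on top via separate web service calls.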

Information on the Go: A journey of incorporating portable media players into library technology

Amy Landon, Larisa Hart – Ozarks Technical Community College

  • “Can’t take the lab home? Now you can.” They started putting course reserve materials onto iPods for students to check out. Their library has an interesting variety of materials for different course lab work — like a collection of rocks for one class. They decided to start adding pictures of these rocks, etc., to the iPods. This way students can study the rock pictures at their leisure. Since each iPod holds way more data than they have content available, they are putting everything on there including “How to study” DVD content.
  • One iPod per thousand students: They have about 10,000 students total; they currently have 6 video iPods and 4 iPod Nanos that circulate for a couple of days at a time. They’re ordering a few more iPods, but so far the number available is keeping up with the demand.

Scanning the Past: Central Florida Memory

Lee Dotson, Selma K. Jaskowski, Joel Lavoie, Doug Dunlop – University of Central Florida

  • A “virtual place where visitors can discover what Central Florida was like before theme parks and the space program.” Central Florida Memory is a collaborative project of the University, the county library system, the regional oral history center, and other partners. The project started under an IMLS grant and they’re seeking new funding sources to sustain it.
  • Digitization Spec Kit details their software, equipment, and procedures. See more about their project and browse their current digital collections at

Using Web Services to Advertise New Library Holdings: RSS library feeds in the campus CMS

Edward Corrado, Heather L. Moulaison – College of New Jersey

  • Design decisions included:
    • What is a “new” item?
    • How to group feeds?
    • What data to display in feeds?
  • About the only feed people actually add to their aggregators is the list of New DVDs 🙂 The feed is created by a Perl script. One click from the feed takes the user to the OPAC. Feeds get incorporated by faculty into the course management system (the student doesn’t have to know that a list of new titles there is actually an RSS feed).
  • See the October 2006 CIL and the College’s library web site for more info.
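The Perl script itself wasn't shown; as a rough illustration of what such a feed generator does, here is a minimal sketch in Python (not the presenters' code; the OPAC link base and record IDs are invented):

```python
from xml.sax.saxutils import escape

def new_titles_feed(titles, opac_base):
    """Render a minimal RSS 2.0 feed of new titles.

    Each item links into a hypothetical OPAC record URL built from
    opac_base + record id, so one click from an aggregator (or a course
    management system embedding the feed) lands on the catalog record.
    """
    items = "".join(
        "<item><title>%s</title><link>%s</link></item>"
        % (escape(title), escape(opac_base + record_id))
        for record_id, title in titles
    )
    return (
        '<?xml version="1.0"?><rss version="2.0"><channel>'
        "<title>New DVDs</title>"
        "<link>http://library.example.edu/</link>"
        "<description>Recent additions</description>"
        + items + "</channel></rss>"
    )

feed = new_titles_feed(
    [("b1234", "Casablanca"), ("b5678", "Nashville")],
    "http://library.example.edu/record=",  # hypothetical OPAC link base
)
```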

Office for Information Technology Policy (OITP) Update: Part 2

Alternate (possibly better) title: Participatory Networks: The Library as Conversation (oh wait, I already used the alternate title for a post)

Continuing on with the program from Office for Information Technology Policy (OITP) Update: Part 1
(a teaser for this part of the session was previously blogged on LITAblog, btw)

Participatory Networks: The Library as Conversation
Joanne Silverstein
Director of Research and Development
Information Institute of Syracuse

The Information Institute of Syracuse was invited by the Office for Information Technology Policy at the ALA to write a white paper about recent developments in Web-based innovations and their relationship to, and potential for use in, libraries.


—Why call it “Participatory Networking”?

• The authors propose “participatory networking” as a positive term and concept that libraries can use and promote without the confusion and limitations of previous language. The phrase “participatory network” has a history of prior use that can be built upon.
• The phrase “participatory networks” represents systems of exchange and integration and has long been used in discussions of policy, art, and government. The phrase has also been used to describe online communities that exchange and integrate information. It implies a conversation between groups and people.
• For the purposes of discussion, we define participatory networking as digital, cultural communications and artifacts that are networked in technology, and that are collaborative in principle.

—Why not simply adopt the terms social networking, Web 2.0 or Library 2.0?

1. “Social networking” has negative connotations
• Social network sites (MySpace, Facebook) have captured public attention and proven very popular. Some of that attention, however, has been very negative.

2. The “2.0” nomenclature is too vague
• Web 2.0: Ambiguity also dogs the Web 2.0 world. For some it is technology (blogs, AJAX, Web Services, etc.); for others it is simply a buzzword for the current crop of Internet sites that survived the burst of the .com bubble.

3. Web 2.0 implies more than just inclusion of users in systems.
• The term Library 2.0 is vague and is used by some as a goad to the library community.
• This term limits the discussion of user-inclusive Web services to the library world.
• When factoring in Dewey’s classification, Ranganathan, and the introduction of communications technology into the library, we would think the Library has to be on at least version 12 (beta of course) 🙂

—Libraries as Conversations

• “…treats information as a conversation” (Karen Schneider)
• “the reason for a book is to afford conversation across space and time” (Jeffrey R. Young, Chronicle of Higher Education, July 28, 2006)
• “Conversation is central to exchanging information.” (Klemm, 2002)

—Conversation Theory

Gordon Pask (1928-1996)
• A relativist, constructivist theory that draws on cybernetics (the flow of information). It derives from teaching and learning: learning is the quintessentially constructivist activity.
• Intelligence resides in interaction, in Conversation.
• Describes recursive interactions called “Conversations,” in which differences may be reduced until agreement, or “agreement over an understanding,” is reached.

Participatory Networking — the Study/White Paper

• Library as Facilitator of Conversation
• Participatory Networking
• Libraries as Participatory Conversations

• Review what’s going on in libraries, describe it in context, and present it to library administrators, decision and policy makers. Make them aware of the good work going on out there.
• We also want to present library decision makers with the opportunities and challenges of participatory networks as set in a context of conversation theory.
• Finally, we want to present a cohesive framework for libraries to not only fit tools such as blogs and wikis into their offerings (where appropriate), but also to show how a more participatory, conversational approach to libraries in general can help libraries better integrate current and future functions.

Library as Facilitator of Conversation:

• Conversations create knowledge.
• Some conversations span millennia; others only a few seconds. Some happen in real time, some do not.
• In some conversations, there is a broadcast of ideas from one author to multiple audiences.
• Some conversations start with a book, or a video, or a Web page.
• Users need sophisticated processes to facilitate the conversation. The library serves this vital role for many communities; the implication is that libraries are in the conversation business.
    • Bricks-and-mortar libraries: you can observe the conversations in library speaker series, book groups, and collection development processes.
    • Online: the library has fallen short of the ideal of conversation facilitator. There is great opportunity, however, for online libraries to provide invaluable conversational, participatory infrastructure to their communities.

Participatory Networking

Participatory networking, or at least its technological foundation, stems from developments in “Web 2.0.”
What pervades the Web 2.0 approach is the notion that Internet services are increasingly, no surprise, conversations.
A core concept of Web 2.0 is that people are the content of sites.
Flickr provides users with free Web space to upload images and create photo albums. Users can then share these photos with friends, or with the public at large. Flickr facilitates the creation of shared photo galleries around themes and places.
MySpace lets users create rich profiles, including blogs, to link up to new people with common interests.

It may not be possible to narrow down a definition of Library 2.0. And perhaps we don’t need to.
But we do want to make clear what the notion of participatory networks comprises: we use the phrase participatory networking to encompass the concept of using Web 2.0 principles and technologies to implement a conversational model within a community (a library, a peer group, the general public, etc.).

What if the user, finding no relevant information in the catalog, adds the information?
Possibly s/he adds information from their expertise (say, through a wiki).

Can your library functions be as easily incorporated into these types of conversations? Can a user search your catalog and present the results on his or her Web site? The point is that libraries need to be proactive in a new way. Instead of the mantra “Be where the user is,” we need “Be where the conversation is.”

Eventually, blogs, Wikis, RSS, and AJAX will all fade in the continuously dynamic Internet environment.
However, the concepts of participatory networks and conversations are durable.

All in all a long presentation, at the end of the day too. The Information Institute of Syracuse has a comments and participation page and an about page. Please feel free to explore their content and leave your comments.

Wikis: when are they the right answer?

Jason Griffey of the University of Tennessee at Chattanooga presented a brief, bright and breezy look at the basics of wikis and their use in libraries to an attentive group of about 60 attendees at the end of day two of the 2006 LITA National Forum.

The basic appeal of the wiki is that it is a modern-day example of “many hands make light work.”

Wikis are designed to allow contributors to add to and revise the information on the site, so that the shape and scope do not have to be predetermined. In fact, wikis are a good choice when the shape and scope cannot be predetermined, and they can grow organically as new facts are added. They are good for dealing with “fringe” items.

Potential problems with wikis stem from the lack of control. Duplication, lack of cross references, and eventual entropy can make mature wikis less useful.

Wikis vs. Librarians

Wikis are foreign to the systems librarians are used to. They lack a preexisting classification scheme, they are always being organized, and absolute control is not necessary; in fact, it is usually impossible. This lack of control can be a problem for librarians.

Other “Web 2.0” technologies (folksonomies, tagging) are similarly unstructured, and allow for “emergent order.”

Wiki options

There are three different types of wiki: server-based, hosted remotely, or installed locally (on a desktop, not available to the world).

Leading wikis

Mediawiki – free, open source, based on PHP/MySQL, and by far the most popular wiki; it has many extensions (e.g., maps, citations, mp3).

PBwiki – remotely hosted, fast, and easy to use. Potential downside: the hosted site could someday carry advertising.

Tiddlywiki – local on a pc, has no web server. It can be used to create searchable, free-form notes.

Great library wikis:

Ohio University Libraries Biz Wiki – like a research portal, it’s a controlled wiki: only the creator and faculty can edit, not students.

Library Success wiki – best practices for libraries

ALA Midwinter 2007 wiki

LITA wiki – very new

Lawrence Lessig’s collaborative book project online – soliciting help in revising a book he doesn’t have time to revise on his own.

When is a wiki appropriate?

When the scope is big, but not too big (i.e. when the project has some focus, or scope).

When distributed creation is needed.

When you aren’t completely sure where you’re going with the project (i.e. when you are asking a fuzzy question).

Wiki best practices

Trust your users

Monitor the information

Seed your needs (make sure to include some content to invite input: a blank page is intimidating)

The Spin on Thin: Thin Clients in Academic Libraries

On the final day of the 2006 LITA National Forum, Helene Gold, electronic services librarian at Eckerd College (St. Petersburg, Florida) described how thin clients are being integrated into the computing environment in the college’s new library. The 25 people who braved Forum-fatigue to attend were not disappointed by Helene’s engaging and accessible presentation.

When Eckerd College decided to build a new library, it also decided to house the campus ITS department in the new building. The ITS department in turn decided to use the opportunity to install thin clients in the new facility to showcase the technology. This had both positive and negative ramifications– while the ITS staff was committed to making the project work, “buy in” by the library staff came more slowly.

Thin clients

Thin clients are relatively simple and durable devices that have no storage or computing power of their own but which can be used to communicate with applications running on servers. While old PCs can be reused as thin clients (removing or deactivating the hard drives), Eckerd decided to use “Sun Ray” thin clients from Sun Microsystems in their new library.

The Eckerd ITS elected not to use a Citrix or Microsoft terminal server that could have presented students with the MS Windows user interface, but instead chose to use the Gnome user interface (most commonly used with Linux) that was supported by their Sun terminal servers. They made this choice both because of lower cost and the desire to simplify systems management requirements, as well as the desire to make the thin clients stand out from the MS Windows PCs used in the library.

The Survey

When she heard about the project, Helene conducted a SurveyMonkey survey to discover library experiences with thin clients, and found that although few respondents were using thin clients, most of those using them were satisfied.

Negative comments elicited by the survey included software incompatibility, lack of CD-ROM support, frequent freeze-ups, network bandwidth too limited to make thin clients feasible, and finally that students were unfamiliar with the devices and needed more support (as compared with PCs). Notably, all the respondents to the survey were using the MS Windows interface (via Citrix or Microsoft terminal servers).

Positive comments included the space-saving and compact nature of the devices, energy savings (thin clients use 1/6 of the electricity of PCs), centralized upgrades and management, longevity (they can last 7-10 years, compared to PC lifetimes of 3-5 years), security (students log in securely to the network, and server security is far better than that of Windows PCs), no virus or worm threats, and also that the devices have virtually no value to thieves – they only work in thin client environments. Finally, there are no games or student-initiated downloads, and no chat.

From the Trenches

In her last segment, Helene shared insights from their day to day experiences with the devices:

Printing has proven to be a problem. The system’s Sun printer server did not play well with the campus’ Novell iPrint system, and after much work, the ITS staff eventually removed it and went with CUPS (Common Unix Printing System) instead.

The devices do not have CD-ROM drives (or even support external ones), and students who need to use language CDs have to be directed to Windows PCs elsewhere in the building.

Although the Sun Rays can support USB storage devices, other types of USB devices usually do not work.

Students who needed to use MS Windows applications (such as MS Office) access them off the network from a Linux-based CrossOver server from CodeWeavers which runs them on a compatibility layer. Although CrossOver has been explicitly tweaked to run in Sun thin client installations, Office access is still the leading source of student complaints and some of the steps students must take while using it seemed cumbersome. Saving a document, for example, was a six-step process.

As noted, the library was faced with the double burden of supporting students on both a new hardware and software platform at the same time.

Software that is intensely computational is not a good fit for thin client computing, because all users are sharing the processor(s) on the server(s).

Thin clients are also very sensitive to network bandwidth problems, again because every task, every keystroke the users are doing is 100% dependent on the network.

There is a lack of control with thin clients. They are centrally controlled by the systems staff, and there is little tweaking possible from the user level.

On a positive note, Sun thin client environments offer Sun’s “SmartCard” technology. Students can be issued a card that they insert into a slot on the machine, log in over the network on any thin client, then pull out the card and take it to any other thin client and rejoin their session. A session can remain open all week, allowing students a lot of flexibility in their work arrangements. Drawbacks include the cost of the card ($4) and confusion from students because it is a separate card from the campus ID.

The Set Up

Eckerd used two Sun two-way servers with 4 GB of RAM each, plus a single-processor Dell server with 2 GB of RAM that runs CrossOver Server under Linux. They currently have 30 thin clients.


In conclusion, Helene noted that planning is crucial in making a project like this a success. Who will control and implement the project? What are the needs of the users? Software compatibility issues should be carefully examined. What cost savings are anticipated? What is the projected return on investment?

Finally, does the library or the ITS staff have people with Unix experience? And does the library have a good relationship with its ITS staff?

Both the Library staff and the Systems staff need to work together on a project like this, and the earlier they get together in the process, the better.

In the discussion that followed, the open source K12LTSP project was mentioned as a free way to experiment with thin client technology.

It’s About Time, It’s About Place: Designing Interoperable Modular Web Applications for Delivering Online Library Instruction

Debra A. Riley-Huff, Web Services Librarian, University of Mississippi

The biggest cost for web-delivered library instruction is staff time. The software is cheap or free. Being able to use modules in different contexts (interoperability) or re-use structural pieces with new content (modularity) saves time and, therefore, money.

One of the things to know at the outset is your server environment, operating system, and who has control of the server. She had some things to say about various server and database products:

  • Microsoft ASP.NET is proprietary but there is lots of information available about it. It has some security issues.
  • PHP5 has a large Open Source community but you need a programmer to do much with it.
  • ColdFusion is good at running smaller things, and instruction modules do tend to be small. It is quick to build things using freely available scripts.
  • Microsoft Access and MySQL. Microsoft Access is easy enough but is only appropriate for five or fewer people accessing it at one time. For more users, you will need MySQL.

She showed us several examples of web-based instruction. Some general advice:

  • Maximize the use of images, like screenshots.
  • Put no more than about 8 steps on a single page to minimize downloading time and because people tend to prefer shorter pages.

One example was static XHTML, but she had saved the structure in two different design formats (2-column and 3-column) with “Lorem ipsum” text as a placeholder. When someone else wanted a similar tutorial, she was able to pull it together quickly by just putting in new text and screenshots; all the design and coding work was re-usable.
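Her reuse trick boils down to keeping a fixed skeleton and swapping out the content. A minimal Python sketch of the same idea (the markup, class names, and field names are invented, not her actual XHTML):

```python
from string import Template

# Reusable two-column tutorial skeleton; the placeholders stand in for
# the "Lorem ipsum" text in the saved XHTML designs (class names invented).
PAGE = Template(
    "<html><head><title>$title</title></head>"
    '<body><div class="main">$steps</div>'
    '<div class="side">$screenshots</div></body></html>'
)

def build_tutorial(title, steps, screenshots):
    """Fill the shared skeleton with tutorial-specific content."""
    return PAGE.substitute(title=title, steps=steps, screenshots=screenshots)

page = build_tutorial(
    "Searching the OPAC",
    "<ol><li>Go to the catalog.</li><li>Type a keyword.</li></ol>",
    '<img src="opac.png" alt="catalog search box"/>',
)
```

A second tutorial needs only a new call with new text and screenshots; the design work is never redone.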

Another example used ColdFusion to build dynamic pages that served up instructions for using EndNote and RefWorks with different databases. The librarian-facing back-end interface allowed these modules to be easily built and changed using what looked like a WYSIWYG blog or wiki editing screen, a simple CMS. The underlying database was Microsoft Access, which worked because they never had more than a couple of students querying at the same time.

The third example used XML and XSLT to deliver repetitive content. The example was contact information for the librarian. If the phone number changed, the information could be changed in one place and then it would be fed out to the many places that information appears on the website using the same technology as RSS feeds.
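In other words, one XML source feeds every display context. A rough Python analogue of that single-source idea (element names and page templates are invented; her actual implementation used XSLT):

```python
import xml.etree.ElementTree as ET

# Single XML source for contact details (element names are invented).
CONTACT_XML = (
    "<contact><name>Subject Librarian</name>"
    "<phone>555-0100</phone></contact>"
)

def render_contact(template):
    """Fill any page fragment from the one XML source, so that changing
    a phone number in CONTACT_XML updates every page that displays it."""
    contact = ET.fromstring(CONTACT_XML)
    return template.format(
        name=contact.findtext("name"),
        phone=contact.findtext("phone"),
    )

# Two different spots on the site, both fed from the same source.
sidebar = render_contact("Contact {name} at {phone}.")
footer = render_contact("{name} | {phone}")
```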

The fourth example talked about storing all the instruction material, in whatever format, in a digital object repository. She listed a variety of options for doing this:

  • DSpace
  • Drupal/Plone
  • Blackboard (although not as interoperable as other options)
  • Ruby on Rails
  • Greenstone

She was particularly taken with Greenstone, which she says is underutilized because it was hard to use when it first came out. It has improved a lot in the last four years and can store all kinds of file formats, including video and audio. It’s only appropriate for small repository projects, but a library instruction repository would likely only have 250 to 500 objects in it. It supports full taxonomies.

The Internet and the Experience Effect: A Closer Look

Rachel Kirk, Middle Tennessee State University
Steve Bales, University of Tennessee

The Pew Internet and American Life Project published a report in 2001 called “Getting Serious Online” that drew the conclusion that as Internet users become more experienced, they engage in more serious pursuits online, moving from games to banking, for example. As a project for a Ph.D. class, Rachel and Steve decided to use a General Social Survey (GSS) data set, taken from a 2002 survey about “The Information Society,” to corroborate the Pew report. Much to their surprise, it did not.

The GSS survey asked people which web sites they had visited in the last thirty days and how many times. Rachel and Steve divided those into recreational and serious web sites and found no correlation between the number of years a person had been using the web and the types of sites they visited. They also checked whether the type of sites visited was a function of age, and this did not correlate either. Instead, they found that as people gained web experience, they visited more of all types of sites—a finding that more recent Pew reports seem to corroborate.

Steve noted the “netizen” effect. People who have three or more years experience tend to become citizens of the Internet, using it for most information needs, serious and recreational.

Rachel’s personal theory, although she doesn’t have the data to back it up, is that people “get into grooves and kind of follow them along until something knocks them into another one.” An implication of this observation is that we can’t necessarily assume that what we teach freshmen in a one-shot library use instruction class during an English literature course will be transferred by the student into other situations, like a history or psychology class the next semester. We may need to expose them to new grooves as they have a need for that groove.

Improving Library Services with AJAX and RSS

Hongbin Liu, Web Services Librarian at Yale University
Win Shih, Head of Systems and Databases at the University of Colorado at Denver and Health Sciences Center.

As spokesperson for this session, Hongbin covered the background of Web 2.0 technologies such as AJAX, tagging, blogs and RSS, and demonstrated how library websites can meet the needs of web users by providing customization and interactivity features.

About AJAX

  • Asynchronous JavaScript And XML
  • Originated in 1997, but was not popular at that time because browser support was limited
  • Not a technology in itself, but refers to the use of a group of technologies that together provide interactivity (XHTML, CSS, DOM, XMLHttpRequest)

Advantages:

  • Speedy browsing
  • Updates information without the need to refresh the browser

Disadvantages:

  • Complex, browser-specific, requires JavaScript to be enabled, debugging is required, and poses a security risk (source code is viewable)
  • The browser back button will not work
  • Difficult to track usage statistics

Examples of services that use AJAX are:

Why Personalize?
Web users who keep personalized web pages are more likely to return to a site often, checking their custom homepage content daily. The sites listed above support drag-and-drop interfaces, RSS feeds, integration with search engines, and the ability to add and remove content. Usage statistics show that users prefer these features to many of the currently implemented MyLibrary sites. MyLibrary is used by a number of institutions, but has proven less desirable because of its high maintenance burden and low usage.

Hongbin demonstrated examples of AJAX used for real-time data validation and auto completions in a number of different applications.

Examples included:

AJAX uses for the Library OPAC
OCLC LCSH Live Search – Instant results list of subject terms
Univ. of Huddersfield Library Tag Cloud – Tag Cloud of catalog subject terms
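Live-search services like these pair an XMLHttpRequest in the browser with a server endpoint that returns terms matching the characters typed so far. A minimal sketch of the server-side half of that pattern follows; the subject terms are hypothetical, not from LCSH or any real catalog.

```python
# Hypothetical subject headings, standing in for a real vocabulary.
SUBJECT_TERMS = [
    "Music -- History and criticism",
    "Music -- Instruction and study",
    "Museums -- Management",
    "Mythology, Greek",
]

def suggest(prefix, limit=10):
    """Return up to `limit` subject terms starting with `prefix`,
    case-insensitively. An AJAX front end would call an endpoint
    like this on every keystroke and render the result list."""
    p = prefix.lower()
    return [t for t in SUBJECT_TERMS if t.lower().startswith(p)][:limit]
```

The browser-side script would send each keystroke asynchronously and replace the suggestion list in place, which is exactly the no-page-refresh behavior AJAX is prized for.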

As blogs have been one of the most highly praised features of the Web 2.0 era, Content Management Systems (CMS) that incorporate blogging and other social/collaborative features are proving especially useful for library websites. A great example of this technology in action is the Ann Arbor District Library site. AADL uses Drupal. Plone is another CMS with similar features that other libraries are using.

More library blogs

In conclusion, the goal of library websites should be to engage users with interactivity and personalization, and integrate library services into the customizable services that web users maintain regularly such as MyYahoo and Google/IG.
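The integration path described here is to expose library content as RSS so users can pull it into their own start pages. A minimal sketch of building an RSS 2.0 channel with Python’s standard library follows; the feed title, URL, and item are hypothetical.

```python
import xml.etree.ElementTree as ET

def build_rss(title, link, items):
    """Build a minimal RSS 2.0 document that personalized start
    pages (e.g. MyYahoo or Google/IG) could subscribe to."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    ET.SubElement(channel, "description").text = title
    for it in items:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = it["title"]
        ET.SubElement(item, "link").text = it["link"]
    return ET.tostring(rss, encoding="unicode")

# Hypothetical feed for illustration.
feed = build_rss(
    "Library News",
    "http://example.edu/library",
    [{"title": "New databases added", "link": "http://example.edu/library/news/1"}],
)
```

Once a feed like this is published at a stable URL, users add it to their personalized page themselves, which is the approach Yale describes below for its Medical Library feeds.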


  • Q: Why make an effort to use MyLibrary since we’ve seen it fail? Why not go directly with the Google/IG format?
  • A: Yale will no longer promote its Google/IG style Medical Library page, but will work toward integration with Google/IG and make the Medical Library’s RSS feeds available for users to insert into their existing Google/IG pages.

  • Q: How many people pull the Yale Medical Library’s feed into their Google/IG page?
  • A: We haven’t collected statistics yet, but are working toward getting access to how people are using the feeds.

Web 2.0 – Becoming Library 2.0

Stephen Abram, VP of Innovation at SirsiDynix, closed out the 2006 LITA Forum on Sunday morning.

One of his first statistics was that libraries collectively ship and circulate more items than Amazon does every day. But we’re not like Amazon in a lot of other ways. We’ve decided we should be making decisions for our patrons, instead of letting them choose. Why don’t we have Amazon-style recommendation engines? We could at least give the user the choice of whether or not to keep their history private.

It all comes down to the user – we need to understand them better before we do something like charge in to fix the OPAC, convinced we know what is wrong with it.

Libraries do community better than Google – it’s our trump card, but only for now. We can’t cede that ground to the giants. Google Scholar has deals with about 200 database providers. What if they decide to parse your Gmail account and deliver scholarly articles targeted extremely specifically to you? In five years they’ll have 50 million books online to apply that methodology to as well.

Abram went on to talk about how students are going to use Google and the like heavily no matter what we do. We need to focus more on teaching them how to use the tools in Google well – advanced queries instead of just a couple words. We can also educate our high school and college age users about the reality of Google. How many students know just how extensively search engine optimizers manipulate the top Google hits on hot button issues?

Millennial users have higher IQs and more advanced brains, but ultimately believe their skills are more advanced than they actually are. They’re format agnostic, and don’t want to have to deal with separate databases for their searches. Abram says we need to be constantly looking at realities like this on a rolling five-year planning horizon – who’s on their way to us?

And our future users really are changing: video games support a wide variety of learning styles beyond the traditional, and meanwhile sites like Facebook are allowing students to create a sustainable social network for life, even if the definition of ‘friend’ on social network websites is different from what previous generations think of.

Abram showed an image of a giant Swiss army knife with dozens of tools – no matter how useful each tool may be, you can’t tell what each one is until you unfold it. This applies to the tools in libraries as well, so we need to be more transparent to make up for it.

What we need to do is create an experience. Be the fabric of the community, not appended to it. To do this, we need radical trust – that’s what can create Library 2.0. But there’s no one step by step route to that destination. We will necessarily go through a process of trial and error. But don’t be afraid to experiment! It’s going to be required.

Ultimately, delivering information isn’t our job – it’s improving the quality of the questions.

To make these changes in how we deliver service and relate to our users, we’re going to need more time in the day. Productivity tools exist now to help us toward that goal, if we’re willing to take advantage of them – RFID and self-service checkout, for example.

In conclusion, we need to rededicate ourselves to a focus on the end user – not just today, but for life, taking into account how their needs change over time. How do we become that librarian 2.0? We play! Keep up with new technology; don’t be afraid of it. Try new things and see what happens. You can do it on your own or, even better, institutionalize the change, like the Public Library of Charlotte & Mecklenburg County’s “Learning 2.0” initiative.

Low threshold strategies for libraries to support “other” types of digital publishing

Robert H. McDonald and Shane Nackerud summarized two different aspects of low threshold digital publishing. Robert covered Florida State University’s program of various institutional repository tools, and Shane outlined the University of Minnesota’s UThink blogging platform.

One of the big advantages of an institutional repository program to FSU was that it gave them something to highlight during the recent SACS accreditation process. Their philosophy for the project revolves around the idea of “Barrier free access”.

The structure involves three main tiers:
-The actual institutional repository (run on bepress’ EdiKit and PKP’s Open Journal Systems)
-Outreach / communication (blogs, web sites, wikis, etc.)
-Finding aids for the stored materials

EdiKit is a hosted service and is easy to implement. Ex Libris’ DigiTool service is going to be the main site for submitting documents to the repository.

The open source content management system Drupal is used to manage the repository’s web site. A big plus is that it automates the creation of a wide variety of RSS feeds. Robert considers our users’ RSS readers to be valuable real estate, and any effort we can take to reach them there is worthwhile.

Implementing MediaWiki took more training than the organizers expected – fundamentally, people just aren’t familiar with the edit-it-yourself model.

Future plans: get more faculty using the system, promote the basic ideas of open access journals, and work more on integrating everything together.

Robert’s presentation is online here:

Shane gave us a tour of the history and current implementation of the University of Minnesota’s UThink blogging system. Essentially, the university provides free blog hosting to all students and faculty. Over 3,500 blogs are currently hosted, more than 1,000 of which are still being actively posted to, for a total of more than 45,000 posts. The system was built on a relatively low-end server with 120 GB of hard drive space. Even with this limitation, UThink has still been able to let students upload files such as MP3s for the purpose of running a podcast. The blogs themselves run on the Movable Type platform. Quite a bit of behind-the-scenes tweaking was necessary to tie the blogs into students’ existing campus network accounts, but it all works seamlessly now.

One main goal in the creation of UThink was to retain the content students are blogging elsewhere as a sort of cultural memory of the university. Additionally the system promotes intellectual freedom, changes attitudes about the library, and helps form communities of interest. The most popular blog the system ever hosted, for example, was all about the sports teams on campus.

Interestingly, only 60% of the blogs are run by undergrad students. Shane theorizes that a lot of undergrad students have existing blogs elsewhere already set up when they come to campus, and don’t feel a particular need to change systems. Anecdotally supporting this, grad students tend to have most of the personal blogs (as opposed to class blogs, for example). Once students graduate, they retain access to their blog as long as they log in at least once every six months.
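The graduate-retention rule amounts to a simple last-login check. A sketch of that logic follows, assuming a 183-day cutoff as the interpretation of “six months” (the article does not specify the exact window).

```python
from datetime import date, timedelta

# Assumed interpretation of "six months"; the actual UThink
# policy window is not specified in the source.
SIX_MONTHS = timedelta(days=183)

def retains_access(last_login: date, today: date) -> bool:
    """A graduate keeps access to their blog as long as the most
    recent login falls within the six-month window."""
    return today - last_login <= SIX_MONTHS
```

A nightly job comparing each account’s last login against today’s date would be enough to flag blogs for deactivation.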

Two main types of academic use have emerged. Either a professor uses a blog to start discussion, or a professor requires students to maintain their own blog on class related matters.

Unexpected uses have also shown up. For example, other official campus sites outside the library have used the blogs’ RSS feeds to populate their own content.

One of the biggest hurdles in maintaining the system is comment spam. UThink recently added a CAPTCHA system (users must type in letters from an image) to combat it. Also, some students don’t use the service because it is not anonymous.
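The CAPTCHA countermeasure described rests on generating a random challenge, rendering it as a distorted image, and comparing the typed answer. A minimal sketch of the generate-and-verify logic follows (image rendering omitted; this is an illustration, not UThink’s actual implementation).

```python
import random
import string

def make_challenge(length=5, rng=None):
    """Generate the random letter string that would be rendered
    as a distorted image and shown to the commenter."""
    rng = rng or random.Random()
    return "".join(rng.choice(string.ascii_uppercase) for _ in range(length))

def verify(challenge, response):
    """Accept the comment only if the typed response matches the
    challenge, ignoring case and surrounding whitespace."""
    return challenge.strip().upper() == response.strip().upper()
```

The challenge string is stored server-side (e.g. in the session) so the comparison never trusts anything the client sends back besides the typed answer.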

Pluses of running the UThink program:
-An opportunity to defend intellectual freedom (as when a local business threatened to sue if a disparaging post wasn’t taken down – Shane stood his ground and they went away)
-An opportunity for education in the area of RSS, podcasting, design, etc.
-A massive cultural memory repository has emerged – imagine if something like this had been running around the time of September 11th, for example.

Lessons learned from the program:
-Serve those who want to be served
-Work within the current academic processes
-Using UThink to enhance existing library services has been more difficult than expected, but it has opened doors for discussion.
-A committee is needed – this is time-consuming! Shane did most of the work himself, but would do it differently a second time.
-In the end, intellectual freedom and cultural memory are the big winners.