All posts by Genny

5-minute madness

This was a good idea and we should repeat it in future Forum years … maybe even at ALA. The program proposal submission and acceptance timeline is so long that this is the only way to get some of the late-breaking news from people who are doing interesting projects. Next time, though, let’s do this in a room with Internet access!

Meebo widget at University of Utah

They added a Meebo widget to their user instruction web site redesign. Why Meebo? It integrates with blog software like WordPress, and it’s a standalone widget that embeds in your web page, which means it doesn’t require any specific chat client to be installed on the user’s computer.

“We can dismiss these as fads, or we can incorporate them into our site. We deal with incoming freshmen, and this is what they’re using.” That’s why they decided to incorporate them.
The widget is embedded on ALL pages, not just the Ask a Librarian page (citing Wells, 2003, p. 136: “Chat request button needs to be on all frequently-used web pages”).

RSS Feed – New acquisitions at BYU

Ranny Lacanienta, ranny@byu.edu
4,500 new titles a month, so he split the feed into subject-specific feeds
Also offers “Custom Feed”

Oops, I forgot to ask if I could have his code :)
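
Since I didn’t get the actual code, here’s a rough Python sketch of the general idea: take a month’s worth of new-title records and split them into per-subject feeds. The record fields and file layout here are my own invention, not BYU’s.

```python
# Hypothetical sketch of a subject-split new-titles feed generator.
# Record fields ("title", "subject", "opac_url") are invented for illustration.
from collections import defaultdict
from xml.sax.saxutils import escape

def build_feed(subject, items):
    entries = "\n".join(
        f"    <item><title>{escape(i['title'])}</title>"
        f"<link>{escape(i['opac_url'])}</link></item>"
        for i in items
    )
    return ('<?xml version="1.0"?>\n<rss version="2.0">\n  <channel>\n'
            f'    <title>New titles: {escape(subject)}</title>\n'
            f'{entries}\n  </channel>\n</rss>\n')

def write_subject_feeds(new_titles):
    by_subject = defaultdict(list)
    for record in new_titles:            # e.g. ~4,500 records a month
        by_subject[record["subject"]].append(record)
    for subject, items in by_subject.items():
        filename = "feed_" + subject.lower().replace(" ", "_") + ".xml"
        with open(filename, "w") as f:
            f.write(build_feed(subject, items))
```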

LOCKSS for local materials

Ranti Junus – their issue is exactly the one raised in the last Q&A of the second keynote: the library used to receive campus publications in print, but now it no longer receives them at all, since departments view “publishing” on the web in PDF as the end of their distribution process. So the library is seeking ways to grab the PDF content and auto-archive it.

The MetaArchive Project was their inspiration.
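
For flavor, a minimal Python sketch of the harvesting half of the problem, assuming the departments’ pages are plain HTML with direct PDF links. The URL is a placeholder, and real LOCKSS does far more: scheduled re-crawls, integrity checking, and replication across caches.

```python
# Hypothetical sketch: crawl a department page and archive its PDFs locally.
# The URL and storage layout are invented; LOCKSS adds scheduled re-crawls,
# checksums, and replication across cooperating caches.
import os
import re
import urllib.request
from urllib.parse import urljoin

def archive_pdfs(page_url, archive_dir="archive"):
    os.makedirs(archive_dir, exist_ok=True)
    html = urllib.request.urlopen(page_url).read().decode("utf-8", "replace")
    for href in re.findall(r'href="([^"]+\.pdf)"', html, re.IGNORECASE):
        pdf_url = urljoin(page_url, href)
        filename = os.path.join(archive_dir, os.path.basename(pdf_url))
        if not os.path.exists(filename):   # only fetch what we don't have yet
            urllib.request.urlretrieve(pdf_url, filename)

archive_pdfs("https://www.example.edu/dept/publications.html")
```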

The Future is Not Out of Reach

David Lee King, Digital Branch & Services Manager at Topeka & Shawnee County Public Library, gave this second keynote, subtitled Change, Library 2.0, and Emerging Trends. He started off with “If you hate being on Flickr, duck” — the second time someone has said this at a session. Is photographing the audience the new fad?

Change
“We are the lucky ones,” meaning the computer geeks. Yet much of the audience raised their hands when asked if they felt pulled in multiple directions by working in IT.

Transformations

  • Shared comments as conversation is “new” (at least in terms of public conversation with patrons and community) — e.g., AADL director’s blog and comments
  • Friending on the web – Flickr/MySpace/Facebook, IM buddy lists – you now have friends for life (do you? is this really web-dependent? As the technology changes and Facebook gets bought out by Yahoo and requires a Yahoo login, will your old high school friends really still stay on the same channel with you?)
  • Content changes e.g. patron-generated content such as Denver Library YouTube contest, RSS feeds for snippets of only certain kinds of content (“PaperCuts” blog at Topeka; sports blog at Kansas City) — this need not be viewed as transferring control of the information to patrons, but adding “user-generated” (staff and customer) content to the authority-controlled content — involving the user
  • Tagging: del.icio.us at Lansing Public Library for reference links; AADL customer-generated tags in OPAC; Casey Bisson at Lamson Library WPOPAC supports tags
  • The web as platform: instead of going to the library to use library content, patrons are going to the library to use the Internet to use non-library content — “we are now a launch pad to a destination rather than a destination”
  • Mashups: combining content used to be limited to, e.g., writing a research paper; now it means combining applications and content from multiple web sources


The Scientific and Social Challenges of Global Warming

Jeffrey Kiehl is a senior scientist in the Climate Change Research section at the National Center for Atmospheric Research in Boulder, Colorado. The Forum committee tries to include a speaker from a local organization at Forum; this year that speaker happens to be talking about a topic that’s been much in the news lately.

History of climate change science
Joseph Fourier (the same mathematician who gave us the Fourier transform) asked: what determines the temperature of the Earth? In papers published in the 1820s, he hypothesized that the atmosphere must be blocking the escape of some of the reflected heat from the Sun; otherwise, calculations put the Earth’s surface temperature near freezing. In the 1860s, John Tyndall built on Fourier’s work with experiments to determine which gases absorb heat rather than letting it escape the atmosphere. One of these gases was carbon dioxide. In the 1890s, Svante Arrhenius calculated that the Earth would warm by 4 degrees Centigrade with a doubling of atmospheric carbon dioxide, given the amounts then being released by industry.

Not until the 1950s did Dave Keeling at La Jolla put together a lab to measure carbon dioxide in the atmosphere. That regular measurement has continued to this day, and carbon dioxide keeps increasing over time. There’s a regular up-and-down wiggle in the graph of carbon dioxide levels, which Jeffrey calls “the breath” of the Earth: during the summer, carbon dioxide levels drop because plants absorb carbon dioxide; during the winter, when deciduous plants drop their leaves, levels rise.


Forum 06 poster sessions

Sadly, I only had an hour between meetings, so I didn’t get to every poster session, but here at last are the notes I do have. A PDF of the session descriptions is available on the LITA web site. There was a good range of topics and library types represented.

Instructional Media and Library Online Tutorials

Li Zhang – Mississippi State University

  • Online tutorials require far more than just duplicating print materials to the web. They currently have a large project to develop tutorials for both distance students and on-campus students. They’re trying to develop a single set of online tutorials that works for all of their audiences.
  • Too many bells and whistles distract rather than inform. Their web committee found that including audio or video for too many pieces of a tutorial makes it unusable for people using older computers or dialup Internet access.

Integrating Library Services: An application proposal to enable federation of information and user services

Erik Mitchell – Wake Forest University
Article to be published in Internet Reference Services Quarterly, February 2007

  • A “point-of-need approach is contrasted with the point-of-service approach utilized in traditional library systems.” Instead of creating a federated search, Erik is working on a sort of federated service. It will combine multiple data sources and also provide useful user services like item renewal, without the user having to step out of the interface into a separate OPAC/ILS interface. To this end, he is using an OpenURL link resolver plus web services.
  • RSS, relevance ranking, renewals: he’d like to reindex using, e.g., Google, for relevance ranking. He’d also like people to be able to subscribe to, say, an RSS feed of what they have checked out, and be able to renew with one click (see the sketch after this list). Reindexing is the easy part; adding the circulation data is harder; and enabling the system to update live circulation data is the really hard part. He wanted to use NCIP for this, but it wasn’t yet supported by the ILS vendor (he may have to resort to a little screen scraping).
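
To make that concrete, here’s a minimal Python sketch of such a checkout feed with a renew link per item. The field names, identifiers, and /renew endpoint are all invented, and the hard part (getting live circulation data out of the ILS via NCIP or scraping) is assumed away.

```python
# Hypothetical sketch: an RSS feed of a patron's checkouts, one renew link
# per item. Field names and the /renew endpoint are invented; pulling live
# circulation data out of the ILS (NCIP or scraping) is the hard part.
from xml.sax.saxutils import escape

def checkout_feed(patron_id, checkouts):
    items = "\n".join(
        "    <item>\n"
        f"      <title>{escape(c['title'])} (due {c['due']})</title>\n"
        f"      <link>https://library.example.edu/renew"
        f"?patron={patron_id}&amp;item={c['item_id']}</link>\n"
        "    </item>"
        for c in checkouts
    )
    return ('<?xml version="1.0"?>\n<rss version="2.0">\n  <channel>\n'
            f'    <title>Items checked out ({patron_id})</title>\n'
            f'{items}\n  </channel>\n</rss>\n')

print(checkout_feed("p123", [
    {"title": "Ajax in Action", "due": "2006-11-14", "item_id": "i456"},
]))
```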

Information on the Go: A journey of incorporating portable media players into library technology

Amy Landon, Larisa Hart – Ozarks Technical Community College

  • “Can’t take the lab home? Now you can.” They started putting course reserve materials onto iPods for students to check out. Their library has an interesting variety of materials for different course lab work — like a collection of rocks for one class. They decided to start adding pictures of these rocks, etc., to the iPods. This way students can study the rock pictures at their leisure. Since each iPod holds way more data than they have content available, they are putting everything on there including “How to study” DVD content.
  • One iPod per thousand students: they have about 10,000 students total; they currently have 6 video iPods and 4 iPod Nanos that circulate for a couple of days at a time. They’re ordering a few more iPods, but so far the number available is keeping up with demand.

Scanning the Past: Central Florida Memory

Lee Dotson, Selma K. Jaskowski, Joel Lavoie, Doug Dunlop – University of Central Florida

  • A “virtual place where visitors can discover what Central Florida was like before theme parks and the space program.” Central Florida Memory is a collaborative project of the University, the county library system, the regional oral history center, and other partners. The project started under an IMLS grant and they’re seeking new funding sources to sustain it.
  • Digitization Spec Kit details their software, equipment, and procedures. See more about their project and browse their current digital collections at www.cfmemory.org.

Using Web Services to Advertise New Library Holdings: RSS library feeds in the campus CMS

Edward Corrado, Heather L. Moulaison – College of New Jersey

  • Design decisions included:
    • What is a “new” item?
    • How to group feeds?
    • What data to display in feeds?
  • About the only feed people actually add to their aggregators is the list of New DVDs :) The feed is created by a Perl script. One click from the feed takes the user to the OPAC. Feeds get incorporated by faculty into the course management system (the student doesn’t have to know that a list of new titles there is actually an RSS feed).
  • See the October 2006 CIL and the College’s library web site for more info.

Unbundling the ILS @ NCSU

Vendor Endeca is at Forum this year in case you’re thinking about doing the same thing to your OPAC that Emily Lynema and Andrew Pace described in this presentation.

Andrew Pace, head of IT at the North Carolina State University libraries, explained that Endeca enabled them to implement faceted search on their catalog.

The context:

Roy Tennant’s statement that the usual current OPAC “should be removed from public view.” NCSU decided to look into some of the “next generation” library search tools they might use to make their library search better. A list he showed included Aquabrowser, WorldCat.org, Georgia PINES, Koha, etc. He demoed a few: Clusty creates faceted-type display on the fly; dice.com uses Endeca for faceted search; Amazon.com has a faceted display; EBSCO databases have Grokker interfaces available.

Existing catalogs are hard to use. They grew out of back-office processing systems, and the vast majority of libraries are living with the OPAC bundled with their ILS. At NCSU, for example, the search logs showed a lot of broad topical searches that did not retrieve useful results (too many, too irrelevant); even known-item search didn’t support features such as spell correction or relevance ranking. The display of search results also had problems: users could not browse or link from valuable metadata like subject headings, at least not in a reasonable number of steps. Nor could they adequately filter on aspects of the item record like proximity and availability (is it in my branch? checked in?).

What is Endeca? A search and information access technology provider for Circuit City, CompUSA, Lowe’s, and many other e-commerce sites.

Why Endeca for NCSU? Better subject access through facets, better response time, better natural language searching, true browse without any query at all.

Emily Lynema, systems librarian, demonstrated Endeca at NCSU, using general terms like “Java” and “Civil war” and clicking on the facet list at left to filter a search for more specific results. Also, she demonstrated browsing without any search term, and removing applied filters when a search was narrowed down too much.

They created their own customized relevance-ranking algorithm in Endeca. It prefers a heading containing exactly the query as entered, then a phrase match in the record, with a phrase in the title preferred over a phrase somewhere in the contents. They had to refine the algorithm after the initial release, based on user responses.
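
As a toy illustration of that preference ordering (the real thing is Endeca relevance-rule configuration, not Python):

```python
# Toy illustration of the preference ordering described in the talk; NCSU's
# actual ranking was configured in Endeca, not implemented like this.
def score(record, query):
    q = query.lower()
    if any(h.lower() == q for h in record["headings"]):
        return 3        # a heading matches exactly the query as entered
    if q in record["title"].lower():
        return 2        # phrase match in the title
    if q in record["contents"].lower():
        return 1        # phrase match somewhere in the contents
    return 0

def rank(records, query):
    return sorted(records, key=lambda r: score(r, query), reverse=True)
```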

The display includes a facet list at left, removable filters and category browsing at top, search results at right. I find the display cluttered, but I forget what their old display used to look like!

True browse is now available by any of the facets they have set up. The public interface currently only shows browse by LC classification, but they have the option to set up other ways to browse.

Automatic spelling correction means that if a user enters “dictionary of organic compunds” and there are fewer than 5 results, the catalog will automatically also retrieve results for “dictionary of organic compounds.” The system also offers “Did you mean?” suggestions and can do automatic stemming.
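
The fallback logic as I understood it, sketched in Python; the threshold comes from the talk, but search() and correct() are stand-ins for the real engine calls.

```python
# Sketch of the described fallback: if the query as typed returns fewer than
# 5 hits, also run the spell-corrected query. search() and correct() are
# placeholders for the real engine calls.
def search_with_correction(query, search, correct, threshold=5):
    results = search(query)
    if len(results) < threshold:
        corrected = correct(query)   # "organic compunds" -> "organic compounds"
        if corrected != query:
            results += [r for r in search(corrected) if r not in results]
    return results
```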

The SirsiDynix Unicorn ILS and Web2 online catalog are still used: Endeca handles the keyword search, while Web2 handles the authority search and the detail page display. They export MARC records nightly from the ILS into a format Endeca can use, and the Endeca system indexes the data into its internal engine. NCSU retains complete control of the libraries’ web site; their web application interacts with the Endeca engine through an API.
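
Roughly the shape of that nightly export step, assuming the Python pymarc library and a tab-delimited load file; the actual Endeca load format wasn’t shown.

```python
# Sketch of a nightly MARC-to-flat-file export, assuming the pymarc library;
# the tab-delimited layout is invented, since the real Endeca load format
# wasn't shown in the presentation.
from pymarc import MARCReader

def first_value(record, tag):
    fields = record.get_fields(tag)
    return fields[0].value() if fields else ""

with open("nightly_export.mrc", "rb") as marc, \
        open("endeca_load.txt", "w") as out:
    for record in MARCReader(marc):
        out.write("\t".join([
            first_value(record, "001"),                        # control number
            first_value(record, "245"),                        # title statement
            "; ".join(f.value() for f in record.get_fields("650")),  # subjects
        ]) + "\n")
```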

Staff resources: implementation team of seven

  • 5 IT staff, 1 cataloging librarian, 1 reference librarian
  • Functional requirements: 40-60 hours total
  • Java-trained IT librarian: full time about 14 weeks
  • IT project manager: about 25% time for 20 weeks

Total timeline: about six months

Major decision points:

  • What facets should be used (Endeca will help walk you through this)
  • Designing the user interface: eliminate information overload as much as possible while providing enough information to enable navigation
  • Toss the old OPAC or integrate it into the interface? They integrated (“Search begins with” box searches the old Web2 authority records)
  • What type of relevance ranking algorithm, for author vs. topical vs. title searches

Working with a non-library vendor can have a lot of special challenges:

  • Data formats that are library-specific
  • Data consistency between ILS and Endeca (due to one-way export from ILS)
  • Data issues especially with older cataloging practices still persisting in catalog records

Usage statistics

Request types:

  • Search 68%
  • Search + navigation (facet refinements after a search) 21%
  • Navigation (true browse) 11%

They have a number of other statistics such as navigation types selected (mostly topical — either subject topic, subject genre, or LC classification).

Usability testing

A limited test group of 5 users on the new catalog and 5 on the old catalog. The results showed that, in general, completing tasks in the new catalog was rated easier and took less time than in the old catalog.

For students, relevance ranking is key; only 13% continue past the first page of search results. Faceted browsing is intuitive. Library jargon continued to confuse students (e.g., “keyword anywhere”). Users experienced with the old system were suspicious of features in the new one (they expected a simple search box to retrieve completely unusable results).

In general, they have found that the new system does retrieve more relevant results.

Andrew returned to talk about future directions, including:

  • Experiment with FRBR
  • Integrate the catalog with other search tools (like their website search) through web services
  • Enrich the catalog with external web services
  • Use Endeca to index local collections

The problem with data silos continues: vendor databases, serials lists, OPAC, etc. What needs to happen in the future is true interoperability among data stores. Our metadata needs to be more visible to other “storefronts” where users go for information. Their Endeca implementation not only creates a better search interface for the OPAC itself, but creates a more interoperable data platform for integrating their OPAC data into other services.

For more info: the NCSU Endeca project site will contain the slides from this presentation.

Improving Library Services with Ajax and RSS

The room is full for this session by Hongbin Liu from Yale and Win Shih from University of Colorado — despite the number of other really interesting-sounding sessions in this time slot!

Hongbin had done a web site redesign project for both the public and internal sites at his previous job in New Orleans. The analysis included usability and information organization, and turned up problems including terminology (undergraduates not understanding the term “catalog”).

In the past, web sites were either static or database-driven. Now we have “Web 2.0,” which people define differently, but it includes high-interaction/collaboration sites like Flickr, as well as advanced user interfaces that make a site highly responsive.

Look at a value co-creation matrix:

  • Britannica Online – low personalization, low collaboration
  • Personal web sites – high personalization, low collaboration
  • Wikipedia – low personalization, high collaboration
  • MySpace, Flickr – high personalization, high collaboration

Blogging is used in libraries for things like

  • What’s new – promoting library events
  • New books
  • Course-specific resources

The advantage is that not only do you get the updates on the web site, but the user can also subscribe to the RSS feed and doesn’t have to go to the web site to get the updates. Blogs can also allow the users to comment on or contribute to the web site content.

Blogs + Content Management System: enables going beyond blog functionality.
At Hongbin’s library, access to the library web site has gradually dropped. Users don’t come to the library web site unless they have to. This means moving closer to the users’ world: Google, MySpace, My Yahoo, MSN, etc.

More than three dozen libraries have implemented “My Library” services similar to My Yahoo. Hongbin asked the audience members involved with “My Library” services how their services have worked out. The response was that people have not actually used these services, and some have been discontinued. This matched up with Hongbin’s next slide, which said an irrelevant My Library is a burden and these services often have low adoption rates.

Google/IG allows you to personalize your home page. Hongbin asked the audience again about their use of Google/IG, and the response was very positive. Start.com is a similar service from Microsoft. Why do companies set these up? If you keep users with your personalized home page, they will come back again. Can the Google/IG approach be used to improve My Library?

A look at what goes into Google/IG: An RSS aggregator that collects RSS updates. Ajax technologies to provide interactivity. (Ajax is not a technology but a set of technologies that work together: HTML, CSS, the Document Object Model, JavaScript, and the XMLHttpRequest object to exchange data with a server, using XML format.) Some uses of Ajax include: real-time form data validation and auto-complete; sophisticated user interface controls; and refreshing data on the page in real time.
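
As a tiny illustration of the auto-complete case, here’s a hypothetical server endpoint in Python. The page’s JavaScript would point an XMLHttpRequest at /suggest?q=… and redraw the suggestion list from the response without reloading the page; the titles, port, and JSON response format are placeholders.

```python
# Hypothetical server half of an Ajax auto-complete: the browser's
# XMLHttpRequest hits /suggest?q=... and repaints a list from the JSON
# response. Titles and port are placeholders.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

TITLES = ["Organic chemistry", "Organic compounds", "Organizational behavior"]

class SuggestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query).get("q", [""])[0].lower()
        matches = [t for t in TITLES if t.lower().startswith(query)]
        body = json.dumps(matches).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("localhost", 8000), SuggestHandler).serve_forever()
```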

Ajax pros: more interactive, seems speedier to the user, a rich web browsing experience

Ajax cons: requires JavaScript to be turned on, may require browser-specific JavaScript, some security concerns, and standard browser controls like bookmarking and the Back button don’t work in most implementations

Ajax uses in the library:

  • Ajax-enabled Google/IG-style Yale Medical Library site
  • Ajax-enabled OPAC from OCLC Research

“Getting the user involved” means, for Hongbin, not just usability studies or surveys, but creating interfaces that themselves directly involve the user.

Q & A:

Q: Why create your own Google/IG interface? Why not move toward making all your information accessible through people’s existing Google/IG or Start.com, etc?

A: We are moving in exactly that direction, discontinuing development of our own Ajax-enabled site, and making as much library information as possible available via RSS feed.

Libraries and Public Interest Entertainment

Thom Gillespie directs the Mime program at Indiana University. He told the story of how it started: When he was teaching in the school of library and information studies, he was interested in games and media: “I wasn’t sure where I was going, but I was pretty sure it wasn’t where the school was going.” His interest was in visualizing information, sort of a visual MELVYL. Initially he had students from fine arts, information studies, and instructional design. His classes morphed into games classes, which got him thrown out of the information studies department. At that point the telecommunications department was interested in his ideas about fun in the user interface, which was the start of the Mime program. Graduates have gone on to work for Lucas, Microsoft, etc.

Where did he get the Public Interest Entertainment idea? There is actually a Public Interest Entertainment Corporation, piecorp.org, which he thought was a great concept.

Why do people get their information from the Daily Show? It’s a myth that only young people watch it. It takes a comic to step beyond the boundaries the “real” news shows stay within, like asking Pervez Musharraf “Where’s Bin Laden?” In the Mime program, students concentrate on areas from political communication to games; classes cover a range from 3-D modeling to “citizen media” — using blogs, wikis, and other media to create public information on local issues.

Network scavenger hunts, like the original Internet Hunt created by Rick Gates years ago, have turned into alternate reality games like The Beast and Majestic. (See Alternate Reality Gaming Network). Instead of keeping computer games within the realm of the computer screen, people are “playing games into existence” as with Pacmanhattan. Now there are “boring gaming” genres like Food Force from the UN which have a social change/educational component.

Thom was struck by the efforts LITA people make to create tools for searching and learning — yet says none of these efforts are the ones that work. What is in Google? We are in Google. Why does Google buy Picasa, Sketchup, etc.? Google is becoming a game. If you create your own content, your information system is dead. If you work like Google, you create the information system and you let the content come from the people.

It’s all going to come down to citizen media. The public library, especially, is the place where the people can come in and not just browse the web, but come in and create media. Check Orange Blender for open source versions of a lot of media creation technology. Find ways to bring in the community and make them partners in creating content.

Many Users, One Computer

Eric Delozier of Penn State presented Many Users, One Computer, and Access to Web Services: Information Technology Risk Management in Libraries. I arrived a bit late, so I’m starting where I came in:

Liability issues: without adequate protection, patrons’ personal files and information might be lost or stolen; systems can be damaged.

Causes for loss:

  • Hardware failure such as CPU or disk drives
  • Environmental causes such as fire
  • Software causes, either malware or software flaws
  • Losses caused by user behavior: can be intentional or unintentional, by patrons or staff

After identifying risks, identify the potential consequences and the likelihood or frequency of occurrence. See Jacobson, Robert V., “Risk Assessment and Risk Management” in Computer Security Handbook, 2002, Wiley & Sons.

Risk mitigation: try to prevent losses, but also plan for recovery.

Hardware prevention & control measures include locks and alarms; software measures used at Eric’s institution include disk wiping (DBAN), backup and recovery (Ghost), integrity/restoration (Deep Freeze), malware detection (Symantec Antivirus), software updates (Microsoft Update), authentication and authorization (Kerberos, borrower database), rights and permissions management (Active Directory), and printing controls (Uniprint). Some software and policies are mandated universitywide; others are specific to the library. Administrative controls include policies, such as codes of conduct, and end-user agreements.

A new concept for me was the idea that risk management might include transferring responsibility to someone else. An example Eric gave was having the campus computing department take over responsibility for library computer issues after hours.

Risk management plan: have an overall policy and goals; assess risks; decide on objectives and the actions to meet each objective (such as objective “recover files and folders,” action “obtain and install backup and recovery software”).

Evaluate the results of your risk management process. In addition to quantitative measures like cost and frequency of incidents, also use feedback, suggestions, comments from patrons and staff.

Eric closed by urging everyone to consider getting some disk-wiping software.

Q & A

A lively discussion ensued about keeping logs of patron activity. Eric’s institution has a universitywide requirement that each student’s login to a campus computer be recorded, so the library doesn’t have the option of wiping logs. Public librarians in the audience want logs wiped as often as possible to protect patron privacy, but also find they do get subpoenas and have to turn over computers to the police. One librarian mentioned she specifically has to budget for extra computers so they will be available if the police take some away.

Evolutions in Subject Searching

Slides are available here for Evolutions in Subject Searching: the Use of Topic Maps in Libraries with Steve Newcomb, co-founder of topicmaps.org and a co-author of a topic maps standard, and Patrick Durusau, on the board of TEI as well as involved in other markup standards organizations (didn’t catch them all).

I had assumed this session would be about things like AquaBrowser, but in fact, it was about an approach to representing knowledge in an expandable, shareable data structure.

Apparently topic maps and XML-based topic maps are big in a whole other context outside the library world, wherever people are managing industrial quantities of knowledge or documents. Basically, this talk addressed the structural underpinnings of one way to do a semantic map of a domain, and be able to construct a crosswalk to another domain using the same map. Sounds very labor-intensive.

One thing I learned is that in the world of topic maps, everything is a subject. This is not Anglo-American Cataloging Rules, here. Any data element of any kind can be a “subject” in a topic map.

Did you know there is a LITA Topic Maps IG? If you’re at LITA Forum, catch Suellen Stringer-Hye to ask about it. They will also be meeting at Midwinter.

Steve Newcomb

This speaker went through a number of slides that seemed to jump around the topic, as I adjusted to the fact that I’d walked in the room knowing nothing about this kind of topic map:

  • The most basic thing about topic maps: one subject for each location
  • The subject doesn’t need to be identified in any specific way, but must be identified
  • Using topic mapping, you can create bridges or “wormholes” between heterogeneous information sources and representations, or universes of discourse
  • A subject is defined by key-value pairs
    • XTM: XML Topic Maps
  • Related organizations, companies, and conferences:
    • www.ieml.org
    • www.versavant.org
    • Atlas Elektronik
    • www.cit.de – topic maps for municipal information for the city of Stuttgart
    • US Department of Energy uses topic maps for asset management, weapons secrets classification
    • Dutch Tax and Customs uses topic maps to identify duplicate sources of publications and select the free or lower-cost version, rather than paying vendors twice for the same information
    • Extreme Markup Languages annual conference – primary industrial conference
    • TMRA Topic Maps Research and Applications – primary academic conference

Patrick Durusau

Different people define, or identify, subjects in different ways. How to capture different identifications? How to reuse mapping of identifications?

Where this problem may become critical is, for example, when medical terminology changes. Existing studies may become essentially unfindable, or hard to find without chasing down a trail of see-also references. In one case, a teaching hospital’s own study of a medical condition was too old to be found under current medical terms, and the patient died.

Subject identification by term or by unique identifier works within slow-changing areas and within a single area where everyone uses the same vocabulary or identifier set — otherwise, it breaks down.

Topic maps use a representative (proxy) for a subject that can contain multiple identifications for the same subject. (Example: Mark Twain and all his many noms de plume.)

All keys in a topic map are references to proxies. The topic map contains a legend. The topic map is self-documenting and therefore can be shared.
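
A toy Python rendering of the proxy idea; the keys, values, and merge rule are invented, and real topic-map merging is considerably more careful than this.

```python
# Toy sketch: a proxy is a set of key-value pairs, each pair one way of
# identifying the same subject. Keys and the merge rule are invented,
# not taken from any topic-maps standard.
twain = {
    "name": {"Mark Twain", "Samuel Langhorne Clemens"},
    "authority-record": {"http://authorities.example.org/twain"},
}
clemens = {
    "name": {"Samuel Clemens", "Samuel Langhorne Clemens"},
    "born": {"1835"},
}

def same_subject(a, b):
    """Toy identity test: any shared name counts as the same subject."""
    return bool(a.get("name", set()) & b.get("name", set()))

def merge(a, b):
    """Merge two proxies for one subject by unioning their key-value pairs."""
    return {key: a.get(key, set()) | b.get(key, set())
            for key in set(a) | set(b)}

if same_subject(twain, clemens):
    proxy = merge(twain, clemens)   # one proxy, many identifications
```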

Q & A:
Q: Libraries already have such detailed subject indexing and cross-referencing. Why would it be worth it for a library to go to the trouble to create a topic map?

A: Because other libraries or institutions can contribute or share, especially when you are trying to provide a single point of access into multiple semantic tagging systems (such as LCSH and a museum database).

Q: Why didn’t you talk about the visual displays of topic maps?

A: This talk is not directly about the user interfaces; the topic mapping initiatives are concerned with both the interface and the underlying data representations.

Q: Does the topic map tell only about synonyms? What does it tell us about related terms, broader terms, narrower terms?

A: In topic maps, relationships themselves are also subjects, with properties identifying what kind of relationship.