All posts by Will Stuivenga

Will Stuivenga <wstuivenga@secstate.wa.gov>
Statewide Database Licensing (SDL) Project Manager, Washington State Library Division, Office of the Secretary of State
360.704.5217, fax: 360.586.7575
MSLS, University of North Texas; MA Music Theory, Univ. of Washington

Current job is in the Library Development department of the Washington State Library. Previous positions include 5 years as Computer Systems Manager/Librarian for the Coastal Resource Sharing Network in Oregon, 3 years as Internet Trainer for Amigos Library Services, and 6 years as Reference and Electronic Resources Librarian at Southern Methodist Univ. in Dallas, TX.

Blogger for fun: Will's Vintage Ties: vintageties.blogspot.com, and Tillabooks: Will's Book Blog: tillabooks.blogspot.com
Moonlighting job: Church organist/pianist, currently at First Baptist, Olympia, WA
Reading for fun: science fiction
Collecting: Vintage men's neckties, especially 1940s era; Seattle Space Needle kitsch; word books (dictionaries, thesauri, etc.); musical postage stamps
Latest form of artistic expression: collage

LITA PreConference: Contracting for Content in a Digital World

LITA Preconference Friday, June 23, 8:30 am – 2:30 pm
Contracting for Content in a Digital World

A panel of experts discussed the forces and interests on the national and international scene that are shaping the new terms libraries, publishers, aggregators and search engines are negotiating in contracts and licenses today. 

Sybil Boutilier, Manager of Contract Administration for the San Francisco Public Library and the program coordinator and moderator, welcomed the participants and introduced the session.

Online NW: Keynote Speaker Paul Bausch

Online Northwest is a conference focusing on the use of technology within libraries. The conference is held in late January or early February in Oregon.

Online NW 2006
February 10, 2006
CH2M Hill Alumni Center, Oregon State University
Corvallis, Oregon
Keynote: Paul Bausch

Paul Bausch was co-creator and developer of Blogger; PC Magazine named him one of their 2004 People of the Year, and he is currently a resident of Corvallis, OR, where he founded ORblogs, a directory of blogs written by Oregonians. Bausch has also written three O’Reilly “hack” books: Flickr Hacks (Feb. 2006), Yahoo! Hacks (Oct. 2005) and Amazon Hacks (Aug. 2003).

As part of the speaker introduction, it was suggested that

  • 2002 was the year of the blog
  • 2003 was the year of the RSS feed
  • 2004 was the year of the Wiki and
  • 2005 was the year of the podcast

Bausch began by remarking on what an exciting time this is, working at the intersection of libraries and technology. People use library skills on a daily basis with tools like Yahoo! and Google. Metadata, once the domain of librarians, is now mainstream. People need this information. People in the web world and people in the library world have a lot in common and need to exchange information. Bausch expressed a desire to “convince you that the web and library worlds are working in the same arena.”

Notes from the talk:

About me (the speaker)
Helped put together Blogger. Has a degree in journalism. Also worked with Safari Books Online on the O’Reilly hack series. Need to explain the word hack. Not a black hat hacker. Reclaiming the word hack for the good guys.

Definitions: Hack: Verb. Informal. To alter a computer program. Bricolage: Noun. Taking whatever is at hand and assembling something out of it. Example: Amazon has this wish list feature; a hacker shows how you can adapt it to be on your cell phone. So when you are in the DVD store, you have your list with you.

Web 2.0
A phrase being used a lot lately. A lot of people use it as a euphemism for “cool” as in “That’s so web 2.0.” That’s not a good definition. Web 2.0 is the rebounding after the dotcom bubble burst.

How can he convince us that there’s substance behind all this excitement? He decided to approach it like a hack. First step, explain how the basic tool is intended to work. The way he looks at the web in general, whether 1.0 or . . .

That starts in ancient Greece. He likes to read a lot of history. His favorite story from history says a lot about the current state of the web: the astronomer Ptolemy (2nd c. A.D.) collected the accumulated knowledge of hundreds of years of Greek astronomy into a single, more accessible work of 13 books, the Almagest (c. 150). It proposed a geocentric universe. The book was extremely influential, and its influence lasted for centuries. In the 9th century it was translated into Arabic. The book was lost to the western world in the Library of Alexandria fire.

The Middle Ages was the age of translation; we don’t think of it as an age of great discoveries. Someone could make a good living translating texts from Arabic into Greek or Latin.

Copernicus (fast forward more than a thousand years) was a Polish astronomer. He had access to the Almagest and commentaries regarding it thanks to translation. He even learned Greek so he could get closer to the original text. He put out his own work, On the Revolutions of the Heavenly Spheres, in 1543, adopting a heliocentric model. Copernicus and Ptolemy are the superstars of this story, but there are so many people who made the story possible: translators, commentators.

The central promise of a library is that someone can access scholarship, through reading what other people have written before they themselves can add to the larger conversation. In the web, we’re in the middle of our own world of translation, but we are translating offline processes into the online world. Web 2.0 is beginning to speak the language of the web like a native; finding its strengths.

Web 1.0 took our ideas about traditional media and directly transferred them to the web. The result was the newspaper web sites we see, very static, broadcasting information rather than providing a participatory space.

Web 2.0 is the next step. Yahoo! first organized the web with categories. Hired an army of editors. Translated an existing metaphor to the web. Google came along and did away with the categories. Looked for latent patterns in the way people used the web. This is an example of Web 2.0. They shifted the burden of organization from hired staff to others, to their users in a way.

Personal publishing started with home pages. Translated the metaphor of the newspaper with sections (or web pages) for different things, different topics. If something doesn’t fit into one of the categories, it probably doesn’t go up.

Blogger is a native web application. Instead of setting up categories, you organize by time, so you don’t have to create any categories. The other piece is that the information is organized into little pieces that can be individually linked to.

Barnes and Noble is an example of a traditional book store just translated onto the Web. Amazon added user reviews; the capability of selling your used books; provided universal bibliographic information. Not so much a bookstore as a platform where people can get and use information about books. This is Web 2.0.

Other examples include Flickr, del.icio.us, and wikis. These applications have made the activity social.

Three Aspects of Web 2.0

Bausch proposed to describe 3 aspects of Web 2.0 so as to help us spot them:

One: Openness

The first attribute is a sense of openness, a willingness to share data.

Publishing is closed. Books: He loves books, but it’s very frustrating not to have access to the text in his books. He calls it the Ctrl-F problem. In a single book it’s annoying; if you have a stack of books, it’s a real problem, and if you don’t even know which books to look in, it’s even worse.

There is some work being done. He works with Safari Books. Safari U is a project where you can take sections of different books and combine them in new ways. Google Print is another example, similar to Amazon’s “Search Inside the Book” feature. These approaches upset traditional business models.

Another aspect of openness is being aware of how people use the web. Flickr URLs are so simple. Flickr loves links. Give people not only the URL but a little snippet of HTML so that you can link to the photo. So many sites want to restrict how you can use the content, and attempt to control your experience on their sites.

Two: Decentralization

Example: chicagocrime.org using Google Maps; combining an RSS feed from the Chicago police dept and Google Maps. No one from the police department or Google ever sat together in a room and agreed to do this. An independent developer put this together without anyone granting permission, or needing to. Google’s open API allows people to do this kind of thing.

This is Web 2.0; taking pieces and assembling them into something new.
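To make the pattern concrete, here's a toy sketch (my own, not anything shown in the talk) of the mashup idea: pull structured items out of an RSS feed and turn them into markers you could hand to a mapping API. The sample feed and coordinates are invented for illustration.

```python
# Toy mashup sketch: RSS items in, map markers out.
# The sample feed and coordinates are invented for illustration.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<rss version="2.0"
  xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#">
  <channel>
    <item>
      <title>Theft reported on N Clark St</title>
      <geo:lat>41.9227</geo:lat>
      <geo:long>-87.6533</geo:long>
    </item>
  </channel>
</rss>"""

GEO = "{http://www.w3.org/2003/01/geo/wgs84_pos#}"

def feed_to_markers(feed_xml):
    """Extract (title, lat, long) tuples suitable for plotting on a map."""
    root = ET.fromstring(feed_xml)
    markers = []
    for item in root.iter("item"):
        lat = item.findtext(GEO + "lat")
        lon = item.findtext(GEO + "long")
        if lat and lon:
            markers.append((item.findtext("title", ""), float(lat), float(lon)))
    return markers

# Each tuple could then be handed to a mapping API as a marker.
print(feed_to_markers(SAMPLE_FEED))
```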

Take Wikipedia as another example; they have between 2 and 5 employees, but have amassed an encyclopedia to rival anything published. While not 100% accurate, neither are the published ones. It’s good enough most of the time.

Weblogs made RSS possible because they created large amounts of content that people could work with. Assembling information into categories, something that would traditionally be done by newspapers, is something people can now do for themselves using RSS.

As an example, ORblogs, one of Bausch’s web sites, collects the information from many Oregon bloggers together into one source. This is possible even though all these people are using different tools in different places.
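Here's a minimal sketch (mine, not Bausch's code) of that kind of aggregation: merge posts from many independent weblogs into one reverse-chronological stream. It uses the third-party Python feedparser package, and the feed URLs are placeholders.

```python
# ORblogs-style aggregation sketch: many independent feeds, one stream.
# Requires the third-party feedparser package (pip install feedparser).
import feedparser

FEEDS = [
    "http://example-blog-one.com/rss.xml",   # placeholder URLs
    "http://example-blog-two.com/atom.xml",
]

def aggregate(feed_urls, limit=20):
    posts = []
    for url in feed_urls:
        parsed = feedparser.parse(url)        # handles RSS and Atom alike
        for entry in parsed.entries:
            posts.append({
                "blog": parsed.feed.get("title", url),
                "title": entry.get("title", ""),
                "link": entry.get("link", ""),
                "published": entry.get("published_parsed"),
            })
    # Organize by time rather than by category -- the "native web" move
    # Bausch describes. Posts without a date sort last.
    posts.sort(key=lambda p: p["published"] or (0,), reverse=True)
    return posts[:limit]

for post in aggregate(FEEDS):
    print(post["blog"], "-", post["title"], post["link"])
```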

Three: Participation

A great example is how Flickr attacked the problem of categorization; organizing by date wasn’t enough. They came up with a system of tagging; assigning keywords to each photo; didn’t try to prescribe the vocabulary; turns out this sort of works; you don’t have to worry about the physical attributes in a digital space.

It’s possible in a digital world for every user to have his or her own categorization system. The same is true in del.icio.us: people add value by tagging, and users can go in and see how others are categorizing the same things, or how they are using the same tags.
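A toy sketch (my illustration, not Flickr's or del.icio.us's actual implementation) of that mechanic: a folksonomy is essentially an inverted index from user-chosen tags to items, with no controlled vocabulary.

```python
# Folksonomy sketch: tags are whatever users type, lightly normalized,
# kept in an inverted index so you can see how others tag the same things.
from collections import defaultdict

tag_index = defaultdict(set)     # tag -> set of item ids
item_tags = defaultdict(set)     # item id -> set of tags

def tag(item_id, *tags):
    for t in tags:
        t = t.strip().lower()    # no prescribed vocabulary, just cleanup
        tag_index[t].add(item_id)
        item_tags[item_id].add(t)

tag("photo:101", "Space Needle", "seattle")
tag("photo:102", "seattle", "rain")

# Everything tagged 'seattle', across all users:
print(sorted(tag_index["seattle"]))       # ['photo:101', 'photo:102']
# Related tags: everything co-occurring with 'seattle':
related = set().union(*(item_tags[i] for i in tag_index["seattle"])) - {"seattle"}
print(sorted(related))                    # ['rain', 'space needle']
```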

Flickr lets him keep up with his friends; seeing what they are doing. One can also collaborate with strangers. He showed the example of squared circle; a whole community who take pictures of circular things and crop them into squares. A great metaphor for the web because people are sort of adding their own little bit selfishly, but have no idea their own little piece would be part of this larger visualization (someone created a giant circle of all the circles). A lot of people behind the scenes are making this possible; allowing for openness.

Bausch cited Ranganathan’s five laws of library science as found on Wikipedia. Replace the word “books” with weblogs, or wikis, or whatever word you use for your online stuff: these are good principles for web designers.

A goal of libraries is to enable conversations like those of Copernicus and Ptolemy on a global scale, and on a local scale also. These Web 2.0 tools are also helping do this.

Someone in the audience asked or commented about context: in losing the original context, one loses significant information about the original person and their qualifications. Bausch answered that the Web is about links; any piece of information can be linked back to its original source and context. This ability to link isn’t present in the physical world; the context can be present, but you have to work at it. The second piece is authority, and that’s a whole other talk! (audience laughter)

Bausch has seen a lot of stuff about Library 2.0 also, but a lot of it is just hype and talk. He recommended a couple of the following breakout sessions, those on Social Software and Firefox Extensions as examples of more practical expressions of applying Web 2.0 to create Library 2.0.

Someone asked about Yahoo!’s purchase of Flickr. In Bausch’s opinion, Flickr will change Yahoo! rather than the other way around. He encouraged us all to go look at Flickr and to emulate it.

I attended the following breakout sessions:

  • Firefox Can Do That?: Using Extensions to Customize Your Web Browser
  • Harvesting Business Information by Harnessing the Power of RSS Feeds and Blogs
  • A Little Help From Your Friends: An Overview of Shared and Social Bookmarking
  • Doing More with What You Have: Leveraging Your Subscription Databases (I was the presenter on this session, so my attendance was fairly obligatory!)

A listing of all breakout sessions, with links to some session presentations (more to be added as they become available), is available online.

Executive summaries of all breakout sessions are also available.

ALA President’s Program

ALA President’s Program
The Future of Our Profession: Educating Tomorrow’s Librarians

Sunday, January 22, 2006, 3:30-5:30 PM
Gonzalez Convention Center Theatre
Speakers: Michael Gorman, Bill Johnson, Andrei Codrescu

The program had four significant segments:

Part 1: Introductions and Bill Johnson

This important program had more than just a little schizophrenic feel to it. People obviously came to hear big name author and PBS commentator Andrei Codrescu, but his remarks had little to do with the title of the event, or with Michael Gorman’s focus on library education. That was as it should be (I’m sure no one came to hear Codrescu discuss education for librarians), but it divided the program focus quite dramatically, creating an event with a marked split personality.

ALA President Michael Gorman led off by introducing his yearlong presidential focus on key issues in library education, expressing his desire to engage in a dialogue on the subject of education for librarianship.

  • Are students receiving the skills, knowledge and values they need?
  • Is the “L” in LIS receiving the attention it deserves?

Gorman stated that the captioned text from the program would be posted on the ALA website after the conference, but warned that this process takes several weeks. (I could find no information about this on the ALA website on Feb. 1 while editing this posting from my notes.)

C-SPAN’s Book-TV filmed the event for later broadcast. The president’s web page on the ALA site will post the broadcast date when it becomes available.

Note: I can’t even locate an ALA President’s Page for Gorman on the ALA site using either the Google-based search engine, or following the site’s own hierarchy. Under “ALA Governance” one is linked to Gorman’s page at California State University, Fresno, much of which (including Gorman’s list of appearances) doesn’t appear to have been updated since August, 2005. Although the page on Gorman’s Forum on Education for Librarianship has been updated more recently, it contains no information about the Codrescu event.

Gorman recognized his Presidential Advisory Committee members. He then talked about New Orleans and ALA’s decision to honor “our commitment” to hold our next conference there. According to him, this will be the first major national conference held in New Orleans since Katrina.

Gorman introduced Bill Johnson, Director of the New Orleans Public Library, who spoke for a few minutes about the library situation there. He thanked the people of San Antonio and Texas for their response to the Katrina disaster, also people in the library community for bringing the ALA convention to New Orleans. He also expressed thanks to libraries around the country that helped.

What was it [Katrina] like? It was like having a massive heart attack and then having to get up and go to work the next day, said Johnson. Initially they opened 3 libraries with 19 people.

Johnson also stressed that ALA will be the first convention coming into the city, giving the library leverage with the city authorities. However, “You won’t be the first event because we’re going to have Mardi Gras!” said Johnson. “All the bugs will be worked out then. I’d like to see you all there!”

Gorman announced several programs ALA has undertaken to aid libraries in New Orleans including the “adopt a library” program. Thanks to donors, $270,000 has been raised for the Katrina library relief fund. New Orleans conference-goers will be able to volunteer for a full day either before or after the conference to help in rebuilding efforts via the “Libraries Build Communities” program.

Next Gorman requested that the audience stand for a moment of silence honoring the passing of Gerald Hodges. Hodges joined ALA staff in 1989 and served as Director of Member Services. He will be sorely missed, said Gorman.

Finally, Gorman introduced the featured speaker, Andrei Codrescu, mentioning his role as a weekly commentator on NPR’s All Things Considered and as editor of The Exquisite Corpse. Recent efforts include a movie “Road Scholar,” and his latest book New Orleans, Mon Amour: Twenty Years of Writings from the City, copies of which were available for purchase and signing by the author after the event.

Part 2: Andrei Codrescu

Blogger’s note: There is absolutely no way I can even pretend to do justice to Codrescu’s remarks here. I will only try to provide a small sampling of bons mots for your delectation.

Codrescu began by stating that he wished to add to what his fellow Louisianan (Bill Johnson) had told us, expressing his hope that his audience would all be in New Orleans for ALA. “If anything,” he quipped, “it will be less dangerous, since the criminals have left, gone to other places to practice their craft.”

(C. later took flak over that remark during the Q &amp; A from someone who found it insensitive and insulting to N.O. residents. C. responded that only a small percentage of the N.O. population were criminals, just like everywhere else.)

Codrescu began, as many celebs do, by declaring his affinity for libraries and librarians. Librarians are some of his favorite people, he insisted. In his case, he backed it up by pointing to the prominent role of librarians in his published work. His work is “filled with librarians, keeping the flame of literacy flickering in these pixilated times.” In Casanova in Bohemia, an old man is a librarian. In Wakefield, the protagonist’s daughter (I think it was) is one.

C. hates repeating himself, which is why he’s not an actor. In preparing for this talk, C. had three questions he put to himself:

  1. How is a librarian better than a mouse click? A: Not much and much more. A machine doesn’t waste time thinking about the quality of the information it provides.
  2. What can library buildings do besides holding books? A: Libraries are cultural centers for those who don’t fall prey to television and video games.

    There are homeless and mentally ill people who think of libraries as their churches.

    In addition to being a poet, a social worker and a nurse, the modern librarian must maintain the library’s collections of esoteric literature.

    Libraries are useful for sheltering great numbers of people, even in a storm that rips the roof right off the poverty in our city.

    Take away the library, and what you have is a mindless shopping mall.

    Caveat: the bookshelves get in the way of things like poetry reading. This sometimes results in wine spillage.

    Libraries should promote circuses and music.

    Libraries should transform themselves into producers of culture, feeding back into Google, instead of turning people into Googling gophers.

  3. What does Freedom to Read mean to ALA? Answer: “I became a writer because I read forbidden books.” He cited an elderly man in Romania who provided all the proscribed works of literature to the younger generation including C. He credited this “unofficial” librarian with his having become a writer.

C. referred to Section 215 of the Patriot Act, and quoted from the Library Bill of Rights.

In what was undoubtedly the most controversial aspect of his speech, C. took ALA to task for not supporting the independent libraries and librarians that are being persecuted in Cuba. When he visited Cuba he was appalled by the lack of books. “Cuba today is the Romania of my growing up,” he said. “I hope the ALA will pass a resolution condemning Castro’s regime for flagrant violations of human rights.”

All three questions resolve into one question: Do we believe we can survive in the 21st century? Can we make a difference to people who forget to read and those who are forbidden to do so?

For the rest of Codrescu’s remarks, you’ll have to wait for the transcript (parts of which at least are apparently available on the ALACOUN list archives, and elsewhere), or wait for the Book-TV broadcast. There are also postings on other blogs, such as the PLA Blog.

Part 3: Michael Gorman

Once C. concluded his talk, Gorman took the podium, and in a particularly schizophrenic moment, read his prepared remarks, which were almost completely unrelated to Codrescu’s, but instead related to the program’s announced title: “The Future of Our Profession: Educating Tomorrow’s Librarians.”

There are 3 ways in which people learn, said Gorman:

  1. by experience
  2. by listening to people who are wiser and know more than they
  3. through interaction with the human record, i.e. by reading

This is a positive and affirming profession, said G.

Historically there have been 3 main reasons for restricting access to the human record:

  1. blasphemy
  2. indecency
  3. sedition

G. spoke against these kinds of restrictions by any government.

Some questions to ponder:

  • Can we as ALA define the elements of our profession in the 21st century so that they can be passed down to future generations?
  • Are we carrying out the duty of our profession to control our professional education?
  • Is ALA sufficiently responsible in relation to its professional education, as regards accreditation, curriculum, etc.?

The stakes are high and failure cannot be contemplated, said G.

Part 4: Q&A with Gorman and Codrescu

Index cards were handed out as people entered the theater. Questions for C or G were written on the cards and passed to Gorman by ALA volunteers.

Gorman began by attempting to respond to Codrescu’s attack on ALA for not denouncing the Cuban gov’t. He talked about the dispute over the activity, and whether the individuals in question are really librarians or not.

G: Having a book in your house and lending it to another person doesn’t mean you’re a librarian.

C: ALA should make a stronger statement of solidarity with these people. Cubans have nothing to read. ALA is already involved in all kinds of politics and should condemn the Cuban gov’t.

C. thinks ALA is one of the most articulate associations in the US in defending freedom of speech and should continue to do so.

A question was asked about China. Is the situation there as extreme as in Cuba? C. said no, because they can’t control the Internet, but that we should condemn restrictions on freedom to read there also.

A question asked if these kinds of political issues should be our [i.e. ALA’s] business. Certainly it’s your business, said C.

C. listed what he described as the two main propaganda points that have come from Cuba, and that used to come from the now defunct Eastern European communist regimes:

  • High literacy rates
  • Free medical care

Both of these were false, he said. Regarding free medical care, yes, it’s free, but there are no machines. Basically the level of care was so low as to be non-existent, or useless. Likewise, if there is nothing available to read, literacy rates are meaningless.

A question asked if we should support people who were paid to overthrow the Cuban government. Codrescu: “I think people should overthrow ALL governments.”

Gorman: I can see the headlines now: “Anarchist Addresses Pinko Communist Librarians.”

Gorman: This will be televised, you know.

Codrescu: Good—we’ll be retiring soon.

Another question asked about the future of printed books. C: Book technology has lasted from the 16th to the 21st centuries. But he essentially said that books will probably go away fairly soon, preserved as historically relevant objects.

C: I may be the last writer.

Someone asked if C. remembered 20 years ago speaking to the NMRT for a small sum. Everyone loved him, so he (the person who arranged for C. to speak) would like to claim responsibility for C’s later success. C. did not remember the speaking engagement, but said that 20 years ago he was just beginning to live in and write about New Orleans.

G. responded to a question about library education by stating that we need at least a 2-year LIS curriculum, maybe more. How, he asked, can we overcome the financial barriers so as to have a vibrant profession and a stream of new librarians coming into it? You can’t graduate from medical school without a course in anatomy, or from law school without a course in contracts, but you CAN graduate from an LIS degree program without a course in cataloging.

Asked about his latest book, New Orleans, Mon Amour, C. described himself as a dybbuk, a genie or spirit, a local genius who inhabits a place. He stated that he didn’t revisit or revise the older pieces that went into the collection.

New Orleans, he said, has never quite been an American city. First of all, the French built it. New Orleans is characterized by laissez-faire, lassitude, leisure, not efficient things. America doesn’t like New Orleans. The culture of N.O. came from poverty, said C. Do we now recreate that poverty?

A question asked: What are you currently reading? C. cited Michael Harington (according to the posts on the ALACOUN list, he was apparently referring to Donald Harington).

Gorman said he’d just finished a wonderful book by Francis Wheen: Idiot-Proof, which in the UK was released as How Mumbo-Jumbo Conquered the World. Wheen also wrote a biography of Karl Marx, Gorman noted. Which led to another headline: “Pinko Librarian Praises Commie Author.”

After the session was over, Codrescu signed books in the theater lobby, and then attended the President’s Reception.

LITA Standards Interest Group Program

LITA Standards Interest Group Program
ALA Midwinter 2006
January 21, 2006 4:00PM – 5:40PM
Henry B. Gonzalez Convention Center Room 008B
San Antonio, Texas


Yan Han, Systems Librarian at The University of Arizona Libraries, Tucson, AZ and current chair of the LITA Standards IG introduced the program and the first speaker.

Note: LITA will post all presenter PowerPoint files on the LITA website, if they provide them for this purpose. Not all presenters used ppt, however.

Part 1: New Standards Update

NISO’s Strategic Direction

Patricia Stevens, Interim Director of NISO, spoke first. She reported that the NISO Board has been involved in strategic planning over the past 18 months or so. As a part of this process, the NISO mission statement was revised as follows: “NISO fosters the development and maintenance of standards that facilitate the creation, persistent management, and effective interchange of information so that it can be trusted for use in research and learning.”

NISO works with intersecting communities of interest. The Mellon Foundation funded a blue-ribbon panel which said that NISO should work strategically rather than reactively. As a result, NISO has developed a strategic map.

Report of the NISO “Blue Ribbon” Strategic Planning Panel Presented to the NISO Board of Directors May 3, 2005

NISO Strategic Direction approved by the NISO Board of Directors, June 30, 2005

NISO works across the full life cycle of a standard from pre-standard activities right through until a standard becomes obsolete. Roy Tennant was commissioned by NISO as an independent agent to review the standards process. His report, released on Dec. 15, 2005, listed key recommendations to the NISO Board: NISO should

  • register members in their various areas of interest, and have members vote only on standards that come within the purview of those areas, rather than having all members vote on all standards as they do now, even in areas where they aren’t individually knowledgeable.
  • create a standards path that anyone can follow.
  • hold a substantial annual meeting.
  • hire a standards coordinator (where will the funding come from?).
  • expand its patent policy.

The audience was encouraged to contribute to the NISO strategic planning process by communicating with the Board. Some of Tennant’s recommendations will be voted on by the entire NISO membership.

Pat encouraged interested parties to attend a program from 4-6 on Sunday which will provide an update on other standards.

SUSHI: the Standardized Usage Statistics Harvesting Initiative

Tim Jewell, Head of Collection Management Services, the University of Washington, spoke on SUSHI: the Standardized Usage Statistics Harvesting Initiative and the work of the NISO SUSHI Working Group. Providing a brief historical backdrop, he described the Digital Library Federation (DLF) Electronic Resource Management Initiative (ERMI) which sought to define what electronic resource management systems ought to do. ERMI didn’t tackle statistics because COUNTER was working on that and they wished to avoid duplication of effort. ERMI II involves license expression and data dictionary revision.

Tim also provided a brief recap of COUNTER and Release 2 of its Code of Practice for journals and databases. Release 2 has four required journal reports as well as optional ones. There are 3 reports defined for databases, and 5 for books and reference works.

Some of the problems involved with usage data include

  • The expanding scope of e-resources
  • The resulting proliferation of data
  • COUNTER is helpful, but
  • There is a lack of standardized containers
  • It is time consuming to pull the data together from the many different sources

Tim listed the members of the working group, both the founding members, and the newer members. He presented a slide which provided a graphical illustration of how SUSHI should change the current manual collection of usage data into an automated process. For some of the technical details, he handed off to another member of the SUSHI working group, Ted Fons of Innovative Interfaces.

Ted mentioned that a small group of stakeholders has developed a client-server model in which a client can request information from the server (the database aggregator, producer, or publisher). COUNTER’s XML schema wraps the SUSHI data. The goal is to get the data into the ERM. SOAP is involved because these are designed as web services. They kept it simple and lightweight.

There is lots of information on the SUSHI web site. The Journal Report 1 prototype is finished. A security wrapper will be added, and they plan to test with live data soon.
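To make the model concrete, here is a rough sketch (my own; SUSHI was still a draft at this point, so the element names below are approximations for illustration, not the official schema) of a client request for a COUNTER Journal Report 1:

```python
# Sketch of a SUSHI-style request: a small SOAP message asking a
# usage-data provider for a COUNTER Journal Report 1 for a date range.
# Element and attribute names are approximations, not the final standard.
def build_report_request(requestor_id, customer_id, begin, end):
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <ReportRequest>
      <Requestor><ID>{requestor_id}</ID></Requestor>
      <CustomerReference><ID>{customer_id}</ID></CustomerReference>
      <ReportDefinition Name="JR1" Release="2">
        <Filters>
          <UsageDateRange><Begin>{begin}</Begin><End>{end}</End></UsageDateRange>
        </Filters>
      </ReportDefinition>
    </ReportRequest>
  </soap:Body>
</soap:Envelope>"""

envelope = build_report_request("wa-state-library", "customer-0042",
                                "2005-01-01", "2005-12-31")
print(envelope)

# In practice the client would POST this envelope to the provider's SUSHI
# endpoint (URL would be provider-specific) and load the returned
# COUNTER XML straight into the ERM, e.g. with urllib.request:
#   req = urllib.request.Request(endpoint_url, data=envelope.encode("utf-8"),
#                                headers={"Content-Type": "text/xml"})
#   counter_xml = urllib.request.urlopen(req).read()
```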

Next steps include:

  • Publicize the effort, and push for its adoption by data providers
  • Write a NISO “draft standard for trial use”
  • Hold a stakeholder meeting in the spring
  • Gather input, revise the draft into a real standard
  • Expand the effort to encompass other COUNTER reports

License Expression Working Group

The next speaker was Nathan Robertson of the Maryland School of Law. He talked about the work of the License Expression Working Group. The Publishers Licensing Society (PLS), DLF and NISO are cosponsors. The goal is to create a way to communicate license information back and forth. Currently you have to manually encode license terms into your ERM. They are trying to automate this process. You could then, for example, send encoded versions back and forth during license negotiations. You could use the process to communicate with an OpenURL link resolver, to communicate license terms to the end user. They are working with ONIX on encoding license values.

Robertson wished to make it abundantly clear that the working group is NOT building a rights expression language that could be used for purposes of DRM. Those systems begin with NO rights, and then only allow those specific rights that the provider wishes to allow. They are NOT DOING THAT.

They are only attempting to express exactly what the licenses say. Silence [in regard to a given right or area] means absolutely nothing, either positively or negatively. Their system cannot be used to enable machine-based enforcement.

Their system includes rights statements in ONIX format. It covers ERMI and other things, but will be more granular than ERMI. The working group is evaluating existing licenses and identifying ERMI and other elements and deciding which need to be rigidly encoded in XML and which can be handled by notes.

RFID Standards and Issues Progress Report

The next speaker was Dr. Vinod Chachra, CEO of VTLS Inc. and Chairman of NISO’s working group on RFID standards.

The objective of the working group is to look at RFID standards as used in the United States in regard to the following issues:

  1. Interoperability
  2. Isolation
  3. Privacy concerns
  4. Cost considerations (affordability)

  1. Interoperability: the goal would be to have an RFID tag for a book in one library work in another library, regardless of the vendor(s) or ILS involved.
  2. Isolation: application isolation is what they’re interested in; library tags should not set off alarms in the grocery or video store, or vice versa. Application “family” identifiers are used to tell the software what kind of place (library, grocery store, pharmacy) the tags belong to. Further, there are 2 zones: public and private. ISO groups are setting up 2 family identifiers for libraries, one for items that are checked out, and one for those that are not, since it is important to know which class any given item is in. Another way of handling this piece is to use a specific bit setting instead of separate family identifiers. But if some vendors do it one way, some the other, then there goes your interoperability.
  3. Privacy concerns: what content should be on the RFID tag, or on the bar code? Some privacy concerns are clearly exaggerated; others are real. RFID tags operating at 13.56 MHz can only be read 2-4 inches away from the item, so the idea that someone walking down the street can determine what book you’re reading is exaggerated and not realistic. The privacy concerns that are valid may apply equally to today’s bar codes, but just haven’t been much of a concern in the past. The heightened privacy concerns surrounding RFID have made issues that were ignored in the past seem more relevant today, and have forced RFID developers to address concerns that were ignored with bar codes. The working group is working to identify the REAL privacy issues and suggest solutions.
  4. Cost considerations: some groups want more information stored on the RFID tag, such as location info, which would make automatic sorting mechanisms more feasible, and which could aid inventory functions; determining which books are misshelved, etc. The group is looking at which data elements to include while keeping cost down. The more data elements, the more costly the system. The group is attempting to work out compromises in order to make a recommendation.

The Europeans have already done a lot of work in this area. Rather than reinvent the wheel, the group is taking the Danish model and using it as a basis for their work.
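A toy sketch (my illustration; the numeric AFI values are hypothetical, not the ISO-assigned ones) of how application isolation plays out at a library exit gate:

```python
# Application-isolation sketch: the gate reads a tag's application family
# identifier (AFI) and ignores tags from other applications entirely.
# The values below are hypothetical placeholders.
LIBRARY_ON_SHELF = 0x07     # hypothetical AFI: library item, not checked out
LIBRARY_CHECKED_OUT = 0x08  # hypothetical AFI: library item, checked out

def gate_should_alarm(tag_afi):
    """Alarm only on library items still in the 'not checked out' family;
    grocery or pharmacy tags never match either library AFI."""
    return tag_afi == LIBRARY_ON_SHELF

print(gate_should_alarm(0x07))  # True: item leaving without checkout
print(gate_should_alarm(0x08))  # False: properly checked out
print(gate_should_alarm(0xA1))  # False: some other application's tag
```

(The alternative approach mentioned above would keep a single library AFI and flip a designated bit on the tag at checkout instead.)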

Web Services and Practices

Candy Zemon, Senior Product Strategist at Polaris Library Systems and a member of the NISO Web Services and Practices Working Group, spoke about web services interoperability. She defined the problem:

  • The goal is cross-vendor, cross-language, cross-platform, limited-purpose communication
  • There are a large variety of web-based and non-web-based methods
  • Do we need standards?
  • Is there time to develop them?

Zemon described VIEWS (Vendor Initiative for Enabling Web Services) which did a lot of work in 2004-5. See www.views-consortia.org. The NISO Web Services and Practices (WSP) Working Group kind of took over this effort from VIEWS. There are both a working group and an interest group (which is larger). The missions are different. NISO is trying to define best practices. Whether or not to produce a standard is not a done deal:

  • These services develop rapidly and readily
  • W3C is also working on it
  • These services are very narrow individually and can be used for lots of stuff

The WSP is charged with producing a best practices document which will

  • ID web services used in the library world
  • Group these services into categories
  • Define best practices for them

See http://www.niso.org/committees/Services/Services_comm.html#charge

Part 2: ISBN-13 Transition

The second section of the program was devoted to the ISBN-13 transition, and featured three speakers.

ISBN-13 ILS Challenges

Theodore “Ted” Fons, Senior Product Manager at Innovative, spoke on ISBN-13 ILS challenges.

The ISBN is scheduled to change from the current 10-digit format to a 13-digit format on Jan. 1, 2007. Fons addressed challenges for

  1. Cataloging
    • Indexing
    • User searching
    • Matching

    For cataloging, you have to deal with two kinds of numbers: the old 10-digit ISBNs and the new 13-digit ones. For previously published numbers, you calculate the 13-digit number from the 10-digit one.

  2. Acquisitions
  3. EDI
    • EDIFACT
    • BISAC: no further work is being done on BISAC (fixed digit)
  4. Timing of interface changes with book vendors
  5. Documentation
    • Industry guides
    • Vendor-provided documentation

ISBN-13 implementations should observe this rule: the burden is always on the system to interpret the number that the user supplies, and respond appropriately, whether the supplied number is 10 or 13 digits (see the sketch below).

Timing is important: the transition period is 1/1/2006 to 12/31/2006. In EDIFACT messages, PIA segments carry 10-digit numbers, LIN segments carry 13-digit numbers. During the transition period, systems should read either type of segment; after the sunrise date, only LIN segments should be read.
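The conversion itself is mechanical: prefix 978, drop the old check digit, and recompute an EAN-13 check digit using alternating weights of 1 and 3, mod 10. Here is a minimal sketch (mine, not from the talk) that also follows the rule above by accepting whatever form the user supplies:

```python
# ISBN-10 -> ISBN-13: prefix "978", drop the old check digit, recompute.
def isbn13_check_digit(first12):
    total = sum((1 if i % 2 == 0 else 3) * int(d)
                for i, d in enumerate(first12))
    return str((10 - total % 10) % 10)

def normalize_isbn(user_input):
    """Accept a 10- or 13-digit ISBN, with or without hyphens,
    and return the 13-digit form."""
    digits = user_input.replace("-", "").replace(" ", "").upper()
    if len(digits) == 10:               # old form: convert
        first12 = "978" + digits[:9]    # the ISBN-10 check digit is dropped
        return first12 + isbn13_check_digit(first12)
    if len(digits) == 13:               # already the new form
        return digits
    raise ValueError("not a 10- or 13-digit ISBN: " + user_input)

print(normalize_isbn("0-306-40615-2"))   # -> 9780306406157
print(normalize_isbn("9780306406157"))   # unchanged
```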

For additional information, Google ISBN-13.

ISBN-13 and LC

David Williamson, a Cataloging Automation Specialist at the Library of Congress, spoke on the topic of ISBN-13 and LC. LC has developed an implementation plan for ISBN-13. OCLC’s implementation date was Oct. 1, 2004. LC has been recording ISBNs in pairs for the same manifestation, using separate 020 fields (because subfield $a is not repeatable), with the 13-digit ISBN first, followed by the 10-digit one.
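As an illustration (my example, not from the talk), such a pair might look like this, 13-digit form first:

```
020 __ $a 9780306406157
020 __ $a 0306406152
```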

For CIP data they made some exceptions. They provide a maximum of 2 pairs per CIP record; this relates to multi-volume works which would have multiple pairs. All the ISBNs would appear in the cataloging record, just not in the CIP data, as publishers don’t have space for long CIP records.

Back cover EANs (bar code UPC numbers) will not be treated as ISBNs by LC, even though the number could be an ISBN.

LC has put up a handy-dandy ISBN converter which converts in both directions, with or without hyphens.

Currently LC is processing 2000 ISBN-13s per month. 25% of CIP records now include ISBN-13s. It has been an easy translation at LC. They’ve encountered no real problems. The biggest problem has been partners that can’t handle the 13-digit ISBNs, which leads to work-arounds.

Send questions about the presentation to dawi@loc.gov (the speaker’s e-mail address).

ISBN-13 and OCLC

The final speaker was Glenn Patton, Director of the OCLC WorldCat Management Division. OCLC moved WorldCat onto a new platform during the transition period, which complicated matters. They didn’t want to waste coding on the old system that was about to become obsolete. Some of Patton’s points:

  • The ISBN is one of the highest-used searches in WorldCat
  • It is a primary matching point for
    1. record loading,
    2. with vendors,
    3. for linking to evaluative content (book covers, tables of content, etc.) in FirstSearch WorldCat.

OCLC’s interim solution (until they moved to the new platform):

  • They told member libraries to put ISBN-13s in the 024 field for now, and code them as EANs (this posed problems for LC, as they were putting them in 020)
  • They converted incoming 13s to 024
  • They recommended searching using the “standard number” field, not ISBN fields

OCLC’s plans include:

  • They may convert all WorldCat 10-digit ISBNs to 13-digit ones, or they may convert only the more recent ones
  • They will inform members on how and when to get their interim records replaced with final corrected ones

Using Usage Data

This had to be the single longest program offered at ALA this year, short of the all-day preconferences. FOUR hours! But for someone genuinely interested in the topic, such as myself, the program quite amazingly sustained interest throughout. It would be absurd to even TRY to blog the program in any great detail, so I’ll just try to hit a few high points here and there. The program coordinator promised to post all of the presentation slides on the ALCTS website within a few weeks after the convention.

Use Measures for Electronic Resources: Theory and Practice
Monday, June 27, 2005 1:30 – 5:30 PM
Collection Management and Development Section, Association for Library Collections and Technical Services

Speakers (in the order they spoke):
Martha Kyrillidou, Director, ARL Statistics Program
Dr. Peter T. Shepherd, Project Director, COUNTER
Oliver Pesch, Chief Strategist for Electronic Resources, EBSCO Information Services
Daviess Menefee, Director of Public Relations, Elsevier
Todd Carpenter, Business Development Director, BioOne
Brinley Franklin, Director of Libraries, University of Connecticut
Joe Zucca, University of Pennsylvania Library

The program was organized into three large segments with 2 or 3 speakers representing each:

  1. Standards
  2. Vendors
  3. Universities

Standards

Martha Kyrillidou began by discussing what she described as a draft report, titled “Strategies for Benchmarking Usage of Electronic Resources across Publishers and Vendors.” A preprint of this white paper is available online. The paper describes the ARL E-Metrics project and the history of attempts to evaluate usage of networked electronic resources, then analyzes the results of surveys of ARL members compiled in 2000 and again in 2004.

Ultimately what you want, said Kyrillidou, is not to have to deal with each vendor’s statistical reporting system separately, trying to combine all those numbers. She suggested three possible approaches to a solution. The first would create a combined database for sharing data across institutions, kind of a new OCLC for statistical data. In the second proposed model, multiple databases would be used, but the databases would be capable of talking to each other. In the third approach, there would be no databases at all, but rather a standard, with everyone using the same XML DTD or some equivalent type of technology.

Use is not everything; focus on the user. DigiQUAL™ is an attempt to focus on digital library service quality. This project is funded via the NSF and NSDL. Institutions can use DigiQUAL to create a user survey for evaluating their web sites.

Kyrillidou’s final slide showed a dress with the text “Does this make me look fat?” written across it. Everyone wants the statistics they collect to make them look good.

Peter Shepherd is the COUNTER (Counting Online Usage of Networked Electronic Resources) project director. He began by providing an update on current COUNTER activities and progress. Release 2 of the COUNTER Code of Practice for Journals and Databases was released in April, 2005.

Dr. Shepherd provided the following principles for usage statistics:

Usage statistics:

  • Should be practical
  • Should be reliable
  • Only tell part of the story
  • Should be used in context

How can usage statistics help us measure success?

Both libraries AND vendors need usage statistics.

COUNTER Release 2 includes specifications for consortia-level reports, although only 2 of the 5 reports must be available at the consortial level.

Dr. Shepherd put in a plug for COUNTER membership. Libraries can join for only $375/year, and consortial membership is $500.

Vendors

Oliver Pesch provided an overview of EBSCO’s statistical strategies and their stats management interface. He made the important point that libraries need to isolate federated search sessions and searches (via IP address or by usergroup/profile) so that these are counted separately from normal searches. He illustrated how a single user search can create multiple searches across various vendor statistical reporting systems. NISO is developing a standard which will allow metasearch activity to identify itself as such to databases.

He also suggested that we take a look at ERUS as a stats consolidator. ILS vendors often provide options as well.

Daviess Menefee provided similar background information on Elsevier’s statistical reporting activities.

Todd Carpenter spoke on behalf of smaller publishers. Now that BioOne allows full-text crawling of its journals by the search engines, 96% of its traffic comes from Google.

Universities

Brinley Franklin presented a summary of three in-house university unit cost studies which analyzed all aspects of journal costs and compared print with electronic. Typically the non-subscription costs (staffing, housing of print journals, etc.) were substantially higher than the subscription costs.

In a Drexel University study from 2002, the cost per use for print journals was $17.50, while the cost per use for e-journals was a mere $1.85. A similar study in Muenster, Germany the following year had much the same results: €16.68 per use for print, and €3.47 per use for e-journals. Not to mention that in both studies the e-journal use was much higher than the print use.

A 2003 University of Virginia study calculated a cost per article downloaded of $1.64 and a per search cost of $0.52. A University of Connecticut study found per search costs of $1.15 and $1.20 in 2002 and 2003, respectively. In a CARL study, Alliance libraries realized a per search cost of $0.25.

Unit cost data can become a very powerful tool for management and collection development decisions. One conclusion that can be easily drawn from these studies is that universities should work cooperatively to substantially reduce bound journal collections. There is no reason for every institution to house and service the same enormous backfile print collections.

MINES (Measuring the Impact of Networked Electronic Services) provides a totally different approach to evaluating e-journal usage. MINES uses a web-based survey form and sampling plan to measure who is using which resources, from where, and for what. These brief (3 or 4 question) surveys pop up during the authentication process, and are answered by selected users before they gain access to the resource.

E-Use Measurement: A Detour around the Publishers

To say that Joe Zucca’s work is impressive is a major understatement. My reaction was “I want this guy doing MY stats!” Basically he and his people are bypassing vendor-generated statistics entirely, and are generating incredibly granular statistics using web metrics. He showed us graphs and charts measuring usage of electronic resources by student housing location: a “party” house vs. an academically oriented house or an average upper-division house.

One interesting byproduct of his statistical studies was a very high degree of correlation between checkout of print items and login access to electronic resources over time. The total numbers for e-resource use were an order of magnitude larger than the print checkout numbers, but when one went up or down, so did the other, proportionally.

My personal conclusion: we (me, in my job as statewide database licensing project manager for the State of Washington) should be doing a lot more with usage statistics than we are.

Giving them “Google-like” Searching

Implementing a Federated Search Tool

Speakers:

  • Peter Webster, St. Mary’s University
  • Marvin Pollard, California State University
  • Robert Sathrum, Humboldt State University
  • Joseph Fisher, Boston Public Library

Peter Webster led off the panel with an overview of the basics of federated searching.

First he defined the concept:

  • Too @#& many interfaces
  • “One stop shopping”
  • “Google like searching”
  • Silo busting
  • Cross-file searching

He reminded us that cross-file searching isn’t really a new idea. Remember Dialog? And think of Ovid and FirstSearch. If you buy all your databases from one aggregator, you have it now.

Next he listed the current array of tools:

  • WebFeat, “the original federated search” and patent holder
  • Muse Global, “the world’s leading federated search tool,” which claims it began in 1997, predating WebFeat
  • CSA Multisearch
  • Serials Solutions Central Search
  • Ovid Search Solver (Muse Global)

Most ILSs now offer a federated search tool:

  • Ex Libris Metalib
  • Sirsi Single search (Muse Global)
  • Endeavor (Muse Global)
  • Innovative (Muse Global)

Comments:

Should federated searching be patentable?

Practically all libraries are automated, so the ILS vendors have to keep adding new features and services to generate new revenue streams. Federated search is one of the current hot examples.

What’s “under the hood” of a federated search tool?

  • Custom “targets” / “database translators” / “source packages”
  • You have to translate for each individual database or vendor
  • HTML, Z39.50 (known to be slow with more than 10 different targets), XML, SQL, OAI-PMH, API programming
  • Screen scraping

Search issues:

  • Target selection (should you include bib records for books along with virtual content in the same search)
  • Results sort order (first back? Not good)
  • Deduplicating
  • Search feature variability

In a federated search environment, searches are generally only as good as the lowest common denominator, which means you’re often reduced to keyword searching, losing functionality, especially from specialized databases. Search vendors are working on this all the time and making improvements, but current offerings still have a long way to go.
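For the curious, here is a hypothetical sketch (mine; the class and field names are invented) of the translator-per-target pattern described above, reduced to a lowest-common-denominator keyword search with naive merging and deduplication:

```python
# Federated search sketch: one "translator" per target, a common keyword
# interface, merged results deduplicated on a crude title+year key.
class Target:
    """Base 'database translator': adapts one vendor's native protocol
    (Z39.50, XML gateway, screen scraping...) to a common interface."""
    def search(self, keywords):
        raise NotImplementedError

class CannedTarget(Target):
    """Stand-in for a live connection, returning canned records."""
    def __init__(self, name, records):
        self.name, self._records = name, records
    def search(self, keywords):
        kw = keywords.lower()
        return [r for r in self._records if kw in r["title"].lower()]

def federated_search(targets, keywords):
    results, seen = [], set()
    for target in targets:              # in practice: parallel requests
        for record in target.search(keywords):
            key = ("".join(record["title"].lower().split()),
                   record.get("year"))  # crude dedup key
            if key not in seen:
                seen.add(key)
                results.append(record)
    # Sort by date rather than "first back":
    results.sort(key=lambda r: r.get("year") or 0, reverse=True)
    return results

a = CannedTarget("DB One", [{"title": "Web Usability", "year": 2004}])
b = CannedTarget("DB Two", [{"title": "Web  Usability", "year": 2004},
                            {"title": "Web Services", "year": 2005}])
for r in federated_search([a, b], "web"):
    print(r)
```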

What does the future hold?

  • Even better E-content integration
  • We won’t be buying separate tools to bolt on top of our existing content for long
  • Most database providers will be providing XML search gateways and APIs, so as to provide cross-functionality between databases.

Examples of current initiatives to watch:

  • Crossref Search Pilot
  • NISO Metasearch Initiative — TG3 (We’re still a long way from standards but they will come)
  • Ontario Scholar’s Portal, CSA’s interface to search all kinds of content
  • Google Scholar (the 5000 pound gorilla)

Future trends:

  • Rapid change
  • More standardized e-content searches and interfaces
  • Simple and diverse cross-search and fed search options
  • Near universally web-searchable e-content indexing

Final comment:
Soon you may not need a federated search tool. In a few years you may be able to link at least some databases without any extra expense or effort via built-in XML, APIs, and the like.

Second Speaker:

Marvin Pollard began by briefly detailing the history of federated search efforts in the California State University system. They started trying to build a system back in 1997 when there weren’t any off the shelf applications. Currently they are using MetaLib.

Major challenges have included

  • User interface design
  • Authentication across 23 institutions
  • Configuring searchable resources for 23 institutions
  • Promoting federated searching to librarians

They created a User Services Task Force. Members included:

  • CSU public service librarians with years of experience designing library web sites
  • People with information competence expertise
  • The team was supported by programmers with expertise in web development

What do users want? Full text NOW

Federated search is only half the solution; OpenURL link resolution is the other half. Document delivery is made available when full text is not.
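For the record, an OpenURL is just a citation serialized as key/value pairs onto the library's resolver address, which then offers full text or document delivery. A minimal sketch (mine; the resolver URL and the citation are placeholders, while the rft.* keys come from the OpenURL 1.0 journal format):

```python
# OpenURL sketch: serialize an article citation onto a resolver base URL.
from urllib.parse import urlencode

RESOLVER = "https://resolver.example.edu/openurl"  # placeholder base URL

def make_openurl(article):
    params = {
        "url_ver": "Z39.88-2004",
        "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
        "rft.genre": "article",
        "rft.atitle": article["title"],
        "rft.jtitle": article["journal"],
        "rft.issn": article["issn"],
        "rft.volume": article["volume"],
        "rft.spage": article["start_page"],
        "rft.date": article["year"],
    }
    return RESOLVER + "?" + urlencode(params)

print(make_openurl({
    "title": "A Sample Article",
    "journal": "Journal of Examples",
    "issn": "1234-5678",
    "volume": "31",
    "start_page": "20",
    "year": "2005",
}))
```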

The Cal State project had two initial goals:

  1. Searching multiple databases simultaneously
  2. Searching different databases individually, but using the same interface for each

After reviewing all of the currently available applications, Cal State chose MetaLib.

According to Jakob Nielsen’s May 9 Alertbox, user mental models for search are getting firmer. Designs that invoke this mental model but work differently are confusing.

As Pollard succinctly put it, a user interface designed by committee will not be loved by anyone. In the Cal State federated search project, responsibilities are divided. The Chancellor’s office handles such responsibilities as:

  • Licensing for applications
  • Installing upgrades and updates
  • System set up and troubleshooting
  • Analyzing and resolving OpenURL issues
  • Providing first line support and training
  • Liaison with vendor for application support

The individual CSU libraries are responsible for aspects such as:

  • Customizing style sheets and banners (they can do it themselves or have the Chancellor’s office do it to their specification)
  • Configuring database categories & types
  • Assigning databases/resources to categories
  • Integrating MetaLib into library services
  • Including MetaLib in bibliographic instruction

The Cal State system has one MetaLib server, running 23 instances. Each library can administer its own instance, localizing the knowledgebase to its own resources.

The system provides a Google-like search capability, and can dedupe and sort the results by year or by date. A search form that includes database descriptions has also been implemented. The system includes a “find journal” component using SFX which is routed through the link resolver. Users can create their own “My Databases” page replete with alerts, saved searches, journal lists, database lists, and the like.

System development is now moving toward the use of Web APIs. Work remaining to be done includes:

  • Continue refining the user interface
  • Extend federated search capabilities
  • Promote the idea of federated searching
  • Integrate with course management systems on campus

Third Speaker:

Robert Sathrum of Humboldt State University provided a closer look at how an individual library within the Cal State system has implemented the Cal State product. Some of the issues that had to be addressed included basic organization of the resources to be included:

  • Which databases should be included?
  • How many categories should there be?
  • Who would make these decisions?

It was hoped that subject librarians would assist in this process. About half actually did so. After conducting user surveys, they ended up with 10 broad subject categories, with a maximum of 8 resources under each.

Each database configuration has to be tweaked for optimum search functionality and display.

Setup issues include:

  • How many resources to search at once
  • Effects on performance, and on vendor servers
  • Effect on licenses and costs (especially if you are paying on a per search cost, or have licensed a limited number of simultaneous users)
  • User authentication/authorization

How do you integrate federated search into your library’s web site? Some of the options include:

  • Wait for “perfect” system (you may be waiting a LONG time)
  • Fully replace existing tools
  • Incorporate into the site as an additional 51st tool

Fourth Speaker:

Finally, Joseph Fisher of the Boston Public Library presented a case study of using federated searching in a large public library setting. Unfortunately, my notes from this section of the program did not transfer from my handheld to my laptop when I got back to my hotel, and my memory is not accurate enough to reconstruct them at this point.

A brief question and answer session followed the formal presentations. All of the presentation slides will be placed on the LITA web site after the conference.