Category Archives: 2005

Annual LITA Forum September 29 – October 2, 2005, at the San Jose Marriott.

Big Wave of Standards Reviews

There’s lots going on in standards, and Cindy Hepfer, our ALA voting representative to NISO, is working hard to keep up (with the rest of us puffing hard in her wake). The next few standards posts will be more compact than usual, so I can get the word out before the inundation hits. These are all NISO standards up for review, so all of them can be downloaded from the NISO site.

So here they are, in the order I received them, with deadlines noted:

1. ANSI/NISO/ISO 12083-1995 (R2002), Electronic Manuscript Preparation and Markup.

This is a periodic review ballot for the published standard, ANSI/NISO/ISO 12083-1995 (R2002), Electronic Manuscript Preparation and Markup. This standard is a national adoption of the international standard ISO 12083:1994. It is available for download from: http://www.niso.org/standards/iso12083-1995r2002/.

Comments on this standard are due to Cindy by May 13, 2009.

——————————–

2. Review of ANSI/NISO Z39.2-1994 (R2001), Information Interchange Format. This is a periodic review ballot for the published standard, ANSI/NISO Z39.2-1994 (R2001), Information Interchange Format. It is available for download from: http://www.niso.org/standards/z39-2-1994R2001/.

The international version of this standard ISO 2709, Information and documentation — Format for information exchange, was revised in 2008 to clarify the use of Unicode with UTF-8 encoding for records employing this standard. In appropriate places the term “octet” was used in place of “character”.
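Why the octet/character switch matters: ISO 2709 directories locate fields by counting positions, and with UTF-8 a single character can occupy more than one octet, so the two counts diverge. A quick illustration in Python (my example, not from the standard):

```python
# With UTF-8, character counts and octet (byte) counts differ
# whenever a record contains non-ASCII characters, which is why
# the revised ISO 2709 speaks of octets rather than characters.
title = "Bibliothèque"           # 12 characters
encoded = title.encode("utf-8")  # the "è" occupies 2 octets

print(len(title))    # 12 (characters)
print(len(encoded))  # 13 (octets)
```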

In accordance with NISO procedures, all review ballots are accompanied by a recommendation from the responsible leadership committee. NISO’s Content and Collection Topic Committee recommends a vote of “Withdrawal” for ANSI/NISO Z39.2-1994 (R2001), in favor of the use of ISO 2709. The NISO version (Z39.2) would continue to be available on the NISO website as a withdrawn standard.

Cindy’s deadline for comments on this review is May 12, 2009.

———————————–

3. Review of ANSI/NISO Z39.14-1997 (R2002), Guidelines for Abstracts. This is a periodic review ballot for the published standard, ANSI/NISO Z39.14-1997 (R2002), Guidelines for Abstracts. It is available for download from: http://www.niso.org/standards/z39-14-1997R2002/.

NISO’s Content and Collection Management Topic Committee recommends a vote of REAFFIRM for ANSI/NISO Z39.14-1997 (R2002). If reaffirmed, the Topic Committee will then study the standard more closely to determine if and why a revision might be needed for Z39.14. A reaffirmation now will provide additional time to do a more thorough study of this standard. You are encouraged to provide comments with your vote that might provide the Topic Committee with additional information regarding the possible need for a future revision. Please note that a revision can begin at any time after the reaffirmation of the current standard (with NISO voting member approval); it is not necessary to wait until the next 5-year periodic review.

Cindy’s deadline for comments on this review is May 11, 2009.

—————————————-

4. Review of ANSI/NISO Z39.23-1997 (R2002), Standard Technical Report Number Format and Creation. This is a periodic review ballot for the published standard, and is available for download from: http://www.niso.org/standards/z39-23-1997r2002/.

If reaffirmed, the Topic Committee will then study the standard more closely to determine if and why a revision might be needed. A reaffirmation now will provide additional time to do a more thorough study of this standard. You are encouraged to provide comments with your vote that might provide the Topic Committee with additional information regarding the possible need for a future revision. Please note that a revision can begin at any time after the reaffirmation of the current standard (with NISO voting member approval); it is not necessary to wait until the next 5-year periodic review.

The deadline for comments to Cindy on this review is May 21, 2009.

—————————————-

5. Review of ANSI/NISO Z39.26-1997 (R2002), Micropublishing Product Information. This is a periodic review ballot for the published standard, which is available for download from: http://www.niso.org/standards/z39-26-1997r2002/.

In accordance with NISO procedures, all review ballots are accompanied by a recommendation from the responsible leadership committee. NISO’s Content and Collection Management Topic Committee recommends a vote of REAFFIRM.

The deadline for comments to Cindy on this ballot is May 19, 2009.

——————————————

Comments on any of these reviews can be sent to Cindy at HSLcindy@buffalo.edu. Expect another wave of announcements shortly …

Diane I. Hillmann
LITA Standards Coordinator

Rules for First LITA Forum

This is my first blog posting, my first post to the LITA Blog, and my first LITA Forum. That is a lot of firsts for a weekend trip to beautiful San Jose. I learned some important lessons at this first forum. First, fly in Thursday night whether or not you are attending a preconference. Second, the LITA hotel rate extends for three days before the forum and three days after it; if the hotel gives you trouble, call the LITA office. Third, borrow a laptop from work that doesn’t tend to run hot, and if it does, wear clothes that offer better insulation from the heat, like denim. Fourth, build in a little time for exploring the city, though that goes without saying at a conference. Most importantly, do not let your laptop get dropped at airport security. Aside from those rather painless lessons, I had a wonderful first forum.

Web Feed, The Greatest Thing Since Sliced Bread.

Greg McKiernan from Iowa State University presented on “Web Feed, The Greatest Thing Since Sliced Bread.” Greg, with a confessed quirky sense of humor, discussed why web feeds are the best thing since sliced bread, with copious PowerPoint slides containing numerous examples of RSS feeds. For those of us not familiar with RSS feeds, he gave an overview of broad and specific uses of web feeds.

  • Why a feed? Automatic notification of a change on a website. The news comes to you.
  • How a feed? There are software products available that can create RSS output. You also need a reader to consume an RSS feed; currently IE does not have one built in, though it is rumored that IE 7.0 will. (A short code sketch of the reader side follows this list.)
  • Where a feed? You need to subscribe to an RSS feed.
  • When a feed? Examples of RSS feeds include:
    • Notices of changes in library services. For instance, a library can have an RSS feed for a specific classification of books, reference books, or resource guides. This is one of the many ways that we can reach our patrons.
    • Vendors are beginning to provide RSS feeds: ProQuest’s ABI/Inform and Compendex.
    • Feeds can be used to inform the community of instruction activities.
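As promised above, here is a minimal sketch of the reader side in Python, using the third-party feedparser library; the feed URL is made up for the example:

```python
import feedparser  # third-party library: pip install feedparser

# Hypothetical URL for a library's new-reference-books feed
FEED_URL = "https://library.example.edu/feeds/new-reference-books.xml"

feed = feedparser.parse(FEED_URL)

# Each entry is one notification -- the news comes to you.
for entry in feed.entries:
    print(entry.title)
    print(entry.link)
```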

E-Matrix: NCSU Library Eresources Management System

Andrew Pace and Stephen Meyer, NCSU Libraries
Sunday, October 1, 2005

A great session with the bigger picture of eResource management in mind, useful for any librarian looking to manage a dispersed and disparate set of library data. Andrew and Stephen have got their minds around what it means to administer records of database subscriptions, ejournals, and print journals while at the same time managing access and display issues for librarians and the users we work for. It’s an all-encompassing system that reworks how we can manage our growing eResources, and it will involve ALL departments in the library.

Andrew began with a review of Electronic Resource Management (ERM). ERM has been talked about for a while with little progress. The DLF as well as library vendors have started to put some thought into it, and companies have been looking to develop these systems (Innovative Interfaces was first to market). At its core, ERM is about re-envisioning collection management. For some heavy reading on the subject, check the DLF ERMI site (which includes a link to the DLF ERMI report). Andrew stressed that this was not something that happened overnight; there were rumblings of it around 1999… When it did get scoped out by an NCSU working committee, the E-Matrix had three objectives: managing acquisitions, providing access via discovery/display, and supporting collection management.

Stephen presented on scoping the data – deciding what types of data needed to be included in the E-Matrix. It’s a large moving target, but it was limited to acquisitions data, licensing data, bibliographic data, and subject (display) data. Both Andrew and Stephen emphasized that deciding what data they needed was the easy part. The contentious part was deciding which department would be the “authoritative data store” for the E-Matrix. (A role traditionally held by tech services and the library OPAC…)

After a brief talk about licensing and acquisitions data (stuff that might stay behind the scenes a bit), Stephen continued with a quick rundown of the data for the public interface of E-Matrix. And this is where it got pretty interesting if you’re thinking about how to display multiple facets of resources for your users. The ERM committee asked a group of public service librarians to come up with a vocabulary to use based on the following facets (modeled in a short code sketch after the list):

  1. Container – type of resource (e.g., article database, online data set…)
  2. Content – what is inside resource (e.g., images, citations…)
  3. Aboutness – what is the resource about (e.g., general subjects – biology, education…)
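To make the facets concrete, here is a hypothetical sketch (mine, not NCSU’s) of how a resource record might carry all three, so that each one can serve as an independent access point:

```python
from dataclasses import dataclass, field

@dataclass
class EResource:
    """One electronic resource described by the three display facets."""
    title: str
    container: str                                 # type of resource
    content: list = field(default_factory=list)    # what is inside it
    aboutness: list = field(default_factory=list)  # what it is about

# A record faceted three ways yields three independent access points.
abi = EResource(
    title="ABI/Inform",
    container="article database",
    content=["citations", "full text"],
    aboutness=["business", "management"],
)

# For example, filtering a collection by any one facet:
def by_aboutness(resources, subject):
    return [r for r in resources if subject in r.aboutness]

print(by_aboutness([abi], "business")[0].title)  # ABI/Inform
```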

Just seeing these facets was really helpful. We can get a rich set of access points for our resources if we use each of these options. And that was Stephen’s next point as he showed a mock-up of the public interface of the E-Matrix. It was pretty text-heavy, but it offered lots of access points via tabs and multiple displays. (A user might need to spend some time there before really getting comfortable with it.) Stephen did a walk-through of the display of the two major components of the E-Matrix – databases and journals. Before handing it back over to Andrew, he mentioned some future directions, like the ability for librarians to create custom pages with a simple HTML select and drop-down form. Very cool stuff.

Andrew did talk a bit about the back-end specs: Oracle, Java (JSP, Struts), and a PL/SQL database. The database schema was flashed on the screen, and it was complex – over thirty tables, at least. “It’s a complicated problem” – Stephen. Andrew and Stephen aren’t sure how to share the code, but would like to. Boston College Libraries has been working on an ERM and has a data dictionary and other documentation available at http://www.bc.edu/bc_org/avp/ulib/staff/erm/erm-db/. BC is not supporting the code – just offering documentation for those interested.

A great macro view of where libraries can go with managing eResources. And even if you can only use bits and pieces of the E-Matrix idea, you’re still going to be improving things. All kinds of information (including presentations) is available at http://www.lib.ncsu.edu/e-matrix/. For another take on the session, check Karen Coombs’s earlier post from her blog.

danah boyd and Michael Gorman slug it out

I was supposed to blog the danah boyd keynote [note: she has now posted it here], but it’s now difficult to view it outside the context of the subsequent Michael Gorman luncheon speech. When I dutifully met with the other members of the LITA Forum 2005 committee on Sunday morning, they remarked that attendees had found boyd "provocative", and at least anecdotally it sounds like you all enjoyed the juxtaposition of the two speakers. We think of them as on opposite sides of the spectrum. Are they?

Both addressed the failings of tools like Google to capture and use important metadata, as follows:

  • Gorman: Google lacks both precision and recall — it retrieves too many results that are not the best results for the question at hand. (Textbook definitions of both terms are sketched after this list.) [I would argue here that Google lacks access to the best results (buried as they are in proprietary and/or password-protected OPAC and vendor databases), so we and our vendors have set Google up to fail on recall. And if you consider that for ridiculously broad queries it retrieves only 30,000 hits among its seventy-bazillion indexed pages, then perhaps it's not doing too badly on precision.]
  • boyd: Google doesn’t distinguish relevance according to date, which, particularly with blogs, can be misleading — "I still get daily comments on postings from 1997" — nor does it provide any indication of context or community — these comments come from people who randomly found her blog and of course aren’t interested in her or her research, but have wandered in by accident. "How do we create safe space?" if everything is indexed. [I would argue here that until Google gets around to becoming massively more complex, this might be why it's still necessary for librarians to hand-index the Internet, crazy as it sounds, to point people not only towards their topics of interest (websites generally) but perhaps also towards their communities of interest (blogs and forums).]
  • Gorman: We need better metadata hand-encoded in pages. But maybe that’s not enough.
  • boyd: We need better metadata automatically derived by search engines. But maybe that’s not enough.
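For readers who haven’t met the two terms, here are the textbook definitions promised above, sketched in Python (my gloss, not from either talk):

```python
def precision(retrieved: set, relevant: set) -> float:
    """Fraction of retrieved results that are actually relevant."""
    return len(retrieved & relevant) / len(retrieved)

def recall(retrieved: set, relevant: set) -> float:
    """Fraction of all relevant documents that were retrieved."""
    return len(retrieved & relevant) / len(relevant)

# Gorman's complaint, roughly: a huge retrieved set with few relevant
# hits means low precision; relevant documents locked away in
# unindexed OPACs and vendor databases mean low recall.
retrieved = {"page1", "page2", "page3", "page4"}
relevant = {"page2", "opac_record"}
print(precision(retrieved, relevant))  # 0.25
print(recall(retrieved, relevant))     # 0.5
```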

Gorman, in his luncheon speech and elsewhere, lately seems to exhort librarians to band together with publishers of reliable information, against the tide of bloggers and full-text web search engines, to create a searchable corpus of the retrievable and reliable.

boyd in her keynote told us that librarians have in the past been accused of being intellectual property pirates themselves, and exhorted us to band together against the publishing establishment that now scorns bloggers and sues Google: "Librarians are some of the best defenders of civil liberties. Put on your eyepatch and say arrr!"

Okay, here are a few notes from boyd’s actual talk:

Like many digital media, blogging is a form of orality. It’s primarily communication, not publishing.

Blogs are persistent (they archive and their archives are indexed) but their content tends to focus on the moment. Blogs are worth looking at for the same reason libraries keep letters from the 18th century: they’re about people performing their lives, the modern equivalent of an archive of old letters.

Blogging [in the diary sense I gather] is used as a way to feel out what is appropriate, to model bloggers’ own lives and then see the result. It’s a form of talking and performing in public. But then, because of the persistence of blogs, it’s different from talking to the public nearby. Blogs acquire a public dispersed not only in space across the web, but in time. [Imagine Anaïs Nin not realizing her diaries would be seen by everyone, and still seen now?]

Blogging contains a lot of remix — pulling in pieces of others’ communications. But these remixes then serve as redistribution of intellectual property. What is fair use? What happens when a remix becomes popular — is it still fair use? It wasn’t too long ago that librarians were seen as pirates.

(Cf. Roy’s End of the World As We Know It in the opening keynote … darn good thing he didn’t include the song in his posting of his presentations!) [Note, shortly after this keynote I went to the Breaking out of the Box presentation in which Raymond Yee called scholarship itself a form of remix. What do you think of that?]

From the Q&A after the presentation:

One thing for librarians to know is, for bloggers, if it doesn’t exist online (can’t be linked to), it doesn’t exist.

People are getting into niches that are no longer about geography: There’s a huge decline in suicide rates among gay, lesbian, bi, transgender teens in the current generation because they are getting online young and learning they’re not alone. On the other hand, there are pro-bulimia and pro-cutting sites (although mental health professionals say it’s better that these sites are out there than for the bulimics and self-cutters to hide silently). Blogs (or the web at large) don’t really promote cross-cutting communities; people for the most part seek others with whom they have something in common.

Q: Libraries try to distribute information with a known level of reliability. How do we separate the wheat from the chaff among blogs?
A: Trying to separate it now is premature; what seems like chaff could be critical documentation of a period in time, in retrospect. Storage prices now allow us to save everything. It’s searching it that is still a problem. Metadata, context are lacking in our available search; there’s a need to learn to retrieve by quality indicators. Search engines don’t have this down yet.

Re-imagining Technology’s Role in the Library Building

Sue Thompson and David Walker of Cal State San Marcos gave us an excellent presentation, which focused on public computing and instruction labs as well as a Web page redesign in their new Kellogg Library. I am going to limit my focus to the public computing and instruction lab portion of the presentation for the purpose of this review.

Anticipating completion of the new library building in the winter of 2004 gave the team at San Marcos the opportunity to examine and plan out exactly how they wanted to approach the new technology that would be implemented in the building. Rather than just “install a bunch of computers and software,” the librarians and IT staff worked together to define the use of technology in the library space. Historically, technology in the library has been used to access the OPAC, databases, and software applications. However, since most patrons can now accomplish these activities from home, the team asked, “What is the purpose for coming to the library?” Important question!

The team at San Marcos came up with a technology plan focused on two goals: encouraging the use of library expertise in research and instruction, and creating an environment supportive of the iterative research process.

The library went from 40 to 240 public computing workstations. Workstation placement and software images were extremely well thought out to provide maximum access to resources as needed. Furniture was designed and chosen with an eye to making it comfortable to study and use workstations for long periods of time. Three instruction labs were created to support the library’s teaching mission; each has a customized layout and image to support different presentation styles and needs. Very interesting and popular is the “Collaboratorium,” which was designed for group study and research. Lecterns and supporting equipment were designed and operate under the model that technology must support and enhance instruction without getting in its way. Staff have found some of the most popular features in the new labs to be desktop control of all peripherals through an in-house application, and instructor control of all training stations through Altiris Vision and MasterPointer. A highly specialized media edit station allows full audio and video creation functionality.

Very importantly, Sue also discussed the unforeseen things that came up, including new responsibilities we probably all know too well, such as microforms, laptop checkout, pay-for-print, and adaptive technology. Hats off to the folks at Cal State San Marcos for coming up with a top-notch and well-received technology integration plan for their library, and big thanks for sharing their experience with us as well!

Downloadable Books, Audio and Video: One Experience

Users of downloadable eBooks and audio books want many of the same titles as print readers, according to Michelle Jeske of the Denver Public Library. In her presentation, Downloadable Books, Audio, and Video: One Experience, she reported that DPL is a large customer of downloadable materials and foresees an increasing demand for them. As one of the first customers of eBooks from netLibrary, starting in 2000, DPL has found that service both useful and frustrating. At this point DPL owns most of the netLibrary titles, but it will not be adding any more; the difficult user authentication process and the inability to customize the service to DPL’s needs have led the library to decide that there is little future in the contract. DPL has signed on with OverDrive, which has more bestselling materials and is more user-friendly: users can enter their library card numbers and do NOT have to create accounts to access eBooks, and they can download OverDrive materials onto their PCs or PDAs.

Jeske said that another reason DPL signed with OverDrive was to get downloadable audio books. When DPL began offering them on January 3, 2005, every title was checked out within 24 hours; the library went back to OverDrive, bought more copies, and negotiated unlimited checkouts for 50 titles. Many users either load these books onto their Windows Media pocket devices or burn CDs; the product does not work directly with Apple iPods.

There is a workaround for loading to iPods: users first download the OverDrive audio books and burn them to CDs, then import the audio books into iTunes and from there load them onto their iPods.

There are some problems with the downloadable book and audio book market. Some publishers are resisting the movement, fearing that their content will be pirated. Other publishers that are producing downloadable books are signing exclusive deals with vendors, making it necessary for libraries to use multiple vendors if they want all the popular titles. The technical standards vary among publishers, too.

Libraries that choose to offer downloadable media must consider training numerous staff members to assist users.

Surveys and statistics at Denver Public Library show that downloadable users want the same titles as print readers. Top-circulating titles are the same as those on print bestseller lists, when they are available. If not, almost any downloadable book will do; demand is so great that even the classics circulate especially well.

Recent titles added to the Denver Public Library downloadable books collection are listed on the library’s website.

Information and the Quality of Life: Environmentalism for the Information Age (take 1)

David M. Levy (University of Washington) gave this closing keynote session for the conference. Levy began his talk by noting that many of us feel that life is out of balance somehow and that technology seems to have something to do with it. As we speed forward do we lose sight of the bigger picture?

Levy asks us: how can we recognize and establish balance? We have an abundance of information sources, devices and technologies. When does this abundance lead to overload? We have an abundance of attentional choices. When does this lead to fragmentation? We lead full lives with full schedules. When does this become “busyness”? We largely subscribe to rapid action and response. When is this speed counterproductive?

Some of the areas negatively affected by this speed-up and overload: physical and mental health, productivity, effectiveness and quality of work, job satisfaction, decision-making, social cohesion and capital, democratic governance, and ethics. Some people can thrive on a 24/7 informational diet, but many cannot.

Levy quoted the well-known 1945 Vannevar Bush article in the Atlantic Monthly, “As We May Think,” in which Bush conceptually proposed the basic tenets of hypertext and digital computing as a solution to the problem of information overload (he called his proposed device a “memex”). Levy notes that we have done all that Bush proposed and more, yet this has not solved information overload; arguably it has worsened it.

Levy’s basic idea is that we spend a lot of time using technology to find, gather, and consume information, but we have lost sight of the need to slow down and process that information—to take time to contemplate the world (the scholastic distinction between ratio and intellectus). This was a nice way to end the conference—a helpful reminder to take a breath, slow down, and be calm.