LITA Program Planning Committee (PPC)

Topics discussed:

  1. Schedule corrections for Imagineering and Public Libraries & Technology Interest Groups
  2. Program planning submission process. We need to streamline the process, replacing the current manual process with a web-based form if possible, and have it ready by Annual 2009 for 2010 program submissions. At this point, it might be difficult to achieve this in six months if we rely on ALA IT to build the infrastructure for us. A working group will be established to assess the options and provide recommendations.
  3. Program proposal workflow. The PPC would like to see whether individuals could submit a program proposal without going through the formal Interest Group channels. Another working group will be created to research the issue and provide recommendations.
  4. LITA Manual section 10 on Programs at ALA Annual Conference, and how to make the manual more user-friendly. A group of PPC members will look into this.
  5. Looking at the possibility of LITA PPC sponsoring a program, especially to help new members and/or incoming IG chairs organize a program.

We’re getting a new program proposal from the International Relations Committee. They’re planning to bring in topics on technology and the developing world:

  • OLPC (One Laptop per Child) project.
  • OACIS (Online Access to Consolidated Information on Serials), a project to digitize and make available selected scholarly humanistic Iraqi journals. Also a similar project, AMEEL (A Middle Eastern Electronic Library), to digitize about 100,000 pages of scholarly journal content from ten Middle Eastern countries, as well as providing technological training and infrastructure among those institutions.
  • United Nations efforts to bring new technologies to the developing world.
  • Tentative schedule is Saturday, 8-10am.

Optimizing Library Resources for Screen Readers

LITA National Forum 2008
Presenter: Nina McHale, Auraria Library, University of Colorado Denver.

The speaker started her presentation by sharing her experience navigating a website with a user with a visual disability. She pointed out that even though the information structure on a website makes sense to us (the sighted users), it does not necessarily make sense to users of screen readers. Making your website accessible is very important because the goal of a library is to provide access, not to create barriers.

Accessibility matters because:

  • 10 million people in the US are blind or visually impaired; 1.3 million people are legally blind due to age or other health issues.
  • screen readers are used by blind users as well as people with learning and physical disabilities.
  • writing good code is good practice and makes the web pages more accessible to all.

Nina pointed out why accessibility is an issue:

  • the proliferation of graphics makes it more difficult for people with visual disabilities to use websites.
  • typical web browsers tend to be too forgiving of bad code.
  • many library web pages tend to be home grown, or lack a dedicated group to create and maintain them.

Two governing standards for web accessibility: Section 508 and the W3C Web Content Accessibility Guidelines (WCAG).

Section 508 is mostly based on WCAG, with the addition of eight more standards. Federal agencies are required to comply with the Section 508 standards.

Putting web standards to work:
– check that the code behind the web pages is standards-compliant and accessible (a minimal sketch of a standards-compliant page skeleton follows the list of tools below)
– use the free web-based validation tools available to check different kinds of web content. For example:

  • from the World Wide Web Consortium (W3C):
    • CSS validator: http://jigsaw.w3.org/css-validator/
    • HTML validator: http://validator.w3.org/
  • Cynthia Says (http://www.cynthiasays.com) to validate for accessibility
  • Fangs Firefox extension (http://sourceforge.net/projects/fangs), which produces a text transcript similar to the voice output of a screen reader. This browser extension also lets you check that the document structure (headings) and the text used for links make sense when read without their surrounding context.

The reports from these validation tools can be difficult to interpret at first, but they usually include line numbers to help pinpoint the exact location of the problematic code. Most of the errors tend to be simple and easy to fix.
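
As a rough illustration of what the validators expect, here is a minimal sketch of a standards-compliant page skeleton, assuming an XHTML 1.0 Strict doctype (the title and content are placeholders):

  <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
  <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
  <head>
    <!-- declaring the character encoding helps validators and browsers alike -->
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
    <title>Library Home</title>
  </head>
  <body>
    <h1>Library Home</h1>
    <p>Page content goes here.</p>
  </body>
  </html>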

    Typical problems in web design, their corresponding standards, and solutions to those problems:

    1. No alternative for visual elements (photos, images, etc.)
    2. Poor document structure (internal html structure)
    3. Repetitive navigation

    1. Visual elements
    A lot of library websites use images and photos to increase the visual appeal of their web pages or to support the document structure. However, screen reader users might not have access to the information conveyed by those images. The solution is to use the alt or longdesc attribute.
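
    A minimal sketch of how the two attributes are used (the file names and descriptions here are made up for illustration):

      <!-- a short text alternative that the screen reader announces -->
      <img src="floor-map.gif" alt="Map of the library's first floor" />

      <!-- longdesc points to a separate page with a fuller description -->
      <img src="usage-chart.gif" alt="Chart of 2008 database usage"
           longdesc="usage-chart-description.html" />

      <!-- purely decorative images get an empty alt so they are skipped -->
      <img src="corner-border.gif" alt="" />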

    2. Poor document structure:
    Using appropriate headings (h1–h6), meaningful hyperlink text, and correctly labeled forms (including search boxes) really helps screen reader users “scan” the web page and get to the appropriate information.
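
    For instance, a sketch along these lines (the headings, link target, and form action are hypothetical):

      <h1>University Library</h1>
      <h2>Research Databases</h2>

      <!-- link text that makes sense when read out of context -->
      <a href="hours.html">Today's library hours</a>

      <!-- a search box whose label is explicitly tied to its input -->
      <form action="/search" method="get">
        <label for="catalog-search">Search the catalog</label>
        <input type="text" id="catalog-search" name="q" />
        <input type="submit" value="Search" />
      </form>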

    3. Repetitive navigation:
    A good website requires a consistent design, but that consistency means the same navigation is repeated on every page. Experienced screen reader users can simply “hop along” and ignore the repetitive navigation, but it’s better if we also provide a “skip navigation” link or, by the magic of CSS, have the navigation links read as the last part of the document by the screen reader.
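
    A rough sketch of the “skip navigation” approach (the links and ids here are illustrative); the CSS alternative mentioned above would instead place the navigation markup after the content in the source and position it visually with stylesheet rules:

      <!-- the first focusable element on the page -->
      <a href="#content" class="skip-link">Skip navigation</a>

      <ul id="nav">
        <li><a href="/">Home</a></li>
        <li><a href="/catalog">Catalog</a></li>
        <li><a href="/databases">Databases</a></li>
      </ul>

      <div id="content">
        <h2>Page content starts here</h2>
      </div>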

    One of the proposed agenda items was a demo of the JAWS screen reader. Unfortunately, there was a technical issue and the demo had to be canceled. A discussion followed about what other libraries are doing to make their websites accessible, the accessibility of AJAX (see http://www.w3.org/TR/wai-aria/), and keystroke behavior (combining 'onmouse*' handlers with 'onclick').
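
    (On the keystroke point, the usual technique is to pair mouse-only event handlers with keyboard equivalents; a hypothetical sketch, with made-up function names:)

      <!-- behavior triggered by the mouse is also reachable from the keyboard -->
      <a href="databases.html"
         onmouseover="showPreview()" onfocus="showPreview()"
         onmouseout="hidePreview()" onblur="hidePreview()">
        Research databases
      </a>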

    Other resource mentioned: World Usability Day (http://www.worldusabilityday.org/).

    Nina’s presentation can be found at http://library.auraria.edu/~nmchale/presentations/lita2008/optimizing.pptx

    ERM and e-Books

    Friday, June 22, 2007

    The LITA ERM Interest Group held a managed discussion on e-books. Ted Koppel, Verde ERM Product Manager (Ex Libris), gave the talk. (Note: Verde is just starting to work on an e-book management system.) His role in this talk was basically to ask questions and raise awareness about e-book management.

    Koppel suggested that we start thinking about e-book management now. Even though many libraries are just getting used to e-journal management and might still be learning the ins and outs of license management, many of these libraries are already delivering e-books.

    Start thinking about usage scenarios such as e-learning, e-reserves, and e-books as e-textbooks. Other e-book scenarios are possible as well: single-use circulation, institutional repositories, and archiving and preservation, especially in the wake of the digitization projects from Google and other commercial companies.

    There are several functional areas where a library needs to consider questions and make decisions:

    Acquiring e-books commercially

    • Does the supplier offer a collection management tool?
    • Does the supplier provide metadata or cataloging tool?
    • What is the role of licenses and permissions, and how do we manage that information in the data?
    • How does the industry deal with the open access model as well as the so-called free e-books such as government documents?

    Acquiring or creating e-books locally

    • Which departments within the institution produce the e-books? Who manages the collections, and who does the collection development?
    • E-books only, or other digital materials as well?
    • Where is the metadata for the locally created material coming from?
    • Granularity: how is the ERM system used to manage the collection?
    • Use/copyright restrictions, licensing/contracts for the locally produced e-books.

    Description

    • What description/identifier should we use (Dublin Core, MARC, etc.)?
    • What Uniform Resource Identifier (URI) is used?
    • Should records be added to the OPAC, or do we need to keep them separate?
    • Differences in indexing and access points.
    • Use the publisher’s search platform, or develop one locally to meet our own needs?

    Discovering e-books

    • At the discovery level, are e-books different from their physical versions?
    • What kind of search mechanism should be used, and how are the indexes built? Do we need indexes?
    • Which thesauri should we use? Should it be LCSH or our own local practices?
    • Combining e-book search results with other, presumably related, results?
    • Do we need to FRBRize the results?
    • Can we embed e-book search in other platforms, such as a course management system?
    • Does it offer relevance-ranked results?
    • User tagging?
    • Rules for use: who tells the users, and how? Where does ERM stop and DRM kick in?
    • ‘Unlimited access’ vs. a ‘charge out this copy’ model?
    • Pay per view or other use model?
    • Prerequisite requirements for delivery (specific browser, computer OS, etc.)
    • Granularity
      • deep links to title/chapter/page within an e-book?
      • Indexing and retrieval depth: chapter? pages? paragraph?
    • Is a resource sharing system possible?

    E-book management

    • Is e-book management different from e-journal management?
    • Has the role of Collection Management changed?
    • Staff role?
    • License, usage, DRM?
    • Budget, support, maintenance?

    Koppel summarized that:

    • e-books are still in their infancy.
    • e-book usage will follow, as will users’ expectations.
    • our experience with managing e-journals will make the move to managing e-books easier.
    • but there is still much to learn.

    There were several questions, discussions, and updates after the talk. A representative from OverDrive talked about their product and mentioned the International Digital Publishing Forum, formerly the Open eBook Forum (OeBF). He also mentioned that Adobe had just released the free Adobe Digital Editions 1.0 for Windows and Mac OS (a Linux version is coming soon). It is a Flash-based rich internet application (RIA), and the software can also open and read PDF documents.

    Several ERM-IG members presented reports from conferences they attended: NASIG, ACRL, and ER&L. They participated in several focus groups discussing various ERM issues, including:

    • a discussion space/online community for ERM implementation and workflow planning and for sharing best practices
    • ERM systems that come with some default settings
    • staffing for e-resources
    • training and appropriate staff levels
    • standardized licenses from publishers that libraries can upload into their ERM
    • no standards for publishing e-data
    • ERM vendors to provide consultation services for ERM implementation

    Other tidbits mentioned:

    • Blog for ER&L
    • Victoria Reich from LOCKSS encourages libraries to use e-books because we can utilize preservation initiatives like LOCKSS to keep a permanent archive of our e-book collections.
    • ONIX standards for holdings data:
      • SOH (Serials Online Holdings) format v.1.1
      • SRN (Serials Release Notification) User Guide is available
    • OPLE – open source tool for ONIX for Serials

    One attendee wondered if there is a possibility of a direct communication mechanism between publishers and libraries, as well as between publishers and agents, especially in terms of licensing. Coincidentally, my co-worker just reported that NISO has a working group called SERU (Shared e-Resource Understanding) that has just published a draft on a common understanding between libraries and publishers. The draft is aimed at publishers and libraries that prefer to simplify (or even remove the need for) journal licenses.

    ERM-IG now has a new mailing list.

    Preconference: Intro to Web Services

    [late posting due to Internet connection issue]

    Sara Randall from the University of Rochester opened the program by giving us a definition of web services and their components. She showed Rochester’s CUIPID (Catalog User Interface Platform for Iterative Development) project, which implements an XML-based library catalog with Google-like “did you mean” spell checking. She emphasized that web services are important for promoting interoperability, including support for legacy applications and just-in-time integration.

    Eric Lease Morgan followed up with a more elaborate definition of web services, some simple hands-on examples, and several web services projects such as OAI-PMH, the WordNet thesaurus project from Princeton, Google Maps, the Weather Channel on the desktop, and the “semi web services” approach of integrating MyLibrary content into Notre Dame’s campus portal. He also emphasized the need to start learning XML and embracing web services, especially since many users prefer accessing library resources through the Internet.

    Two case studies were then presented. Jeremy Frumkin from Oregon State discussed the OCKHAM Initiative, a collaboration between Oregon State, Notre Dame, Emory, and Virginia Tech. Its Digital Library Services Registry (DLSR) enables easy advertising and discovery of digital library services over a P2P network, and its Harvest-to-Query service (H2Q) is a method for making any OAI-PMH-accessible collection available via Z39.50. More info at http://ockham.org.

    Diane Vizine-Goetz from OCLC discussed their Library of Congress Controlled Vocabulary project, a service that checks names against the Library of Congress authority file. Another project presented was Terminology Services, which can be delivered through the MS Office 2003 Research task pane. The goal of this project is to “make controlled vocabularies more accessible to people and computer applications.” This service should be available to the public in July.

    From the vendors’ perspective, Carl Grant represented the Vendor Initiative for Enabling Web Services (VIEWS), an effort by vendors to enable interoperability in cooperation with NISO. The hope is that the initiative will bring a win-win situation for both libraries and vendors in achieving optimal service. Read the June 15th issue of Library Journal on library automation (“The Dis-Integrating World of Library Automation”) for his thoughts on web services.