The last speaker was Jennifer Bowen from the University of Rochester. Jennifer's most recent claim to fame is as the ALA representative to the Joint Steering Committee for RDA development, but she explicitly prefaced her remarks by commenting on the pleasure of speaking for herself at this meeting.
She began by explaining the RDA development structure, including the small paid staff supporting the effort. She described the reasons behind RDA's development, and expressed a teensy bit of regret that she has stepped down as ALA rep just as things got really interesting. She pointed out some of the things that hampered RDA's development:
• too much hype, too many hard-to-achieve goals
• need for backwards compatibility interfered with looking forward
• tight funding and timeline
• success of the standard tied to the success of the product
• consultation process needed improvement
Consulting with other communities has been an issue all along, both in determining which communities should be consulted and what was to be gained by consulting with them. Clearly the archives and museum communities were important, but so were other communities beyond libraries with whom metadata sharing was desired or who had something specific to offer.
In discussing what is gained by consultation, obviously metadata interoperability is paramount, but a better technology focus coming from outside libraries has also been a gain. In the context of learning from other communities, Jennifer mentioned that her eXtensible catalog project team includes an anthropologist, and they have found that association to be very useful.
Successful consultations must be at the appropriate level, and may need to be ongoing, rather than one time. Difficulties arise with funding and sustainability for those kinds of consultations. Jennifer included some recommendations for RDA:
• move forward to first release in 2009
• aggressively move forward with RDA application profile and related efforts
• restructure the effort to focus more on consultation and less on document editing
Jennifer made some comments on the future of controlled data, and talked about what's needed:
• need identifiers!
• evaluate the potential based on well-designed systems (by implication, not based on the systems we have now)
• need better tools for catalogers (she pointed out Gary Strawn in the audience, who has built a number of great tools for Voyager users). As an example she described a tool developed at the U. of R. with computer science students, which is used to disambiguate names
• facilitate faceted browsing
Jennifer talked a bit about what is needed to create new catalogs. She mentioned in particular using web services to enrich data, metadata supporting faceted browsing, and better structured data. She emphasized that we need to encourage more experimentation and research, something not facilitated by current ILS systems. She mentioned some of the new open source developments and suggested that the library community should embrace the new open source software environment, and feed back the results of their work into the standards arena.
This last point led into a discussion about metadata sharing. Jennifer talked about the need to share local augmentation of data and to think beyond how libraries currently share into some important new territory. She pointedly mentioned the sometimes harsh discussions about visions that populate our list-based discussions. She offered a positive vision for bibliographic control:
• give catalogers effective tools so they can focus on the intellectual effort
• librarians should participate in systems development
• find new ways for catalogers to contribute widely to metadata improvement
• catalogers need to be confident that systems will use their work effectively
To close, Jennifer recommended more positive, decisive future actions, to clearly redefine roles and responsibilities, plus more efforts to explain and justify the tradeoffs for change.
Questions to Jennifer were difficult ones: how, for instance, to 'market' the important changes coming without frightening the community, while reassuring managers that the changes will be significant enough to justify the cost. There was some talk of history, with Brian Schottlaender recalling the transition to AACR2, when erasing cards was a cost factor. He contended that the ability to change native machine-readable cataloging made all the difference in the cost of change. Bob Wolven questioned whether we need library standards to be further deconstructed on a practical level. Jennifer mentioned some tentative plans for using the RDA AP in the eXtensible catalog work.
Janet Swan-Hill brought up an ethical issue that arises as libraries seek to cut their costs as well as their contribution to the whole. Sara Shatford Layne spoke during the public comment period, urging us not to forget the academic researcher, as user, in our deliberations. She also supported the notion that we need more structured data rather than less, and extolled the virtues of authority data. She urged us to remember that "cataloging is a public good" and should not be judged only by its economic value.
Kevin Randall was the second person signed up to speak. He felt that there was no inherent conflict between the first two speakers: one spoke of the content, the other the container. He worried less about containers, except when they leaked, and more about the content. He was also concerned that there were still few good tools for catalogers. He spoke as well about the CONSER standard record, which has been a topic of discussion for over a year. The record focuses on 'access,' not 'identification.' He felt it was rushed, and not properly identified as a 'less than full' alternative. Kevin suggested that there was a need to think about the nature of the cooperative structure that we have developed over time.
A third commenter, Michael Norman from UIUC, discussed a digitization project his institution has with the Open Content Alliance. Some of the resources scanned had not been checked out in many years, but were now being downloaded repeatedly. As this work is being done they are exploring options to add value to the metadata (citations, TOC data, indexes). Since there is no place for this information in MARC, they have been experimenting with using METS. He mentioned also the single vs. multiple record issue, now exacerbated by the scanning projects, and hoped that there would be more discussion about those issues. He pointed out that libraries' use of OAI-PMH should improve the general quality of metadata. Michael also spoke briefly about how automated metadata generation is being considered as part of the package moving forward, for their institutional repository, for instance.
Clifford Lynch, in an inspired role as summarizer, took over the microphone. He first urged the audience to consider submitting additional comments to the website. He also said that he would attempt to build some bridges to the next meeting, as the WG was thinking about what questions should be considered as they approach "Economics and Organization of Bibliographic Data" as their third topic.
He voiced his intention to extract the things he did hear that were of interest, as well as what he was amazed he didn't hear. He mentioned the first speaker's focus on quality control, and what that might imply as we consider our shared bibliographic enterprise. He pointed out that quality is never perfect, and that, operationally, we should think carefully about how systems can measure quality. Once we can talk about measuring, we can begin to talk about the economics of the problem, within an environment of constrained funding. We don't really know the trade-offs yet, but we should think about whether we can provide the proper value systems to promote quality.
Cliff mentioned the "insightful comments" on the importance of legacy vocabularies, and the importance of exploring the economic issues of opening these things up. He pointed out the important place these will take in our infrastructure. He wondered why we hadn't spoken much about examining the content of things like name authorities, especially in the context of the needs of publishers and rights organizations. He mentioned the discussion of codification of bibliographic practice, but thinks that the more central questions come with the notion of completely digital objects that can be operated on in a computational way. He was interested that this did not come up in the discussion since it will come up again, particularly in the economic discussions.
As for the use of the term 'bibliographic control': we will have physical objects around for a while, with surrogates a part of our life for some time to come. The final commenter inspired him to think about how much of the content will migrate to the metadata, and how we might think about a world ahead full of digital things, where we may need to draw some line between one and the other. This is no longer a theoretical question, clearly.
Cliff mentioned the discussions about tools and uses of metadata by systems: the systems we use now clearly affect our view. In terms of tools for cataloging, it needs to be asked: what are we trying to do better? Quality control? More efficiency? Richer description? What should be our priorities in terms of tools?
At the end he told a story about a group he ran into recently called the "Proofreader's Collective," which proofreads text transcriptions for Project Gutenberg and other projects. He wondered whether this model might be taken advantage of as we discuss issues like quality control. Three important issues to take away: (1) there is a dichotomy between the perfect record and the reality of resource descriptions; (2) there is important value in legacy vocabularies; (3) there are significant systems issues when metadata moves around.
In Cliff's view there are a lot of players in the arena of standards development, and we need to be able to explain these efforts better. We also need to discuss the economics and value of having standards openly accessible. The IETF and W3C have been successful because they have made their activities easily accessible; NISO has also taken up this challenge. If these descriptive standards are to have an impact they must be widely and easily available, in convenient forms, and the economics of this issue must be addressed.
Richard Stewart, Indian Trails Public Library, wondered at the lack of comments by public libraries. He mentioned that we do have common ground, but also have needs that haven't been addressed sufficiently in this process. Cliff suggested that the WG would welcome comments from public libraries.
James Nye, a University of Chicago bibliographer for Asian collections, mentioned the issues of regional scripts and the systems to handle them. The standards we use don't effectively provide access to users in those areas, which also limits the possibilities for collaboration with librarians there.
Joan Scuitema, University of Illinois at Chicago, pointed out that there is a lot of black-and-white thinking in libraries and systems. At some point in her career she acquired a degree as a therapist, and she notes that in an environment of extreme change, this tendency to ignore the gray areas is exacerbated. She urged the group to 'move back to the center' and deal realistically with these issues, resisting the urge to declare anything less than perfection a failure.
Richard Amelung pointed out that the issue of scripts is one of those common grounds with public libraries, who provide services to a broad diversity of immigrant communities.
Marc Gartler, Harrington College of Design, in a comment about image data, noted that our current approach to images lacks sufficient granularity.
Deanna Marcum, in her final thoughts, described LC's position as a large gray area. She mentioned Jennifer's presentation and her note about roles and responsibilities, and said that LC has been widely misunderstood in those conversations. When the strategic planning process began, she read all the LC annual reports, a difficult but enlightening process. The early ones were philosophical documents, thoughtful views of what LC should be and do. LC has been the leader for decades in bibliographic control, largely because it has created more records than anyone else, and because it has assumed over time the responsibility to lead in this area. The library has assumed it should be an innovator, but it has also taken on the task of supporting and maintaining the cataloging community, and there's a conflict between the structure needed for that support and the structure needed to innovate. LC understands and appreciates the confidence others have in the institution, but also wants to innovate where it can.
LC's task is to serve all libraries, all citizens, other national libraries, etc., but the balance for future decisions is complicated by a lack of new and continuing resources. Not all the library's holdings are under bibliographic control, and those that aren't are also not really available to users. Is it more important to make them available, perhaps by digitization, or to bring them under bibliographic control? She also mentioned that the technology they have now does not support other scripts.
When she met with the ALA Board two summers ago, she was asked how much money was in the LC budget to support its service to other librarians. The answer is "zero." Congress has never financially supported this kind of service, though it supports it in theory. LC would like to continue providing it, but the challenge is great.