Developing Best Project Management Practices for IT Projects, Day 2

The two presenters were:
Frank Cervone, Assistant University Librarian for Information Technology
Northwestern University
f-cervone@northwestern.edu

Grace Sines, Head, Information Technology Branch
USDA, ARS, National Agricultural Library
Gsines@nal.usda.gov

Day 2 continued with discussions on the 9 areas of knowledge within project management.

Having a work breakdown structure (WBS) can be helpful in time management. This is a decomposition of all the parts of the project in a hierarchical arrangement.
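
A rough sketch of a WBS as nested tasks with hour estimates (the task names and figures are invented for illustration) shows how estimates can be rolled up the hierarchy for time management:

    # Tiny sketch of a work breakdown structure as nested tasks with hour estimates.
    # Task names and estimates are invented for illustration.
    wbs = {
        "Workstation upgrade project": {
            "Planning": {"Inventory machines": 8, "Write rollout schedule": 4},
            "Execution": {"Image workstations": 40, "Train staff": 16},
            "Closing": {"Verify installs": 8},
        }
    }

    def total_hours(node):
        """Roll estimates up the hierarchy: leaves are hours, branches are dicts."""
        if isinstance(node, dict):
            return sum(total_hours(child) for child in node.values())
        return node

    print(total_hours(wbs))  # 76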

Quality management is difficult because you can’t just expect quality to happen. The team needs to set specific and measurable goals to help achieve quality, and check quality throughout the project life cycle.

Human resources management is also difficult because of the nature of people. It’s crucial to have a competent and committed staff, and to provide training if necessary. Because “expert” staff members are often put on many teams, keep them from getting burned out by rotating team members, or by having the expert mentor another team member instead.

Each member on the team has a different way of leading and responding to leadership. However, “If no one seems to be in charge, then no one is.”

As mentioned earlier, it’s important to have buy-in from the team. Team members need to know what their roles are on the team, and the leader should know each member’s motivation for being on the team. It’s also crucial that project plans not be developed in isolation and that the project manager pick his or her own team. The team then writes the scope or charge, and does the planning.

There are three types of team members:

  • Achievement motivated team member: These people set clear targets and provide lots of feedback.
  • Affiliation motivated team member: These people allocate time for small talk, need time to process, and often need support or coaching.
  • Power motivated team member: These people are concerned with getting involved, are usually very eager, and want to make sure people understand their responsibilities and key deliverables.

We discussed the various forms of pulling and pushing technologies (email, blogs, RSS feeds, bulletin boards, and websites) that can help increase communication within a project team. Frank said that he really liked blogs because you can have feeds coming from them, which gives people options.

Risk management is a systematic process for planning for, identifying, analyzing, monitoring, responding to, and controlling risk. Risk is often seen as a negative thing, but it can also be viewed positively, since it might present new and challenging opportunities. The project team needs to do a risk analysis, decide what contingency to include, and carefully measure the risk.

How do you estimate time?

The presenters gave us a nifty equation that helps estimate how much time a particular activity within a project, or the project itself, will take.

O = optimistic guess of how long the project will take to complete
P = pessimistic guess of how long the project will take to complete
ML = most likely estimate of how long the project will take to complete

(O + P + (ML x 4)) / 6 = the time it will probably take to complete the project
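
A quick sketch of this three-point (PERT-style) estimate in code, with invented numbers for illustration:

    # Three-point (PERT-style) estimate: (O + P + 4 * ML) / 6.
    def expected_time(optimistic, pessimistic, most_likely):
        """Weighted average that leans toward the most likely estimate."""
        return (optimistic + pessimistic + 4 * most_likely) / 6

    # Invented example: 2 weeks best case, 12 weeks worst case, 4 weeks most likely.
    print(expected_time(2, 12, 4))  # 5.0 weeks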

We looked at tools for helping manage the project parts. Microsoft Project and Visio are great tools, but an Excel spreadsheet can sometimes do the job just as well, depending on the size of the project.

Reasons why failure occurs:

  • Failing to establish commitment
  • Inappropriate skills
  • Project isn’t really necessary, or it’s seriously misguided
  • Premature commitment to a fixed budget or schedule
  • Adding resources to overcome schedule slippages
  • Inadequate people management skills


Why projects succeed:

  • High user involvement
  • Commitment by all (including team and sponsors)
  • Cultural acceptance
  • Adequate time and resources
  • Support from management
  • Clearly defined objectives and requirements
  • Excellent planning and project management
  • Good communication
  • Being able to stop a project when needed

We concluded by talking about how to introduce these ideas and tools into our own work environments. Frank and Grace suggested introducing the templates and ideas gradually, so they have a better chance of being integrated.

Developing Best Project Management Practices for IT Projects, Day 1

The two presenters were:
Frank Cervone, Assistant University Librarian for Information Technology
Northwestern University
f-cervone@northwestern.edu

Grace Sines, Head, Information Technology Branch
USDA, ARS, National Agricultural Library
Gsines@nal.usda.gov

There were about 44 people in attendance, and the breakdown was interesting:

  • 30+ from academic libraries
  • 0 from school libraries
  • 2 from Tech processing areas
  • 2 from collection development areas
  • 1 from marketing

Day 1 began with an overview of what we’d be doing:

  • Looking at project management in general
  • Defining what a project is and is not
  • Developing an understanding of what project management is primarily from the viewpoint of the Project Management Institute (PMI)
  • Investigating the 5 process groups and 9 knowledge areas
  • Examining dozens of project management templates, documents, examples, etc.
  • Examining the best practices for IT projects

The objectives were basically to provide us with information we could begin using right away and to make managing projects easier. As the preconference went on, I began to feel overwhelmed, but grateful for the numerous templates and examples they handed out, which build a wonderful tool kit of proven project management best practices.

Information technology projects are hard to manage because they involve people of different skill levels, budgets, and distributed leadership, and because it can be difficult to get people to understand what the project’s scope really involves. You have to deal with differences of opinion. Frank mentioned the “movement” toward evidence-based librarianship, where decisions are based on research rather than on opinions.

What is a project? A project has a beginning and an end. It’s not a repetitive task, and the end result is usually tangible. A program, on the other hand, is operational and ongoing.

It was stressed multiple times throughout the preconference that “The people who must do the work should be in on the planning of the work.”

Successful project managers:

  • Enable staff
  • Are excellent, ethical communicators
  • Have high administrative credibility
  • Are sensitive to interpersonal issues
  • Have political know-how
  • Practice participatory management (not hierarchical management)
  • Get buy-in from the team


Successful team members:

  • See lots of options, alternatives, and possibilities
  • Focus on things they can control
  • Feel challenged, energetic, and WANT to be there
  • Have clear and written goals
  • Learn from mistakes
  • Know themselves

Frank and Grace spent most of the preconference going through the project management blueprint outlined by the Project Management Body of Knowledge (PMBOK). There are 9 Knowledge areas and 5 Processes to project management. We had several large and small group discussions about these things. For example, we examined a project where an IT department was upgrading the library to Windows XP, and looked at the scope of the project, what risks are involved, complications in training and dealing with people and levels of service, etc.

9 Areas of Knowledge

  1. Scope Management
  2. Time Management
  3. Cost Management
  4. Quality Management
  5. Human Resources Management
  6. Communications Management
  7. Risk Management
  8. Procurement Management
  9. Integration Management

5 Processes to Project Management

  1. Initiating
  2. Planning
  3. Executing
  4. Monitoring and controlling
  5. Closing

Most of the work is done in the planning stage, and it should always be done in groups or teams. The project scope seems to be one of the most important pieces, because if it’s not clear, then the project may not go well.

Electronic Publishing Software for Libraries

This concurrent session covered the background, purpose, and evolution of the DPubS (Digital Publishing Systems) open source software project, based at Cornell University Library, as well as a case study based on Pennsylvania State University Libraries’ use of the package. The audience left with an appreciation of the potential of electronic publishing software to allow an academic library to provide enhanced services to its user community.

David Ruddy, from Cornell University Library’s Electronic Publishing Initiatives division, who has been involved with the project for a number of years, started by saying that the project had two main objectives:

  • to allow publishers to organise and deliver both open access and subscription controlled content; and
  • to give users the ability to navigate and access content.

The project came about because of a number of factors, including the disaggregation of the publishing industry (mainstream publishers often contract out their electronic services) and the rise in prices of conventional books and journals in the past 10 years. Other reasons for the project included a desire to offer publishing alternatives and offer a tool to allow others to become involved in scholarly publishing, and to support local initiatives in scholarly publishing.

Cornell University Library’s involvement in electronic publishing began as a result of Project Euclid, which started in 2000 and currently provides access to 45 journal titles in mathematics and statistics, involving 30 different publishers. Approximately 66% of the content is available as open access, and the remainder is subscription controlled. The Center for Innovative Publishing is involved with publishing several other titles, and has a number of other projects underway, primarily local initiatives.

The DPubS software began as a project in the Computer Science department, and was picked up by the Library in the late 1990s. Project Euclid provided the momentum for initial redevelopment of the software, and a Mellon grant (in combination with the involvement of Pennsylvania State University) funded further work in the last two years.

There was an ambitious development agenda. The main areas of work were the generalization of the software to support multiple document types, the development of different administrative interfaces, provision of interoperability with institutional repository software, and provision of editorial management facilities. In the first six months of 2006, 6 additional development partners were involved.

The system uses a simple object model, and the architecture is a distributed services model with clearly defined APIs. It supports different presentation options, which can be customized for each publication. It also enables content to be made available under multiple subscription and revenue models (not just open access). It is designed to have low maintenance and operating costs. The integration with an institutional repository means that the repository software can look after preservation and archiving, while DPubS focuses on presentation and access controls.

DPubS 2.0 was released in October 2006 on Sourceforge; it supports OAI 2.0, and can be used in combination with Fedora as an underlying repository.
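
For context, the OAI-PMH 2.0 protocol mentioned here works over simple HTTP requests; a minimal ListRecords harvest against a hypothetical endpoint might look like the following sketch (the base URL is invented, and only the standard OAI-PMH verb and parameters are assumed):

    # Minimal OAI-PMH 2.0 ListRecords harvest; the endpoint URL is hypothetical.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    BASE_URL = "https://repository.example.edu/oai"  # hypothetical endpoint
    OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"
    DC_NS = "{http://purl.org/dc/elements/1.1/}"

    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    with urllib.request.urlopen(BASE_URL + "?" + urllib.parse.urlencode(params)) as resp:
        tree = ET.parse(resp)

    # Print the identifier and Dublin Core title of each harvested record.
    for record in tree.iter(OAI_NS + "record"):
        identifier = record.findtext(f"{OAI_NS}header/{OAI_NS}identifier")
        title = record.findtext(f".//{DC_NS}title")
        print(identifier, title)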

Further plans include extending the editorial tools to support peer review, enabling it to work with dSpace as well as Fedora, enhancing the administration interfaces and documentation, and allowing contributions from the user community using the open source development model.

The challenges include finding others who are both interested in being involved and have the capability to contribute. David noted that it is a leap for libraries to move from being content consumers to being actively involved in content publishing.

See: http://dpubs.org/, http://dpubs.org/wiki/ and http://cip.cornell.edu for more information.

Mike Furlough, from Pennsylvania State University Libraries, then gave a user’s perspective on DPubS. Penn State has been a DPubS development partner, and their involvement has included testing alpha versions of the software, testing its integration with Fedora and dSpace, developing test cases for journal backfiles and conference proceedings, and refining and testing the editorial services.

At Penn State, the University Press is part of the University Libraries. The Office of Digital Scholarly Publishing wants to provide a scholar-driven service, particularly for at-risk literature. They hope to experiment with different business models; currently all of their content is available as open access, except for a print-on-demand facility. Their current project is Pennsylvania History, which has content available back to 1934. They expect to start publishing three other titles in 2007/2008. They are exploring ideas for publishing defunct journals, and setting up a conference publishing service, mainly for conferences hosted at Penn State. They will also consider new original content.

Outstanding questions they will be considering as they use DPubS are:

  • Does the content management architecture align with their mission?
  • How will their implementation contribute to the DPubS community?
  • What staffing levels are needed?
  • How can the publishing program be grown to support the teaching mission of the university?

See: http://dpubs.libraries.psu.edu and http://www.libraries.psu.edu/digital/scholarlycomm/ for more information.

Q: Is there a space limitation on the number of journals or the number of objects?
A: No. The intent is to keep growing and scale the software up as necessary.

Q: Can DPubS handle journal articles supported by rich data sets?
A: Yes, they have already had this feature requested, and it can be handled by creating another format in the repository.

Q: What is involved in keeping these publishing initiatives going?
A: The journal needs a committed editor and board; it is the responsibility of the content creating community. Preservation and affordability are still works in progress. As journals grow production will require more staff. Library expertise in routine work can help, and it’s a good fit — for example, technical services staff can prepare metadata for journal content. It provides a good opportunity for the library.

Q: What is the role of the subject librarian?
A: Project Euclid works closely with Cornell’s math librarian. Subject librarians can provide advice in particular fields and help with relationship development.

Electronic publishing projects are an ongoing challenge, and require a different mindset. Each project needs to be evaluated.

Not So Different After All — Creating Access to Diverse Objects in Digital Repositories

Speakers:
Gretchen Gueguen, Digital Collections Librarian, University of Maryland Libraries
Jennifer O’Brien Roper, Metadata Librarian, University of Maryland Libraries
PowerPoint presentation

Gretchen Gueguen began the session by giving an overview of the work done by the University of Maryland Libraries. UM has identified four basic types of digital collections:

  1. Thematic collections, sometimes containing multiple types of digital objects, tightly organized around a single subject (example: Documenting the American South).
  2. Object collections, generally containing multiple object types not organized into topical collections (example: Indiana University Digital Library Program).
  3. Packaged collections, containing secondary source materials about a topic but few, if any, primary source digital objects (example: Romantic Circles).
  4. After-the-Fact Collections, aggregating work from other locations into a single repository (example: NINES).

This is an unusual way of distinguishing between types of digital projects, and although the examples could perhaps be better, the categorization might be useful.

Gretchen identified three building blocks of digital projects: metadata, vocabulary, and interface design. Under metadata she discussed what I would have called interoperability protocols, particularly Z39.50 and OAI-PMH. Under vocabularies, she discussed pre-coordinated vocabularies such as LCSH compared to post-coordinated vocabularies, which are more typical of the online environment. She also addressed local vocabularies, noting that lack of control may make them unmanageable. Hierarchical vocabularies also present opportunities for interface design, although she advocated the use of multiple hierarchies and multiple modes of access. She recommended the interface used in the Documenting the American South project, which includes browse lists based on LCSH.

Gretchen followed this discussion with a review of the Thomas MacGreevy Archive, a humanities research project originally intended to provide access to the writings of and about the Irish poet and critic Thomas MacGreevy (1893-1967). As the archive has grown, however, additional types of digital objects have been included, and the original system used for the textbase (TEI P4) has not accommodated these well. The need for a project redesign allowed their digital collections team to test some of their ideas about metadata, controlled vocabularies, and interface design. The team extended their use of TEI P4 to include other types of objects (which is possible, although perhaps not the easiest approach!) and added terms to their locally constrained controlled vocabulary to enhance the browsability of these objects. The team has also redesigned their search interface to make the site more user-friendly. Gretchen did note that if they were to start from scratch on this project, they would probably not use TEI, although she also commented that TEI P5 would be better suited to a multi-type digital collection because of its ability to handle namespaces.

Jennifer O’Brien Roper then spoke about the University of Maryland’s Digital Repository, which uses Fedora for the underlying repository with customized interfaces for each collection. During the question and answer period, it became clear that this is totally separate from DRUM, the Digital Repository at the University of Maryland, which is a DSpace installation. As part of the development of this repository, UM also developed a rich metadata standard, University of Maryland Descriptive Metadata (UMDM), which combines elements from Dublin Core and VRA into a custom DTD. Their system was based on one originally developed by the University of Virginia, which has since switched to MODS. The descriptive metadata is then embedded in a METS wrapper that adds metadata about file structure.

UM uses an ingest system to take metadata from disparate sources and normalize it into UMDM. This ingest system is form-based and includes drop-down lists for controlled vocabulary items. Every item must have at least one subject heading from a common vocabulary; the UM Technical Services Department will create authority files as appropriate to allow the vocabularies to evolve. Additional terms can be selected from standard thesauri, including LCSH, the Getty Art & Architecture Thesaurus, and the Thesaurus for Graphic Materials. The next step, beginning in 2007, will be the development of a system for browsing based on these common subject terms. This will not be a dynamic system; instead, the indexes will be generated weekly.
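
As a rough sketch of what this kind of ingest normalization involves (the field names, vocabulary terms, and mappings below are invented for illustration and are not the actual UMDM element set):

    # Illustrative sketch of normalizing disparate metadata into a common schema,
    # with a required subject term drawn from a common controlled vocabulary.
    COMMON_SUBJECTS = {"Maryland history", "University archives", "Photographs"}

    def normalize(record: dict, source: str) -> dict:
        """Map a source-specific record into the shared schema."""
        if source == "dublin_core":
            normalized = {
                "title": record.get("dc:title", ""),
                "creator": record.get("dc:creator", ""),
                "subjects": record.get("dc:subject", []),
            }
        elif source == "vra":
            normalized = {
                "title": record.get("vra:title", ""),
                "creator": record.get("vra:agent", ""),
                "subjects": record.get("vra:subject", []),
            }
        else:
            raise ValueError(f"unknown source: {source}")

        # Enforce the rule that every item carries at least one common subject term.
        if not any(s in COMMON_SUBJECTS for s in normalized["subjects"]):
            raise ValueError("record needs at least one subject from the common vocabulary")
        return normalized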

Are There No Limits to What NCIP Can Do?

This session was subtitled “E-Commerce, Self-Service, Bindery, ILL, Statistics – New Applications for the NCIP Protocol”, but as the session began attendees got an answer to the question posed by the title: presenter Ted Koppel of Ex Libris admitted that new applications for NCIP have not been as plentiful as anticipated, so the presentation was refocused to include a section on how the NCIP standards process is working, and the bindery, statistics, and e-commerce applications went missing. So I guess the question in the title has been answered: there are some things it can’t do (yet!).

The presentation came in four parts: Ted’s introduction, Candy Zemon addressing problems and proposed solutions with NCIP, Jennifer Pearson describing an example of OCLC’s use of NCIP, and Candy Zemon again, this time filling in for an absent Gail Wanner, previewing a browser plug-in being developed as part of the “Rethinking Resource Sharing Initiative”.

Part 1

Candy Zemon of Polaris presented on “NCIP: Growing pains”, describing the protocol and how it came to be, and also helping to explain why the session had to be rejiggered.

NCIP (the NISO Circulation Interchange Protocol, also known as Z39.83) was intended to help establish communications between disparate systems for use in Direct Consortial Lending (DCL), circulation/ILL, and self-service circulation systems. Today, NCIP is an established standard that is up for a regular review soon.

The question presents itself: why hasn’t this useful standard seen more success? First, NCIP is invisible to the user (when it works!). In many cases, it does what 3M’s SIP or SIP2 do.

While there have been many pilot projects, current uses for NCIP include bindery, self-check, self-sorters, and self-service finance.

The NCIP Implementers Group met to review problems and perceived problems with the protocol, and to plan ways to solve them. In part, NCIP came to be used in ways not originally intended, finding fewer of the new applications Ted mentioned in the introduction, and more use in self-service circulation and self-sort situations (perhaps because of difficulties with the rather loosely maintained 3M SIP protocols). The sense was that NCIP was too complex for some of these uses. The solutions proposed include making the messages smaller, with fewer mandatory messages, and the removal of some message elements in situations where a trust relationship between the communicating systems is already established.

Documentation was also felt to be a problem, and the existing documentation will be reorganized and additional documents will be created, including more targeted guides for specific uses, and some “Why use NCIP” guides.

Confusion has been caused by the overlap in functionality between the DCB (direct consortial borrowing) and C/ILL (circulation/inter library loan) profiles in NCIP. The solution: harmonize these profiles.

It was felt that NCIP needs greater extensibility, a major part of the appeal of the 3M SIP protocols. NCIP may incorporate the XML tag.

As NCIP has found increasing use in self service situations, bandwidth concerns have emerged. The solution will be to add the ability to batch or list in messages, as well as reduce the overhead in trusted partner situations.

Finally, a number of bugs are still outstanding. The solution: fix ‘em!

Part 2

Jennifer Pearson of OCLC described the use of NCIP in OCLC’s Worldcat Resource Sharing program.

OCLC is seeking to broaden resource sharing from simply library-initiated “inter library loan” to patron-initiated “fulfillment” (i.e. to include purchase options). The hope is to keep libraries in the game in this Age of Amazon, and to keep OCLC in the game as a central, “neutral broker” of the whole process.

Authenticated and validated through NCIP, patrons could have borrowing capabilities from home (that might include home delivery), including purchase options, with all the disparate systems involved in such a process tied together through NCIP.

OCLC is working to make NCIP management less complex by serving as a central broker, so that fewer point-to-point setups are needed.

OCLC is currently partnering with SirsiDynix, Polaris, and a group of Montana libraries. Work with TLC and Carl is expected to start later. The system may debut in the next calendar year.

Part 3

Candy Zemon, stepping in for original presenter Gail Wanner (of SirsiDynix and the “Rethinking resource sharing initiative”), presented “Rethinking resource sharing: getting what you want”.

Candy briefly described the history and goals of the Rethinking Resource Sharing Initiative, which began with a white paper in February 2005, continued through several national forums and an updated white paper, developed a formal leadership structure, and now plans yet another forum.

Their goal is to create a new global framework to allow people to get what they want based on cost, time, format and delivery, and that is user focused (i.e. can both start and end outside a library and is not library-centered), vendor neutral, has a global context, and uses the concept of Resource Sharing (not just ILL). With ILL, scarce resources are allocated, but with RS, one picks from an abundance of resources.

They are currently working on a user-centric tool, the Get-it Button Project, an open source, cross-vendor, modular web browser plug-in that parses web pages to find published materials, performs an availability check, and displays results based on a patron’s profile. It may be previewed by ALA Midwinter.

The session concluded with a discussion of marketing options for the plug-in.

Multimedia Tutorials for Remote Users

Show me the librarian that doesn’t have repetitive questions that receive repetitive answers and I’ll show you oceanfront property in West Las Vegas. If, on the other hand, you are a librarian who would like to have a video recording of the steps you have repeated over and over, then video tutorials are a route you may want to consider.

Whoa! I’m not a videographer! You don’t have to be a programmer, professional videographer or uber geek to use screen-capturing software and develop video tutorials. There are several options available to the novice tutorial developer. All of them are relatively easy.

A few examples:

TechSmith’s Camtasia
Adobe’s Captivate
Deskshare’s My Screen Recorder
Instant Demo

The price range on these software packages runs from just under $30 to around $300.
Instant Demo and My Screen Recorder are on the low end, Camtasia is in the middle, and Captivate is on the high end of the price range. File size runs inversely to price: the less expensive packages generally produce larger output files.

The application chosen by the presenter was Camtasia. This was chosen because her university adopted it, because it was easy to use and because her remote users needed the tutorials it would produce.

Who are remote users?

MOST of today’s college communities are comprised of remote users; therefore, assistance to users has changed. Nobody wants to ask questions face to face or on the phone (even with an 800 phone number). The ways of reaching library users have changed over time:

  • 1986 – phone and paper handout packets
  • 1996 – phone and paper handout packets, email and web pages
  • 2006 – phone and paper handout packets, email and web pages, IM, e-reserves, CMS, social software, multimedia tutorials

Why did OSU choose video tutorials for their next step in remote user service?

Many of today’s learners are visual learners. They need to see what you are doing online. The video tutorial gives them that option without compromising the desire for anonymity. These tutorials are relatively timeless and repeatable (tutoring on demand 24/7).

What does it take to produce a video tutorial using Camtasia?

Video creation requires software designed to record and edit, hardware that meets the minimum requirements to run the software chosen, a quiet space with a microphone and computer, and a script (not absolutely required but recommended). The quiet space is required due to the audio recording process as well as the need to concentrate. A headset/microphone combination reduces uneven recording by keeping the microphone at a constant mouth-to-microphone distance.

What tips can you give me?

  • Don’t ad lib, use verbal pauses (ummmm, errr, hmmmm), cough, sneeze, sniff, or swear
  • If you can’t talk and move the mouse at the same time, record the visual and audio separately
  • Think about your computer setup before purchasing a combination headset/microphone.

There are good how-to tutorials on the Camtasia website at http://www.techsmith.com. Check them out!

Session presenter: Christina Biles, Digital Library Services Librarian, Oklahoma State University

CUIPID 4: Building a faceted searching and browsing interface for your library catalog

(note: the preconference material referred to the software as CUIPID 3, but a new version has since been completed)

CUIPID (pronounced “cupid”) is the University of Rochester’s Catalog User Interface Platform for Iterative Development. Built by their Digital Initiatives Unit, it serves as an experimental base for library catalog enhancements.

David Lindahl and Jeff Suszczynski were on hand to walk us through what CUIPID is, as well as some insight into the development process.

First we learned a bit about just what the Digital Initiatives Unit is. The staff of 8 (including a wide variety of non-librarian disciplines, such as an anthropologist) performs constant research into user needs via work practice studies and other methods. They just finished a 2 year comprehensive study of how undergrads write papers. We saw a video clip from interviews conducted at night in a UofR dorm, which was both informative and quite funny. Some interesting facts learned: Freshmen don’t stop at just the first three results of a search, and are not afraid of the reference desk. And most are less capable with technology than expected.

On to CUIPID, which has gone through a lot of changes. Version 1 used a small subset of records in MARCXML format, and solved 80% of UI issues they had previously identified. It collapsed similar editions via text matching, used the Google spellcheck API, etc. But unfortunately it wouldn’t scale up well – the license for the Verity indexing tool they used would be prohibitively expensive to use for the full sized catalog.

Next came SARA (I didn’t catch what the acronym stands for). It was a home made metasearch engine, covering all types of material the library holds – books, web sites, video, subject guides, databases, etc. It ran multiple concurrent queries to the catalog – no need to select author, title, etc. before the search. Users would narrow results by type later. Unfortunately, SARA had extremely debilitating performance issues.

CUIPID 2 was built on a trial version of TextML, an XML database product. Its interface and features were similar to SARA’s, and it also faced scalability issues. In addition, TextML would not have been free for the full version.

CUIPID 3 indexed more than 3 million records, using MS SQL server and ColdFusion for the interface. It was pretty similar to the current CUIPID 4, which was covered in more detail:

CUIPID 4’s features are going to be slowly integrated into the U of R’s existing Voyager catalog. The name has been informally changed to simply C4, “because it’s fun” to say. It follows the previously described faceted method of searching, letting the user drill down to the correct categories of results. Their inspiration for this system came from web sites like Sears’ and Home Depot’s.
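
As a rough illustration of how faceted drill-down works (the records and facet fields below are invented for the example, not taken from C4):

    # Sketch of faceted browsing over a result set: count facet values,
    # then narrow the results by a chosen value. Records are invented.
    from collections import Counter

    records = [
        {"title": "Casablanca", "format": "Video", "language": "English"},
        {"title": "Moby-Dick", "format": "Book", "language": "English"},
        {"title": "Les Misérables", "format": "Book", "language": "French"},
    ]

    def facet_counts(results, field):
        """Count how many results fall under each value of a facet field."""
        return Counter(r[field] for r in results)

    def drill_down(results, field, value):
        """Narrow the result set to records matching the chosen facet value."""
        return [r for r in results if r[field] == value]

    print(facet_counts(records, "format"))       # Counter({'Book': 2, 'Video': 1})
    books = drill_down(records, "format", "Book")
    print(facet_counts(books, "language"))       # Counter({'English': 1, 'French': 1})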

It interfaces fully with the current list of student login information, allowing services like placing holds and recalls. There are a number of relatively small features that are nice touches – displaying contextually appropriate metadata, for example. So for a movie result the director gets displayed, instead of the author for a book. The system makes extensive use of various APIs, pulling in external data like Amazon’s book covers and reviews, recent blog posts about a title via Technorati, etc.

Describing CUIPID 4 admittedly sounds sort of dry. But we got to see a live demo of the system, and it really blew me away. The interface is very intuitive, response time is fast, and it seems to be a pretty polished product even now.

Features to be added in the future include replacing the Amazon images with local copies, improved acceptance of Unicode in catalog records, holdings records, and FRBR functionality (either homegrown or via OCLC’s system).

A separate project of the Digital Initiatives Unit was mentioned briefly – the eXtensible Catalog (XC). While still in early pre-planning, ultimately they hope to make the XC an open source catalog to hold all types of collections. It will be designed to be experimented with, and be compatible with your existing ILS and any form of scripting (PHP, ASP, CF, etc). Sounds like a very exciting project to me – more information is at www.extensiblecatalog.info.

This presentation had a huge amount of data for me to take in, but I’m glad I went. It was really interesting to see some of these catalog innovations in practice.

Implications of Interoperable Systems and Geographic Information to Libraries

Speakers:
Chieko Maene, Map Librarian, University of Illinois at Chicago Library
John Shuler, Government Documents Department Head, University of Illinois at Chicago Library

Chieko Maene began the session with an overview of Web services, particularly the concepts of reusability, information sharing, and service orientation. She focused on the Open Geospatial Consortium (OGC) and its standards for Web Map Services (WMS) and Web Feature Services (WFS). The Web Map Service was first described in 1999 and became an ISO standard in 2005.

Some sample Web mapping services that Chieko pointed out have been developed by the U.S. Geological Survey.

Chieko demonstrated two WMS applications developed at the University of Illinois at Chicago Library. The first, the Chicago Aerial Photograph Finder, uses digital shapefiles from UIC’s digital index, combined with historical aerial photos from the Illinois State Geological Survey (ISGS) delivered over an ArcIMS layer, and Digital Orthophoto Quadrangles for Urban Areas delivered as a WMS. The second application, the Digital Global Index Database, is a collection of online map indexes published as a WFS to share them with other users. It uses ESRI’s proprietary ArcIMS server and GeoServer, an open-source map server. Chieko also recommended uDig as an open-source desktop GIS application.
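
To give a sense of how simple the WMS interface is, here is a minimal sketch of a GetMap request; the endpoint URL and layer name are hypothetical, while the query parameters are the standard ones defined by the OGC Web Map Service specification (version 1.1.1):

    # Minimal sketch of a WMS 1.1.1 GetMap request; server and layer are hypothetical.
    import urllib.parse
    import urllib.request

    WMS_ENDPOINT = "https://maps.example.edu/wms"  # hypothetical server

    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": "aerial_photos_1938",        # hypothetical layer name
        "STYLES": "",
        "SRS": "EPSG:4326",                    # plain latitude/longitude
        "BBOX": "-87.95,41.64,-87.52,42.02",   # rough Chicago bounding box (lon/lat)
        "WIDTH": "800",
        "HEIGHT": "600",
        "FORMAT": "image/png",
    }

    url = WMS_ENDPOINT + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        with open("map.png", "wb") as out:
            out.write(resp.read())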

Due to lack of time, John Shuler kept his comments short. He referred to the Geospatial One Stop (GOS) project, which is an attempt to provide a geospatial data and map services catalog for data collected across various government agencies. He described this project as a “dream,” one that he does not believe the U.S. government can realize. He cited two concerns: first, that the U.S. government has withdrawn a number of aerial photos and geospatial data sets due to concerns about homeland security, and second, that the U.S. government has stated concerns about whether its services provide undue competition to the private sector.