Category Archives: 2007

Annual LITA Forum 2007, Denver

Terminology Management Systems

The last of the ISO activities forwarded to us by the busy Cindy Hepfer, ALA Representative to NISO, is a Draft International Standard issued for ballot: ISO/DIS 26162, Design, implementation and maintenance of terminology management systems.

NISO has forwarded the following about this draft standard:

“This is a ballot for the draft standard, ISO/DIS 26162, Systems to manage terminology, knowledge and content – Design, implementation and maintenance of terminology management systems. This ballot is from TC37 / SC3 (Terminology and other language and content resources / Systems to manage terminology, knowledge and content).”

“ISO/DIS 26162 is one of a family of standards to facilitate the exchange of terminological data. This standard gives guidance on choosing the relevant data categories, designing and implementing a data model and a user interface of a terminology management system (TMS) with a view to the intended user group. The phases described here are indispensable for the successful development of a TMS and for avoiding costly errors. The standard may be used for choosing the appropriate TMS for a certain purpose. This standard is intended for terminologists, software developers and others who are involved in the process of developing or acquiring a TMS.”

As Cindy reminds us, ALA is a voting member of NISO, and NISO is the official U.S. voting member for the International Organization for Standardization (ISO) Technical Committee 46 on Information and Documentation. This is not a NISO standard, but is being balloted by ISO’s TC46. ALA is providing feedback to NISO as to whether to approve or disapprove the standard. NISO will review and consider this feedback prior to submitting the U.S. vote.

If you have an interest in commenting on this work and recommending a vote, please let Cindy know by May 15, 2009. Your vote options are: Yes (approve the new project), No (do not approve the project), and Abstain (from the vote). Comments are required for No and Abstain votes.

The draft standard is available to ALA members by applying directly to Cindy (HSLcindy@buffalo.edu) and confirming your ALA membership. If you can, please copy me (metadata.maven@gmail.com) on your request.

Diane Hillmann
LITA Standards Coordinator

Annette Smith – 2007 LITA Forum Travel Grant Winner

Ms. Annette Smith from Barbados, West Indies, was the winner of the 2007 LITA-IRC Travel Grant to the 2007 LITA Forum. Ms. Smith is the Director of the National Library Service, Bridgetown, Barbados, WI. In her report, she gives her impressions of the 2007 LITA Forum in Denver, Colorado.
—————————————————————————
2007 LITA Forum

Phew! The flight was touching down in Denver; I had made it. I hadn’t dared to say this before; it wasn’t unheard of for an aircraft to turn back for one reason or another.

I arrived in Denver on October 5 late at night, too late even to buy a toothbrush. I didn’t care; I had made it. I was the lucky recipient of the 2007 LITA Travel Grant and I had made it to the main conference of the National Forum.

I had registered in August to catch the “early bird special.” My plan was to arrive in Denver on the afternoon of the third, attend the pre-conference, the main conference, and spend two days, through the courtesy of the Denver Library Association, visiting public libraries and looking at services and programmes.

My plans, to quote the old adage, had nearly “come to naught” as local conditions conspired to prevent me from leaving Barbados. So here I was, arriving at almost 10 PM on the night of the fifth, two days into the Forum, looking for a taxi and a toothbrush! I found the taxi, but instead of wending my way to the room I had originally booked at the Denver Marriott Center Hotel, the venue of the Forum, I was now on my way to the Marriott at Cherry Creek, US$17 per trip from the venue of the Forum.

The Forum was all I had expected. The theme was Technology with Altitude. I had checked the schedule and had narrowed my list of “absolutely must attend” sessions reluctantly to six of the concurrent sessions, two general ones, and all of the poster sessions. I had also decided that I would try to register on the spot for the second pre-conference if there was still space.

My late arrival should have forced me to reduce the number of concurrent sessions I could attend. This would have been the sensible approach, but instead I tried to regain lost time by hopping around from session to session. In hindsight, I should have relied on the conference papers to cover the areas I could not attend. Eventually I attended David King’s The Future is not out of Reach: Change, Library 2.0 and Emerging Trends; Corrado’s http://Library 2.0; some of Catherine Jannik’s It’s Up and Running. Now What …; Martha Chantiny’s Using the Streetprint Engine for Digital Image Collections at the University of Hawaii; all of the poster sessions; and Jeremy Frumkin’s In our Cages with Golden Bars. I was also able to spend one day visiting libraries.

It seems to me that technology and libraries go hand in hand, like the proverbial ‘hand and glove’. Every time a new application comes on the scene, the library community finds a way to build it into the programme or service delivery system. However, for some of us, the new technologies are creating an operating environment that, if not totally unfamiliar, at least appears a lot different from the one to which we have grown accustomed. It may well be, as Toffler wrote back in the 70s, that the time lag between the idea of a new technology and the application of that technology has been drastically cut. More than 30 years later this analysis is probably even truer than it was then.

In the past, when the librarian and libraries guarded access to the portals of knowledge, when we stood between the customers and the technology, change went on around us but if we could not afford to buy it we could keep quiet about it. For some of us this has all changed. The customer now not only knows what is on the market, he knows how to use it and when the new release is out, long before some libraries and librarians even see the outdated beta version.

When he wrote Future Shock, Toffler identified the impact of the application of technology at different levels, separating people into three groups:

- People of the past, whose lives were still geared to the slower rhythms of agriculture, making up 70%;
- The industrialized people of the present, who had only lingering memories of the agricultural past, making up more than 25%; and
- The people of the future, about 2-3%, the earliest citizens of the worldwide super-industrial society, always looking for a change.

I feel Toffler could have pegged a fourth group, a group of “wannabes”: a group that understands, even though it cannot climb onto the bandwagon, that the synergies created by evolving customer needs and new technologies would force change; that the change would affect tasks, requiring different skills and qualities to perform those tasks, as well as different styles of management and leadership.

I would like to think that even if I was not in the 25% or the 2-3%, at least I had left the 70% group and could be in the fourth group. So I arrived at LITA 2007 with my checklist of questions:

- What’s the latest in the new ICTs?
- What discrete technologies do I need to know about?
- How relevant are they to a small library in a developing country or a country in transition?
- Are these technologies affordable?
- Who needs these technologies?
- Can one afford to ignore these technologies, and for how long?
- What competencies will the library need to embrace these technologies to remain relevant?
- What structures will need to be dismantled or rebuilt to adjust?
- Is it possible to compensate for the lack of these technologies?

I had gotten to the Forum late, but I was glad that I had made it and that I had had the opportunity to attend. I left with papers that would help narrow the information gap created when I missed half the sessions, with ideas, answers, more questions, and at least with the names of contacts I had made; maybe, and more than likely, finding answers in the future would not be so hard.

I owe thanks to a host of people for providing this opportunity. Some of them I may never find out about, and some I hope I have thanked already, but here I should like to thank my friend and mentor Carla Stoffle for bringing the Grant to my attention; Claudia Hill for working her own brand of magic when it seemed as if planning and effort would lose the day (thank you so much, Claudia); and Rochelle Logan of the Douglas County Libraries, who was gracious enough to clear a slot in her busy schedule to show me around some of the libraries.

To the lucky 2008 candidate: enjoy yourself, and enjoy the intimate environment of a small meeting; ALA participants know what I mean!

Annette Smith

Integrating Information Resources: The tale of the Primo development partnership at Vanderbilt

PROGRAM BLURB:
Vanderbilt formed a partnership with Ex Libris to develop Primo, a new search and retrieval platform for library and university resources. The session will cover:
· Project management experience from the project
· Working with staff who are not confident with the product
· Shifting time schedules
· What happens when the deliverables change
· Staffing needs
· Technical development
· How to integrate two different ILS philosophies
· Creating a GetIT system
· Extracting information

Speakers: Dale Poulter and Jody Combs, Vanderbilt University

Saturday 10:50am; about 50 in audience

NOTES:

An explanation of the link between the strategic plan and the partnership was used to develop a good support base

Environment:
– Users were searching for articles in the OPAC, i.e., looking in the wrong place; documented by log files
– The Vanderbilt library also manages the Blackboard system for campus
– In terms of staff: you can work with the “constructively skeptical” but not the “terminally skeptical”

Project:
– Amazon data was used for enhancement metadata
– Had an integrated comment feature in their preview version
– Created a test pilot web site, and put links on the OPAC and other library web pages
– Combining all the holdings records into a single MARC record for processing was one of the most difficult things to do (a minimal sketch of the idea follows this list).
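
As a hedged illustration of that last point, here is a minimal Python sketch of folding several holdings records into one bibliographic record before processing. Plain dictionaries stand in for MARC structures, and the field tags and helper names are illustrative assumptions, not Vanderbilt's actual pipeline.

def merge_holdings(bib_record, holdings_records):
    """Return a copy of the bib record with every 852 (location) field attached."""
    merged = dict(bib_record)
    merged["852"] = []                                   # collected location/call-number fields
    for holding in holdings_records:
        merged["852"].extend(holding.get("852", []))     # pull the 852s off each holdings record
    return merged

# Toy data: one bib record and two holdings records for the same title.
bib = {"001": ["ocm00000001"], "245": ["Sample title"]}
holdings = [
    {"852": [{"b": "Central Library", "h": "Z699 .S26"}]},
    {"852": [{"b": "Science Library", "h": "Z699 .S26"}]},
]

print(merge_holdings(bib, holdings)["852"])              # both locations now sit on one record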

Alpha Search Jean & Alexander Heard Library – Demo:
– Sign-in is also remote access authentication.
– Can save search sets, send to EndNote, tag, output to del.icio.us
– E-shelf (my bookshelf feature)
– Spell check like Google, with “do you mean” suggestions, but it only recommends a correction if there is something that matches in their database (see the sketch after this list)
– “Facets” = new jargon for search limits
– Suggested new searches functionality (“more like this”)
– When entering comments, users have to click on an agreement to make the comment public and allow it to be used by Vanderbilt
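
Here is a small, hedged sketch of the spell-check behaviour noted above: a “do you mean” suggestion is offered only when a corrected term actually exists in the local index. The term list and function name are illustrative assumptions, not Primo's internals.

import difflib

indexed_terms = {"metadata", "cataloging", "federated", "repository"}   # stand-in for the local index

def did_you_mean(query):
    if query in indexed_terms:
        return None                                        # query already matches; no suggestion needed
    matches = difflib.get_close_matches(query, indexed_terms, n=1, cutoff=0.8)
    return matches[0] if matches else None                 # suggest only if the index holds something close

print(did_you_mean("metadta"))    # -> "metadata"
print(did_you_mean("zzzzzz"))     # -> None: nothing close in the index, so no suggestion is shown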

Speaker comments:
* Goal: once the user is at the item, they can get it as quickly as possible, or be provided with alternatives for requesting the item
* “took a great deal of work” to get the authority checking/collapsing function into the product
* Can define what group can access what – like circ groups – called “scope”
* Primo has multiple access level account functionality
* “total cost of ownership is very low”
* Can change rank to control what the default search is.
* Per tab and per view default view settings can be modified
* Don’t have a universal one-login-works-everywhere system yet.
* Version 4 is faster than 3.3 – federated searching will always be slower than single interface search

Questions from audience:

* What’s the role for native interfaces after Primo is rolled out?
Initially: no, they plan to go with Primo as *the* interface.
Later they said: yes, we would link out to the native interfaces, and they would stay available to link out to.

* Will librarians be teaching native interfaces as well as Primo?
Yes.

* Have you thought about pipes going directly to databases instead of MetaLib?
Depends on the vendor, might be in future development
Vendor might create a Primo-compatible index at the database side to incorporate into the Primo federated search. Very complex option.
But the vendor would keep control of their data. Might be better to always have a separate tab for separate searching.

* Do you need to have an OPAC underneath it?
Yes. The ILS data or digital repository data needs to be maintained in the background.
To clarify: you wouldn’t NEED to run the local OPAC, but they will do that because some people still prefer it.

* Primo is built on Lucene – why use Primo instead of Lucene, which is “free”?
Partnering with very large institutions which have the resources to create a working product in a short period of time.
Wanted to ultimately achieve a very user-oriented product.

* Have you done much customization of Ex Libris products, Primo ramifications?
They haven’t made a lot of changes and have not yet broken anything.

* Scenarios where user has to bounce out to the native OPAC? Does it confuse users?
Demo of a Primo search using the “Get It” button: it brings up their OPAC record. Right now the user then clicks on ILL and has to link again, but they are working on making it all seamless, with one login and all processes in Primo.
If a given function can be offered in all data sources, it would make sense to push that to the Primo interface.
They haven’t had any confused user feedback.

* Why did they choose default search as combo OPAC and TV News?
They wanted to demonstrate that you can search all local data sources at once.

It’s Up and Running, Now What? Strategies for Building Content in an Institutional Repository

Speaker: Catherine M. Jannik, Georgia Institute of Technology Library and Information Center
Program blurb: In August 2004, Georgia Tech Library launched SMARTech with approximately 2,500 legacy items. In the beginning, we focused on authors self-archiving pre-prints and postprints, research and technical reports, and electronic theses and dissertations. As interest in archiving other materials increased and we realized that our faculty was not properly motivated to submit their own work, we changed our approach to collecting materials for our institutional repository and added a dark archive for strictly archival material. We have launched an electronic publishing service, Epage @ Tech, to support the creation and capture of e-journals, conferences, and lecture series to facilitate scholarly communication. As we provide these tools to faculty to accomplish their goals and they in turn become more aware of the need for repositories, we are more likely to convince them to deposit their personal materials. We will discuss:
· The technical support and training we provide departments to digitize and submit their own materials
· How we partner with departments to capture materials using their current electronic workflows
· How we provide production services to support e-journals, conferences, and the capture of lecture series, symposia, and the like
· Planned services to introduce these services into individual faculty members’ workflows
===============
Session had about 37 in audience

NOTES:
The technology is easy; it’s the social conditions and psychology of the institution that are hard
By the 2nd semester of ETDs, the graduate school was sold and wanted to make ETD submission required
Were able to start with “legacy materials”
Preprints and postprints are very difficult to get from faculty
Self-submission never happened – the library went to departments to get their material
Efforts to get self-submissions had the effect of marketing the service to the campus
The alumni association “mines” the student yearbook; they got the alumni association to do the scanning and PDF production
Student newspaper gives them their online issues in e-format
Download stats come from the “bitstream view” count.
Need to educate users to use the item handle, not the download link
“We never asked people to catalog the books themselves that they requested” – so needed to develop a service layer for IR

Services:
Aiding departments in capturing things that now disappear: lectures, presentations of thesis-equivalent performances
Offering server space as a library service to the CAMPUS.
Taking grant files when the agency requires that the product must be kept forever (sustainability). NSF grant applicants at Georgia Tech asked for letter from library stating they can make their research available forever.
Publishing articles using Open Journal Systems (OJS) software from the Public Knowledge Project

OUTREACH:
Inserting themselves into departmental communication flow – like announcement systems, etc.
For funded research reports that are marked “final,” the Office of Sponsored Programs will transmit the report and metadata (in the “coming soon, we hope” category)
Presentations may have already gotten permission for public access, or you can try to get it after the fact, or make the item restricted to campus only
Do innovative things, like taping students as they install an exhibit, to go with exhibit photos or other materials

QUESTIONS FROM AUDIENCE:
How do users respond to the epageTech service offering?
They don’t know what the library is talking about. Faculty are concerned about getting tenure credit for depositing with the IR or creating an open source journal.
They like it once they understand it, after the library has had enough of their time to explain it.

Staffing levels?
One person worked with Systems, Archives, etc. An IR architect joined after 6 months, then a web designer; every 6 months they got a new person. As of July 1st, Digital Initiatives is 6 people, but it is not organized that way – the library has been reorganized.

What about loading scientific data?
Not yet.

Statistics on self-submission vs. library submission?
Low self-sub, library input is growing

Multimedia materials – does the metadata describe the whole thing or the parts?
Links can be created among different parts, but most described as a whole.

Is multimedia streamed? Is it fully encapsulated or coming from another server?
Streaming video is hosted by IT services – it launches an external application.

How do you handle removal of item?
“We don’t.” If anyone asks, we say no; the item is moved to the dark archive. If it is illegal or unethical, they will remove it.

When you get datasets, will you still be using DSpace?
Probably not, might use Fedora, can’t predict

Creating Your First Topic Map

Friday Oct. 5, 4:20
Edward Iglesias, Central Connecticut State University
Suellen Stringer-Hye, Vanderbilt University

We were also encouraged to check out the LITA Topic Maps Interest Group.

Edward and Suellen had an infectious enthusiasm for topic maps, and this session was super! They want to make progress together as a library community, creating and merging topic maps.

Topic maps are a way of describing knowledge structures and associating resources with them. The topic map paradigm came about in the old days of SGML, when people wanted to combine data with back-of-the-book indexes. There is a topic map standard, XTM, which derives from that SGML-era work.

A topic map consists of topics, associations, and occurrences. Topic maps are graphs, not linear lists of details. They showed several examples of topic maps, and then took the audience step by step through the creation of a topic map.
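
To make the model concrete, here is a minimal Python sketch of topics, associations, and occurrences; the example topics, names, and URL are invented for illustration and are not from the session.

from dataclasses import dataclass, field

@dataclass
class Topic:
    identifier: str                                   # the subject the topic stands for
    names: list = field(default_factory=list)         # human-readable labels
    occurrences: list = field(default_factory=list)   # resources relevant to the topic

@dataclass
class Association:
    association_type: str      # the kind of relationship, e.g. "composed-by"
    members: tuple             # the topics being related

# Two topics and one association form a tiny graph rather than a flat list.
tosca = Topic("tosca", names=["Tosca"], occurrences=["http://example.org/tosca-libretto"])
puccini = Topic("puccini", names=["Giacomo Puccini"])
composed_by = Association("composed-by", members=(tosca, puccini))

print(composed_by.association_type, [t.names[0] for t in composed_by.members])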

Examples of existing topic maps:

Australian Literature Gateway
http://www.austlit.edu.au/

Fuzzzy, a Web 2.0 social bookmarking site using the TMCore topic map engine. The tags used to categorize bookmarks are topics within a shared topic map, and topics with associations are created by the users.
http://www.fuzzzy.com/

New Zealand Electronic Text Centre, whose website is driven by a topic map.
http://www.nzetc.org/

Building a Large-Scale Open Source Repository at OhioLINK

Thomas Dowling, OhioLINK

Or: “A cautionary tale in three acts”

At OhioLINK they decided that the repository should be “A place for our stuff”, including:

  • art collections, high quality/high resolution – multiple tens of thousands of images
  • art and architecture slides from U Cincinnati
  • paintings and drawings from the U Cincinnati collection
  • items from the Works Progress Administration from Cleveland State
  • Akron Art Museum
  • history: historical photos, e.g., the Wright Brothers at Kitty Hawk
  • digs of Mayan architecture
  • historical photographs from Lake Erie
  • National Underground Railroad Freedom Center items
  • OSU geology collection
  • forestry research center items
  • images of dolphin embryos
  • digital video – documentaries, etc.
  • Kent State oral histories of the shootings
  • 100-level undergrad physics experiments
  • an OSU lab that has a phenomenal archive of bird calls with thorough metadata
  • text side: 9.1M articles; dissertations center
  • 7500 electronic books

THE PLAN

They want a unified repository architecture built with open source tools. They chose open source software – Fedora – because it has proven to be bulletproof.

Open source: cost models now seem more feasible, and there is lots of freedom to try things out

· in the plan: ingest in XML
· records move from machine to machine (XSL protocol)
· get it all into a Fedora repository
· and pump it out via various software, e.g., DLXS (the University of Michigan Digital Library eXtension Service)
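
As a rough sketch of that plan, under stated assumptions: the stylesheet, the repository URL, and the ingest endpoint below are placeholders rather than OhioLINK's actual configuration, and a real Fedora installation would expose its own API.

from lxml import etree
import requests

def ingest(source_xml_path, stylesheet_path, repository_url):
    source = etree.parse(source_xml_path)
    transform = etree.XSLT(etree.parse(stylesheet_path))        # normalize source XML to the repository schema
    payload = etree.tostring(transform(source))

    # Hypothetical ingest endpoint standing in for the repository's real API.
    response = requests.post(
        repository_url + "/objects/new",
        data=payload,
        headers={"Content-Type": "text/xml"},
    )
    response.raise_for_status()
    return response.text                                        # e.g. an identifier for the new object

# ingest("record.xml", "to_repository_schema.xsl", "http://repository.example.edu")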

That was his LITA conference proposal, and their project is still very much in progress. Real-life pressures have slowed the program down: they had been trying since 2005 and weren’t making progress. Recently someone created a deadline and they got to work; after two years of development, the only real progress was deciding that they are going to develop what they can using DSpace. Several factors drove that decision, including expiring licenses and the need to have something up and working.

Much of the talk was very interesting details regarding their decision making processes and the growth of their understanding regarding what they needed to do in order to get a large-scale repository actually out of development and into production.

Peaks and Pitfalls: Designing a Large-Scale Repository Workflow for Quality Assurance

LITA Forum Saturday October 6, 3:20

Frances Knudsen, Beth Goldsmith – Los Alamos National Laboratory

Our speaker discussed the experiences they have had working with the Los Alamos National Laboratory Research Library’s aDORe repository. She spoke about the philosophical side versus the real-world side of QA.

They run the repository on a home-grown system designed by Herbert Van de Sompel, and they ingest in batch mode into this repository of approximately 80 million metadata records, 1.5 million full-text items, and several million other complex digital objects from multiple data providers, internal publications, and OAI harvests.

She revealed the details of their QA efforts, discussing their successes and failures and decision making processes.

She gave several instances of human error that quality assurance may not catch, and she also passed along several tidbits based on their experience. For example, they gave a style sheet to the people who review the data, translating it into a readable form rather than making the cataloger look at the raw code.

All in all it was a fascinating tale of quality assurance principles and practice.

Libx: Connecting Users and Libraries

Annette Bailey – Virginia Tech

The room was overfilled, showing that a lot of people are interested in the project.

In 2005 the working group thought they’d like to produce a tool that would be a virtual librarian – could they do it without becoming an MS paperclip?

Annette reviewed the decision-making process through which they decided on a client-side solution rather than a server-side one; the result is the LibX browser plugin.

LibX editions provide local, branded versions of LibX, and are created using the edition builder on the LibX site. Annette demonstrated building an edition in a few minutes – wow.

See details about LibX and the LibX edition builder at “LibX: a Browser Plugin for Libraries”

http://libx.org/

Five Months With WorldCat Local

Jennifer Ward, Head of Web Services at the University of Washington, discussed how her library has implemented a pilot project, WorldCat Local (WCL). WCL at UW searches the local catalog, WorldCat, and four article databases. Not all resources are included in WCL: Early English Books Online and Eighteenth Century Collections Online are excluded due to third-party license agreement restrictions. Local records that have not been contributed to Summit, and some other serials, have also been excluded from WCL.

Why Test WorldCat Local?

After several internal discussions about next-generation catalogs and literature reviews in this area, the UW staff decided to implement WCL. Some of the key issues they considered were how to work at the higher network level and how to facilitate discovery to delivery.

Meeting the User’s Needs

The UW implementation team realized that users are diverse, with a diverse set of needs: a patron may need a particular resource one day and something totally different the next. They also realized that users often become overwhelmed with choices. As a result, they added links to the campus portal, improving the access to and visibility of library resources.

Ward also discussed the challenge of information silos. It is difficult for users to have to search different databases. There is also some confusion about how to request items, whether through the UW Library catalog, Summit (the union catalog), or WCL. There is also quite a difference in the amount of time it takes to get an item depending on how it is requested: the UW Library catalog can take up to two weeks, while Summit is much quicker, only 2-3 days on average. ILLiad is the best method for getting obscure items or items not owned by UW; however, the UW library has received negative feedback regarding ILL through ILLiad.

Advantages to Using WCL

  • Simple search box
  • UW holdings float to top.
  • Don’t have to change interfaces
  • Availability is front and center, easy to see
  • Get access to full text online through OpenURL
  • FRBRized result sets
  • Create and save lists
  • Multilingual interface
  • Book jackets

Ward also mentioned that UW has conducted usability testing and that they are using the results of this testing to improve the interface of WCL. She could not talk about the specifics of this testing, only that they have made significant changes due to it.

Ward did show screen captures of the site.

Highlights of Searching/Interface

After the initial search, facets appear on the left hand side of the screen for content, format, author, language, and year.

On the results screen, the holdings location is in the middle of the screen. Holdings and availability are retrieved in real time. When a user clicks on the item, it is then requested through Summit.

Libraries tab: location information is based on your IP address, so you will see libraries within a certain IP range, but you can change it to your locale. You can see whether other libraries near you have the item.

Details Tab

Likely will soon be the default tab

Item details, summary/abstract and notes

Subjects tab: displays all the subject headings in the record. Includes MeSH as well as LCSH.

Editions Tab

Displays all the different editions of an item. Uses OCLC’s FRBR algorithm to pull this information together.
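
As a much-simplified stand-in for that idea (not OCLC's actual algorithm), the sketch below clusters records under one "work" by a normalized author/title key; the sample records and normalization rule are illustrative only.

from collections import defaultdict
import re

def work_key(record):
    normalize = lambda s: re.sub(r"[^a-z0-9 ]", "", s.lower()).strip()
    return (normalize(record["author"]), normalize(record["title"]))

records = [
    {"author": "Twain, Mark", "title": "Adventures of Huckleberry Finn", "year": 1885},
    {"author": "Twain, Mark", "title": "Adventures of Huckleberry Finn.", "year": 1999},
    {"author": "Austen, Jane", "title": "Emma", "year": 1816},
]

editions = defaultdict(list)
for rec in records:
    editions[work_key(rec)].append(rec)          # editions of the same work collapse onto one key

for key, group in editions.items():
    print(key, [r["year"] for r in group])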

Citations can be formatted in an array of styles such as APA, Chicago, or MLA.

Other Notes on WCL

Users can write reviews for an item.

Very clear messaging for items not owned, and clear instructions that you can get them through ILL (ILLiad). Not a dead end for the user.

Journal and article displays: e.g., Time comes up as the first hit in WorldCat but not in the UW catalog. WorldCat uses relevancy ranking.

They use WebBridge so that this is integrated into WorldCat and e-resources are displayed.

Impact on borrowing

After only a quarter, UW has seen a drastic increase in borrowing. Prior to WorldCat Local, borrowing had been increasing by 6% a year; with the implementation of WCL, ILL borrowing at UW has jumped by 39.5%. However, only 16.28% of these requests were processed; the other requests were mistakes. Naturally, the increase in borrowing has impacted staff workload, but Ward didn’t go into specifics about this. They are aware of it and are working on ways to alleviate it.

Challenges with WCL

  • Collating data from multiple sources
  • Mapping metadata for articles and books to make them findable is a challenge.
  • Mapping records from UW to Summit (pockets of records that aren’t in Summit)
  • Locally enhanced records and local holdings data: how much of that is network-level material?
  • Local and consortial policies are not aligned; some users can’t borrow through Summit but can through UW.
  • Some functionality from the catalog is lost: practical information such as hours and library locations.
  • FRBR algorithm: it does bring manifestations of a work together under one record, but it’s not perfect.
  • Planning to continue using WCL as the primary search system through fall quarter and see how it develops.
  • ILL requests are a big issue and are causing additional work.

URLs:

http://www.lib.washington.edu/

http://uwashington.worldcat.org/

When asked what her number one piece of advice for implementing WCL was, Ward said to remain flexible.

Ward gave an interesting and insightful overview of WCL and provided some of the challenges and benefits that UW has experienced to date. My notes coupled with the slides will hopefully provide a good overview of the program.

Finding, Using, and Sharing Scholarly Content

The speakers for this session were Beth LaPensee of JSTOR and Alice Preston from Ithaka.

JSTOR is currently in the process of doing a major site redesign, and Beth LaPensee gave an overview of some of the changes that might (emphasis on the word might; this is still a work in progress) be included in the final version, which will be released sometime next year. Some of the more notable changes that she mentioned include:

  • The ability to limit searches to journals within a specific discipline from the basic search page.
  • The option to rerun previous searches within a session.
  • An auto-complete feature when searching by journal title.
  • The ability to search at any point while browsing; i.e., there will be a box on all of the browse pages that will allow you to search within a specific journal or issue of a journal that you have browsed to, without having to navigate back to the search page.
  • The main page for browsing a journal will be combined with the information page for that journal.
  • On the combined information page/browse page for each journal, there will be options for accessing the journal’s content for people who do not subscribe to JSTOR, such as “Publisher Sales Service” (a.k.a. article pay-per-view).
  • Citation linking and related articles linking.
  • Article-level links out of JSTOR to journal content that is on the wrong side of their moving wall.
  • Faceted searching (which is already available in the JSTOR Sandbox).
  • The ability to adjust the relative importance of keywords in your searches by using graphical sliders next to each keyword (a toy illustration follows this list).
  • MyJSTOR, which will include things like the ability to save searches; notification of new results for searches; and the ability to save a list of favorite journals and favorite disciplines, which will be used as the default for your searches.
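
To illustrate the slider idea in the list above, here is a toy weighted-keyword scorer: each keyword carries a user-set weight, and documents are ranked by the weighted sum of term counts. The documents, weights, and scoring rule are illustrative assumptions, not JSTOR's actual relevance model.

def score(document_text, weighted_keywords):
    words = document_text.lower().split()
    return sum(weight * words.count(term.lower())
               for term, weight in weighted_keywords.items())

docs = {
    "doc1": "railroad history and railroad labor in the nineteenth century",
    "doc2": "labor history of textile mills",
}
sliders = {"railroad": 0.9, "labor": 0.3}        # positions of two keyword sliders

ranked = sorted(docs, key=lambda d: score(docs[d], sliders), reverse=True)
print(ranked)    # doc1 ranks first: "railroad" is weighted heavily and appears twice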

Alice Preston talked about Aluka, another project from Ithaka. Aluka is a digital library of material about Africa. It currently contains 20 terabytes of data, including high-resolution specimen sheets of African plants that can be zoomed in on to the microscopic level, photographs and laser scans of endangered cultural heritage sites, and digitized original source materials from southern Africa’s liberation movements.

JSTOR subscribers have a free preview of Aluka until the end of the year—it’s a link at the top of the JSTOR home page. Aluka will be offered free of charge to institutions in Africa, and Ithaka hopes that when Aluka is formally launched that enough institutions in the developed world will subscribe to Aluka to subsidize that free access for Africa.