Jobs in Information Technology: March 11

New vacancy listings are posted weekly on Wednesday at approximately noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.

New This Week

Director, Library Information Technology Services, Kansas State University Libraries, Manhattan, KS

Manager, University Library Systems, Kent State University Libraries, Kent, OH

Network Engineer – Library, City of Phoenix, Phoenix, AZ

Reference and Instruction Librarian, Pennsylvania State University Libraries, Erie Campus, Erie, PA

Web Services Librarian, University of Oregon Libraries, Eugene, OR

Visit the LITA Job Site for more available jobs and for information on submitting a job posting.

 

Is Your Library In?

In my previous post, I discussed learning XSLT for my current Indiana University Digital Library Program internship. Along the way, I explained a little about TEI/XML as well. Thinking about these tools led me to consider all of the different markup languages, programming languages, and tools that go into building a digital library. While my Guide to Digital Library Tools is a topic for another day, I wanted to explore one platform in particular: Omeka.
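
As an aside, if you want to experiment with XSLT outside of a dedicated XML editor, Python's lxml library can apply a stylesheet in a few lines. A minimal sketch, assuming lxml is installed; the file names are placeholders:

    # Apply an XSLT stylesheet to a TEI/XML document with lxml.
    # File names here are placeholders for illustration.
    from lxml import etree

    source = etree.parse("tei_document.xml")     # the TEI/XML source
    stylesheet = etree.parse("tei_to_html.xsl")  # the XSLT stylesheet
    transform = etree.XSLT(stylesheet)           # compile the transform

    result = transform(source)                   # run the transformation
    print(str(result))                           # serialized output, e.g. HTML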

IU hosts a fantastic series called Digital Library Brown Bags on Wednesdays throughout the school year. I’ve attended many and see an Omeka usage pattern emerging. The most recent Brown Bag was titled Designing the Digital Scholarly Commons: “In Mrs. Goldberg’s Kitchen: Jewish Life in Interwar Lodz” given by Halina Goldberg, Associate Professor of Music at the Jacobs School of Music, and Adam Hochstetter, Web Developer for the Jacobs School of Music.

After seeing many projects utilizing Omeka and creating a few of my own, I was astounded by the extensiveness and detail of this particular project, including panorama photograph tours and pop-up information (sign up for notification when the exhibit goes live here). Omeka's uses are twofold: digital storage for digitized items and a platform to exhibit those items. There are two versions: Omeka.net, through which users can host projects for free or a small fee, and Omeka.org, a heartier, potentially more expensive version hosted by an individual or organization.

To store items, Omeka utilizes the Dublin Core Metadata Initiative metadata scheme. Once a user uploads an item (read: a picture of an item), he or she fills out a form asking about different parts of the metadata, such as creator, date, description, publisher, and language. The item list is always available to browse.
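
To make that concrete, here is roughly what one item's Dublin Core record boils down to, sketched as a Python dictionary (the values are invented for illustration):

    # The kind of Dublin Core fields Omeka prompts for, sketched as a
    # Python dictionary. The values are invented for illustration.
    item_metadata = {
        "Title": "Photograph of a brass Sabbath candlestick",
        "Creator": "Unknown",
        "Date": "circa 1930",
        "Description": "Candlestick photographed for the interwar Lodz exhibit.",
        "Publisher": "Indiana University Digital Library Program",
        "Language": "pl",
    }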

The real magic happens through an exhibit. Like physical exhibits in a rare book library, museum, or gallery, the Omeka exhibitor brings together items in relation to a theme. In the example above, the theme was the items found in “Mrs. Goldberg’s Kitchen” and their cultural and historical significance. Omeka provides nearly seamless integration of items in exhibits, hence the magic. Programmers can also make back-end code and template alterations, much as they would in WordPress.

For those beginning to use it, Omeka has a small learning curve and a WYSIWYG (what you see is what you get) feel. I’m curious whether this is the reason many libraries choose to implement Omeka projects. Throughout the Brown Bag series and presentations featuring Omeka, I’ve noticed that three-quarters of the time is spent discussing the project, and the rest is spent discussing problems or limitations with Omeka. There is always a question along the lines of “so, once you store all of this information in Omeka, can you integrate it with other platforms? Can you export it out?”
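
Part of the answer is that recent versions of Omeka expose a REST API, so items and their Dublin Core metadata can at least be pulled out as JSON. A hedged sketch using Python's requests library, assuming the API is enabled on the site (the URL is a placeholder):

    # Export items from an Omeka site via its REST API.
    # Assumes the site has the API enabled; the base URL is a placeholder.
    import requests

    BASE_URL = "https://example-omeka-site.org/api"

    response = requests.get(BASE_URL + "/items", params={"page": 1})
    response.raise_for_status()

    for item in response.json():
        # Each entry in element_texts carries one Dublin Core field.
        for element in item.get("element_texts", []):
            print(element["element"]["name"] + ":", element["text"])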

As digital librarians find ways to link and standardize digital projects across the web, what will this mean for data “trapped” within Omeka? When I think about this question something like this pops into my mind:

Image retrieved from kids.baristanet.com.

But with Omeka so widely used and only increasing in popularity for library and digital humanities projects, is the orange person an Omeka project with linked projects in blue, or the opposite?

I would love to hear if your library is “in” with Omeka or “in” with other digital exhibits and libraries! Feel free to comment with your successes, limitations, questions, and remarks!

2015 Kilgour Award Goes to Ed Summers

The Library & Information Technology Association (LITA), a division of the American Library Association (ALA), announces Ed Summers as the 2015 winner of the Frederick G. Kilgour Award for Research in Library and Information Technology. The award, jointly sponsored by LITA and OCLC, is given for research relevant to the development of information technologies, especially work which shows promise of having a positive and substantive impact on any aspect of the publication, storage, retrieval, and dissemination of information, or the processes by which information and data are manipulated and managed. The awardee receives $2,000, a citation, and travel expenses to attend the award ceremony at the ALA Annual Conference in San Francisco, where the award will be presented on June 28, 2015.

Ed Summers is Lead Developer at the Maryland Institute for Technology in the Humanities (MITH), University of Maryland. Ed has spent two decades helping to build connections between libraries and archives and the larger communities of the World Wide Web. During that time he has worked in academia, start-ups, corporations, and government. He is interested in the role of open source software, community development, and open access in enabling digital curation. Ed has an MS in Library and Information Science and a BA in English and American Literature from Rutgers University.


Librarians, Take the Struggle Out of Statistics

Check out the brand new LITA web course:
Taking the Struggle Out of Statistics 

Instructor: Jackie Bronicki, Collections and Online Resources Coordinator, University of Houston.

Offered: April 6 – May 3, 2015
A Moodle based web course with asynchronous weekly lectures, tutorials, assignments, and group discussion.

Register Online, page arranged by session date (login required)

Recently, librarians of all types have been asked to take a more evidence-based look at their practices. Statistics is a powerful tool that can be used to uncover trends in library-related areas such as collections, user studies, usability testing, and patron satisfaction studies. Knowledge of basic statistical principles will greatly help librarians meet these new expectations.

This course will be a blend of learning basic statistical concepts and techniques along with practical application of common statistical analyses to library data. The course will include online learning modules for basic statistical concepts, examples from completed and ongoing library research projects, and exercises accompanied by practice datasets for applying the techniques learned during the course.
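
For a flavor of what such analyses involve, here is a small invented example using nothing but Python's built-in statistics module (the course will use its own datasets and tools):

    # Descriptive statistics on a week of (invented) library gate counts,
    # using only Python's built-in statistics module.
    import statistics

    daily_gate_counts = [412, 387, 455, 501, 398, 289, 330]

    print("mean:  ", round(statistics.mean(daily_gate_counts), 1))
    print("median:", statistics.median(daily_gate_counts))
    print("stdev: ", round(statistics.stdev(daily_gate_counts), 1))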

Got assessment in your title or duties? This brand new web course is for you!

Here’s the Course Page

Jackie Bronicki’s background is in research methodology, data collection, and project management for large research projects, including international dialysis research and large-scale digitization quality assessment. Her focus is on collection assessment and evaluation, and she works closely with subject liaisons, web services, and access services librarians at the University of Houston to facilitate various research projects.

Date:
April 6, 2015 – May 3, 2015

Costs:

  • LITA Member: $135
  • ALA Member: $195
  • Non-member: $260

Technical Requirements

Moodle login info will be sent to registrants the week prior to the start date. The Moodle-developed course site will include weekly asynchronous lectures and is composed of self-paced modules with facilitated interaction led by the instructor. Students regularly use the forum and chat room functions to facilitate their class participation. The course web site will be open for one week prior to the start date so that students can access the Moodle instructions and configure their browsers. The course site will remain open for 90 days after the end date for students to refer back to course material.

Registration Information

Register Online, page arranged by session date (login required)
OR
Mail or fax form to ALA Registration
OR
Call 1-800-545-2433 and press 5
OR
email registration@ala.org

Questions or Comments?

For all other questions or comments related to the course, contact LITA at (312) 280-4269 or Mark Beatty, mbeatty@ala.org.

In Praise of Anaconda


Do you want to learn to code?  Of course you do; why wouldn’t you?  Programming is fun, like solving a puzzle.  It helps you think in a computational and pragmatic way about certain problems, allowing you to automate those problems away with a few lines of code.  Choosing to learn programming is the first step on your path, and the second is choosing a language.  These days there are many great languages to choose from, each with their own strengths and weaknesses.  The right language for you depends heavily on what you want to do (as well as what language your coworkers are using).

If you don’t have any coder colleagues and can’t decide on a language, I would suggest taking a look at Python.  It’s mature, battle-tested, and useful for just about anything.  I work across many different domains (often in the same day) and Python is a powerful tool that helps me take care of business whether I’m processing XML, analyzing data or batch renaming and moving files between systems.  Python was created to be easy to read and aims to have one obvious “right” way to do any given task.  These language design decisions not only make Python an easy language to learn, but an easy language to remember as well.
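
As a taste of that “few lines of code” claim, here is a sketch of one of those chores, batch renaming scanned images, using only the standard library (the folder and naming scheme are made up):

    # Batch rename scanned images with Python's standard library.
    # The folder and naming scheme are invented for illustration.
    from pathlib import Path

    scans = Path("scans")  # e.g. a folder of IMG_0001.tif, IMG_0002.tif, ...

    for i, tiff in enumerate(sorted(scans.glob("*.tif")), start=1):
        tiff.rename(scans / "digitized_{:04d}.tif".format(i))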

One of the potential problems with Python is that it might not already be on your computer.  Even if it is on your computer, it’s most likely an older version (the difference between Python v2 and v3 is kind of a big deal). This isn’t necessarily a problem with Python though; you would probably have to install a new interpreter (the program that reads and executes your code) no matter what language you choose. The good news is that there is a very simple (and free!) tool for getting the latest version of Python on your computer regardless of whether you are using Windows, Mac or Linux.  It’s called Anaconda.
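
To see why the version gap matters, compare how the two versions handle two everyday operations:

    # Two small reasons why Python 2 vs. 3 is "kind of a big deal".

    # Printing: a statement in Python 2, a function in Python 3.
    print("hello")   # Python 3 syntax; Python 2 wrote: print "hello"

    # Division: floor division by default in Python 2, true division in 3.
    print(7 / 2)     # Python 3: 3.5 (Python 2 gave 3)
    print(7 // 2)    # both versions: 3 (explicit floor division)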

Anaconda is a Python distribution, which means that it is Python, just packaged in a special way. This special packaging turns out to make all the difference.  Installing an interpreter is usually not a trivial task; it often requires an administrator password to install (which you probably won’t have on any system other than your personal computer) and it could cause conflicts if an earlier version already exists on the system.  Luckily, Anaconda bypasses most of this pain with a unique installer that puts a shiny new Python in your user account (this means you can install it on any system you can log in to, though others on the system wouldn’t be able to use it), completely separate from any pre-existing version of Python.  Learning to take advantage of this installer was a game-changer for me since I can now write and run Python code on any system where I have a user account.  Anaconda allows Python to be my programming Swiss Army knife: versatile, handy, and always available.

Another important thing to understand about Anaconda’s packaging is that it comes with a lot of goodies.  Python is famous for having an incredible amount of high-quality tools built into the language, but Anaconda extends this even further. It comes with Spyder, a graphical text editor that makes writing Python code easier, as well as many packages that extend the language’s capabilities. Python’s convenience and raw number crunching power has made it a popular language in the scientific programming community, and a large number of powerful data processing and analysis libraries have been developed by these scientists as a result. You don’t have to be a scientist to take advantage of these libraries, though; the simplicity of Python makes these libraries accessible to anyone with the courage to dive in and try them out.  Anaconda includes the best of these scientific libraries: IPython, NumPy, SciPy, pandas, matplotlib, NLTK, scikit-learn, and many others (I use IPython and pandas pretty frequently, and I’m in the process of learning matplotlib and NLTK).  Some of these libraries are a bit tricky to install and configure with the standard Python interpreter, but Anaconda is set up and ready to use them from the start.  All you have to do is use them.
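
For instance, here is a small invented example of the kind of quick analysis pandas makes almost trivial, totaling checkouts by branch:

    # Quick data analysis with pandas on an invented circulation dataset.
    import pandas as pd

    circulation = pd.DataFrame({
        "branch": ["Main", "East", "Main", "West", "East"],
        "checkouts": [120, 45, 98, 30, 61],
    })

    # Total checkouts per branch, busiest first.
    totals = circulation.groupby("branch")["checkouts"].sum()
    print(totals.sort_values(ascending=False))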

While we’re on the subject of tricky installations, there are many more packages that Anaconda doesn’t come with that can be a pain to install as well. Luckily, Anaconda comes with its own package manager, conda, which is handy not only for grabbing new packages and installing them effortlessly, but also for upgrading the packages you already have to their latest versions. Conda even works on the Python interpreter itself, so when a new version of Python comes out you don’t have to reinstall anything.  Just to test it out, I upgraded to the latest version of Python, 3.4.2, while writing this article. I typed in ‘conda update python‘ and had the newest version running in less than 30 seconds.
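
A few representative conda commands, run from a terminal (or the command prompt Anaconda provides on Windows):

    conda list              # show installed packages and their versions
    conda install requests  # install a new package into your environment
    conda update pandas     # upgrade a single package
    conda update python     # upgrade the interpreter itself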

In summary, Anaconda makes Python even more simple, convenient and powerful.  If you are looking for an easy way to take Python for a test drive, look no further than Anaconda to get Python on your system as fast as possible. Even seasoned Python pros can appreciate the reduced complexity Anaconda offers for installing and maintaining some of Python’s more advanced packages, or for putting Python on systems where you need it but lack administrative privileges. As an avid Python user who could install Python and all its packages from scratch, I choose to use Anaconda because it streamlines the process to an incredible degree.  If you would like to try it out, just download Anaconda and follow the guide.

February Library Tech Roundup

Image courtesy of Flickr user paloetic (CC-BY)

We’re debuting a new series this month: a roundup inspired by our friends at Hack Library School! Each month, the LITA bloggers will share selected library tech links, resources, and ideas that resonated with us. Enjoy – and don’t hesitate to tell us what piqued your interest recently in the comments section!


Brianna M.

Get excited: This month I discovered some excellent writing related to research data management.


Bryan B.

The lion’s share of my work revolves around our digital library system, and lately I’ve been waxing philosophical about what role these systems play in our culture. I don’t have a concrete answer yet, but I’m getting there.


John K.

I’m just unburying myself from a major public computer revamp (new PCs, new printers, new reservation/printing system, mobile printing, etc.) so here are a few things I’ve found interesting:


Lauren H.

This month my life is starting to revolve around online learning.  Here’s what I’ve been reading:


Leanne O.

I’ve been immersed in metadata and cataloguing, so here’s a grab bag of what’s intrigued me lately:


Lindsay C.

Hey, LITA Blog readers. Are you managing multiple projects? Have you run out of Post-it® notes? Are the to-do lists not cutting it anymore? Me too. The struggle is real. Here is a set of totally unrelated links to distract all of us from the very pressing tasks at hand. I mean inspire us to finish the work.

LITA Webinar: Beyond Web Page Analytics

Or how to use Google tools to assess user behavior across web properties.

Tuesday, March 31, 2015
11:00 am – 12:30 pm Central Time
Register now for this webinar

This brand new LITA webinar shows how Marquette University Libraries have installed custom tracking code and meta tags on most of their web interfaces, including:

  • CONTENTdm
  • Digital Commons
  • Ebsco EDS
  • ILLiad
  • LibCal
  • LibGuides
  • WebPac, and the
  • General Library Website

The data retrieved from these interfaces is gathered into Google’s

  • Universal Analytics
  • Tag Manager, and
  • Webmaster Tools

When used in combination, these tools create an in-depth view of user behavior across all of these web properties.

For example, Google Tag Manager can grab search terms, which can then be related to a specific collection within Universal Analytics and to a particular demographic. The current versions of these tools make system setup an easy process with little or no programming experience required. Making sense of the volume of data retrieved, however, is more difficult.
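
In the webinar’s setup the data is captured by tracking code in the pages themselves, but as a rough illustration of what such a hit boils down to, here is a Python sketch that records a search event via the Universal Analytics Measurement Protocol (the property ID, client ID, and custom dimension slot are placeholders):

    # A rough sketch of recording a search event through the Universal
    # Analytics Measurement Protocol. The property ID, client ID, and
    # custom dimension slot are placeholders; real deployments send hits
    # from the browser via tracking code or Tag Manager.
    import requests

    payload = {
        "v": "1",                   # protocol version
        "tid": "UA-XXXXXXX-1",      # placeholder property ID
        "cid": "555",               # anonymous client ID
        "t": "event",               # hit type
        "ec": "search",             # event category
        "ea": "query",              # event action
        "el": "civil war diaries",  # event label: the search term
        "cd1": "CONTENTdm",         # custom dimension 1: which interface
    }
    requests.post("https://www.google-analytics.com/collect", data=payload)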

  • How does Google data compare to vendor stats?
  • How can the data be normalized using Tag Manager?
  • Can this data help your organization make better decisions?


Agile Development: Estimation and Scheduling

Image courtesy of Wikipedia

In my last post, I discussed the creation of Agile user stories. This time I’m going to talk about what to do with them once you have them. There are two big steps that need to be completed in order to move from user story creation to development: effort estimation and prioritization. Each poses its own problems.

Estimating Effort

Because Agile development relies on flexibility and adaptation, creating a bottom-up effort estimation analysis is both difficult and impractical. You don’t want to spend valuable time analyzing a piece of functionality up front only to have the implementation details change because of something that happens earlier in the development process, be it a change in another story, customer feedback, etc. Instead, it’s better to rely on your development team’s expertise and come up with top-down estimates that are accurate enough to get the development process started. This may at times make you feel uncomfortable, as if you’re looking for groundwater with a stick (it’s called dowsing, by the way), but in reality it’s about doing the minimum work necessary to come up with a reasonably accurate projection.

Estimation methods vary, but the key is to discuss story size in relative terms rather than assigning a number of hours of development time. Some teams find a story that is easy to estimate and calibrate all other stories relative to it, using some sort of relative “story points” scale (powers of 2, the Fibonacci sequence, etc.). Others create a relative scale and tag each story with a value from it: this can be anything from vehicles (this story is a car, this one is an aircraft carrier, etc.), to t-shirt sizes, to anything that is intuitive to the team. Another method is planning poker: the team picks a set of sizing values, and each member of the team assigns one of those values to each story by holding up a card with the value on it; if there’s significant variation, the team discusses the estimates and comes up with a compromise.  What matters is not the method, but that the entire team participate in the estimation discussion for each story.
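
As a toy illustration of that planning poker rule, here is a sketch that flags stories whose estimates spread widely enough to merit discussion (the stories, votes, and threshold are all invented):

    # A toy planning-poker helper: flag stories whose estimates diverge
    # enough that the team should talk them through. Data are invented.
    def needs_discussion(estimates, max_spread=2.0):
        """estimates: the story-point cards held up by each team member."""
        return max(estimates) / min(estimates) > max_spread

    votes = {
        "patron login": [2, 3, 3, 2],
        "usage reports": [3, 8, 13, 5],
    }

    for story, cards in votes.items():
        if needs_discussion(cards):
            print(story, "-> wide spread, discuss:", cards)
        else:
            print(story, "-> rough consensus:", cards)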

Learn more about Agile estimation here and here.

Prioritizing User Stories

The other piece of information we need in order to begin scheduling is the importance of each story, and for that we must turn to the business side of the organization. Prioritization in Agile is an ongoing process (as opposed to a one-time ranking) that allows the team to understand which user stories carry the biggest payoff at any point in the process. Once they are created, all user stories go into the product backlog, and each time the team plans a new sprint it picks stories off the top of the list until its capacity is exhausted, so it is very important that the Product Owner maintain a properly ordered backlog.

As with estimation, methods vary, but the key is to follow a process that evaluates each story on the value it adds to the product at any point. If I just rank the stories numerically, that provides no clarity as to why each story sits where it does, which will be confusing to the team (and to me as well, as the backlog grows). Most teams adopt a ranking system that scores each story individually; here’s a good example. This method uses two separate criteria: urgency and business value. Business value measures the positive impact of a given story on users. Urgency provides information about how important it is to complete a story earlier rather than later in the development process, taking into account dependencies between user stories, contractual obligations, complexity, etc. Basically, business value represents the importance of including a story in the finished product, and urgency tells us how much it matters when that story is developed (understanding that a story’s likelihood of being completed decreases the later in the process it is slotted). Once the stories have been evaluated along the two axes (a simple 1-5 scale can be used for each), an overall priority score is obtained by multiplying the two values. The backlog is then ordered using this score.
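
Here is a small sketch of that scoring scheme: each story gets a 1-5 rating for business value and urgency, and the backlog is ordered by their product (stories and ratings are invented):

    # The priority-scoring scheme described above: rate each story 1-5
    # for business value and urgency; priority is their product.
    # Stories and ratings are invented for illustration.
    backlog = [
        {"story": "patron login", "value": 5, "urgency": 4},
        {"story": "citation export", "value": 3, "urgency": 2},
        {"story": "dark mode", "value": 2, "urgency": 1},
    ]

    for item in backlog:
        item["priority"] = item["value"] * item["urgency"]

    backlog.sort(key=lambda item: item["priority"], reverse=True)

    for item in backlog:
        print(item["priority"], item["story"])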

As the example in the link shows, a Product Owner can also create priority bands that describe stories at a high level: must-have, nice to have, won’t develop, etc. This provides context for the priority score and gives the team information about the PO’s expectations for each story.

I’ll be back next month to talk about building an Agile culture. In the meantime, what methods does your team use to estimate and prioritize user stories?

Join LITA’s Imagineering IG at ALA Annual

Editor’s note: This is a guest post by Breanne Kirsch.

During the upcoming 2015 ALA Annual Conference, LITA’s Imagineering Interest Group will host the program “Unknown Knowns and Known Unknowns: How Speculative Fiction Gets Technological Innovation Right and Wrong.” A panel of science fiction and fantasy authors will discuss their work and how it connects with technological developments that were never invented and those that came about in unimagined ways. Tor is sponsoring the program and bringing authors John Scalzi, Vernor Vinge, Greg Bear, and Marie Brennan. Baen Books is also sponsoring the program by bringing Larry Correia to the author panel.

books_LITAimagineering

John Scalzi wrote the Old Man’s War series and, more recently, Redshirts, which won the 2013 Hugo Award for Best Novel. Vernor Vinge is known for his Realtime/Bobble and Zones of Thought series and a number of short fiction stories. Greg Bear has written a number of series, including Darwin, The Forge of God, Songs of Earth and Power, Quantum Logic, and The Way. He has also written books for the Halo series, short fiction, and standalone books, most recently War Dogs, as well as the upcoming novels Eternity and Eon. Marie Brennan has written the Onyx Court series, a number of short stories, and more recently the Lady Trent series, including the upcoming Voyage of the Basilisk. Larry Correia has written the Monster Hunter series, Grimnoir Chronicles, Dead Six series, and Iron Kingdoms series. These authors will consider the role speculative fiction plays in fostering innovation and bringing about new ideas.

Please plan to attend the upcoming ALA Annual 2015 Conference and add the Imagineering Interest Group program to your schedule! We look forward to seeing you in San Francisco.

Breanne A. Kirsch is the current Chair of the Imagineering Interest Group as well as the Game Making Interest Group within LITA. She works as a Public Services Librarian at the University of South Carolina Upstate and is the Coordinator of Emerging Technologies. She can be contacted at bkirsch@uscupstate.edu or @breezyalli.

Librarians: We Open Access

Open Access (storefront). Credit: Flickr user Gideon Burton

In his February 11 post, my fellow LITA blogger Bryan Brown interrogated the definitions of librarianship. He concluded that librarianship amounts to a “set of shared values and duties to our communities,” nicely summarized in the ALA’s Core Values of Librarianship. These core values are access, confidentiality / privacy, democracy, diversity, education and lifelong learning, intellectual freedom, preservation, the public good, professionalism, service, and social responsibility. But the greatest of these is access, without which we would revert to our roots as monastic scriptoriums and subscription libraries for the literate elite.

Bryan experienced some existential angst given that he is a web developer and not a “librarian” in the sense of job title or traditional responsibilities–the ancient triad of collection development, cataloging, and reference. In contrast, I never felt troubled about my job, as my title is e-learning librarian (got that buzzword going for me, which is nice) and as I do a lot of mainstream librarian-esque things, especially camping up front doing reference or visiting classes doing information literacy instruction.

Buzzword meme by Michael Rodriguez using Imgflip

However, fresh out of library school, I never expected to become manager of electronic resources, systems, web redesign, and invoicing and vendor negotiations, with a new institutional repository hopefully on the way. I did not expect to spend my mornings troubleshooting LDAP authentication errors, walking students through login issues, running cost-benefit analyses on databases, and training users on screencasting and Blackboard.

But digital librarians like Bryan and me are the new faces of librarianship. I deliver and facilitate electronic information access in the library context; therefore, I am a librarian. A web developer facilitates access to digital scholarship and library resources. A reference librarian points folks to the information they need. An instruction librarian teaches people how to find and evaluate information. A cataloger organizes information so that people can access it efficiently. A collection developer selects materials that users will most likely desire to access. All of these job descriptions–and any others that you can produce–are predicated on the fundamental tenet of access, preferably open, necessarily free.

Democracy, diversity, and the public good form our vision. Our active mission is to open access to users freely and equitably. Within that mission lie intellectual freedom (open access to information regardless of moralistic or political beliefs), privacy (fear of publicity can discourage people from openly accessing information), preservation (enabling future users to access the information), and other values that grow from the opening of access to books, articles, artifacts, the web, and more.

The Librarians: We Open Access

The Librarians (Fair use – parody)

By now you will have picked up on my wordplay. The phrase “open access” (OA) typically refers to scholarly literature that is “digital, online, free of charge, and free of most copyright and licensing restrictions” (Peter Suber). But when used as a verb rather than an adjective, “open” means not simply the state of being unrestricted but also the action of removing barriers to access. We librarians must not only cultivate the open fields–the commons–but also strive to dismantle paywalls and other obstacles to access. Recall Robert Frost’s “Mending Wall”:

Before I built a wall I’d ask to know
What I was walling in or walling out,
And to whom I was like to give offense.
Something there is that doesn’t love a wall,
That wants it down.’ I could say ‘Elves’ to him…

Or librarians, good sir. Or librarians.