Job Opening: LITA Executive Director

The Library and Information Technology Association (LITA), a division of the American Library Association, seeks a dynamic, entrepreneurial, forward-thinking Executive Director.

This is a fulfilling and challenging job that affords national impact on library technologists. As the successful candidate, you will be not only organized, financially savvy, and responsive, but also comfortable with technological change, project management, community management, and organizational change.

Interested in applying? For a full description and requirements, visit http://bit.ly/LITA_ED

Search Timeline

We will advertise the position in April, conduct phone interviews in early May, and hold in-person interviews with the top candidates at ALA Headquarters in Chicago in mid-to-late May.

Ideally, the candidate would start in June (perhaps just before ALA Annual Conference), and there would be a one-month overlap with current Executive Director Mary Taylor, who retires July 31.

Search Committee

  • Mary Ghikas, ALA Senior Associate Executive Director
  • Dan Hoppe, ALA Director of Human Resources
  • Keri Cascio, ALCTS Executive Director
  • Rachel Vacek, LITA President
  • Thomas Dowling, LITA Vice-President
  • Andromeda Yelton, LITA Director-at-Large
  • Isabel Gonzalez-Smith, LITA Emerging Leader

Let’s Hack a Collaborative Library Website!

A LITA Preconference at 2015 ALA Annual

Register online for the ALA Annual Conference and add a LITA Preconference

Friday, June 26, 2015, 8:30am – 4:00pm

In this hackathon attendees will learn to use the Bootstrap front-end framework and the Git version control system to create, modify and share code for a new library website. Expect a friendly atmosphere and a creative hands-on experience that will introduce you to web literacy for the 21st century librarian. The morning will consist of in-depth introductions to the tools, while the afternoon will see participants split into working groups to build a single collaborative library website.

What is Bootstrap?

Bootstrap is an open-source, responsive front-end web framework that can be used for everything from complete website redesigns to rapid prototyping. It is useful for many library web applications, such as customizing LibGuides (version 2) or creating responsive sites. This workshop will give attendees a crash course in the basics of what Bootstrap can do and how to code with it. Attendees can work individually or in teams.

What is Git?

Git is an open-source software tool that allows you to manage drafts and collaboratively work on projects – whether you’re building a library app, writing a paper, or organizing a talk. We will also talk about GitHub, a massively popular website that hosts git projects and has built-in features like issue tracking and simple web page hosting.

Additional resources

Bootstrap, LibGuides, & Potential Web Domination – Discussion of the use of Bootstrap at the Van Library, University of St. Francis

Libraries using Bootstrap example:
Bradford County Public Library

Library Code Year Interest Group

This program was put together by the ALCTS/LITA Library Code Year Interest Group which is devoted to supporting members who want to improve their computer programming skills. Find out more here.

Presenters

Kate Bronstad, Web Developer, Tisch Library, Tufts University
Kate is a librarian-turned-web developer for Tufts University’s Tisch Library. She works with git on a daily basis and teaches classes on git for the Boston chapter of Girl Develop It. Kate is originally from Austin, TX and has an MSIS from UT-Austin.

Heather J. Klish, Systems Librarian, Tufts University
Heather is the Systems Librarian in University Library Technology at Tufts University. Heather has an MLS from Simmons College.

 

Junior Tidal, New York City College of Technology
Junior is the Multimedia and Web Services Librarian and Assistant Professor for the Ursula C. Schwerin Library at the New York City College of Technology, City University of New York. His research interests include mobile web development, usability, web metrics, and information architecture. He has published in the Journal of Web Librarianship, OCLC Systems & Services, Computers in Libraries, and the Code4Lib Journal. He has written a LITA guide entitled Usability and the Mobile Web, published by ALA TechSource. Originally from Whitesburg, Kentucky, he earned an MLS and a Master’s in Information Science from Indiana University.

Registration:

Cost

  • LITA Member $235 (coupon code: LITA2015)
  • ALA Member $350
  • Non-Member $380

How-to

To register for any of these events, you can include them with your initial conference registration or add them later using the unique link in your email confirmation. If you don’t have your registration confirmation handy, you can request a copy by emailing [email protected]. You also have the option of registering for a preconference only. To receive the LITA member pricing, enter the discount promotional code LITA2015 on the Personal Information page during the registration process.

Register online for the ALA Annual Conference and add a LITA Preconference
Call ALA Registration at 1-866-513-0760
Onsite registration will also be accepted in San Francisco.

Questions or Comments?

For all other questions or comments related to the course, contact LITA at (312) 280-4269 or Mark Beatty, [email protected]

Creating Better Tutorials Through User-Centered Instructional Design

A LITA Preconference at 2015 ALA Annual

Register online for the ALA Annual Conference and add a LITA Preconference

Friday, June 26, 2015, 8:30am – 4:00pm

Have you wanted to involve users as you design interactive e-learning but weren’t sure where to start? In this unique, hands-on workshop, you will learn the core and emerging principles of instructional and user experience design and apply what you have learned to design, develop, and test a tutorial you create. The three dynamic and experienced workshop facilitators will cover topics including design thinking, user-centered pedagogy, user interface prototyping, and intercept usability testing while providing hands-on practice in each area.

Check out these three tutorial examples:

Popular vs. Scholarly Sources
Academic Search Complete
Locating Manuscripts in Special Collections

Presenters:

Yvonne Mery, Instructional Design Librarian, University of Arizona

Yvonne co-authored the book Online by Design: The Essentials of Creating Information Literacy Courses. She has co-authored several papers on integrating information literacy into online classes and has presented at numerous national conferences on best practices for online information literacy instruction.

Rebecca Blakiston, User Experience Librarian, University of Arizona Libraries

Rebecca has been at the University of Arizona Libraries since 2008 and has been the website product manager since 2010. She provides oversight, management, and strategic planning for the library website, specializing in guerrilla usability testing, writing for the web, and content strategy. She developed a process for in-house usability testing, which has been implemented successfully both within website projects and in an ongoing, systematic way. She authored Usability Testing: A Practical Guide for Librarians.

Leslie Sult, Associate Librarian, University of Arizona

Leslie is in the Research and Learning department. Her work focuses on developing and improving scalable teaching models that enable the library to reach and support many more students than is possible through traditional one-shot instruction sessions. With Gregory Hagedon, Leslie won the ACRL Instruction Section Innovation Award in 2013 for their work on Guide on the Side, software that helps instruction librarians create tutorials for database instruction.

Guide on the Side

“Understanding that many librarians are feeling the pressure to find methods to support student learning that do not require direct, librarian-led instruction, the University of Arizona Library’s Guide on the Side provides an excellent tutorial grounded in sound pedagogy that could significantly change the way libraries teach students how to use databases,” said award committee co-chairs, Erin L. Ellis of the University of Kansas and Robin Kear of the University of Pittsburgh. “The creators have made a version of the software open access and freely available to librarians to quickly create online, interactive tutorials for database instruction. This allows librarians to easily create tutorials that are both engaging to students and pedagogically sound. Guide on the Side serves as a model of the future of library instruction.”

Registration:

Rates

  • LITA Member $235
  • ALA Member $350
  • Non-Member $380

How-to

To register for any of these events, you can include them with your initial conference registration or add them later using the unique link in your email confirmation. If you don’t have your registration confirmation handy, you can request a copy by emailing [email protected]. You also have the option of registering for a preconference only. To receive the LITA member pricing, enter the discount promotional code LITA2015 on the Personal Information page during the registration process.

Register online for the ALA Annual Conference and add a LITA Preconference
Call ALA Registration at 1-866-513-0760
Onsite registration will also be accepted in San Francisco.

Questions or Comments?

For all other questions or comments related to the course, contact LITA at (312) 280-4269 or Mark Beatty, [email protected]

Is Your Library In?

In my previous post, I discussed learning XSLT for my current Indiana University Digital Library Program internship. Along the way, I explained a little about TEI/XML as well. Thinking about these tools led me to consider all of the different markup languages, programming languages, and tools that go into building a digital library. While my full Guide to Digital Library Tools is a post for another day, I want to explore one platform in particular: Omeka.

IU hosts a fantastic series called Digital Library Brown Bags on Wednesdays throughout the school year. I’ve attended many and see an Omeka usage pattern emerging. The most recent Brown Bag was titled Designing the Digital Scholarly Commons: “In Mrs. Goldberg’s Kitchen: Jewish Life in Interwar Lodz” given by Halina Goldberg, Associate Professor of Music at the Jacobs School of Music, and Adam Hochstetter, Web Developer for the Jacobs School of Music.

After seeing many projects that use Omeka and creating a few of my own, I was astounded by the extensiveness and detail of this particular project, including panoramic photograph tours and pop-up information (sign up here to be notified when the exhibit goes live). Omeka’s uses are twofold: digital storage for digitized items and a platform to exhibit those items. There are two versions: Omeka.net, through which users can host projects for free or for a small fee, and Omeka.org, a heartier, more expensive version hosted by an individual or organization.

To store items, Omeka uses the Dublin Core metadata schema. Once a user uploads an item (read: a picture of an item), he or she fills out a form covering the different metadata elements, such as creator, date, description, publisher, language, and so on. The item list is always available to browse.
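As a rough illustration, such a record can be pictured as a simple mapping from Dublin Core elements to values (a hypothetical Python sketch with invented values, not Omeka's actual data model or API):

```python
# Hypothetical sketch of an Omeka item record keyed by Dublin Core
# elements; field names follow the Dublin Core set, values are invented.
item = {
    "Title": "Brass samovar",
    "Creator": "Unknown",
    "Date": "ca. 1920",
    "Description": "Samovar photographed for the kitchen exhibit.",
    "Publisher": "Indiana University",
    "Language": "en",
}

# Browsing the item list amounts to iterating over records like this one.
for element, value in item.items():
    print(f"{element}: {value}")
```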

The real magic happens through an exhibit. Like physical exhibits in a rare book library, museum, or gallery, the Omeka exhibitor brings together items in relation to a theme. In the example above, the theme was the items found in “Mrs. Goldberg’s Kitchen” and their cultural and historical significance. Omeka provides nearly seamless integration of items in exhibits, hence the magic. Programmers can also alter back-end code and templates, much as they can with WordPress.

When beginning to use Omeka, there is a small learning curve and a WYSIWYG (what you see is what you get) feel. I’m curious whether this is the reason many libraries choose to implement Omeka projects. Throughout the Brown Bag series and presentations featuring Omeka, I’ve noticed that three-quarters of the time is spent discussing the project, and the rest is spent discussing problems or limitations with Omeka. There is always a question along the lines of “So, once you store all of this information in Omeka, can you integrate it with other platforms? Can you export it out?”

As digital librarians find ways to link and standardize digital projects across the web, what will this mean for data “trapped” within Omeka? When I think about this question something like this pops into my mind:

Image retrieved from kids.baristanet.com.

But with Omeka so widely used and only increasing in popularity for library and digital humanities projects, is the orange person an Omeka project with linked projects in blue, or the opposite?

I would love to hear if your library is “in” with Omeka or “in” with other digital exhibits and libraries! Feel free to comment with your successes, limitations, questions, and remarks!

In Praise of Anaconda


Do you want to learn to code?  Of course you do, why wouldn’t you?  Programming is fun, like solving a puzzle.  It helps you think in a computational and pragmatic way about certain problems, allowing you to automate those problems away with a few lines of code.  Choosing to learn programming is the first step on your path, and the second is choosing a language.  These days there are many great languages to choose from, each with their own strengths and weaknesses.  The right language for you depends heavily on what you want to do (as well as what language your coworkers are using).

If you don’t have any coder colleagues and can’t decide on a language, I would suggest taking a look at Python.  It’s mature, battle-tested, and useful for just about anything.  I work across many different domains (often in the same day) and Python is a powerful tool that helps me take care of business whether I’m processing XML, analyzing data or batch renaming and moving files between systems.  Python was created to be easy to read and aims to have one obvious “right” way to do any given task.  These language design decisions not only make Python an easy language to learn, but an easy language to remember as well.
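For instance, the batch-renaming chore mentioned above takes only a few lines of standard-library Python (a minimal sketch that works in a throwaway temp directory so it's safe to run anywhere):

```python
# Normalize every .txt filename in a directory to lowercase.
import tempfile
from pathlib import Path

# Set up a scratch directory with two sample files to rename.
workdir = Path(tempfile.mkdtemp())
for name in ("Report.TXT", "Notes.txt"):
    (workdir / name).touch()

# Materialize the listing first so renames don't disturb iteration.
for path in list(workdir.iterdir()):
    if path.suffix.lower() == ".txt":
        path.rename(path.with_name(path.name.lower()))

print(sorted(p.name for p in workdir.iterdir()))  # → ['notes.txt', 'report.txt']
```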

One of the potential problems with Python is that it might not already be on your computer.  Even if it is on your computer, it’s most likely an older version (the difference between Python v2 and v3 is kind of a big deal). This isn’t necessarily a problem with Python though; you would probably have to install a new interpreter (the program that reads and executes your code) no matter what language you choose. The good news is that there is a very simple (and free!) tool for getting the latest version of Python on your computer regardless of whether you are using Windows, Mac or Linux.  It’s called Anaconda.
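To see why the v2/v3 split is a big deal, consider integer division, one of the best-known breaking changes between the two versions:

```python
# In Python 3, dividing two integers yields a float;
# the same expression in Python 2 truncated to an integer.
print(5 / 2)   # Python 3 prints 2.5 (Python 2 printed 2)
print(5 // 2)  # explicit floor division prints 2 in both versions
```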

Anaconda is a Python distribution, which means that it is Python, just packaged in a special way. This special packaging turns out to make all the difference.  Installing an interpreter is usually not a trivial task; it often requires an administrator password to install (which you probably won’t have on any system other than your personal computer) and it could cause conflicts if an earlier version already exists on the system.  Luckily Anaconda bypasses most of this pain with a unique installer that puts a shiny new Python in your user account (this means you can install it on any system you can log in to, though others on the system wouldn’t be able to use it), completely separate from any pre-existing version of Python.  Learning to take advantage of this installer was a game-changer for me since I can now write and run Python code on any system where I have a user account.  Anaconda allows Python to be my programming Swiss Army knife: versatile, handy and always available.

Another important thing to understand about Anaconda’s packaging is that it comes with a lot of goodies.  Python is famous for having an incredible amount of high-quality tools built into the language, but Anaconda extends this even further. It comes with Spyder, a graphical text editor that makes writing Python code easier, as well as many packages that extend the language’s capabilities. Python’s convenience and raw number-crunching power have made it a popular language in the scientific programming community, and a large number of powerful data processing and analysis libraries have been developed by these scientists as a result. You don’t have to be a scientist to take advantage of these libraries, though; the simplicity of Python makes them accessible to anyone with the courage to dive in and try them out.  Anaconda includes the best of these scientific libraries: IPython, NumPy, SciPy, pandas, matplotlib, NLTK, scikit-learn, and many others (I use IPython and pandas pretty frequently, and I’m in the process of learning matplotlib and NLTK).  Some of these libraries are a bit tricky to install and configure with the standard Python interpreter, but Anaconda is set up and ready to use them from the start.  All you have to do is use them.

While we’re on the subject of tricky installations, there are many more packages that Anaconda doesn’t come with that can be a pain to install as well. Luckily Anaconda comes with its own package manager, conda, which is handy not only for grabbing new packages and installing them effortlessly, but also for upgrading the packages you have to the latest versions. Conda even works on the Python interpreter itself, so when a new version of Python comes out you don’t have to reinstall anything.  Just to test it out, I upgraded to the latest version of Python, 3.4.2, while writing this article. I typed in ‘conda update python’ and had the newest version running in less than 30 seconds.

In summary, Anaconda makes Python even more simple, convenient and powerful.  If you are looking for an easy way to take Python for a test drive, look no further than Anaconda to get Python on your system as fast as possible. Even seasoned Python pros can appreciate the reduced complexity Anaconda offers for installing and maintaining some of Python’s more advanced packages, or for putting Python on systems where you need it but lack security privileges. As an avid Python user who could install Python and all its packages from scratch, I choose to use Anaconda because it streamlines the process to an incredible degree.  If you would like to try it out, just download Anaconda and follow the guide.

Agile Development: Estimation and Scheduling

Image courtesy of Wikipedia

In my last post, I discussed the creation of Agile user stories. This time I’m going to talk about what to do with them once you have them. There are two big steps that need to be completed in order to move from user story creation to development: effort estimation and prioritization. Each poses its own problems.

Estimating Effort

Because Agile development relies on flexibility and adaptation, creating a bottom-up effort estimation analysis is both difficult and impractical. You don’t want to spend valuable time analyzing a piece of functionality up front only to have the implementation details change because of something that happens earlier in the development process, be it a change in another story, customer feedback, etc. Instead, it’s better to rely on your development team’s expertise and come up with top-down estimates that are accurate enough to get the development process started. This may at times make you feel uncomfortable, as if you’re looking for groundwater with a stick (it’s called dowsing, by the way), but in reality it’s about doing the minimum work necessary to come up with a reasonably accurate projection.

Estimation methods vary, but the key is to discuss story size in relative terms rather than assigning a number of hours of development time. Some teams find a story that is easy to estimate and calibrate all other stories relative to it, using some sort of relative “story points” scale (powers of 2, the Fibonacci sequence, etc.). Others create a relative scale and tag each story with a value from it: this can be anything from vehicles (this story is a car, this one is an aircraft carrier, etc.), to t-shirt sizes, to anything that is intuitive to the team. Another method is planning poker: the team picks a set of sizing values, and each member of the team assigns one of those values to each story by holding up a card with the value on it; if there’s significant variation, the team discusses the estimates and comes up with a compromise.  What matters is not the method, but that the entire team participate in the estimation discussion for each story.
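As a toy illustration, the "significant variation" check from planning poker can be expressed in a few lines (story names, votes, and the threshold are all invented for this sketch):

```python
# Hypothetical sketch: flag stories whose planning-poker votes diverge
# enough that the team should discuss before agreeing on a size.
def needs_discussion(estimates, ratio=2.0):
    """True when the largest vote exceeds `ratio` times the smallest."""
    return max(estimates) > ratio * min(estimates)

votes = {
    "login page": [3, 3, 5],        # close enough: take the consensus
    "search rewrite": [5, 13, 21],  # wide spread: talk it over first
}
for story, cards in votes.items():
    verdict = "discuss" if needs_discussion(cards) else "consensus"
    print(f"{story}: {verdict}")
```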

Learn more about Agile estimation here and here.

Prioritizing User Stories

The other piece of information we need in order to begin scheduling is the importance of each story, and for that we must turn to the business side of the organization. Prioritization in Agile is an ongoing process (as opposed to a one-time ranking) that allows the team to understand which user stories carry the biggest payoff at any point in the process. Once they are created, all user stories go into the product backlog, and each time the team plans a new sprint it picks stories off the top of the list until its capacity is exhausted, so it is very important that the Product Owner maintain a properly ordered backlog.

As with estimation, methods vary, but the key is to follow a process that evaluates each story on the value it adds to the product at any point. If I simply ranked the stories numerically, that would provide no clarity as to why each story sits where it does, which would be confusing to the team (and to me as well, as the backlog grows). Most teams adopt a ranking system that scores each story individually; here’s a good example. This method uses two separate criteria: business value and urgency. Business value measures the positive impact of a given story on users. Urgency captures how important it is to complete a story earlier rather than later in the development process, taking into account dependencies between user stories, contractual obligations, complexity, etc. Basically, business value represents the importance of including a story in the finished product, and urgency tells us how much it matters when that story is developed (understanding that a story’s likelihood of being completed decreases the later in the process it is slotted). Once the stories have been evaluated along the two axes (a simple 1-5 scale can be used for each), an overall priority score is obtained by multiplying the two values. The backlog is then ordered using this score.
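The scoring scheme described above is easy to sketch: multiply the two 1-5 ratings and sort the backlog by the product (story names and numbers below are invented for the example):

```python
# Sketch of value-times-urgency prioritization on an invented backlog.
stories = [
    {"name": "patron login",   "value": 5, "urgency": 3},
    {"name": "admin reports",  "value": 3, "urgency": 2},
    {"name": "search filters", "value": 4, "urgency": 5},
]

# Priority score is the product of the two 1-5 ratings.
for story in stories:
    story["priority"] = story["value"] * story["urgency"]

# Highest score first: this is the order the team pulls work from.
backlog = sorted(stories, key=lambda s: s["priority"], reverse=True)
print([(s["name"], s["priority"]) for s in backlog])
# → [('search filters', 20), ('patron login', 15), ('admin reports', 6)]
```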

As the example in the link shows, a Product Owner can also create priority bands that describe stories at a high level: must-have, nice to have, won’t develop, etc. This provides context for the priority score and gives the team information about the PO’s expectations for each story.

I’ll be back next month to talk about building an Agile culture. In the meantime, what methods does your team use to estimate and prioritize user stories?

Join LITA’s Imagineering IG at ALA Annual

Editor’s note: This is a guest post by Breanne Kirsch.

During the upcoming 2015 ALA Annual Conference, LITA’s Imagineering Interest Group will host the program “Unknown Knowns and Known Unknowns: How Speculative Fiction Gets Technological Innovation Right and Wrong.” A panel of science fiction and fantasy authors will discuss their work and how it connects with technological developments that were never invented and those that came about in unimagined ways. Tor is sponsoring the program and bringing authors John Scalzi, Vernor Vinge, Greg Bear, and Marie Brennan. Baen Books is also sponsoring the program by bringing Larry Correia to the author panel.


John Scalzi wrote the Old Man’s War series and more recently, Redshirts, which won the 2013 Hugo Award for Best Novel. Vernor Vinge is known for his Realtime/Bobble and Zones of Thought series and a number of works of short fiction. Greg Bear has written a number of series, including Darwin, The Forge of God, Songs of Earth and Power, Quantum Logic, and The Way. He has also written books for the Halo series, short fiction, and standalone books, most recently, War Dogs as well as the upcoming novels Eternity and Eon. Marie Brennan has written the Onyx Court series, a number of short stories, and more recently the Lady Trent series, including the upcoming Voyage of the Basilisk. Larry Correia has written the Monster Hunter series, Grimnoir Chronicles, Dead Six series, and Iron Kingdoms series. These authors will consider the role speculative fiction plays in fostering innovation and bringing about new ideas.

Please plan to attend the upcoming ALA Annual 2015 Conference and add the Imagineering Interest Group program to your schedule! We look forward to seeing you in San Francisco.

Breanne A. Kirsch is the current Chair of the Imagineering Interest Group as well as the Game Making Interest Group within LITA. She works as a Public Services Librarian at the University of South Carolina Upstate and is the Coordinator of Emerging Technologies. She can be contacted at [email protected] or @breezyalli.

Diagrams Made Easy with LucidChart

Editor’s note: This is a guest post by Marlon Hernandez 

For the past year, across four different classes and countless bars, I have worked on an idea that is quickly becoming my go-to project for any Master of Information Science assignment: the Archivist Beer Vault (ABV) database. At first it was easy to explain the contents: BEER! After incorporating more than one entity, the explanation grew a bit murky:

ME: So remember my beer database? Well now it includes information on the brewery, style AND contains fictional store transactions
WIFE: Good for you honey.
ME: Yeah unfortunately that means I need to add a few transitive prop… I lost your attention after beer, didn’t I?

Which is a fair reaction, since trying to describe the intricacies of abstract ideas such as entity-relationship diagrams requires clear-cut visuals. However, drawing these diagrams usually requires either an expensive program like Microsoft Visio (student rate $269) or an underwhelming freeware experience. Enter Lucidchart, an easy-to-use and relatively inexpensive diagramming solution.

Continue reading Diagrams Made Easy with LucidChart

Let’s Talk About E-rate

E-rate isn’t new news. Established almost 20 years ago (I feel old, and you’re about to, too) by the Telecommunications Act of 1996, E-rate provides discounts that help schools and libraries in the United States obtain affordable telecommunications and internet access.

What is new news is the ALA initiative Got E-rate? and, more importantly, the overhaul of E-rate that prompted the initiative, and it’s good news. The best part might well be the $1.5 billion added to annual available funding. What that means, in the simplest terms, is new opportunities for libraries to offer better, faster internet. It’s the chance for public libraries of every size to rethink their broadband networks and make gains toward the broadband speeds necessary for library services.

But beyond the bottom line, this incarnation of E-rate has been deeply influenced by ALA input. The Association worked with the FCC to ensure that the reform efforts would benefit libraries. So while we can all jump and cheer about more money and better internet, we can also get excited because there are more options for libraries that lack sufficient broadband capacity to design and maintain broadband networks that meet their communities’ growing needs.

The application process has been improved and simplified, and if you need to upgrade your library’s wireless network, there are funds earmarked for that purpose specifically.

Other key victories in this reform include:

  • Adopting a building square footage formula for Category 2 (i.e., internal connections) funding that will ensure libraries of all sizes get a piece of the C2 pie.
  • Suspending the amortization requirement for new fiber construction.
  • Adopting 5 years as the maximum length for contracts using the expedited application review process.
  • Equalizing the program’s treatment of lit and dark fiber.
  • Allowing applicants that use the retroactive reimbursement process (i.e., BEAR form) to receive direct reimbursement from USAC.
  • Allowing for self-construction of fiber under certain circumstances.
  • Providing incentives for consortia and bulk purchasing.

If you’re interested in learning more, I’d suggest going to the source. But it’s a great Friday when you get to celebrate a victory for libraries everywhere.

To receive alerts on ALA’s involvement in E-rate, follow the ALA Office for Information Technology Policy (OITP) on Twitter at @OITP. Use the Twitter hashtag #libraryerate

 

What is a Librarian?

 


When people ask me what I do, I have to admit I feel a bit of angst. I could just say I’m a librarian. After all I’ve been in the library game for nearly 10 years now. I went to library school, got a library degree, and I now work at FSU’s Strozier library with a bunch of librarians on library projects. It feels a bit disingenuous to call myself a librarian though because the word “librarian” is not in my job title. Our library, like all others, draws a sharp distinction between librarians and staff. Calling myself a librarian may feel right, but it is a total lie in the eyes of Human Resources. If I take the HR stance on my job, “what I do” becomes  a lot harder to explain. The average friend or family member has a vague understanding of what a librarian is, but phrases like “web programming” and “digital scholarship” invite more questions than they answer (assuming their eyes don’t glaze over immediately and they change the subject). The true answer about “what I do” lies somewhere in the middle of all this, not quite librarianship and not just programming. When I first got this job, I spent quite a bit of time wrestling with labels, and all of this philosophical judo kept returning to the same questions: What is a librarian, really? And what’s a library? What is librarianship? These are probably questions that people in less amorphous positions don’t have to think about. If you work at a reference desk or edit MARC records in the catalog, you probably have a pretty stable answer to these questions.

At a place like Strozier library, where we have a cadre of programmers with LIS degrees and job titles like Digital Scholarship Coordinator and Data Research Librarian, the answer gets really fuzzy. I’ve discussed this topic with a few coworkers, and there seems to be a recurring theme: “Traditional Librarianship” vs. “What We Do”. “Traditional Librarianship” is the classic cardigan-and-cats view we all learned in library school, usually focusing on the holy trinity of reference, collection development and cataloging. These are jobs that EVERY library has to engage in to some degree, so it’s fair to think of these activities as a potential core for librarianship and libraries. The “What We Do” part of the equation encapsulates everything else: digital humanities, data management, scholarly communication, emerging technologies, web programming, etc. These activities have become a canonical part of the library landscape in recent years, and reflect the changing role libraries are playing in our communities. Libraries aren’t just places to ask questions and find books anymore.

The issue as I see it now becomes how we can reconcile the “What We Do” with the “Traditional” to find some common ground in defining librarianship; if we can do that then we might have an answer to our question. An underlying characteristic of almost all library jobs is that, even if they don’t fall squarely under one of the domains of this so-called “Traditional Librarianship”, they still probably include some aspects of it. Scholarly communication positions could be seen as a hybrid collection development/reference position due to the liaison work, faculty consultation and the quest to obtain Open Access faculty scholarship for the institutional repository. My programming work on the FSU Digital Library could be seen as a mix of collection development and cataloging since it involves getting new objects and metadata into our digital collections. The deeper I pursue this line of thinking, the less satisfying it gets. I’m sure you could make the argument that any job is librarianship if you repackage its core duties in just the right way. I don’t feel like I’m a librarian because I kinda sorta do collection development and cataloging.

I feel like a librarian because I care about the same things as other librarians. The same passion that motivates a “traditional” librarian to help their community by purchasing more books or helping a student make sense of a database is the same passion that motivates me to migrate things into our institutional repository or make a web interface more intuitive. Good librarians all want to make the world a better place in their own way (none of us chose librarianship because of the fabulous pay). In this sense, I suppose I see librarianship less as a set of activities and more as a set of shared values and duties to our communities. The ALA’s Core Values of Librarianship does a pretty good job of summing things up, and this has finally satisfied my philosophical quest for the Platonic ideal of a librarian. I no longer see place of work, job title, duties or education as having much bearing on whether or not you are truly a librarian. If you care about information and want to do good with it, that’s enough for me. Others are free to put more rigorous constraints on the profession if they want, but in order for libraries to survive I think we should be more focused on letting people in than on keeping people out.

What does librarianship mean to you? Follow along with other LITA bloggers as we explore this topic from different writers’ perspectives. Keep the conversation going in the comments!