During this winter break, I’ve had a slight lull in library work and time to reflect on my first semester of library school, aside from reading for pleasure and beginning Black Mirror on Netflix (anybody?). Overall, I’m ready to dive into the new semester, but one tidbit from fall semester keeps floating in my thoughts, and I’m curious what LITA Blog readers have to say.
Throughout my undergraduate education at the University of Nebraska-Lincoln, I was mainly exposed to two sets of digital humanities practices: text encoding and digital archiving, and text analysis for literature. With my decision to attend library school, I assumed I would focus on the former for the next two to three years.
Last semester, in my User Services and Tools course, we had a guest speaker from User Needs Assessment in the Indiana University Libraries. As the title suggests, he spoke about developing physical user spaces in the libraries and facilitating assessments of current spaces.
For one portion of his assessments, he used text analysis, more specifically topic modeling with MALLET, a Java-based natural language processing toolkit, to gain a better understanding of written survey results. This post by Shawn Graham, Scott Weingart, and Ian Milligan explains topic modeling, when/how/why to use it, and various tools to make it happen, focusing on MALLET.
If you didn’t follow the links, topic modeling takes many texts a user feeds into the algorithm and returns sets of related words drawn from across them. The user then attempts to understand the theme presented by each set of words and explain why it appears. Many times, this practice can reveal themes the user may not have noticed through traditional reading across multiple texts.
From a digital humanities perspective, we love it when computers show us things we missed or help make a task more efficient. Thus, using topic modeling seems an intuitive step for analyzing survey results, as the guest speaker presented. Yet it was also unexpected considering his more traditional position.
I’m curious where you have used some sort of technology, coding, or digital tool to solve a problem or expedite a process in a more traditional library position. Librarians working with digital objects use these technologies and practices daily, but as digital processes, such as topic modeling and text analysis, become more widely used, I’m interested to see where else they crop up and for which reasons.
Feel free to respond with an example of when you unexpectedly used text analysis or another tech tool in your library to complete a task that didn’t necessarily involve digital objects! How did you discover the tool? How did you learn it? Would you use it again?