We’re in the midst of re-thinking our entire Information Literacy curriculum, and I’ve been waxing philosophical on the role technology will play in this new and uncharted land. The new Framework for Information Literacy has thrown the instructional library world into a tizzy. We are all grappling with everything from understanding the threshold concepts themselves to determining how best to teach them. We’ve done this all along of course with the previous Standards for Information Literacy, but there’s something about this new incarnation that seems to perplex and challenge at the same time.
This is part four of my Linked Data Series. You can find the previous posts in my author feed. I hope everyone had a great holiday season. Are you ready for some more Linked Data goodness? Last semester I had the pleasure of interviewing Julie Hardesty, metadata extraordinaire (and analyst) at Indiana University, about Hydra, the Hydra Metadata Interest Group, and Linked Data. Below is a bio and a transcript of the interview.
Julie Hardesty is the Metadata Analyst at Indiana University Libraries. She manages metadata creation and use for digital library services and projects. She is reachable at firstname.lastname@example.org.
Can you tell us a little about the Hydra platform?
Sure and thanks for inviting me to answer questions for the LITA Blog about Hydra and Linked Data! Hydra is a technology stack that involves several pieces of software – a Blacklight search interface with a Ruby on Rails framework and Apache Solr index working on top of the Fedora Commons digital repository system. Hydra is also referred to when talking about the open source community that works to develop this software into different packages (called “Hydra Heads”) that can be used for management, search, and discovery of different types of digital objects. Examples of Hydra Heads that have come out of the Hydra Project so far include Avalon Media System for time-based media and Sufia for institutional repository-style collections.
What is the Hydra Metadata Interest Group and your current role in the group?
The Hydra Metadata Interest Group is a group within the Hydra Project that aims to provide metadata recommendations and best practices for Hydra Heads and Hydra implementations, so that every place implementing Hydra can do things the same way, using the same ontologies and working with similar base properties for defining and describing digital objects. I am the new facilitator for the group and try to keep the different working groups focused on deliverables and responding to the needs of the Hydra developer community. Before me, Karen Estlund from Penn State University served as facilitator. She was instrumental in organizing this group and the working groups that produced the recommendations we have so far for technical metadata and rights metadata. In the near-ish future, I am hoping we’ll see a recommendation for baseline descriptive metadata and a recommendation for referring to segments within a digitized file, regardless of format.
What is the group’s charge and/or purpose? What does the group hope to achieve?
Over the winter break, I had the pleasure of listening to the audio book version of The Life-Changing Magic of Tidying Up: The Japanese Art of Decluttering and Organizing by Marie Kondo. In this book, the author explains in detail her method of tidying up (which she calls KonMari). I highly recommend you read the book in its entirety to gain a fuller understanding of what the KonMari method entails, but in short:
- Gather everything you own that falls into a specific category.
- Touch each item individually. Hold it, feel it, connect with it.
- Ask yourself, “Does this item spark joy within me?”
- If it doesn’t spark joy, ask, “Is it useful or necessary?”
- Lastly, if the item doesn’t spark joy and it isn’t useful, discard it. As you discard it, thank it for fulfilling its purpose, whatever it may have been.
- Do this category by category until your life is filled only with those things that spark joy.
As I listened to this book, I started to make some connections between the techniques being described and how they could apply to my life as a web services librarian. In this post, I’ll point out a few of the random connections it sparked for me, and perhaps others will be encouraged to do something similar, or even apply KonMari in other areas of librarianship — I’d love to hear what others have to say!
The first thing that stuck out to me about this method is how similar it felt to performing a content audit. Content auditing is an important step in developing an overall content strategy — I’d recommend taking a look at Margot Bloomstein’s article, “The Case for Content Strategy — Motown Style,” for a practical overview of content strategy and content auditing. Any information architect, or information worker in general, would be remiss to skip the step of documenting all existing content prior to structuring or restructuring any sort of website or *ahem* LibGuides system. I think that LibGuides (or any of the LibApps, really) would be a great candidate for experimenting with content auditing and discarding things. Asking “Does it spark joy?” becomes a really interesting exercise, because you should consider it not only from your own perspective but also from that of the user. This quickly becomes a question of user experience. The oft-discussed epidemic of “LibGuides Gone Wild” could be at least somewhat tamed if you were to apply this question to your guides. Obviously, you may not always be in a position to act on discarding guides without buy-in, but maybe this can give you yet another vocabulary for describing the benefits of focusing on users.
One type of item that Kondo discusses is seminar notes, which, based on her description, aligns pretty much 100% with the notes we all take when we are at conferences. When I first started attending library conferences at the beginning of my career (about 5 years ago), I would shun taking notes on a computer, insisting that handwriting my notes would result in more effective notes because I would have to be more particular about what nuggets of knowledge I would jot down. In reality, all I would end up with was a sore hand, and I would actually miss out on quite a bit of what the speaker was saying. As I progressed, I would eventually resort to using an iPad along with OneNote, so that I could easily tap out whatever notes I wanted, as well as take pictures of relevant slides and include them along with my notes. This, I believed, was the perfect solution. But, what exactly was it the perfect solution for? It was the perfect solution to make sure I could provide an adequate write-up / conference recap to my co-workers to prove that I actually did learn something and that it was worth the investment. That’s pretty much it. Of course, in my own mind I would think “Oh, these are great! I can go back to these notes later and re-ingest the information and it will be available next time I need it!”. But, I can count on zero hands how many times I actually did that. One of the things that Kondo says about these sorts of events is that the benefit and purpose of them is in the moment — not the notes. You should fully invest yourself in the here and now during the event, because the experience of the event is the purpose. Also, the best way to honor the event is not to have copious notes — but to apply what you’ve learned immediately. This portion of the book pretty much spoke to me directly, because I’m 100% guilty of worrying too much about proving the greatness of professional development opportunities rather than experiencing the greatness.
While the last example can apply to pretty much any librarian who attends conferences, this example of where I can apply KonMari is particular to those who have to code at some level. I think I may be more guilty of this than the average person, but the amount of stuff I have commented out (instead of deleting altogether) is atrocious. When I’m developing, I have a (bad) habit of commenting out chunks of code that are no longer needed after being replaced by new code. Why do I do this? For the number one reason on Kondo’s list of excuses people give when discarding things: “I might need it someday!” In the words of Kondo herself, “someday never comes.” There are bits of code that have probably been commented out instead of deleted for a good 3 years at this point — I think it’s time to go ahead and delete them. Of course, there are good uses for comments, but for the sake of your own sanity (and the sanity of the person who will come after you, see your code, and think, “wut?”) use them for their intended purpose, which is to help you (and others) understand your code. Don’t just use them as a safety net, like I have been. I’m even guilty of having older versions of EZproxy stanzas commented out in the config file. Why on Earth would those ever be useful? What makes it even worse is that we have pretty extensive version control, so I could very easily revert to or compare with earlier versions. You can even thank your totally unnecessary comments as you delete them, because they did ultimately serve a purpose — they taught you that you really can trust yourself (and your version control).
Well, that’s it for now — three ways of applying KonMari to Web Services Librarianship. I would love to hear of other ways librarians apply these principles to what they do!
How do you feel about 40,000 square feet full of laser cutters, acetylene torches, screen presses, and sewing machines? Or community-based STEAM programming for kids? Or lightsabers?
If these sound great, you should register for the LITA “Makerspaces: Inspiration and Action” tour at Midwinter! We’ll whisk you off to Somerville for tours, nuts and bolts information on running makerspace programs for kids and adults, Q&A, and hands-on activities at two great makerspaces.
Artisan’s Asylum is one of the country’s premier makerspaces. In addition to the laser cutters, sewing machines, and numerous other tools, they rent workspaces to artists, offer a diverse and extensive set of public classes, and are familiar with the growing importance of makerspaces to librarians.
Parts & Crafts is a neighborhood gem: a makerspace for kids that runs camp, afterschool, weekend, and homeschooling programs. With a knowledgeable staff, a great collection of STEAM supplies, and a philosophy of supporting self-directed creativity and learning, they do work that’s instantly applicable to libraries everywhere. We’ll tour their spaces, learn the nuts and bolts of maker programming for kids and adults, and maybe even build some lightsabers.
Parts & Crafts is also home to the Somerville Tool Library (as seen on BoingBoing). Want to circulate bike tools or belt sanders, hedge trimmers or hand trucks? They’ll be on hand to tell you how they do it.
I’ll be there; I hope you will be, too! Register today: https://www.eventbrite.com/e/makerspaces-inspiration-and-action-registration-19968887480
Connections – Michael Rodriguez
Several LITA bloggers, including myself, attended our first-ever LITA Forum in November 2015. For me, the Forum was a phenomenal experience. I had a great time presenting on OCLC products, open access integration, and technology triage, with positive, insightful audience questions and feedback. The sessions were excellent, the hotel was amazing, the Minneapolis location was perfect, but best of all, LITA was a superb networking conference. With about 300 attendees, it was small enough for us to meet everyone, but large enough to offer diverse perspectives. I got to meet dozens of people, including LITA bloggers Bill, Jacob, and Whitni, whom I knew via LITA or via Twitter but had never met IRL. I got to reenergize old comradeships with Lindsay and Brianna and finally meet the hard-working LITA staff, Mark Beatty and Jenny Levine. I formed an astonishing number of new connections over breakfast, lunch, dinner, and water coolers. Those connections were warm and revitalizing, and they will stay with us for a long time. Thanks, LITA!
To Name – Jacob Shelby
LITA Forum 2015 was my first professional library conference to attend, and I will say that it was an amazing experience. The conference was just the right size! I was fortunate to meet some awesome, like-minded people who inspired me at the conference, and who continue to inspire me in my daily work. There were so many great sessions that it was a real challenge choosing which ones to go to! My particular favorite (if I had to choose only one) was Mark Matienzo’s keynote: To Hell With Good Intentions: Linked Data, Community and the Power to Name. As a metadata and cataloging professional, I thought it was enlightening to think about how we “name” communities and to consider how we can give the power to name and tell stories back to the communities. In all, I made connections with some wonderful professionals and picked up some great ideas to bring back to my library. Thanks for an awesome experience, LITA!
Game On – Lindsay Cronk
A conference is an investment for many of us, and so we always look for ROI. We fret about costs and logistics. We expect to be stimulated by and learn from speakers and presentations. We hope for networking opportunities. At LITA Forum, my expectations and hopes were met and exceeded. Then I got to go to Game Night. What better way to reward a conferenced-out brain than with a few rounds of Love Letter and a full game of Flash Point? I had a terrific time talking shop and then just playing around with fellow librarians and library tech folks. It reminded me that play and discovery are always touted as critical instructional tools. At this point I’m going to level a good-natured accusation: LITA Forum gamified my conference experience, and I loved it. I hope you’ll come out and play next year, LITA Blog readers!
No, get YOUR grub on! – Whitni Watkins
As someone on the planning committee for LITA Forum, I spent a decent amount of time doing my civic duty and making sure things were in place. After a couple of years of conference-heavy attendance, I’ve learned that you cannot do it all and come out on top. I was selective this year: I attended a few sessions that piqued my interest and spent a few hours discussing a project I was working on during the poster session. I’ve learned that conferences are best for networking, for finding people with the same passion to help you hack things in the library (and not-so-library) world. My fondest memory of this year’s LITA Forum was the passionate discussion we had during one of our networking dinners about hierarchy in libraries, how we can break it, and why it is important to do so. Also memorable: meeting up afterwards as LITA bloggers and hanging out with each other IRL. A great group of people behind the screen, and I’m happy to be a part of it.
Did you attend this year’s LITA Forum? What was your experience like?
Each month, the LITA bloggers share selected library tech links, resources, and ideas that resonated with us. Enjoy – and don’t hesitate to tell us what piqued your interest recently in the comments section!
An open letter to PLoS regarding libraries’ role in data curation, compiled by a group of data librarians.
I only have one link to share, but it’s pretty awesome. POP (Prototype on Paper) is a program that lets you create a simulated app without having to know how to code. Simply upload an image file and you can create clickable screens to walk through how the app might work once it’s fully functional. Great for innovation, entrepreneurship, and general pitch sessions!
Hi there, future text miners. Before we head down the coal chute together, I’ll begin by saying this, and I hope it will reassure you: no matter your level of expertise or your experience in writing code or conducting data analysis, you can find an online tool to help you text mine.
The internet is a wild and beautiful place sometimes.
But before we go there, you may be wondering: what’s this Brave New Workplace business all about? Brave New Workplace is my monthly discussion of tech tools and skill sets that can help you adapt to and get to know a new workplace. In our previous two installments I’ve discussed my own techniques and approaches to learning about your coworkers’ needs and common goals. Today I’m going to talk about text mining the results of your survey, but also text mining generally.
Now three months into my new position, I have found that text mining my survey results was only the first step in developing additional awareness of where I could best apply my expertise to library needs and goals. I went so far as to text mine three years of e-resource Help Desk tickets and five years of meeting notes. All of it was fun, helpful, and revealing.
Text mining can assist you in information gathering in a variety of ways, but I tend to think it’s helpful to keep in mind the big three.
1. Seeing the big picture (clustering)
2. Finding answers to very specific questions (question answering)
3. Hypothesis generation (concept linkages)
For the purpose of this post, I will focus on tools for clustering your data set. As with any data project, I encourage you to categorize your inputs and rigorously review and pre-process your data. Exclude documents or texts that do not pertain to the subject of your inquiry. You want your data set to be big and deep, not big and shallow.
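To make that pre-processing step concrete, here is a minimal sketch in Python using only the standard library. The stopword list, topic terms, and sample documents are all invented for illustration; a real project would use a much fuller stopword list and your own corpus.

```python
import re

# A hypothetical stopword list; a real project would use a much fuller one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "for", "about"}

def preprocess(text):
    """Lowercase, pull out word tokens, and drop stopwords."""
    words = re.findall(r"[a-z'-]+", text.lower())
    return [w for w in words if w not in STOPWORDS]

def relevant(tokens, topic_terms):
    """Keep only documents that mention at least one topic term."""
    return any(t in tokens for t in topic_terms)

# Invented sample corpus: one on-topic ticket, one off-topic announcement.
docs = [
    "The help desk ticket is about an e-resource proxy error.",
    "Reminder: the staff picnic is in June.",
]
topic = {"e-resource", "proxy", "ezproxy", "access"}

tokenized = [preprocess(d) for d in docs]
keep = [t for t in tokenized if relevant(t, topic)]
print(len(keep))  # only the on-topic document survives
```

Filtering first is what keeps the data set deep rather than shallow: every document that survives actually speaks to the question you are asking.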
I will divide my tool suggestions into two categories: beginner and intermediate. For my beginners just getting started, you will not need to use any programming language, but for intermediate, you will.
Start yourself off easy and use WordClouds.com. This simple site will make you a pretty word cloud and also provide you with a comprehensive word-frequency list. Those frequency counts are the seeds of concept clusters, and from them you can begin to see trends and needs among your new coworkers and in your workplace goals. This is a pretty cool, and VERY user-friendly, way to get started with text mining.
WordClouds eliminates frequently used words, like articles, and gets you to the meat of your texts. You can copy and paste text or upload text files. You can also scan a site URL for text, which is what I’ve elected to do as an example here, examining my library’s home page. The best output of WordClouds is not the word cloud, though. It’s the easily exportable list of frequently occurring words.
To be honest, I often use this WordClouds function before getting into other data tools. It can be a way to better figure out categories of needs, and it’s a great first data-mining step that requires almost zero effort. With your frequency list in hand you can do some immediate (and perhaps more useful) data visualization in a simple tool of your choice, for instance Excel.
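If you would rather generate that frequency list yourself instead of exporting it from WordClouds, a few lines of Python will do it. The sample text below is a placeholder standing in for a scraped library home page, and the stopword list is likewise invented.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for"}

# Placeholder text standing in for a scraped library home page.
text = "research help research guides library hours library news research databases"

words = [w for w in re.findall(r"[a-z'-]+", text.lower()) if w not in STOPWORDS]
freq = Counter(words)

# The sorted frequency list is the useful output, just as with WordClouds.
for term, count in freq.most_common(5):
    print(term, count)
```

The `most_common` list is exactly the kind of export you can drop straight into Excel for a quick bar chart.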
Depending on your preferred programming language, many options are available to you. While I have traditionally worked in SPSS for data analysis, I have recently been working in R. The good news about R versus SPSS: R is free, and there’s a ton of community collaboration. If you have a question (I often do), it’s easy to find an answer.
Getting started with text mining in R is simple. If you are text mining for the first time, you’ll need to install the necessary packages.
Then save your text files in a folder titled “texts” and load those into R. Once they’re in, you’ll need to pre-process your text to remove common words and punctuation. This guide is excellent at taking you through the steps to process and analyze your data.
Just like our WordClouds, you can use R to discover term frequencies and visualize them. Beyond this, working in R or SPSS or Python can allow you to cluster terms further. You can find relationships between words and examine those relationships within a dendrogram or by k-means. These will allow you to see the relationships between clusters of terms.
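As a toy illustration of finding relationships between words, the sketch below represents each term as a vector of its per-document counts and scores term pairs by cosine similarity. This is the same intuition that underlies the dendrograms and k-means clustering an R workflow would give you, just hand-rolled in Python. The mini-corpus is invented.

```python
import math

# Invented mini-corpus: two tickets about proxy access, two sets of meeting notes.
docs = [
    "proxy error ezproxy access database",
    "database access remote proxy",
    "meeting agenda minutes budget",
    "budget meeting notes agenda",
]
tokenized = [d.split() for d in docs]
vocab = sorted({w for t in tokenized for w in t})

def vector(term):
    """Represent a term as its count in each document."""
    return [t.count(term) for t in tokenized]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Score every pair of terms; terms that always co-occur score near 1.0.
pairs = sorted(
    ((cosine(vector(a), vector(b)), a, b)
     for i, a in enumerate(vocab) for b in vocab[i + 1:]),
    reverse=True,
)
for score, a, b in pairs[:3]:
    print(f"{a} <-> {b}: {score:.2f}")
```

A real analysis would feed similarities like these into hierarchical clustering to draw the dendrogram, but even this crude version surfaces the two topic clusters hiding in the four documents.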
Ultimately, the more you text mine, the more familiar you will become with the tools and analyses best suited to a specific text dataset. Get out there and text mine, kids. It’s a great way to acculturate to a new workplace, or just to learn more about what’s happening in your library.
Now that we’ve text mined the results of our survey, it’s time to move onto building a Customer Relationship Management system (CRM) for keeping our collaborators and projects straight. Come back for Brave New Workplace: Your Homegrown CRM on January 11th.
New vacancy listings are posted weekly on Wednesday at approximately 12 noon Central Time. They appear under New This Week and under the appropriate regional listing. Postings remain on the LITA Job Site for a minimum of four weeks.
New This Week:
Visit the LITA Job Site for more available jobs and for information on submitting a job posting.
This is part three of my Linked Data Series. You can find the previous posts in my author feed. I’ve decided to spice things up a bit and let you hear from some library professionals who are actually implementing and discussing Linked Data in their libraries. These interviews were conducted via email and are transcripts of the actual interviews, with very minor editorial revisions. This first interview is with Allison Jai O’Dell.
Allison Jai O’Dell is Metadata Librarian and Associate University Librarian at the University of Florida, George A. Smathers Libraries. She is on the editorial teams of the RBMS Controlled Vocabularies and the ARLIS/NA Artists’ Books Thesaurus – and is working to publish both as enriched, five-star linked datasets. Learn more about her from her website.
Can you give a brief description of TemaTres?
TemaTres is a free, open-source content management system for knowledge organization systems (KOS) – such as library thesauri, taxonomies, ontologies, glossaries, and controlled vocabulary lists.
Can you list some key features of TemaTres?
TemaTres runs on a Web-server, and requires only PHP, MySQL, HTML, and CSS. TemaTres is quick to install, and easy to customize. (Gosh, I sound like a salesperson! But it really is simple.)
TemaTres is a cloud-based solution for multiple parties to build and access a KOS. Out of the box, it provides a back-end administration and editing interface, as well as a front-end user interface for searching and browsing the KOS. Back-end users can have varying privileges to add, edit, or suggest concepts – which is great for collaborative projects.
TemaTres makes it easy to publish Linked Data. Concepts are assigned URIs, and the data is available in SKOS and JSON-LD formats (in addition to other formats, such as Dublin Core and MADS). Relationships can be established not only within a KOS (where reciprocal relationships are automatically inferred), but also to external Web resources. That is, TemaTres makes it easy to publish five-star Linked Data.
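For readers who have not seen SKOS serialized as JSON-LD, here is roughly what a single concept record looks like. The snippet is hand-built in Python purely for illustration; the URIs, labels, and the external match are all invented, not actual TemaTres output.

```python
import json

# A hand-built illustration of a SKOS concept in JSON-LD. The URIs, labels,
# and the external match are invented; real TemaTres output will differ.
concept = {
    "@context": {"skos": "http://www.w3.org/2004/02/skos/core#"},
    "@id": "http://vocab.example.org/term/42",
    "@type": "skos:Concept",
    "skos:prefLabel": {"@value": "Paratextual features", "@language": "en"},
    "skos:broader": {"@id": "http://vocab.example.org/term/7"},
    "skos:exactMatch": {"@id": "http://external.example.org/authorities/123"},
}
print(json.dumps(concept, indent=2))
```

The `skos:exactMatch` link to an external resource is what turns a local vocabulary entry into five-star Linked Data.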
How have you used TemaTres in your institution? Can you give an example?
I have used TemaTres on several thesaurus projects to streamline collaborative workflows and publish (linked) data. For example, at the University of Florida, George A. Smathers Libraries, we are using TemaTres to develop, publish, access, and apply local controlled vocabularies and ontologies. I am particularly excited to collaborate with Suzan Alteri, curator of the Baldwin Library of Historical Children’s Literature, to develop an ontology of paratextual features. Because our special collections are so distinctive, we often need to extend the concepts available in major library thesauri. With SKOS under the hood, TemaTres makes that possible.
What challenges have you faced in implementing TemaTres?
With TemaTres and SKOS, we now have the ability to create relationships between thesauri. This is a new frontier – external links have not previously been a part of thesaurus production workflows or thesaurus data. So, now we are busy linking legacy data, and revamping our processes and policies to create more interoperability. It is a lot of work, but the end result – the ability to extend major thesauri at the local or granular level – is tremendously powerful.
How do you see TemaTres and similar linked data vocabulary systems helping in the future?
The plethora of controlled vocabulary and ontology editors on the market allows us to publish not only metadata, but also the organizational structures that underlie our metadata. This is powerful stuff for interoperability and knowledge-building. Why wait on the future? Get started now!
What do you think institutions can do locally to prepare for linked data?
There are two answers to this question. One is about preparing our data. Linked data relies on URIs and relationships. The more URIs and relationships we can squeeze into our data, the better it will perform as linked data. Jean Godby and Karen Smith-Yoshimura give some great advice on prepping MARC data for conversion to Linked Data. Relationships – that is, predicates in the RDF triple – can be sourced from relationship designators and field tags in MARC data. So, Jean and Karen advise us to add relationship designators and use granular field tagging.
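To sketch what that data-prep advice looks like in practice: a relationship designator in a MARC record can supply the predicate of an RDF triple. The tiny mapping below follows the pattern of the Library of Congress relators vocabulary, but treat the specific URIs, and the conversion itself, as an illustrative assumption rather than the actual workflow Jean and Karen describe.

```python
# Toy sketch: turning a MARC relationship designator into an RDF triple.
# The relator-to-predicate mapping follows the pattern of the Library of
# Congress relators vocabulary, but the exact URIs here are illustrative.
RELATOR_PREDICATES = {
    "author": "http://id.loc.gov/vocabulary/relators/aut",
    "illustrator": "http://id.loc.gov/vocabulary/relators/ill",
}

def to_triple(work_uri, relator, agent_uri):
    """Build an (s, p, o) triple from a relationship designator."""
    predicate = RELATOR_PREDICATES.get(relator)
    if predicate is None:
        raise ValueError(f"no predicate mapped for relator {relator!r}")
    return (work_uri, predicate, agent_uri)

# Hypothetical work and agent URIs.
triple = to_triple(
    "http://example.org/work/1",
    "illustrator",
    "http://example.org/agent/arnold",
)
print(triple)
```

The point of the sketch is the shape of the data: the more designators and granular tags your MARC records carry, the more predicates like this you can mint without guessing.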
The second answer is about preparing our staff. In the upcoming volume 34 of Advances in Library Administration and Organization (ALAO), I discuss training, recruitment, and workflow design to prepare staff for linked data. Library catalog theory (especially our tradition of authority control), metadata skillsets (to encode, transform, query, clean, publish, expose, and preserve data), and current organizational trends (towards distributed resource description and centralized metadata management) provide a solid basis for working with linked data.
Librarians tend to focus on nitty-gritty details – hey, it’s our job! But, as we prepare for linked data, and especially as we plan for training, let’s try not to lose the forest for the trees. Effective training keeps big picture concepts in sight, and relates each lesson to the overall vision. In the ALAO chapter, I discuss a strategy to teach conceptual change, inspire creativity, and enable problem-solving with linked data technologies. This is done by highlighting frustrations with MARC data and its applications, then presenting both the simplicity and rewards of the linked data concept.
Do you have any advice for those interested in linked data?
Do not simply publish linked data – consume it! Having a user’s perspective will make you a better data publisher. Try this exercise: Take a linked data set, and imagine some questions you might pose of the information. Then, try to construct SPARQL queries to answer your questions. What challenges do you face? And how would you change the dataset to ameliorate those challenges? Use these insights to publish more awesome data!
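If you want to see the shape of that exercise without standing up a SPARQL endpoint, here is a hand-rolled triple-pattern matcher: a toy stand-in for what a SPARQL SELECT does, written over an in-memory list of invented triples rather than a real query engine.

```python
# A toy stand-in for a SPARQL SELECT: match (s, p, o) patterns against an
# in-memory list of triples. The data and "ex:" names are invented.
triples = [
    ("ex:book1", "ex:creator", "ex:alice"),
    ("ex:book2", "ex:creator", "ex:bob"),
    ("ex:book1", "ex:subject", "ex:printing"),
]

def select(pattern):
    """Match a (s, p, o) pattern; strings starting with '?' are variables."""
    results = []
    for triple in triples:
        binding = {}
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value
            elif term != value:
                break
        else:
            results.append(binding)
    return results

# Roughly: SELECT ?who WHERE { ex:book1 ex:creator ?who }
print(select(("ex:book1", "ex:creator", "?who")))
```

Posing even simple questions like this against someone else’s dataset quickly reveals which relationships the publisher did (or did not) bother to encode, which is exactly the consumer’s perspective Allison recommends.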
I want to thank Allison for participating in this wonderful interview. I encourage you to check out TemaTres and to think about how you can begin implementing Linked Data in your libraries. Stay tuned for the next interview!
Over the last few months I have described various components of Agile development. This time around I want to talk about building an Agile culture. Agile is more than just a codified process; it is a development approach, a philosophy, one that stresses flexibility and communication. In order for a development team to implement Agile successfully, the organization must embrace and practice the appropriate culture. In this post I will briefly discuss several tips that will help you build an Agile culture.
The Right People
It all starts here: as with pretty much any undertaking, you need the right people in place, which is not necessarily the same as saying the best people. Agile development necessitates a specific set of skills that are not intrinsically related to coding mastery: flexibility, teamwork, and ability to take responsibility for a project’s ultimate success are all extremely important. Once the team is formed, management should work to bring team members closer together and create the right environment for information sharing and investment.
Encourage Open Communication
Because of Agile’s quick pace and flexibility, and the lack of overarching structures and processes, open communication is crucial. A team must develop communication pathways and support structures so that all team members are aware of where the project stands at any given moment (the daily scrum is a great example of this). More important, however, is to convince the team to open up and conscientiously share individual progress, key roadblocks, and concerns about the path of development. Likewise, management must be proactive about sharing project goals and business objectives with the team. An Agile team is always looking for the most efficient way to deliver results, and the more information they receive about the motivation and goals that lie behind a project, the better. Agile managers must actively encourage a culture that says “we’re all in this together, and together we will find the solution to the problem.” Silos are Agile’s kryptonite.
Empower the Team
Agile only works when everyone on the team feels responsible for the success of the project, and management must do its part by encouraging team members to take ownership of the results of their work, and trusting them to do so. Make sure everyone on the team understands the ultimate organizational need, assign specific roles to each team member, and then allow team members to find their own ways to meet the stated goals. Too often in development there is a basic disconnect between the people who understand the business needs and those who have the technical know-how to make them happen. Everyone on the team needs to understand what makes for a successful project, so that wasted effort is minimized.
Reward the Right Behaviors
Too often in development organizations, management metrics are out of alignment with process goals. Hours worked are a popular metric for evaluating team members, although often proxies like hours spent at the office, or time spent logged into the system, are used instead. With Agile, the focus should be on results. As long as a team meets the stated goals of a project, the less time spent working on the solution, the better. Remember, the key is efficiency: developing software that solves the problem at hand with as few bells and whistles as possible. If a team is consistently beating its time estimates by a significant margin, it should recalibrate its estimation procedures. Spending all night at the office working on a piece of code is not a badge of honor, but a failure of the planning process.
Full adoption of Agile takes time. You cannot expect a team to change its fundamental philosophy overnight. The key is to keep working at it, taking small steps towards the right environment and rewarding progress. Above all, management needs to be transparent about why it considers this change important. A full transition can take years of incremental improvement. Be conscious, too, that the steady state for your team will likely not look exactly like the theoretical ideal. Agile is adaptable, and each organization should create the process that works best for its own needs.
If you want to learn more about building an Agile culture, check out the following resources:
- Bert Girardi’s article on cultural agility.
- Mario Moreira’s Agile adoption roadmap.
- Information Week’s article on creating an Agile culture.
In your experience, how long does it take for a team to fully convert to the Agile way? What is the biggest roadblock to adoption? How is the process initiated and who monitors and controls progress?
“Scrum process” image By Lakeworks (Own work) [GFDL (http://www.gnu.org/copyleft/fdl.html) or CC BY-SA 4.0-3.0-2.5-2.0-1.0 (http://creativecommons.org/licenses/by-sa/4.0-3.0-2.5-2.0-1.0)], via Wikimedia Commons