Top Technology Trends

Top Tech Trends – Sunday

Naturally the room was packed. Note to ALA: please schedule this in a ground-floor room next time – the elevators were jammed and the crowds backed up both going up and coming down.

I think this is only the second time I’ve bothered going to Top Tech. Way back before I started going to ALA, I’d read about Top Tech, and since I was already in a university-wide library automation office at the time, the predictions were such a yawn. Oh, the sheer jaded ennui of it all.

Now that I’m in a public library after a few years in the private sector, the concerns of academic libraries are further off my horizon, so I go in expecting more of it to be new to me. And indeed, some of it is. I’ve also learned it’s worth going just because it’s funny. Many good wisecracks were made. My advice: drop the jaded ennui and give Top Tech a try next time you’re at Conference.

Most of the rest below is somewhat verbatim: "I" means the speaker, not me. Since I didn’t keep up with everything, though, some of what follows may make much less sense out of context. TTT is nothing if not fast-paced 🙂

Panelists were:

  • Tom Wilson
  • Andrew Pace
  • Roy Tennant
  • Karen Schneider
  • Eric Lease Morgan
  • Marshall Breeding
  • Milton Wolf
  • Joan Frye Williams
  • Clifford Lynch
  • and Sarah Houghton as a "virtual panelist" one might say.

Sarah couldn’t be here, so the moderator read her section, which was the same as you can read on this very blog!

Not specific technologies but related larger issues:

– A growing, massive, and, as I recently heard it called, "wicked" problem: the sheer amount of data we deal with. We’ve talked here before about the price of storage declining, but while disk-space prices are declining, prices of storage systems are not. A certain portion of that data is in some cases more valuable in the long term than it is currently – we start to look at things from an archiving perspective. I think we have a major challenge ahead of us: digital archiving is a statistical proposition. We don’t know how long certain materials are going to last; what we’re doing is producing a statistical likelihood of data corruption or equipment failure.

– Disruptive technology, or disruptive change. See books by Christensen: The Innovator’s Dilemma, The Innovator’s Solution; I think a third one is coming out now. What are the implications of these kinds of technological changes for our organizational structures and how we do business? There is a pattern of the most innovative things being done by smaller organizations; it is very difficult for larger organizations to make significant changes when new opportunities arise. Do we hesitate to affect our installed base, reluctant to change systems we have trained users on? Or do we take risks?
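The "statistical proposition" framing of digital archiving above can be made concrete with a toy calculation. Every number here is invented purely for illustration; the point is just that survival odds compound down over time and improve with independent copies:

```python
# Invented figure for illustration only: suppose each stored copy of a
# file has a 3% chance per year of loss to corruption or equipment failure.
ANNUAL_LOSS_PROB = 0.03

def survival_probability(years, copies):
    """Probability that at least one copy is still intact after `years`,
    assuming copies fail independently (a big assumption in practice)."""
    p_copy_survives = (1 - ANNUAL_LOSS_PROB) ** years
    p_all_lost = (1 - p_copy_survives) ** copies
    return 1 - p_all_lost

for copies in (1, 2, 3):
    print(copies, "copies:", round(survival_probability(50, copies), 3))
```

With one copy the 50-year odds are grim; each added independent copy helps, but the probability never reaches certainty – which is exactly what "a statistical likelihood of data corruption or equipment failure" means.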

Refrain from my Boston TTT – if you remember: The OPAC sucks. This was misquoted later as "our OPAC sucks." [laughter]

Dis-integration of library systems, but consolidation of ILS vendors. Polar opposites and very little between the two.

Four legs of the stool:

– Online catalog: it still sucks. We’re working with Endeca to build a new guided-navigation interface. The prediction is that a vast portion of our collection will be discoverable that wasn’t before.

– Getting things out of our system: the ILS is inadequate for serials and e-resources, so we’re building a new system that can handle them.

– Digital repositories: for electronic archiving and paper archiving, we are building another suite of tools.

I don’t like "silos" because that implies you could build bridges between them, but what I’m seeing is more like "A Horizon of Basements" – digging tunnels from one basement to another, danger of running into a random basement

– Better metasearch, fourth leg of the stool.

Last little trend, which I will put in the wake of the stool (?!): challenges in managing the context of the user (demographic, disciplinary, age, any number of things) – systems or even standards for managing that context.

A small plug for NISO metasearching workshop. Pick up flyers!

1. Mix of technologies
What’s being called "Web 2.0" [hm, does anybody besides O’Reilly call it that?]: web services, good way to dig tunnels between the basements?
AJAX – highly interactive, take data streams from multiple sources – Google maps +

2. Silo systems suck. I can’t say "basement systems," it just doesn’t have that ring. [laughter] We are increasingly demanding APIs from our vendor and more importantly, doing useful work with those, and that’s a good thing.

3. Groups of individuals are getting their hands into our business. NSF funding caused computer science researchers to decide libraries are interesting. A large, well-funded company – Google – is getting into our business. We have to keep from getting distracted by "oh my gosh, Google is paying attention to me!" It’s also a bad thing for potential funding of library digitization initiatives. And the scope of what Google is doing is going to bring the copyright hammer down on us all, threatening fair use.

[My favorite and yet least favorite quote:] Google is more like Microsoft than Google is like us.

Standards change. They changed during the LII migration!

That ILS sucking: all citation databases suck. The OPAC is one of the last citation databases. Metadata without an object is like a canary without a song. Work on re-integrating content into the OPAC.

Information is continuing to become more interactive, pre- and post-filtering, more of a conversation. See the Long Tail issues: incorporating user rankings, establishing blogs. Conversations are taking place right now.

I don’t like "basements" or "silos", what we’re experiencing right now is citadel networking – very high quality resources beyond a citadel gate. We have a community of 25,000 users at this conference – yet authenticating here doesn’t authenticate there.

Most computers sold now are laptops, and most of those have wireless.

Increasingly bewildering DRM environment, net result for user is that I can’t listen to items my library buys. It’s like checking out a book I can’t read.

Sarah’s right about the trend to lightweight IM clients, wonder what would have happened if we had started virtual reference with them?

1. Live CDs, massive storage devices – only going to get huger. Live CDs, popular in Unix-land, started as business cards: bootable, so you carry an OS around on the CD. Now suppose you have your OS on your key fob, along with a whole lot of data. Then it doesn’t matter what kind of computer you have: I can carry around everything I need, plus gobs and gobs of disk space, on a little key fob. I think this fits into the locked-down computers Sarah referred to.

2. Web services: an XML stream from one computer to another carries just the data; combine that with other sorts of information and rebuild it for your particular use. Good with dis-integration: an institutional portal grabs the status of your library account based on just a small piece of your information and displays it to you [i.e., the portal doesn’t need to know about your library account; it just remotely grabs a response from the library system and feeds it into your portal page display]

3. Publishing is not going to go away, open access is going to increase but won’t overtake publishing, same thing with open source software versus proprietary. Behooves us to hedge our bets and be able to handle both.

4. Preservation of digital materials is a pressing problem. We are creating a digital dark age right now. We need to ask ourselves: what kind of information do we need to be preserving for the long haul? The information our own institutions are creating. When stuff is born digital you have to collect and archive it.

5. You can decreasingly expect people to come to your website, just as you can decreasingly expect people to come to your physical location. Web services play a really good role in that: make yourself available in their environment (e.g., XUL for Mozilla).

6. Customization is not going away. Personal information collection not necessary for personalization. Collection, dissemination, evaluation of information is necessary. For customization, for managing context of the user. Post-filtering.
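Point 2’s web-services idea – a portal grabbing just the data it needs from the library system – can be sketched in a few lines. The XML element names here are invented for illustration; a real portal would consume whatever schema the library system’s API actually exposes:

```python
import xml.etree.ElementTree as ET

# A hypothetical response the portal might receive from the library
# system's account-status service (schema invented for this sketch).
sample_response = """
<account patron="12345">
  <checkouts>3</checkouts>
  <holds>1</holds>
  <fines currency="USD">0.50</fines>
</account>
"""

def summarize_account(xml_text):
    """Reduce the raw XML stream to one line the portal page can display."""
    root = ET.fromstring(xml_text)
    return "{} items out, {} holds, ${} in fines".format(
        root.findtext("checkouts"),
        root.findtext("holds"),
        root.findtext("fines"),
    )

print(summarize_account(sample_response))
```

The portal never touches the library database; it only consumes the data stream and formats it for display – which is the dis-integration point.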

– A suddenly changed business landscape. Two companies suddenly merged, Google and Microsoft [much laughter] — I mean, Sirsi and Dynix. Where are decisions happening? The marketplace? Libraries? No, in the board room. A lot of business decisions have an impact on our software. Keeping aware of the business environment is advisable; make sure our voices are heard. The library automation landscape is very fragmented. We are waiting for a business environment that can finally deliver us software that we don’t hate. Now SirsiDynix has 185 people doing development, so there is potential for them to make some long-overdue progress. There is now no excuse that “we don’t have the resources.”

– Enterprise systems point of view. Departmental computing – I hesitate to use the word “silo” – is going away. Important that library automation systems fit into that enterprise system. They don’t do that well, maybe web services will help. Integrate also into the global enterprise network, with Google, expose data to services of others.

– Libraries have held back on wireless hot spots because of security issues; a lot of that is resolved now.

– Future trends: limitations on bandwidth are relaxing; faster standards are emerging but won’t be released real soon (802.11n?).

– People driving and at the same time looking at GPS and taking text messages …

– We have a large number of boomers who are going to retire. They’re interested in clean water and safe communities. They want to be involved. Many are highly trained and skilled professionals. More older adults go to the library than to the senior center.

Older adults are learning about technology and Internet for the first time. The fastest growing segment of US population is those 85 and older. What are we doing for this group? Shopping for adaptive technology? Vendors don’t know what adaptive technology is about. The retiring boomers are late bloomers with technology. They are community minded. I hope we especially in LITA learn to make use of these people. They have a lot to teach us.

Mass collaboration – file sharing, craigslist, Dean campaign, wikis: people remotely interact to create content and do transactions. These services attract large numbers of people, have very basic ground rules, support interaction directly between participants, are highly satisfying to participants.

Network becoming the locus of activity. The part about sharing nicely with others makes sense to librarians. However, the network is also becoming the new home of trust. Stranger peers are trusted more than big institutions – government, journalists. [Unstated: libraries.] Each news story has the NY Times version and the blogger version, and the blogger version is becoming the trusted source. Bloggers are doing the digging, outing people and fact checking. People aren’t trusting the command-and-control information source. Even when "civilians" acknowledge institutions are doing their best, they don’t most trust the information from the institution. The perception is, if it holds up to the scrutiny of others it is more trustworthy, the QA has been done on it.

Lay ground rules; let users contribute to the environment next to the information provided from us; the trust trickles up. Library’s job includes creating mass collaboration environments and getting out of the way. Let people "enhance, enrich" to give people what they will trust. Even if someone likes the quality of a factoid, they are not moved to action unless it’s personal.

Mass collaboration is zillions of tiny tweaks. If we can think in these terms, that could roll into an even more central, trusted position for us. Question is not whether this will help your institution but whether failing to do so will hurt you.

My final bow on TTT, need new voices, thanks to all the people who have turned out and made it the big and random thing that it is. [I think "random" was what she said …unless she said something else.]

I came into this with a good list, and I’m pleasantly surprised to see the panel has not hit on any of my four points.
I’ll try to leave us collectively some time for conversation.
– Emerging issue of data curation: “The Data Deluge” article, late May report from National Science Foundation on long-term stewardship of data. A need to train “data scientists” is being discussed.
– Increasing transition from making images to manipulating datasets in e.g. photography. Ramifications are going to go more broadly than you might think.
– Text corpora: very, very large publicly manipulable sets of text — computable, available for text mining, indexing, etc.; all the public domain and open access text in one giant set. An article available online, “The Rise of the Plagiosphere,” argues that an increasingly complete set of all text means, for one thing, that plagiarism will become more of a concern for some – such that every child may be encouraged to check their thoughts against the writings of everyone else in the world.
– This month’s ACM cover piece: super-high-resolution imagery of Michelangelo’s David. We should have a serious conversation about what we mean to do with old cultural-historical artifacts of various kinds that have controlled access – 3D reproduction capabilities provided by libraries to the public in the future?

Selected portions of the Q&A


Cliff: I’m not sure this is really new. Communities have been coming up with bottom-up unstructured descriptors forever. You can facilitate the process if you put it where you can do data manipulation on it. Karen: It’s endearing that the civilian community has discovered metadata and they think they invented it. [pause] I’m just waiting for next year when they all discover authority control. [laughter] Roy: Folksonomies suck. On Flickr one of the highest-use terms is “me,” so …
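Roy’s “me” complaint and Karen’s authority-control crack both fit in a tiny sketch. The tag sample and the authority mapping here are invented for illustration:

```python
from collections import Counter

# Invented tag sample mirroring the Flickr problem: personal tags like
# "me" dominate and mean nothing to anyone else searching.
tags = ["me", "me", "nola", "me", "new orleans", "neworleans",
        "nola", "library"]
print(Counter(tags).most_common(1))  # "me" tops the raw list

# A crude taste of what authority control adds: collapse variant forms
# into one preferred heading (mapping invented for this sketch).
authority = {
    "nola": "New Orleans (La.)",
    "new orleans": "New Orleans (La.)",
    "neworleans": "New Orleans (La.)",
}
controlled = [authority.get(t, t) for t in tags]
print(Counter(controlled).most_common(1))  # the collapsed heading wins
```

Raw tag counts reward whoever types the same string most often; collapsing variants under a controlled heading is what makes the counts say something about the collection rather than the taggers.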

Wikis have some use, but leaving info to the blogosphere is not going to be a good source!

Joan: [Didn’t get her words exactly, which were much more polite, but to paraphrase, no duh!] If there is a way to set ground rules that encourage fact checking etc. then that could enhance side by side with information we provide. Blogosphere is not going to replace our information. But we can’t expect people to operate in our context, they are going to operate in theirs. Tom: One of the things we can do really well is teach people to evaluate information. Even if you take blogs out of the picture. Blogs are like folksonomies: not necessarily new. We’ve always had this issue. If you only subscribe to one newspaper you’re not only going to get just one perspective on an issue, but the selection of what constitutes an issue is defined by that paper.

Integrating GPS data with OPAC data, etc.

Cliff: georeferencing – surprising we don’t have more of that built in. But it’s also messier than it might sound: the same place names for multiple locations, place names changing over time, etc. Still, it does seem like a basic service, one that seems ripe for much broader use. There’s a break between the satellite view and the close-up view of a geographic area, and at either level there is a lack of data on every item represented in the view. Andrew: geospatial analysis found that certain shelves in the library were quite popular. Some libraries are doing You Are Here services. But this is another argument for making the OPAC better – it will enable people to find things online rather than having to browse the shelves.
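Cliff’s “messier than it might sound” point is easy to demonstrate with a toy gazetteer lookup. All names and coordinates below are invented or approximate, just to show the ambiguity:

```python
# A toy gazetteer showing why georeferencing is messy: one place name,
# several candidate locations (coordinates approximate, data invented).
gazetteer = {
    "springfield": [
        ("Springfield, IL", 39.80, -89.65),
        ("Springfield, MA", 42.10, -72.59),
        ("Springfield, MO", 37.21, -93.29),
    ],
    "new orleans": [("New Orleans, LA", 29.95, -90.07)],
}

def georeference(place_name):
    """Return every candidate for a name; picking the right one is the
    hard part (context, dates, and changing names all matter)."""
    return gazetteer.get(place_name.strip().lower(), [])

print(len(georeference("Springfield")))  # more than one answer
```

Coordinates are cheap; disambiguation – deciding which Springfield a record means, or what a place was called when the record was made – is where the real work is.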

My favorite visual: From my balcony perch, viewing the rather dimly lit stage, a line of blue lights along half the panel: they were wearing their LITA light-up necklace trinkets.