Machine-Based Active Listening in Libraries: Technology Trends that Aren’t that Out-There Anymore!

Posted by Stephen Abram on 3/5/2019

More than the walls have ears! Have you ever enjoyed a conversation with a friend, only to then receive ads on Google or Facebook related exactly to what you were talking about, even though you’d never searched on that topic?

I have. My wife has. My friends are noticing it too.

This is a direct result of the always-on universe of Siri, Alexa, Cortana, “Hey Google,” and all the rest.

Companies like A.C. Nielsen, the TV ratings and retail measurement folks, have started using these assistants as a replacement for diaries. This allows seamless data collection, along with the promise of more honest answers (were you really watching 60 Minutes, or Survivor?).

Machine-based Active Listening Defined

Per Wikipedia, “Computer audition (CA) or machine listening is a general field of study of algorithms and systems for audio understanding by machine. Since the notion of what it means for a machine to "hear" is very broad and somewhat vague, computer audition attempts to bring together several disciplines that originally dealt with specific problems or had a concrete application in mind. The engineer Paris Smaragdis, interviewed in Technology Review, talks about these systems— ‘…software that uses sound to locate people moving through rooms, monitor machinery for impending breakdowns, or activate traffic cameras to record accidents.’

Inspired by models of human audition, CA deals with questions of representation, transduction, grouping, use of musical knowledge and general sound semantics for the purpose of performing intelligent operations on audio and music signals by the computer. Technically this requires a combination of methods from the fields of signal processing, auditory modelling, music perception and cognition, pattern recognition, and machine learning, as well as more traditional methods of artificial intelligence for musical knowledge representation.”
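To make that definition a little more concrete, here is a minimal sketch, in Python, of the pipeline the passage above describes: a signal-processing step (MFCC features, here via the librosa package) feeding a pattern-recognition step (a scikit-learn classifier). The file names and labels are invented for illustration; this is not any vendor’s actual listening system.

```python
# Hypothetical sketch of a basic "machine listening" pipeline:
# audio -> spectral features -> a trained classifier.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path: str) -> np.ndarray:
    """Load an audio clip and summarize it as a fixed-length MFCC feature vector."""
    signal, sample_rate = librosa.load(path, sr=16000, mono=True)
    mfcc = librosa.feature.mfcc(y=signal, sr=sample_rate, n_mfcc=13)
    # Average each coefficient over time so every clip yields a same-size vector.
    return mfcc.mean(axis=1)

# Placeholder training clips labeled "speech" vs. "ambient" (e.g., HVAC noise).
train_paths = ["speech_01.wav", "speech_02.wav", "ambient_01.wav", "ambient_02.wav"]
train_labels = ["speech", "speech", "ambient", "ambient"]

X = np.vstack([clip_features(p) for p in train_paths])
model = LogisticRegression(max_iter=1000).fit(X, train_labels)

# Classify a new clip: is someone talking near the microphone?
print(model.predict([clip_features("new_clip.wav")]))
```

Real products layer wake-word detection, speech-to-text, and topic inference on top of this, but the shape of the problem is the same: continuous audio in, structured inferences out.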

Active, always-on listening allows for the collection of metadata related to context, location, other people, and more. Yes, it’s scary and unregulated!
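To illustrate what that metadata might look like, here is a purely hypothetical record of the kind an always-on listener could attach to a single detected utterance. Every field name here is my own invention for illustration, not any vendor’s actual format.

```python
# Hypothetical example only: metadata an always-on listener could record
# alongside one detected utterance. Field names are invented for illustration.
utterance_metadata = {
    "timestamp": "2019-03-05T14:22:08-05:00",
    "device_id": "kitchen-speaker-01",           # which microphone heard it
    "location": {"lat": 43.65, "lon": -79.38},   # coarse device location
    "speaker_count": 2,                          # distinct voices detected
    "detected_topics": ["travel", "portugal"],   # inferred conversation topics
    "ambient_context": "television_on",          # background audio classification
}
```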

Applying Machine-Based Active Listening in Libraries

However, are there any ideas out there for library applications? How can we make use of machine-based active listening in libraries? Please share your ideas in the comments.

If you’re in a secure facility, is your phone listening? What about conversations outside the office with colleagues?

Can special librarians learn from these entities and data collection methods, and provide a better research experience? How might we play with this?

–Stephen


Stephen Abram is a popular Lucidea Webinars presenter and consultant. He is the past president of SLA, and the Canadian and Ontario Library Associations. He is the CEO of Lighthouse Consulting and the executive director of the Federation of Ontario Public Libraries. He also blogs personally at Stephen’s Lighthouse. Check out his new book from Lucidea Press, Succeeding in the World of Special Librarianship! See more of Stephen’s posts on tech trends in libraries.


Topics: Special Libraries, Artificial Intelligence, Strategy