One thing we know about the web is that it’s very easy to index, store, and retrieve text, and very difficult to build information storage and retrieval [ISAR] systems for other sorts of media. We’ve seen an inkling of what’s to come with Google Video, which uses the nifty hack of indexing the closed captioning to give entry points into visual content. Of course, as we know, closed captioning isn’t perfect.
Moving on… there have been a few library bloggers podcasting lately, including Open Stacks’ Greg Schwartz [welcome back!]. Matt Haughey has been talking about podcasting on his blog for a while now. It’s an interesting idea, and a great way to push out regular audio content if you’re already creating it. Personally, I’m very rarely plugged in to my ‘pod for that long at a stretch, and I’m just not sure the usual format of “hey, get this MP3 automagically in your feed” works for the way I currently consume media, but I’m willing to be persuaded otherwise. I’d love to see some good indexing/search features so you didn’t have just a title and author to go by: where’s all the spicy metadata?
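To make that complaint concrete, here’s a minimal sketch [in Python, using the feedparser library] of roughly what a podcast feed actually hands you to search on: a title, maybe an author, a date, and an opaque enclosure URL. The feed URL below is made up, and which fields show up varies wildly from feed to feed, so treat this as an illustration rather than anything definitive.

```python
import feedparser  # pip install feedparser

# Hypothetical feed URL; substitute any podcast feed you actually subscribe to.
FEED_URL = "http://example.com/podcast.xml"

feed = feedparser.parse(FEED_URL)

for entry in feed.entries:
    # These fields are usually all you get to search on.
    title = entry.get("title", "")
    author = entry.get("author", "")        # often missing entirely
    summary = entry.get("summary", "")      # sometimes show notes, sometimes nothing
    published = entry.get("published", "")

    # The audio itself is an opaque enclosure: a URL, a MIME type, a byte count.
    # No transcript, no chapter markers, no subject headings. No spicy metadata.
    for enc in entry.get("enclosures", []):
        print(title, "|", author, "|", published, "|", enc.get("href"), enc.get("type"))
```

A transcript or even rough chapter-level metadata attached to the enclosure would give you something worth indexing, the way closed captioning does for Google Video.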
On the other hand, visually disabled patrons who have been after us to make more information accessible via the voice mail system [new titles, possibly even book chapters] might find this incredibly useful. This whole post was really just a way to get around to mentioning PennSound, a directory of poetry recordings. While the site has some interface design issues, it has a great vision and, so far, a pretty good execution. I feel like I could spend entire days digging around in poetry audio archives [a favorite]; perhaps I should try podcasting that?