My Day at DPLA – Part 2

Digital Public Library of America sticker

The second half of the day was dedicated to the beta sprint presentations, which laid out several component pieces that could be reviewed for incorporation into the DPLA project, whatever it may become.  The Beta Sprint is a technique borrowed from software development, in which teams build working models to demonstrate that the software actually operates.  The sprinters work relentlessly to push out a piece of working code as fast as possible.  The DPLA Secretariat received 39 submissions and selected six for long presentations and three more for a shorter “lightning round.”  All of this was unpaid volunteer work by major institutions and college students.  Yeah.

The first presentation was from the Library of Congress, the National Archives, and the Smithsonian.  The Smithsonian created an intermediary metadata layer that sits over the digital collections of all three institutions and maps common fields into a unified search function.  MAJOR.

The second presentation was from the Digital Library Federation, IMLS, and the DCC.  They created an actual live model that integrated data sets from a few hundred small cultural heritage institutions around the entire country.  This led to a question about curation of the content: who gets included and who gets excluded?  No clear answer to that at this stage.

The third presentation was for a product called ExtraMuros.  OMG, it is mind-poppingly cool.  It lets you search not only across multiple document types (full-text books, plus photo and video collections at partner institutions) but also across the web via sites like Flickr and YouTube, AND it lets you play with the content: create new collections, create new documents, and enhance existing documents by overlaying and integrating multimedia resources into a text.  I was blown away.

The next presentation was a consolidated government documents interface from the University of Minnesota, HathiTrust, and the CIC.  It was primarily a mapping and data-scrubbing layer that would create greater access to historical government documents, which are notoriously difficult to navigate.  Interestingly, the GPO was not involved, nor were they interested.  As a former GPO employee I was a little surprised, because they have an army of catalogers pumping out records every day.  Who knows.

Then the folks from Athens, the one in Greece, presented a product called MINT.  MINT is a metadata mapping tool that lets you create the connections between the fields in your data sets and everyone else’s.  They also discussed a minimum viable record standard that data must meet to be discoverable through their system.  It looked easy.

Finally there were two coordinated products called LibraryCloud and ShelfLife.  LibraryCloud is exactly what it sounds like: a data cloud server for library content that backs up local data and serves it up for you.  ShelfLife is much like an OPAC interface that lets you interact with all the different types of virtual objects in the DPLA catalog through visual shelf arrangements, and it incorporates a lot of social media elements such as public reviews, comments, tagging, and ranking.  I wasn’t totally sold on the look of it, but that’s obviously something that can be changed.
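Since so many of these projects boil down to metadata mapping, here is a minimal sketch, in Python, of the kind of crosswalk that the Smithsonian layer and MINT both perform: per-source field mappings feeding one unified search, with a MINT-style minimum viable record check on top.  Every institution tag, field name, and record below is invented for illustration; the real systems obviously map far richer schemas.

    # A minimal sketch of a metadata crosswalk. All tags, field names, and
    # records are invented for illustration.

    # Each institution exposes records in its own schema.
    RECORDS = [
        {"source": "loc", "TitleStatement": "Declaration of Independence", "DateIssued": "1776"},
        {"source": "nara", "itemTitle": "Emancipation Proclamation", "itemDate": "1863"},
        {"source": "si", "object_name": "Apollo 11 Command Module", "created": "1969"},
    ]

    # Per-source crosswalks into one shared schema: title and date.
    CROSSWALKS = {
        "loc": {"TitleStatement": "title", "DateIssued": "date"},
        "nara": {"itemTitle": "title", "itemDate": "date"},
        "si": {"object_name": "title", "created": "date"},
    }

    def unify(record):
        """Map one source record's fields into the shared schema."""
        walk = CROSSWALKS[record["source"]]
        return {walk[field]: value for field, value in record.items() if field in walk}

    def is_viable(record):
        """A 'minimum viable record' check: the shared fields must be present."""
        return all(record.get(field) for field in ("title", "date"))

    def search(query):
        """One search box over all three collections at once."""
        return [r for r in map(unify, RECORDS)
                if is_viable(r) and query.lower() in r["title"].lower()]

    print(search("proclamation"))
    # [{'title': 'Emancipation Proclamation', 'date': '1863'}]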

Then there was the lightning round.  First up was Bookworm, which combined an n-gram viewer with the library’s metadata to create a more powerful search and visualization system.  There was a hilarious moment when the undergraduate math student explaining the product said “Social Sciences is ‘H’ for some reason” and the entire room burst into laughter.  Silly undergraduates, not understanding the Library of Congress Classification System.  It was good, and made great use of a variety of data visualization techniques.  Next was a method for creating profiles for the cultural institutions and the content that they share with the DPLA.  Meh.  The final one was a project called WikiCite, which would create a citation index of digital information and cache the links that are referenced and cited as sources in wiki articles.
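For a rough sense of what makes Bookworm powerful, here is a toy version of the idea under entirely made-up data: restrict a corpus by a metadata facet (here, LC class) and count a term’s occurrences over time.  The real system does this against the full text of an entire library; none of the data below is real.

    from collections import Counter

    # Made-up corpus: each book carries an LC class, a year, and its text.
    BOOKS = [
        {"lc_class": "H", "year": 1900, "text": "labor and capital and labor"},
        {"lc_class": "H", "year": 1950, "text": "capital markets and labor unions"},
        {"lc_class": "Q", "year": 1950, "text": "atoms molecules and energy"},
    ]

    def term_counts(term, lc_class):
        """Occurrences of `term` per year, restricted to one LC class."""
        counts = Counter()
        for book in BOOKS:
            if book["lc_class"] == lc_class:
                counts[book["year"]] += book["text"].split().count(term)
        return dict(sorted(counts.items()))

    print(term_counts("labor", "H"))  # {1900: 2, 1950: 1}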

After this we broke for afternoon tea and had a chance to go explore some of the poster sessions.  I mostly just hung around looking to see if I knew anyone else, but I didn’t see anyone I hadn’t already run into.  It was a conference of maybe 300 people, so you saw a lot of the same faces over and over again.

The final panel of the day was the report-back from the six work streams, with mission statements showing where each is headed.  I’m just going to list the work streams and their mission statements, so I can move on to future thoughts.

  • Audience: Create a digital public Library of America that is a trusted first platform for knowledge online and is universally accessible, participatory, and compelling for all.
  • Content and Scope: Facilitate the discovery and exposure of digital heritage content for permanent, open, public access for the enhancement of knowledge and community.
  • Financial: Explore and develop mechanisms to generate ongoing support for the DPLA. Generating recurring demand is implicit in this statement.
  • Governance: Develop a system of decision making and management for the DPLA.
  • Legal: Illuminate legal issues and, where feasible, provide information and options for addressing legal issues for America’s libraries as they go digital.
  • Information Technology: Establish the technical and normative principles of the technological framework that will best support the DPLA’s aims.

As you can see, it’s all a little vague, and that’s good at this stage, because they’re still defining the future of the project.  But they’ve also set a very aggressive schedule, with an 18-month deadline for a deliverable product.

Yeah.

So, that kind of wrapped it up there at the end and I was left with a ton of questions, all of which will have to wait for answers.

What kind of product is this going to be?

Who’s going to be using it and what are their needs?

How can the public library use this resource and promote its use with their user base?

How can libraries and cultural institutions become contributors to this project as well as users?

Will the general public be able to create content, share it with the DPLA and be able to expect longevity and access?

Will the DPLA advocate for copyright reform to increase digital access, and actually be able to compete with the stakeholders?

Can the federal government or local governments or public/private partnerships create an internet corps of engineers to enhance access?

Will this product start to change average people’s minds about copyrights and accessibility of content?

Would the DPLA start to challenge the publishing industry to end EULAs and DRM on eBooks to increase digital adoption?

Are we just going to stop with the United States, or will we push this toward a global digital culture revolution?  With the U.S. and Europe on board this digital train, South America, Africa, and Asia ought to be close behind.

I’m going to end with the vision of the starship library that I wrote about last month.  This is how we get there: by partnering to make the entire cultural heritage of the world universally accessible, downloadable, remixable, and free.  With this level of access and a collective urge to make things available, we will get to that point.  And when we finally reach another world, we can start building a new collection, with the unified wisdom of our entire planet behind us.

I am so ready to take that big step.


I’m going to edit this to add one very important thing.  This project is going to revolutionize the web for one very simple reason: metadata.  We have been living in a world where blunt-force, raw searching yields millions of useless hits.  The value of a service like the DPLA is that it is in fact curated, by librarians, archivists, and museum curators, as well as by members of the public who volunteer their efforts to make it relevant.  This is the hybrid of the old-school library catalog and the new-school wiki page, where we have expert metadata people working around the clock to make things accessible, and average people dedicating their personal knowledge and time to make that metadata even more relevant.  This is going to fundamentally change how we use the web, because I will guarantee you that website owners are going to want to get in on this somehow.  And that means they are going to have to generate metadata for their work to make it accessible and relevant to the collection, and then the users of those sites are going to curate the hell out of them.  Is that Web 3.0?  2.5?  I don’t know, but it’s a radical shift in an excitingly old/new way.
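To make that concrete, here is a hedged sketch of what “generating metadata for their work” might look like for a website owner: emitting Dublin Core style <meta> tags that an aggregator could harvest.  The DC.* element names follow the standard Dublin Core convention for HTML, but the values are placeholders I made up, not a real record.

    # A sketch of Dublin Core style metadata for a web page; values are
    # placeholders, not a real record.
    DC_RECORD = {
        "DC.title": "Example Local History Collection",
        "DC.creator": "Example Public Library",
        "DC.date": "2011-10-01",
        "DC.type": "Text",
        "DC.rights": "Public Domain",
    }

    def to_meta_tags(record):
        """Render the record as HTML <meta> elements for a page's <head>."""
        return "\n".join(
            f'<meta name="{name}" content="{value}">' for name, value in record.items()
        )

    print(to_meta_tags(DC_RECORD))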


