This is a totally free-form brain dump, but bear with me, because it goes to interesting places.
In prepping for my speaking engagement in March, I’ve been reacquainting myself with a lot of the singularity literature out there and diving into a bunch of brain science literature. While reading Michael Chorost’s book World Wide Mind, I ran across something that triggered a cascade of connections. In one of the later chapters he discusses experiments that mapped neuron firing in mice under different conditions, showing how different clusters fired during different events. That much was not surprising; what was surprising was that under three different emergency conditions (a high-wind simulation, a dropping-elevator simulation, and an earthquake simulation) the mouse’s neurons would actually categorize the different events in clusters.
Tsien had deconstructed the memories of three different events into distinct but overlapping sets of neurons. Furthermore, those sets were hierarchically organized. The startle clique fired in response to every disturbing event, so it was at the “bottom” of the hierarchy. It was always invoked. Other cliques responded only to some events, so they were “higher up.” Not higher in any literal spatial sense, but in a logical sense; they fired more selectively. By looking at which cliques fired, Tsien could infer which of the three experiences the mouse was having. He had decoded the neural structure of three different but physically interrelated memories.
—World Wide Mind, by Michael Chorost, p. 146
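The decoding Chorost describes can be caricatured in a few lines of code. This is only a toy sketch of the idea, not Tsien’s actual analysis; the clique and event names are my own illustrative assumptions:

```python
# Toy model of hierarchically organized neural cliques.
# "startle" sits at the bottom of the hierarchy and fires for every
# disturbing event; the other cliques fire more selectively.
# (Names are illustrative, not from the actual experiments.)
EVENT_SIGNATURES = {
    "high_wind":  {"startle", "disturbance", "wind_like"},
    "elevator":   {"startle", "disturbance", "falling"},
    "earthquake": {"startle", "disturbance", "shaking"},
}

def decode_event(fired_cliques):
    """Infer which experience produced this pattern of clique firing."""
    for event, signature in EVENT_SIGNATURES.items():
        if signature == fired_cliques:
            return event
    return None  # pattern doesn't match any known memory

print(decode_event({"startle", "disturbance", "falling"}))  # prints: elevator
```

The point of the hierarchy is visible in the data itself: every signature shares the same bottom-level clique, and only the selective, "higher" cliques distinguish one memory from another.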
This passage describes how the brain develops understanding of all sorts, not just of disaster situations but of absolutely everything we ever see. Every transaction we have in life is mapped in these overlapping neural cliques, which build upon previous experiences to contextualize new ones. Everything in our minds is built upon a superstructure of previous experiences, a taxonomy of our lives.
This made me start thinking about words, and how words are the tiniest branches of these mental taxonomies. Each word is mapped into these interlocking hierarchies within the brain, so when we hear words, we recall the sensory experiences associated with them. When we think of apples, we think of all of the qualities of apples; but underneath, the apple concept is built on the fruit concept, which is built on the food concept and the plant concept. Then there are colors and shapes, smells and tastes. Varieties of complex tastes are summoned based on previous experience. All of these elements are part of the memory.
Think about the first time you saw a strange animal, something that challenged your images of animals. A platypus, for instance. You look at the platypus and instantly try to categorize this thing. You know it’s an animal, but is it a bird, an otter, a beaver? What in the world is this? As an adult you can look it up on Wikipedia and discover that it’s a monotreme, an egg-laying mammal, venomous to boot, with all sorts of bizarre little facts besides. But your first instinct when looking at the thing is to try and figure out where it belongs. That’s because your brain needs to know where it belongs in order to retain the memory. I would wager that if we could scan the brain of someone looking at a platypus, we would discover that our brains cobble it together from the pieces of other, more commonly known animals. Then again, perhaps the same is true of all animals.
This also got me thinking about the importance of these mental taxonomies to philosophy. Plato’s entire thesis of “forms” is derived from this very notion. Ur-things are the basis for our understanding of everything around us.
As I was walking down the street, reeling from this information, I started making connections to all sorts of other things. The Rolling Stones’ “Jumpin’ Jack Flash” was rolling through my brain (as that was what I was listening to before I got to work), and I got to thinking about the problem of misremembered lyrics (the Stones being among the worst culprits for lyrical confusion). The first time we hear a song whose words are unclear, we try to make sense of it. Our brains try to wrap themselves around what’s being said, using what we know of our language. If we don’t understand the words, we try to find words that make sense, or that don’t make sense but sound right. And those start to map through our neurons, such that every time we hear the song, no matter what we later find out the lyrics to be, it’s going to be hard to remember the correct words. Our brains have built up a superstructure based on this incredible hierarchy of experiences; they conjure up all kinds of images and sensory data that become nearly impossible to push out of our minds.
The organizational systems that we’ve developed for education and for the classification and arrangement of books are also built around this. Francis Bacon divided the sum of human knowledge into History, Poesy and Philosophy, grounding everything narrower in those broad underlying concepts. The Library of Congress Classification and the Dewey Decimal System are clustered in much the same fashion, though their methods for classification and expansion differ dramatically. Even bookstores use BISAC categories to cluster things into recognizable patterns, rough as they are.
And yet in the digital world we have decontextualized everything. A Google search result is a title, a blurb and a site, produced by a ranked and weighted algorithm that sorts things into what may be most relevant based on the crumbs you’ve thrown to it. We have the body of all the world’s knowledge, but it’s like one of those monastic libraries where everything was shelved numerically in the order it was received and hardly anything was retrievable. Now Google has attempted to enhance relevance based on your human relationships. Theoretically these human connections reveal something about our information preferences, and perhaps they do. But they don’t necessarily lead us to places that humans have contextually arranged themselves. It’s telling, though, that services like Yahoo! and About.com, which built themselves on human-oriented organizational structures, have largely crumbled under Google’s weight. Why, if our brains are structured for contextualized learning, do we turn to the search engine with the least context? Perhaps because we’re turning to Google for quick, correct answers and not for learning. Drilling down through larger superstructures of context is tedious and exhausting. The closest we get to contextualized search is through topical databases, whose use, if our library numbers are any indication, is increasing.
In thinking about the future of libraries and where things are moving: digital book collections are definitely expanding, and we’re using our library catalog systems to help provide access to those items. Some catalog systems provide a virtual-shelf visualization tool to give you a feel for what is in the collection. I’m imagining some lovely virtual reality program, à la the Star Trek holodeck or the shuffling shelves of The Matrix, that allows you to browse the vastness of human literature, organized and searchable via human organizational standards and the centuries of metadata built up around them, rearrangeable into different classification systems at a single command. Recontextualizing works for a new environment by integrating look, feel and the power of the human mind.