We live in an ecosystem of hyper-information. In an extraordinarily short period of time we have moved from scarcity to ubiquity. The result has been unprecedented access to information, access that has become not just overwhelming but debilitating.
Too. Much. Information.
Or, to be more accurate: Too. Little. Synthesis.
The challenge of literacy isn’t our ability to crank out more stuff; it’s our inability to process it, to interpret it, and to make any use of it. In the early days of the Internet (circa 1998), which now seem quaint in terms of information availability, the metaphor was “drinking from a fire hose.”
Lyman and Varian, in How Much Information? 2003, determined that 5 exabytes of new information were stored in 2002. They answer the question “How big is five exabytes?” with:
“If digitized with full formatting, the seventeen million books in the Library of Congress contain about 136 terabytes of information; five exabytes of information is equivalent in size to the information contained in 37,000 new libraries the size of the Library of Congress book collections.”
However, they also calculated the flow of information (e.g., telephone, radio, TV, Internet). The new information that flowed in 2002 amounted to 18 exabytes. According to Pingdom, the Internet monitoring company, there were 500 million websites in 2011; of these, ~350 million were added within the preceding 12 months. Eric Schmidt, former CEO of Google, has said that we create as much information in two days as we did from the dawn of civilization up until 2003. International Data Corporation estimates that the world’s total information will increase to 2.7 zettabytes in 2012. A zettabyte is a billion terabytes.
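The arithmetic behind these comparisons is easy to verify. A minimal sketch, using decimal (SI) units and the figures quoted above (Lyman and Varian’s 5 exabytes, the ~136-terabyte Library of Congress estimate, and IDC’s 2.7 zettabytes):

```python
# Sanity-checking the storage figures quoted above, in decimal (SI) units.
TB = 10**12   # terabyte
EB = 10**18   # exabyte
ZB = 10**21   # zettabyte

library_of_congress = 136 * TB   # ~17 million books, fully formatted
new_info_2002 = 5 * EB           # Lyman and Varian's 2002 estimate

libraries = new_info_2002 / library_of_congress
print(f"5 EB = {libraries:,.0f} Library of Congress collections")
# comes out to roughly 37,000, matching the quotation

print(f"1 ZB = {ZB // TB:,} TB")           # a billion terabytes, as stated
print(f"2.7 ZB = {2.7 * ZB / EB:,.0f} EB")  # IDC's 2012 estimate in exabytes
```

The numbers check out: five exabytes really is about 37,000 Library of Congress book collections, and IDC’s projection is more than five hundred times Lyman and Varian’s 2002 total.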
While estimates of how much information is in the world are entertaining if questionable, they do illustrate the magnitude of what we are up against.
We have invented a technology that exceeds our capacity to use it. What to do?
Our short-term working memory is overwhelmed by an onslaught of information, experiences, and distractions of all sorts. Is our “stone-age brain,” as Torkel Klingberg calls it (The Overflowing Brain, 2009), able to manage? Are we Cro-Magnon humans dazed and confused in a world our brains are poorly adapted to deal with? Klingberg, a cognitive neuroscientist from the Karolinska Institute, says no, but we do have to learn how to process information in a more thoughtful, aware, and attentive manner.
Danny Hillis (of Thinking Machines) wants to reinvent the architecture of the Internet to help solve this problem; he calls it the “knowledge web”:
“if humans could contribute their knowledge to a database that could be read by computers, then the computers could present that knowledge to humans in the time, place and format that would be most useful to them. The missing link to make the idea work was a universal database containing all human knowledge, represented in a form that could be accessed, filtered and interpreted by computers. One might reasonably ask: Why isn’t that database the Wikipedia or even the World Wide Web? The answer is that these depositories of knowledge are designed to be read directly by humans, not interpreted by computers. They confound the presentation of information with the information itself.”
Currently the Internet is designed for people; web pages are viewed, read, and listened to by people. That’s the problem. People. We are very poor information processors; much of the information on the web is missed, poorly understood, or ignored by us.
The emerging “Internet of Things” is based on ubiquitous sensors, embedded systems, and the networking of their continuous data flows. The growing data storm is already well beyond our capacity to comprehend. The solution for Hillis is to design the next-generation Internet for computers: computers as creators and consumers of information. It is an Internet designed to be used by machines. It wouldn’t make much sense to us, and that’s fine; it isn’t supposed to. This Internet-for-computers will encode, transmit, correlate, and synthesize information in ways that are machine friendly and machine accessible.
We should remember that information overload isn’t a new phenomenon. We have been here before. And the results are instructive.
Too Much To Know & Too Big To Know
Ann Blair, in Too Much To Know: Managing Scholarly Information Before the Modern Era (2010), recounts the information explosion of the early modern period (1500-1800). Gutenberg had launched a revolution and books were, in their terms, flooding the market. Too many. Too much to know.
The result was considerable anxiety about the abundance of books and their negative impact on civilization. Blair quotes Adrien Baillet from his 1685 Jugemens des sçavans:
“We have reason to fear that the multitude of books which grows every day in a prodigious fashion will make the following centuries fall into a state as barbarous as that of the centuries that followed the fall of the Roman Empire.”
Blair identifies the many solutions developed during this period (encyclopedias, commonplace books, etc.) to respond to the overload challenges and concludes by saying:
“these works devised innovative methods of managing textual information in an era of exploding publications to which our own methods of reading and processing information are indebted.”
As information increased, new methods and tools emerged. Literacy is about change. It, and its containers, have evolved to respond to the challenges of the time. In a provocative view of epistemology in the network era, David Weinberger, in Too Big To Know (2011), extrapolates this even further:
“The real limitation isn’t the capacity of our individual brains but that of the media we have used to get past our brains’ limitations.”
“In this world of abundance, knowledge is not a library but a playlist tuned to our present interests. It is not eternally truthful content but subject matter good enough for our current task. It is not a realm but a path that gets us where we’re going.”
By encouraging us to move from “long-form thinking” (book shaped) to “web-form thought” (network shaped), Weinberger identifies the book as “such a bad fit for the structure of knowledge it’s intended to represent and enable.” He argues that
“Long-form thinking looks the way it does because books shaped it that way. And because books have been knowledge’s medium, we have thought that that’s how knowledge should be shaped … But now that our medium can handle far more ideas and information, and now that it is a connective medium (ideas to ideas, people to ideas, people to people), our strategy is changing. And that is changing the very shape of knowledge.”
As with the information innovations arising in the early modern era, trying to understand the complexity of a networked epistemology will require new tools and techniques. A new literacy.
We have created a technology (the alphabet) that we are now unable to manage.