Cognition, information, knowledge and the limits of serial computing
BBC - Radio 4 - Today Programme Listen Again 11 Sept 2007 08:50

It's the 50th anniversary of the British Computer Society. But what can we expect over the next half century? Will our levels of dependence on the internet and computers change?

One of the guests on the programme, Oliver Sparrow, made the following prediction:
"We will know whether there's a transcendent bit to the human mind by 2050, we will know exactly what cognition is and how we think and probably be able to emulate it."Well, here's an alternative prediction. No, we won't! This prediction assumes a lot about both the nature of cognition and mind, e.g. that they are objective phenomena as described by the language of our daily speech and the language of experts, and about our ability to come to grips with it, e.g. that we can easily capture it through the same tools that we are used to capturing information about the accessible and the not so accessible world. But if we look at the last 50 years of computers and mind research, we should radically limit our expectations of the next 50 years. While computer power (or rather its transistor prerequisites as described by Moore's law) as increased geometrically, our ability to emulate human cognition has increased almost not at all. Let's look at expert systems. It has been over 40 years since ELIZA and we would be very hard put to find a system that can do much more than that, today. The same goes for machine translation. Speech recognition has not progressed almost at all in the last fifteen years. Sure, you can now dictate and have Word open at the same time but that's just tweaking. Accuracy has increased by a guestimate of 20%, usability 10 times while computer power in the same time increased 256 times. The mind boggles why it took computers so long to even draw with humans at chess. Why couldn't a regular calculator do it decades ago? Computer speed simply isn't the answer. My speech recognition teacher said years ago that we need a change of paradigm rather than an increase in computer speed and he was right.
The complexity of human cognition is such that we don’t even know how complex it is, and the factors of its social embeddedness are another unknown. My prediction is that we will be as far from being able to model cognition in 2050 as we are today, unless we find a way of modelling it as it is rather than modelling it on the back of our incredibly reductionist descriptions of it. Some of the work done on bottom-up robotics seems to point in the right direction. Google’s stochastic processing of prestige is also pretty good. We can pretty much keep up with the increase in the amount of information, but I doubt that we will be able to achieve a corresponding increase in knowledge as defined by the speaker. He goes on to draw the following analogy:
"If we look at the amount of knowledge that the human race produced and think of it as a nice simple analogy: that you have a sheet of cloth about a thousand stitches by a thousand stitches. Let's call it a megabyte, which is about a telephone directory's worth of information. Everything humanity did in 1920 was a bedsheet to cover the island of Mauritius; by 1940 it had got to Madagascar; by the 1950s it was the Congo; the whole of Africa by the 60s; all of the continents of the planet by the mid-1980s. By 1990 we had a duvet cover of information produced every year to cover the whole planet; by 2020 we'll have about 1800 planets' worth of information."

The problem is that information and knowledge are very different things. Information is a property of matter (inkblots on paper, the magnetic charge of hard-drive platters, etc.), while knowledge is a property of individual human beings embedded in the situational constraints of their social existence. Or possibly it is a property of the social group, one that can be shared and enacted by its individual human members. The maintenance of information requires relatively little effort (keep the books dusted and the CD-ROMs safe); the maintenance of knowledge requires tremendous cognitive effort (remembering, organizing, communicating) and social effort (putting things into context, speaking to the right people, maintaining prestige, ...). Just as the speed of computers is not commensurate with their ability to emulate cognition (let alone social cognition), the amount of information available (encoded in some storage medium) is not commensurate with the "amount" of knowledge, not least because it is not even certain that knowledge can be measured, or that it can 'increase' rather than merely being shifted around and refocused.
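The analogy can be turned into rough arithmetic, which also shows how much hangs on an unstated detail. A minimal Python sketch, assuming (my assumption, not the speaker's) a stitch of about one millimetre, so that one thousand-by-thousand-stitch sheet, one megabyte, covers one square metre:

```python
# Rough check of the cloth analogy. The 1 mm stitch (hence 1 m^2 per
# megabyte-sheet) is my assumption; Earth's surface area is a standard
# figure; the 1800-planet projection for 2020 comes from the quote above.

EARTH_SURFACE_M2 = 5.1e14   # Earth's surface area in square metres
SHEET_M2 = 1.0              # one megabyte-sheet under the 1 mm stitch assumption
BYTES_PER_MB = 1e6          # decimal megabyte

def planet_covers_to_bytes(planets):
    """Bytes implied by covering `planets` Earth-surfaces with megabyte sheets."""
    return planets * (EARTH_SURFACE_M2 / SHEET_M2) * BYTES_PER_MB

one_planet = planet_covers_to_bytes(1)      # the ~1990 'duvet cover'
year_2020 = planet_covers_to_bytes(1800)    # the projected 1800 planets

print(f"1 planet    ~ {one_planet:.1e} bytes")
print(f"1800 planets ~ {year_2020:.1e} bytes")
```

Whatever totals this yields, they measure stitches of stored information, not knowledge, which is exactly the distinction the analogy glosses over.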
Let’s illustrate this with the debate itself. The one thing we do already know about the mind and cognition is that the mind is not at all like a computer: it does not have memory that works as a storage or repository of information, and it does not apply serial algorithms to the information it works on. It is not independent of the body in which it exists, and it is most certainly not something that can easily be transferred from one context to another. The problem with this ‘information’ is that it is the knowledge of only a limited group of people in the AI, NLP and general cognitive science communities, and even the communities and individuals that do ‘possess’ this knowledge are not sure how to act on it. Kurt Vonnegut expressed it best: “Hi ho!”