The cognitive abilities of simple animals may be described in terms of algorithms. An algorithm can be thought of as a fixed, finite computer program. By “fixed” I mean that the program does not change during its execution. So, for example, the nervous system, and thus the behavior, of an insect such as a fly or a cricket might be simulated by such a computer program. The program could then be viewed as a description of the animal’s cognitive abilities.
We think of ourselves as possessing a universal cognitive ability, i.e. the ability to create or discover arbitrary knowledge. If we really have this ability, then there cannot be an algorithm that describes human cognition completely. Why is this so?
Think of the world as a system that produces perceptual signals, for example the stream of visual events we see with our eyes or the stream of auditory experiences we hear (see my article https://asifoscope.org/2013/01/17/the-sounds-of-the-night-street-four-bwiteva-buea/ for an example). If you think of this as data (e.g. a video or audio file), you can see that it is possible to use some code to translate it into a string of numbers or characters. This is what our cameras and recording devices do.
Now think of an algorithm capable of producing or parsing such a stream of signals. By “parsing” I mean that the algorithm can process the information in the stream and recognize each element in it (just as you can “parse” a sentence using the grammar and dictionary of a language, recognizing its words and parts of speech).
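To make this idea of parsing concrete, here is a minimal sketch of my own (not from the article): a fixed “grammar” of just two element types, and an algorithm that recognizes each element in a stream. The element names and the stream are hypothetical illustrations.

```python
import re

# A fixed grammar: a stream may consist only of runs of "a" (say, tones)
# and runs of "b" (say, pauses). This pattern is the whole "dictionary"
# of the algorithm.
TOKEN = re.compile(r"a+|b+")

def parse(stream: str) -> list[str]:
    """Split the stream into its recognized elements, left to right."""
    tokens = []
    pos = 0
    while pos < len(stream):
        match = TOKEN.match(stream, pos)
        if match is None:
            # An element outside the fixed grammar cannot be recognized.
            raise ValueError(f"unrecognized signal at position {pos}")
        tokens.append(match.group())
        pos = match.end()
    return tokens

print(parse("aaabbaab"))  # ['aaa', 'bb', 'aa', 'b']
```

Note that a stream containing any symbol outside the fixed grammar simply cannot be parsed by this algorithm, which foreshadows the “blind spot” discussed below.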
So suppose there is an algorithm (think of the fly’s brain here) that can completely analyze the stream of signals (e.g. from its eyes). This algorithm can be viewed as a complete description of the stream of signals. If we write it down as a computer program, that program has a certain length. If the stream of signals is longer than the program representing the algorithm, then the algorithm can be viewed as a compressed representation of the stream of signals.
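The point that a program can be shorter than the stream it describes can be shown in a few lines (a hypothetical sketch of my own, not the article’s example):

```python
# This short program produces a signal stream far longer than the
# program itself, so the program can be viewed as a compressed
# representation of the stream.
def produce_stream(n: int) -> str:
    """Generate a perfectly regular stream: n repetitions of 'ab'."""
    return "ab" * n

stream = produce_stream(100000)
program_text = 'def produce_stream(n): return "ab" * n'
print(len(stream))        # 200000 characters of signal
print(len(program_text))  # a few dozen characters of program
```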
Many people today are familiar with the concept of data compression. A file on your computer can be compressed if it contains regularities, e.g. repetitions. In terms of the mathematical theory called “information theory” you can say that the file contains some redundancy. Another mathematical measure of the compressibility of a file is its Kolmogorov complexity (see http://en.wikipedia.org/wiki/Kolmogorov_complexity).
Now back to our point: if a stream of signals can be described by an algorithm and is longer than that algorithm, the algorithm can be viewed as a compression of the stream. That means the stream contains a regularity, described by the algorithm, and hence some redundancy. If you think of the animal’s perceptive organ as an information channel, this means the stream does not exhaust the information-carrying capacity of that channel. You could put more information into the channel than the algorithm can process. Likewise, an algorithm that produces a signal will never exhaust the channel’s information-carrying capacity completely. A random signal with no regularity could exhaust the bandwidth of the channel, but an algorithm cannot produce (or parse) such a signal of arbitrary length: the algorithm can be viewed as a compressed form of the signal, so the signal must contain some regularity.
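This redundancy argument can be checked empirically with an ordinary compressor (a quick sketch of my own, using Python’s standard zlib module): a regular stream shrinks to a tiny fraction of its size, while a random stream barely compresses at all.

```python
import os
import zlib

# A highly regular signal: it contains massive redundancy, so an
# ordinary compressor squeezes it down dramatically.
regular = b"ab" * 100000

# A random signal: with overwhelming probability it has no regularity
# a compressor could exploit.
random_signal = os.urandom(200000)

print(len(zlib.compress(regular)))        # far below 200000 bytes
print(len(zlib.compress(random_signal)))  # close to 200000 bytes
```

Only streams of the first kind can be produced or parsed by an algorithm shorter than the stream; the second kind fills the channel completely.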
For our animal, this means that if its cognitive apparatus can be completely described by an algorithm, it must have a “blind spot”, i.e. it would not be able to grasp all possible properties of the perceptive signals it may receive. It can only perceive a subset of them.
A predator might evolve to hide inside that blind spot: although it is within the animal’s field of vision, the animal might be cognitively unable to perceive it.
Animals like the cricket shown here cope with these limitations by producing a large number of offspring. For large animals that take a long time to grow, this is not possible. They have to become smarter. The answer to the problem is to develop increasing creativity. We humans are the extreme case in this direction.
If we indeed have a universal epistemic ability, this would mean that our cognitive abilities could not be described completely in terms of an algorithm. We would have to be creative instead, in the sense that we are able to move out of the scope of any algorithmic description or – what is essentially the same – of any formal theory (i.e. any exact finite description) describing our cognitive processes.
If the knowledge we have acquired so far does not cover some aspect of our perception, we would be able to develop new knowledge that enables us to patch that blind spot. An algorithm, for the reasons explained above, cannot do that. This would also mean that a complete exact (i.e. formal or algorithmic) description of our cognitive abilities does not exist. There are no general laws of thinking and – as a consequence – no general laws of human culture. This, I think, is the reason for the methodological distinction of natural sciences and cultural sciences.
Acknowledgements: I owe a lot of the insights presented in this article to the work of my friend Kurt Ammon (see, for example, http://arxiv.org/abs/1005.0608).
The picture of the praying mantis devouring a cricket was made by Luc Viatour, see http://commons.wikimedia.org/wiki/User:Lviatour and http://en.wikipedia.org/wiki/File:Tenodera_sinensis_2_Luc_Viatour.jpg