Cognitive Science / Creativity / Incompleteness / Neuroscience / Philosophy / Sounds

On Algorithmic and Creative Animals

The cognitive abilities of simple animals may be described in terms of algorithms. An algorithm can be thought of as a fixed, finite computer program. By “fixed” I mean that the program does not change during its execution. The nervous system, and thus the behavior, of an insect such as a fly or a cricket may be simulated by such a computer program. The program can then be viewed as a description of the animal’s cognitive abilities.
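
As a toy illustration (my own sketch, with invented stimuli and responses, not a model of any real insect), such a fixed program might be nothing more than a hard-wired stimulus-response table:

    # A minimal sketch of a "fixed" algorithm: a hard-wired
    # stimulus-response table that never changes during execution.
    # (The stimuli and responses are invented for illustration.)
    REFLEXES = {
        "looming_shadow": "jump",
        "light_gradient": "fly_towards_light",
        "mate_song": "approach",
    }

    def react(stimulus):
        # Anything outside the fixed table is simply ignored.
        return REFLEXES.get(stimulus, "do_nothing")

    print(react("looming_shadow"))  # jump
    print(react("novel_pattern"))   # do_nothing

The crucial point is that the table, and the rule for using it, never change while the program runs.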

We think of ourselves as possessing a universal cognitive ability, i.e. the ability to create or discover arbitrary knowledge. If we really have this ability, then there cannot be an algorithm that describes human cognition completely. Why is this so?

Think of the world as a system that produces perceptual signals, for example the stream of visual events we see with our eyes or the stream of auditory experiences we hear (see my article https://asifoscope.org/2013/01/17/the-sounds-of-the-night-street-four-bwiteva-buea/ for an example). If you think of this as data (e.g. a video or audio file), you can see that it is possible to use some code to translate it into a string of numbers or characters. This is what our cameras and recording devices are doing.
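
As a schematic sketch (the sample values and the encoding are invented for illustration; real devices use formats like WAV or MP3), the translation might look like this:

    # Schematic sketch: encoding a stream of signals as a string.
    # The "perceptual signal" here is a list of sample values, as a
    # sound recorder might produce; we map it to a character string.
    samples = [0, 3, 7, 3, 0, -3, -7, -3] * 4  # a toy periodic "sound wave"

    # Shift each sample into the printable ASCII range ('A'..'O')
    # and join the characters into one string.
    encoded = "".join(chr(65 + s + 7) for s in samples)
    print(encoded)  # "HKOKHEAE" repeated four times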

Now think of an algorithm capable of producing or parsing such a stream of signals. By “parsing” I mean that the algorithm can process the information in the stream and recognize each element in it (just as you can “parse” a sentence using the grammar and dictionary of a language, recognizing words and parts of speech).
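
Continuing the toy example from above (the “vocabulary” is invented), a minimal parser of this kind just scans the stream and recognizes each element against a fixed repertoire:

    # A minimal "parser" for a signal stream: it scans the encoded
    # string and recognizes each element against a fixed vocabulary,
    # much like recognizing words with a grammar and dictionary.
    VOCABULARY = {"HKOK": "rising_chirp", "HEAE": "falling_chirp"}

    def parse(stream, width=4):
        for i in range(0, len(stream), width):
            chunk = stream[i:i + width]
            yield VOCABULARY.get(chunk, "UNRECOGNIZED")

    print(list(parse("HKOKHEAEHKOK")))
    # ['rising_chirp', 'falling_chirp', 'rising_chirp']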

So suppose an algorithm (think of the fly’s brain here) can completely analyze the stream of signals (e.g. from its eyes). The algorithm can then be viewed as a complete description of the stream of signals. If we write this algorithm down as a computer program, it has a certain length. If the stream of signals is longer than the program representing the algorithm, then the algorithm can be viewed as a compressed representation of the stream of signals.
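
To make the length comparison concrete, here is a toy example: the following tiny program reproduces a stream of a million characters, so the program can stand in for the stream as a compressed representation:

    # A program that is much shorter than the stream it describes.
    # Printing a million-character signal takes only this one statement,
    # so the program is a compressed representation of the signal.
    print("HKOKHEAE" * 125_000)  # a stream of 1,000,000 characters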

Many people today are familiar with the concept of data compression. A file on your computer can be compressed if it contains regularities, e.g. repetitions. In terms of the mathematical theory called “information theory”, you can say that the file contains some redundancy. Another mathematical measure of the compressibility of a file is its Kolmogorov complexity (see http://en.wikipedia.org/wiki/Kolmogorov_complexity).
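
You can see both effects on your own machine with Python’s standard zlib module (the sizes in the comments are rough expectations, not exact values):

    import os
    import zlib

    # A highly regular (redundant) file compresses very well...
    regular = b"HKOKHEAE" * 100_000          # 800,000 bytes
    print(len(zlib.compress(regular)))       # a tiny fraction of 800,000

    # ...while random data, which has no regularity, barely compresses.
    random_data = os.urandom(800_000)
    print(len(zlib.compress(random_data)))   # close to 800,000: no gain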

Now back to our point: if a stream of signals can be described by an algorithm and is longer than that algorithm, the algorithm can be viewed as a compression of that stream. That means the stream contains a regularity, described by the algorithm, and hence some redundancy. If you think of the animal’s perceptual organ as an information channel, this means the stream does not exhaust the information-carrying capacity of that channel: you could put additional information into the channel that the algorithm cannot process. Likewise, an algorithm that produces a signal will never be able to exhaust the channel’s capacity completely. Only a random signal with no regularity could exhaust the bandwidth of the channel, and an algorithm cannot produce (or parse) such a signal of arbitrary length, because the algorithm can be viewed as a compressed form of the signal, so the signal must have some regularity.
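
The claim that no algorithm can produce arbitrarily long signals without regularity rests on a simple counting argument, which can be checked numerically (a sketch in Python):

    # Counting argument: there are 2**n binary strings of length n,
    # but fewer than 2**n programs (descriptions) shorter than n bits.
    # So at every length n, at least one string has no shorter
    # description at all: it is incompressible ("random").
    n = 20
    strings_of_length_n = 2 ** n
    programs_shorter_than_n = sum(2 ** k for k in range(n))  # = 2**n - 1
    print(strings_of_length_n, programs_shorter_than_n)      # 1048576 1048575

Since any fixed program has some fixed length, incompressible strings longer than that length lie forever outside what it can generate.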

For our animal, this means that if its cognitive apparatus can be completely described by an algorithm, it must have a “blind spot”: it cannot grasp all possible properties of the perceptual signals it may receive. It can perceive only a subset of them.

A predator might evolve to hide inside that blind spot. Although it is in the animal’s field of vision, the animal may be cognitively unable to perceive it.
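
Here is an invented toy version of such a blind spot: a detector whose fixed repertoire covers only patterns of period one or two cannot register a period-three pattern at all, even though the pattern is right there in its input:

    # Toy blind spot: a fixed detector that only recognizes signals
    # repeating with period 1 or 2. A period-3 "predator" pattern is
    # present in the input, yet the detector reports nothing.
    def detect(signal):
        for period in (1, 2):  # the detector's fixed, finite repertoire
            if all(signal[i] == signal[i % period] for i in range(len(signal))):
                return f"pattern with period {period}"
        return "nothing there"

    print(detect("ababababab"))    # pattern with period 2
    print(detect("xyzxyzxyzxyz"))  # nothing there (the blind spot)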

[Image: a praying mantis (Tenodera sinensis) devouring a cricket. Photo by Luc Viatour.]

Animals like the cricket shown here cope with these limitations by producing a large number of offspring. For large animals that take a long time to grow, this is not possible. They have to become smarter. The answer to the problem is the gradual development of creativity. We humans are the extreme case in this direction.

If we indeed have a universal epistemic ability, this would mean that our cognitive abilities cannot be described completely in terms of an algorithm. We would have to be creative instead, in the sense that we are able to move beyond the scope of any algorithmic description or – what is essentially the same – of any formal theory (i.e. any exact finite description) of our cognitive processes.

If the knowledge we have acquired so far does not cover some aspect of our perception, we would be able to develop new knowledge that patches that blind spot. An algorithm, for the reasons explained above, cannot do that. This would also mean that a complete exact (i.e. formal or algorithmic) description of our cognitive abilities does not exist. There are no general laws of thinking and, as a consequence, no general laws of human culture. This, I think, is the reason for the methodological distinction between the natural sciences and the cultural sciences.
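
The reasoning behind this is essentially a diagonal argument, which can be sketched in code (a schematic sketch of my own: the predictors stand in for proposed complete algorithmic descriptions):

    # Diagonalization sketch: given any fixed, finite list of
    # predictors (stand-ins for "complete algorithmic descriptions"),
    # we can construct a stream that escapes every one of them by
    # differing from the i-th predictor at position i.
    predictors = [
        lambda i: "0",                   # predicts all zeros
        lambda i: "01"[i % 2],           # predicts alternation
        lambda i: "0" if i < 5 else "1", # predicts a step
    ]

    def diagonal_stream(predictors):
        # Flip each predictor's output at its own index.
        return "".join("1" if p(i) == "0" else "0"
                       for i, p in enumerate(predictors))

    print(diagonal_stream(predictors))  # differs from every predictor

For any fixed list of predictors, the constructed stream escapes all of them; a creative mind, on this view, can keep playing this game against every algorithmic description proposed of it, while a fixed algorithm cannot.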

Acknowledgements: I owe a lot of the insights presented in this article to the work of my friend Kurt Ammon (see, for example, http://arxiv.org/abs/1005.0608).

The picture of the praying mantis devouring a cricket was taken by Luc Viatour, see http://commons.wikimedia.org/wiki/User:Lviatour and http://en.wikipedia.org/wiki/File:Tenodera_sinensis_2_Luc_Viatour.jpg

14 thoughts on “On Algorithmic and Creative Animals”

  1. A fixed algorithm is a special subclass of algorithms. Of course, it is possible to write algorithms and programs which change themselves during execution. The field of machine learning or (artificial) neural networks gives enough examples of that. And of course you can write an algorithm which controls these online changes. On the other hand, is it proven that we can always detect a blind spot in our perceptions and thinking? When we detect one we can patch it, yes, but can we detect all of them?

    • Dear Thomas,
      I am happy to be able to greet you here on my blog.
      Your comment indeed goes to the heart of the matter and deserves an extensive reply. Since I am quite busy at the moment (probably until the middle of March), I will do so only later (although I hope: sooner), in the form of a separate blog article. I will then inform you via Facebook and/or email. I hope we will have an interesting discussion here.

  2. I would conjecture that the simpler the organism, the more deterministic the behaviour. Therefore, since their actions are limited by their initial conditions, they are unable to “invent” new responses, and their reactions to specific inputs should be predictable. However, this is not always the case.

    The deterministic nature of these “simple” organisms is subject to deterministic chaos, which makes long-term predictability generally impossible. However, the more deterministic the nature of the actor, the higher the degree of predictability.

    The highly deterministic nature of simple organisms is a strong factor in their reproductive and survival strategies. The more complex the organism, the more susceptible to deterministic chaos it becomes and the more varied and unpredictable its behaviour.

    A hundred different individuals might respond in a hundred different ways to the same input.

    The application of chaos theory to the human condition goes a long way to understanding why it’s so hard to understand.

    • 🙂 LOL Hope this does not drive you out of my blog. Look for my article https://asifoscope.org/2012/11/30/nerdy-stuff/. That is where I have been warning my followers about such stuff. I am planning more in this direction in a couple of weeks, when I have more time. An old friend of mine (Thomas Christaller) just posted a comment here. That is probably the start of a serious scientific/philosophical discussion revolving around issues of artificial intelligence and cognitive science. You may ignore these postings (or better, make fun of them).

  3. Perhaps I missed it, but why would an algorithm limit our potential scope of knowledge? It is evident that we are capable of attaining knowledge outside of our experience; therefore our minds are capable of neurological plasticity. But why does this correlate to the absence of an algorithm? Could a flexible algorithm exist that is not susceptible to predetermined confines?

    Also, what do you mean by this: “There are no general laws of thinking… ”

    On a basic level there are laws of thinking. We can only cogitate on concepts that originate in experience. Wouldn’t that qualify as a law of thinking?

    My apologies. Epistemology is a topic near and dear to me. My next post will be on this topic and I may have misunderstood your work because I’m too close to it.

  4. Pingback: Limits of Complete, Exact Descriptions | Creativistic Philosophy

  5. Pingback: Blind Spots and As-If-Constructions | The Bubbling of my Thoughts

  6. Pingback: Thoughts about Intelligence and Creativity | The Bubbling of my Thoughts
