Author: Peter Bradley
Features of a Symbolic Model

Symbolic models have several defining features, which are best introduced by example.
Let us take an example. The symbol '1' is a character. In normal text, it represents the number one. When you read this text, you probably read it as a one. But this is a matter of interpretation, not a property of the symbol itself. For example, we could use the same symbol to represent the state of being 'on'. In fact, we do use it in this way on certain appliances — switches often have '0' and '1' marked on them to indicate that the appliance is 'off' and 'on' respectively. The interpretation of the symbol (its semantics) is independent of the shape of the symbol (its syntax).
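The point can be made concrete with a small sketch (the variable names here are my own, purely illustrative): one and the same symbol can be fed to two different interpretation mappings, and each assigns it a different meaning.

```python
# Illustrative sketch: one symbol, two interpretations.
# The symbol's shape (syntax) is fixed; its meaning (semantics) is assigned
# by whichever mapping we choose to read it through.
symbol = "1"

as_numeral = {"0": 0, "1": 1}           # read as a digit
as_switch = {"0": "off", "1": "on"}     # read as an appliance switch

print(as_numeral[symbol])  # 1
print(as_switch[symbol])   # on
```

Nothing about the character "1" itself determines which mapping applies; that choice lies entirely outside the symbol.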
Here's another example: the string '11' represents, in normal English, the number eleven. But in binary, it represents the number three. In hexadecimal (a base-16 number system often used in Internet protocols), it represents the number seventeen. None of these differences in interpretation affects the fact that it looks like two vertical strokes placed side by side. The rules that govern the processes of a symbolic model operate on how the symbol 'looks', not on what it is interpreted to mean.
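We can check these three readings of '11' directly, using Python's standard base-aware integer parsing:

```python
# The same string read under three different interpretations (number bases).
s = "11"

print(int(s, 10))  # 11  (decimal: eleven)
print(int(s, 2))   # 3   (binary: three)
print(int(s, 16))  # 17  (hexadecimal: seventeen)
```

The string `s` never changes; only the rule for interpreting it does.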
Symbolic models are extremely complicated, so it will serve us well to start from the very beginning: a Turing machine. In the next few modules, we will explore symbolic models in their most primitive form. The simple machines we will consider model the processes of basic logic and arithmetic. The gap between modeling the processes of arithmetic and grammar and modeling the processes of thought is a large one, but it is one that many have thought bridgeable. In this module, we consider some of these thinkers, and how, if at all, we could test their bold hypothesis.
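To give a first taste of how primitive these machines are, here is a minimal sketch of a Turing machine (the helper name and rule format are my own, not from any particular formalism in this text). It computes the successor function on unary numerals: given a tape of strokes, it scans right and appends one more. Notice that the rules mention only the shapes on the tape, never what they stand for — exactly the syntax/semantics separation described above.

```python
# A minimal Turing machine sketch: rules are keyed on (state, symbol) and
# yield (new state, symbol to write, direction to move). The machine
# manipulates marks by shape alone; "unary numeral" is our interpretation.
def run_turing_machine(tape, rules, state="scan", blank="_"):
    """Run a one-tape Turing machine until it enters the 'halt' state."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != "halt":
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    # Read off the non-blank portion of the tape.
    return "".join(cells[i] for i in sorted(cells) if cells[i] != blank)

# Successor in unary: move right past the strokes, then write one more.
rules = {
    ("scan", "1"): ("scan", "1", "R"),  # keep moving right over strokes
    ("scan", "_"): ("halt", "1", "R"),  # first blank: write a stroke, halt
}

print(run_turing_machine("111", rules))  # 1111  ('three' becomes 'four')
```

Even this trivial machine has the essential ingredients — a tape of symbols, a finite set of states, and purely shape-driven rules — that the coming modules build on.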
Portions of these modules are adapted from "Cognitive Modeling, Symbolic", by Whit Schonbein and William Bechtel; and "Symbolic vs Connectionist" by Chris Eliasmith and William Bechtel, both entries in the Encyclopedia of Cognitive Science (2000), Macmillan Reference Ltd.