The first Iris robot was built for use in the classroom, to help students reflect upon contemporary theories of the mind -- in particular, the theory called functionalism. According to functionalism, the human mind is essentially a piece of computer software running on the brain (which is the hardware, or "wetware" as it is sometimes called). However, Iris.1 has no sensory apparatus. So while Iris.1 can talk a good game, telling us all kinds of things about the block and the cup on the table, it cannot "see" the block or the cup. Many believe that without a "causal connection" of some kind between the words that the robot uses and the objects to which the words refer, it is unreasonable to believe that the robot's words, 'block' and 'cup,' genuinely refer to anything. To overcome this objection, it was necessary to give Iris sensory access to the world. We have given Iris "sight" with a video camera and neural net software. At this stage, Iris.2's visual capability is restricted to recognizing X's and O's, which, together with other improvements, enables Iris to play a game of tic tac toe (and even "learn" to play better over time).
Click here to download a QuickTime Video (4.8 meg .mov file) of Iris.2 playing Tic Tac Toe.
To download a QuickTime Player to play .mov files, click here. The player is available for both Windows and Mac.
Below is a list of software components of Iris.2. For a bit more information, visit the Software page.
Iris.2, like its predecessor, relies on the ProtoThinker software as its primary artificial intelligence program. ProtoThinker contributes a natural language processor, deductive and some inductive inference capabilities, moral commitments, and many other "cognitive" abilities.
With the first Iris robot, all of the ancillary software was "tacked onto" the ProtoThinker software. When we wanted to add a text-to-speech program and a robotic arm, we simply asked John Barker (the author of ProtoThinker) to make additions to the ProtoThinker software to accommodate them. But this is not a sound long-term strategy. As our robots become increasingly sophisticated, there will be more and more AI programs that must be seamlessly integrated. To accomplish that, there must be a central AI program (a meta-level mind) that coordinates all components of the robot -- hardware as well as software. The central control program for Iris.2 was written by Matt Carlson while an applied computer science major at Illinois State University.
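The idea of a central control program coordinating independent components can be sketched in a few lines. This is an illustrative sketch only, not Iris.2's actual code: the `Controller` class and component names here are hypothetical, and the real program must also manage hardware devices.

```python
# Hypothetical sketch of a meta-level coordinator: components (speech,
# vision, arm, etc.) register with a central controller, which routes
# messages between them instead of each being "tacked onto" another.

class Controller:
    def __init__(self):
        self.components = {}

    def register(self, name, handler):
        """Register a component (e.g. vision, speech, arm) by name."""
        self.components[name] = handler

    def dispatch(self, name, message):
        """Route a message to the named component and return its reply."""
        if name not in self.components:
            raise KeyError("no such component: " + name)
        return self.components[name](message)


controller = Controller()
controller.register("speech", lambda text: "speaking: " + text)
controller.register("vision", lambda frame: "X")  # stub classifier

print(controller.dispatch("speech", "I see an X"))
```

The payoff of this design is that adding a new component (say, a second arm) means registering one more handler, rather than modifying every other program.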
We are grateful to Lucent Technologies (aka "Bell Labs") for allowing us to use the AT&T FlexTalk(TM) Speech Synthesis, Release 1.2 text-to-speech program. The TTS program is not integrated with ProtoThinker by way of the TTS monitoring program that Iris.1 used. Instead, the central control program can be configured to recognize and integrate a TTS program.
The tic tac toe program is a "learning" program. Our undergraduate researchers are presently at work developing a variety of different learning strategies for future programs.
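One simple learning strategy of the kind described above can be sketched as follows. This is not the actual Iris.2 tic tac toe program; it is a minimal, hypothetical example of reinforcement-style learning: keep a score for each (board, move) pair, and nudge the scores up after wins and down after losses.

```python
# Illustrative sketch of a tic tac toe "learning" strategy: the program
# remembers how well each move worked out and prefers moves that led to wins.

import random
from collections import defaultdict

scores = defaultdict(float)  # (board_string, move_index) -> learned preference


def choose_move(board, legal_moves, explore=0.1):
    """Pick the best-scoring legal move, exploring a random one occasionally."""
    if random.random() < explore:
        return random.choice(legal_moves)
    return max(legal_moves, key=lambda m: scores[(board, m)])


def learn_from_game(history, won, step=1.0):
    """After a game, reward (or penalize) every move that was played."""
    delta = step if won else -step
    for board, move in history:
        scores[(board, move)] += delta


# Example: after losing a game in which we played the center and then a corner,
# both moves are penalized, so they will be chosen less often next time.
learn_from_game([("---------", 4), ("----X-O--", 0)], won=False)
```

Over many games, the score table comes to encode which moves tend to win from which positions -- a crude but genuine form of learning from experience.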
Neural net software, trained to recognize certain patterns, takes an image from the Connectix camera and determines whether a particular square is occupied and, if so, whether it holds an X or an O.
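The principle behind this kind of pattern recognition can be shown with a toy example. The sketch below trains a single perceptron to tell a 3x3 bitmap of an X from an O; Iris.2's actual neural net software works on real camera images, so everything here -- the bitmaps, the training routine, the function names -- is an illustrative assumption, not the real system.

```python
# Toy perceptron distinguishing X from O on hand-made 3x3 bitmaps,
# to illustrate the idea of neural-net pattern recognition.

X_PATTERN = [1, 0, 1, 0, 1, 0, 1, 0, 1]  # diagonals lit
O_PATTERN = [1, 1, 1, 1, 0, 1, 1, 1, 1]  # ring lit, center empty


def train_perceptron(samples, epochs=20, rate=0.5):
    """Learn weights separating label-1 patterns from label-0 patterns."""
    weights = [0.0] * 9
    bias = 0.0
    for _ in range(epochs):
        for pixels, label in samples:  # label: 1 for X, 0 for O
            activation = sum(w * p for w, p in zip(weights, pixels)) + bias
            predicted = 1 if activation > 0 else 0
            error = label - predicted
            # Nudge weights toward inputs that were misclassified.
            weights = [w + rate * error * p for w, p in zip(weights, pixels)]
            bias += rate * error
    return weights, bias


def classify(pixels, weights, bias):
    """Return "X" or "O" for a 3x3 bitmap, using the trained weights."""
    activation = sum(w * p for w, p in zip(weights, pixels)) + bias
    return "X" if activation > 0 else "O"


w, b = train_perceptron([(X_PATTERN, 1), (O_PATTERN, 0)])
print(classify(X_PATTERN, w, b))  # prints "X"
```

A real system faces noisy camera pixels rather than clean bitmaps, which is why trained neural nets, rather than hand-written rules, are the natural tool for the job.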
Iris.2 presently consists of three computers networked with Novell software. Soon we hope to be able to run Iris.2 on a single Pentium computer.
Instead of the $5000 Microbot Teachmover robotic arm used in Iris.1, we have chosen the much less expensive and more versatile Robix arm, costing less than $600. Iris.2 has two arms: a gripper to pick up objects and a drawbot for drawing X's and O's.
Iris.2 has been given visual capability with a Connectix Color CAM and neural net software.