Even with assignments such as working in NetLogo and analyzing the Game of Life, I'm sensing some restlessness during class because we're talking rather than doing. And while the six-hour class on Feb 21 will be more doing, we still have some ways to go before we get there. Showing such strong examples as Karl Sims's creature evolution (embedded at Evolving Virtual Creatures: The Definitive Guide) or older (Cog) and newer (Domo) examples from Brooks's lab only goes so far. But we need to talk about goals before we start designing.
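Since the Game of Life came up as one of the class's analysis assignments, here is a minimal sketch of its update rule in Python. This is my own illustrative stand-in, not the NetLogo model the class actually used; the function names are hypothetical.

```python
# Conway's Game of Life, one step: a live cell survives with 2 or 3
# live neighbors; a dead cell becomes live with exactly 3.
from itertools import product

def neighbors(cell):
    """The eight cells surrounding (x, y)."""
    x, y = cell
    return {(x + dx, y + dy) for dx, dy in product((-1, 0, 1), repeat=2)
            if (dx, dy) != (0, 0)}

def step(live):
    """Advance a set of live (x, y) cells one generation."""
    # Only live cells and their neighbors can be alive next step.
    candidates = live | {n for c in live for n in neighbors(c)}
    return {c for c in candidates
            if len(neighbors(c) & live) == 3
            or (c in live and len(neighbors(c) & live) == 2)}

# A horizontal "blinker" flips to vertical and back every step.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))  # {(1, 0), (1, 1), (1, 2)}
```

The simple local rule producing gliders and oscillators is exactly the talking-versus-doing tension: the rule fits in a few lines, but the interesting behavior only shows up when you run it.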
Pfeifer and Scheier's overview of agents in Chapter 4 of their text stimulated more discussion than I expected, in large part because, as a student noted very early on, we need a working definition of autonomy. Pfeifer and Scheier focus their discussion on complete autonomous agents, and several in the class questioned both the necessity of having such things and the ability to create them.
In particular, we discussed how much we (i.e., humans) lose our autonomy because of the necessity for productive interactions with the environment. One student noted early on that a fully isolated human is hardly autonomous, because such a person would quickly break down. But too much reliance on the environment, including other agents, certainly puts a drag on one's independence. This discussion helped establish the point that autonomy is a continuum, with the implication, not explicitly stated in class, that the designer has to decide what level of autonomy is appropriate for the desired goals. (We took an aside into Kelly at that point; while his book is titled Out of Control, he more often discusses the reasonable balance between control and a hands-off approach.)
The discussion of the particulars Pfeifer and Scheier say should be taken into account when designing (embodiment, self-sufficiency, adaptability, situatedness) took an interesting turn. I sensed a consensus that adaptability was likely the most important of these. Embodiment is necessary, but almost trivially so if you're working in the real world. Self-sufficiency and situatedness are also important, but chiefly as means to being able to adapt. What appears to be most compelling is the ability of an agent to make do with its environment, no matter how that environment changes. If it can roll over everything in its way, then it must be doing pretty well. Is it intelligent? Maybe, but at least it's successful.
That led to the last bit about universality. Agents can and should be adaptive, but we can never design them to be adaptive to every possible circumstance they may encounter. We just want to make sure they can take account of things that might reasonably be expected to happen. That cuts down on the design and programming while increasing the likelihood of success in the agent's niche.
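The point about designing for a niche rather than for universality can be illustrated with a toy agent. This is my own hypothetical example, not anything from Pfeifer and Scheier: the agent tracks a drifting resource with a simple exponential moving average, which handles the gradual change its designer "reasonably expected" but would lag badly behind changes outside that niche.

```python
class NicheAgent:
    """Toy agent (hypothetical): adapts its estimate of a resource's
    location, but only within the range of change it was designed for."""

    def __init__(self, rate=0.3):
        self.estimate = 0.0  # current guess of the resource location
        self.rate = rate     # how quickly the agent adapts to change

    def sense_and_adapt(self, observed):
        # Exponential moving average: tracks gradual drift well,
        # but a sudden large jump would leave it far off for a while.
        self.estimate += self.rate * (observed - self.estimate)
        return self.estimate

agent = NicheAgent()
for t in range(50):
    agent.sense_and_adapt(10.0 + 0.1 * t)  # slowly drifting environment
print(round(agent.estimate, 2))  # ends close behind the target of 14.9
```

The design decision is exactly the one from the discussion: the adaptation rate is tuned to the drift the designer anticipates, not to every conceivable environment.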
Next week, we delve more into Brooks, start designing something, and look more into artificial evolution from the perspectives of Kelly, Pfeifer and Scheier, and Avida.