Emotions and Mirror Neurons (Models of Consciousness II)

Edge

Once you’ve got an AI system that says, “I know in principle I’m just a bunch of silicon circuits, but from the first-person perspective, I feel like so much more,” then we might be onto something in understanding the mechanisms of consciousness. Of course, if that just happens through somebody programming a machine to imitate superficial human behavior, then that’s not going to be so exciting. If, on the other hand, we get there by trying to figure out the mechanisms that are doing the job in the human case and getting an AI system to implement those mechanisms, and we then find, via some relatively natural process, that it A) finds consciousness in itself and B) is puzzled by this fact, that would at least be very interesting.

From the interview:

GERSHENFELD: What do you think about the mirror tests on elephants and dolphins for sense of self?
CHALMERS: Those are potential tests for self-consciousness, which, again, is a high bar. There are plenty of animals that don’t pass them. So, are they not self-conscious? No. They’re probably just not very good with mirrors.
GERSHENFELD: But do you think that’s a falsifiable test of sense of self?
CHALMERS: That’s pretty good evidence that the animals who pass it have certain kinds of distinctive self-representations, yes. I don’t think failing it is any sign that you don’t. I would also distinguish self-consciousness, which is a very complicated phenomenon that humans and a certain number of mammals may have, from ordinary conscious experience of the world, which we get in the experience of perception, of pain, of ordinary thinking. Self-consciousness is just one component of consciousness.