Internal models are assumed to be recruited in higher-level tasks. How an internal model of the body can be utilized by perception, and how a multimodal representation can be learned in an unsupervised fashion, has been investigated in this new paper, presented at ECAL 2011 in Paris:
Schilling, M. (2011). "Learning by seeing—associative learning of visual features through mental simulation of observed action". In R. Doursat (Ed.), Proceedings of ECAL 2011 (pp. 731–738). Paris: MIT Press.
Internal representations employed in cognitive tasks have to be embodied. The flexible use of such grounded models allows for higher-level functions like planning ahead, cooperation and communication. At the same time, this flexibility presupposes that the utilized internal models interrelate multiple modalities. In this article we present how an internal body model serving motor control tasks can be recruited for learning to recognize movements performed by another agent. We show that, because the movements are governed by the same underlying internal model, it is sufficient to observe the other agent performing a series of movements, and that no supervised learning is necessary, i.e. the learning agent does not require access to the performing agent's postural information (joint configurations). Instead, through the shared underlying dynamics, the mapping can be bootstrapped by the observing agent from the sequence of visual input features.
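To give a rough feel for this bootstrapping idea, here is a minimal toy sketch (not the model from the paper): it assumes both agents share the same movement dynamics, that the observer receives only visual features of the demonstrator, and that the observer runs its own internal model in synchrony as a mental simulation. A simple delta rule then suffices to associate the observed visual features with the simulated postures, without ever seeing the demonstrator's joint configurations. All matrices and dimensions below are hypothetical choices for illustration.

```python
import numpy as np

# Shared internal-model dynamics (hypothetical: a slow rotation in posture space)
theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Unknown visual projection: demonstrator posture (2D) -> visual features (3D).
# The observer never has access to this matrix, only to its outputs.
G = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, -0.5]])

s = np.array([1.0, 0.0])   # current posture of the demonstrator's movement
W = np.zeros((2, 3))       # visual-to-posture mapping being learned
lr = 0.1
for t in range(2000):
    v = G @ s              # visual features the observer actually sees
    s_sim = s              # observer's mental simulation (same dynamics, in sync)
    err = s_sim - W @ v    # delta rule: associate vision with simulated posture
    W += lr * np.outer(err, v)
    s = A @ s              # both agents advance with the shared dynamics

# After learning, the observer can decode posture from vision alone,
# even for a posture it did not simulate during learning.
test_s = np.array([0.3, -0.7])
print(np.allclose(W @ (G @ test_s), test_s, atol=1e-3))
```

The key point the sketch mirrors is that the "teaching signal" is not supplied externally: it comes from the observer's own internal model running the same movement, which is what makes the association unsupervised with respect to the demonstrator.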