ID 33063
FullText URL
Author
Ito, Kazuyuki
Imoto, Yoshiaki
Takeshita, Mitsuo
Abstract

Reinforcement learning is an effective control method for autonomous robots: it requires no a priori knowledge, and behaviors that complete a given task are obtained automatically by repeating trial and error. However, a large number of trials is required to realize complex tasks, so the tasks that can be learned on a real robot are restricted to simple ones. To address this, various methods that reduce the learning cost of reinforcement learning have been proposed. Methods that rely on a priori knowledge, however, sacrifice the autonomy that is the most important feature of reinforcement learning when it is applied to robots. In Dyna-Q, a simple and effective reinforcement learning architecture that integrates online planning, a model of the environment is learned from real experience, and learning time is reduced by using that model for planning. This architecture preserves autonomy, but the model depends on the task, so the acquired knowledge of the environment cannot be reused for other tasks. In the real world, human beings learn various behaviors to complete complex tasks without a priori knowledge of those tasks. We can rehearse a task in our imagination without moving our bodies, and after such imagined training we save time when learning in the real environment. In other words, we hold a model of the environment and use it to learn. We consider that the key ability that speeds up learning is the construction of an environment model and its utilization. In this paper, we propose a method for obtaining an environment model that is independent of the task, and we reduce learning time by utilizing this model. We consider distributed autonomous agents and show that the environment model is constructed quickly by sharing the experience of every agent, even when each agent has its own independent task. To demonstrate the effectiveness of the proposed method, we apply it to Q-learning and carry out simulations of a puddle world. As a result, effective behaviors are obtained quickly.
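The Dyna-Q-style idea outlined above (learn a transition model from real experience, reuse it for planning, and keep the model reward-free so it stays task-independent and shareable across agents) can be illustrated with a short Python sketch. This is a minimal illustration under assumed simplifications (discrete states, deterministic transitions), not the authors' implementation; the names SharedModel, DynaAgent, and reward_fn are hypothetical.

    import random
    from collections import defaultdict

    class SharedModel:
        """Task-independent environment model: maps (state, action) to the
        observed next state. No rewards are stored, so agents pursuing
        different tasks can share and reuse the same model. (Hypothetical
        sketch, not the paper's implementation.)"""
        def __init__(self):
            self.transitions = {}

        def update(self, state, action, next_state):
            self.transitions[(state, action)] = next_state

        def sample(self):
            (state, action), next_state = random.choice(list(self.transitions.items()))
            return state, action, next_state

    class DynaAgent:
        """Tabular Q-learning agent that also plans on the shared model."""
        def __init__(self, model, actions, reward_fn,
                     alpha=0.1, gamma=0.95, epsilon=0.1):
            self.model = model            # shared with the other agents
            self.actions = actions
            self.reward_fn = reward_fn    # task-specific; kept out of the model
            self.q = defaultdict(float)
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

        def act(self, state):
            if random.random() < self.epsilon:   # epsilon-greedy exploration
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(state, a)])

        def learn(self, state, action, reward, next_state):
            best_next = max(self.q[(next_state, a)] for a in self.actions)
            td_error = reward + self.gamma * best_next - self.q[(state, action)]
            self.q[(state, action)] += self.alpha * td_error

        def step(self, state, action, next_state, planning_steps=10):
            # Real experience: record the transition, then one Q-learning update.
            self.model.update(state, action, next_state)
            self.learn(state, action, self.reward_fn(next_state), next_state)
            # Planning: replay transitions recorded by any agent, scored with
            # this agent's own reward function (Dyna-style simulated experience).
            for _ in range(planning_steps):
                s, a, s2 = self.model.sample()
                self.learn(s, a, self.reward_fn(s2), s2)

In this sketch, several DynaAgent instances pointing at one SharedModel would reproduce the experience-sharing setup: each agent's real steps enrich the common transition table, while planning updates are scored with that agent's own task-specific reward.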

Keywords
knowledge based systems
learning (artificial intelligence)
planning (artificial intelligence)
robots
Note
Published with permission from the copyright holder. This is the institute's copy, as published in Proceedings of the 2003 IEEE International Symposium on Computational Intelligence in Robotics and Automation, 16-20 July 2003, Volume 3, Pages 1120-1125.
Publisher URL: http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1222154
Copyright © 2003 IEEE. All rights reserved.
Published Date
2003-7
Publication Title
Computational Intelligence in Robotics and Automation
Volume
3
Start Page
1120
End Page
1125
Content Type
Journal Article
Language
English
Refereed
True
DOI
Submission Path
mechanical_engineering/4