
Academic Talk by Dr. Zejian Zhou, University of Nevada, USA

Published: December 23, 2019    Source: suncitygroup太阳新城 official website

Talk title: From Multiple to Massive Robots' Game -- Deep Reinforcement Learning and Approximate Dynamic Programming Approaches

Time: 4:00 p.m., December 24, 2019

Venue: Room 102, North Science and Education Building (科教北楼102)

Abstract:

Due to the enormous diversity gain from a larger population, massive multi-robot systems have recently attracted growing interest from both academic research communities and industrial companies. Thus, finding the optimal policy for multi-robot systems, which is often the Nash equilibrium point, becomes more and more important. In recent studies, the most popular algorithms for finding optimal strategies for multi-robot systems are Deep Reinforcement Learning (DRL) and Approximate Dynamic Programming (ADP). These algorithms all have certain constraints when applied to a large-scale population of robots. The most common problems are non-stationary environments or players, the curse of dimensionality, communication challenges, and blocked observations. To overcome these difficulties, a novel online reinforcement learning algorithm, the actor-critic-mass algorithm, has been developed by integrating mean-field game theory into the ADP technique. In this talk, I will start with the connection between DRL and the ADP technique for a single agent, and then introduce some popular DRL and ADP algorithms for multi-robot systems. Finally, mean-field games and the actor-critic-mass algorithm will be analyzed in detail. I will also present some simulation results as well as some preliminary hardware test results at the end.
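For readers unfamiliar with the actor-critic-mass idea mentioned in the abstract, the following minimal Python sketch shows the general three-part structure such methods share: a "mass" estimate of the population state, a critic that learns a cost-to-go, and an actor improved with the critic's gradient. Everything concrete here (the toy linear dynamics, quadratic cost, linear function approximators, dimensions, and learning rates) is an illustrative assumption and does not reproduce the speaker's algorithm.

# Illustrative actor-critic-mass style update loop for one representative agent.
# All models and constants below are assumptions chosen for a self-contained demo.
import numpy as np

rng = np.random.default_rng(0)

state_dim, action_dim = 2, 1
gamma = 0.95                              # discount factor
lr_critic, lr_actor, lr_mass = 0.05, 0.01, 0.1

w_critic = np.zeros(3)                    # critic: V(x) ~ w . [x1^2, x2^2, 1]
W_actor = np.zeros((action_dim, state_dim))  # actor: u(x) = W_actor x
m_hat = np.zeros(state_dim)               # "mass": estimated population mean state
B = np.array([[0.1], [0.0]])              # control-input matrix of the toy dynamics

def features(x):
    return np.array([x[0] ** 2, x[1] ** 2, 1.0])

def dV_dx(x):
    # Gradient of the quadratic critic with respect to the state.
    return np.array([2.0 * w_critic[0] * x[0], 2.0 * w_critic[1] * x[1]])

def step(x, u, m):
    # Toy linear dynamics weakly coupled to the population mean m (assumption).
    x_next = 0.9 * x + B @ u + 0.05 * m + 0.01 * rng.standard_normal(state_dim)
    # Running cost: deviation from the mass plus control effort.
    cost = np.sum((x - m) ** 2) + 0.1 * np.sum(u ** 2)
    return x_next, cost

x = rng.standard_normal(state_dim)
for t in range(2000):
    u = W_actor @ x                       # deterministic policy
    x_next, cost = step(x, u, m_hat)

    # Mass update: here the agent's own state stands in for a population sample.
    m_hat += lr_mass * (x - m_hat)

    # Critic update: semi-gradient TD(0) on the cost-to-go.
    td_error = cost + gamma * (w_critic @ features(x_next)) - w_critic @ features(x)
    w_critic += lr_critic * td_error * features(x)

    # Actor update: model-based gradient of (running cost + gamma * V(x_next)) w.r.t. u.
    g_u = 0.2 * u + gamma * (B.T @ dV_dx(x_next))
    W_actor -= lr_actor * np.outer(g_u, x)

    x = x_next

print("learned mass estimate:", m_hat)
print("learned actor gains:", W_actor)

In a genuine mean-field setting the mass term would describe the distribution of the whole population rather than a running average of one agent's state; the simplification above only serves to show how the mass, critic, and actor updates interleave in a single online loop.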

Contact telephone: 0731-88830700
