Invited Speakers (alphabetical order)
May 10, Wed. 9:00-10:00 Benfield
May 11, Thurs. 9:00-10:00 Singh
May 11, Thurs. 18:15-19:15 Wooldridge
May 12, Fri. 9:00-10:00 Asada
Brief Description of Speakers and Talks
Minoru Asada received the B.E., M.E., and Ph.D. degrees in control engineering from Osaka University, Osaka, Japan, in 1977, 1979, and 1982, respectively. He became a Professor of Mechanical Engineering for Computer-Controlled Machinery at Osaka University in April 1995. Since April 1997, he has been a Professor in the Department of Adaptive Machine Systems at the same university. From August 1986 to October 1987, he was a visiting researcher at the Center for Automation Research, University of Maryland, College Park, MD. He received the 1992 best paper award of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS92) and the 1996 best paper award of the RSJ (Robotics Society of Japan). He was general chair of the 1996 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS96). Since the early 1990s, he has been involved in RoboCup activities, and he is now president of the RoboCup Federation. He has been an IEEE Fellow since 2005, and has been the research director of the JST ERATO Asada Synergistic Intelligence Project since September 2005.
Towards Emergence of Communication: A Cognitive Developmental Robotics Approach
This talk presents an introduction to the JST ERATO Asada Synergistic Intelligence Project (hereafter, SI) and its preliminary studies on the emergence of communication. The project aims at building cognitive developmental artificial agents (humanoids), understanding natural agents (humans), and the mutual feedback between the two. The project consists of four groups: (1) Physio-SI: whole-body dynamic motions such as walking, running, and jumping; (2) Perso-SI: cognitive developmental robotics, including body image, imitation, and language communication; (3) Socio-SI: emergence of communication and society by androids; and (4) SI-mechanism: neuroscientific support for the Physio-, Perso-, and Socio-SIs. As preliminary studies, vowel acquisition based on mutual visual and auditory imitation, and active lexicon acquisition based on curiosity, are presented. The former aims at modelling mother-infant interaction in vowel acquisition based on findings from infant studies, such as lip-shape imitation and the bias imposed by the mother's vocalization on the infant's auditory perception. The latter proposes a lexical acquisition model that uses curiosity to associate visual features of observed objects with the labels uttered by a caregiver.
Steve Benfield is CTO of Agentis Software and has more than 20 years
of experience as a proven technology visionary, evangelist, architect,
and educator. Previously, Steve was vice-president of strategy &
technology evangelism at ClearNova and was responsible for defining,
articulating, and driving the execution of ClearNova's technology
strategy and vision. At SilverStream, a publicly traded application
server vendor, Steve established the company as a visionary in Web
Services and SOA through a set of standards-based tools & frameworks
for legacy integration & portal services and helped drive it to
successful acquisition by Novell. Prior to SilverStream, Steve served
in executive management positions for several start-up ventures.
Steve enjoys bringing practical new technologies to corporate IT and helping change how real-world applications are built and delivered. Steve has always been near the leading edge of IT as an early commercializer and evangelist of client/server development, object-orientation, application servers, web applications, SOA/web services, and Ajax.
Steve joined Agentis to help commercialize and popularize agent technology. Just as object-orientation changed how applications are built, so will agent-orientation. Steve's vision is to provide tools, training, and expertise to ensure that developers who are not experts in agent technology can successfully create and benefit from agent-oriented applications.
Making a Strong Business Case for Multiagent Technology
Agent-oriented approaches are still nascent in corporate business application development. In this talk, we'll cover reasons businesses should consider adopting agent technology and highlight lessons learned in commercializing and implementing agent-based systems in real-world business situations. Most notably, we discuss gains in programmer productivity achieved by using AdaptivEnterprise, a commercial IDE and J2EE runtime engine based on BDI agents. Across a wide range of complex business applications, we show that BDI technology incorporated within an enterprise-level architecture can improve overall project productivity by an average of 350%. For Java coding alone, the increase in productivity, across multiple applications, was over 500%. In competitive bids for large-scale enterprise applications, AdaptivEnterprise was able to reduce total cost of ownership between two- and three-fold. In addition, the agent approach allowed the business to change and extend solutions quickly and helped bridge the semantic gap between the business side and IT development. We will also cover one or two case studies describing why agent technology was chosen, how it was used, and how it benefited the business. Finally, we'll discuss some ideas on how to move agents from the lab into mainstream corporate IT.
Satinder Singh is an Associate Professor of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor. His main research interest is in the old-fashioned goal of Artificial Intelligence: building autonomous agents that can learn to be broadly competent in complex, dynamic, and uncertain environments. The field of reinforcement learning (RL) has focused on this goal, and accordingly his deepest contributions are in RL. More recently, he has also been contributing to computational game theory and mechanism design.
Rethinking State, Action, and Reward in Reinforcement Learning
Over the last decade and more, there has been rapid theoretical and empirical progress in reinforcement learning (RL) using the well-established formalisms of Markov decision processes (MDPs) and partially observable MDPs, or POMDPs. At the core of these formalisms are particular formulations of the elemental notions of state, action, and reward that have served the field of RL so well. In this talk, I will describe recent progress in rethinking these basic elements to take the field beyond (PO)MDPs. In particular, I will briefly describe older work on flexible notions of actions called options, briefly describe some recent work on intrinsic rather than extrinsic rewards, and then spend the bulk of my time on recent work on predictive representations of state. I will conclude by arguing that, taken together, these advances point the way for RL to address the many challenges of building an artificial intelligence.
Michael Wooldridge is a Professor of Computer Science at the University of Liverpool, UK. He has been active in the research and development of multi-agent systems for more than fifteen years. His particular research interests include the use of logic/formal methods for reasoning about multi-agent systems, methodologies for multi-agent systems, negotiation, and computational complexity in multi-agent systems. Wooldridge has published over two hundred articles on the theory and practice of agent-based systems; his research monograph "Reasoning about Rational Agents" was published by The MIT Press in 2000, and his introductory textbook "An Introduction to Multiagent Systems" was published by Wiley in 2002. He has also edited thirteen other books in the area. Wooldridge is editor-in-chief of the International Journal of Autonomous Agents and Multi-Agent Systems (Springer), and an editorial board member for the Journal of Applied AI (Taylor & Francis), the Journal of Logic & Computation (Oxford UP), the Journal of Applied Logic (Elsevier), Knowledge, Rationality, and Action (Springer), The Knowledge Engineering Review (Cambridge UP), and Engineering Applications of Artificial Intelligence (Elsevier).
With a Little Help from my Friends: On the Logic and Complexity of Cooperation
The past five years have witnessed an explosion of interest in the use of cooperation logics for reasoning about multi-agent systems. Alternating-time Temporal Logic (ATL) is perhaps the best-known example of such a logic. Using ATL, it is possible to reason about what states of affairs coalitions can collectively bring about and maintain, and how these strategic abilities change over time. These logics are attractive for several reasons. First, they can be shown to generalise other well-known and widely used formalisms for reasoning about reactive systems, while at the same time enjoying similarly attractive computational properties. Additionally, they have precise links to game theory. In this talk, I will begin by informally introducing such logics and will demonstrate why they are of interest to the multi-agent systems community. I will then describe an ATL-based framework for coordinating multi-agent systems with social laws, and will discuss how this framework relates to deontic logic. I will then discuss the computational complexity of the various reasoning problems associated with the logic, and some of the subtleties associated with these results. Finally, I will illustrate the links between the logic and coalitional games, in particular qualitative coalitional games. The talk includes joint work with Thomas Agotnes, Paul E. Dunne, Wiebe van der Hoek, Wojtek Jamroga, Alessio Lomuscio, Carsten Lutz, Marc Pauly, Mark Roberts, Dirk Walther, and Frank Wolter; I gratefully acknowledge the support of these outstanding co-authors.