Fifth International Joint Conference on AUTONOMOUS AGENTS AND MULTIAGENT SYSTEMS (AAMAS 2006)
Future University, Hakodate, Japan, May 8-12, 2006
Instructions for Tutorial Presenters:
Data projectors (beamers) are available in the tutorial rooms. To facilitate PowerPoint or other presentations, presenters should bring their own laptops. Standard cable connectors will be available, but presenters whose computers require a special adapter should bring their own. We strongly advise presenters to bring electronic backups in a common format (e.g., ppt, pdf, or ps). The conference will not have a general system for the loan of laptops, but in the case of tutorial presenters it will be possible in an emergency to provide a machine so that a tutorial can actually take place.
- T1
Reasoning about Cooperation
Presenters: Thomas Agotnes, Wiebe van der Hoek, and Michael Wooldridge
Monday May 8, full day
- T2
Programming Languages and Development Tools for MAS
Presenters: Rafael H. Bordini, Mehdi Dastani, Joao Leite, and Michael Winikoff
Monday May 8, afternoon, half day
- T3
Auction-Based Agent Coordination
Presenters: Sven Koenig, Robert Zlot, and E. Gil Jones
Tuesday May 9, morning, half day
- T4 (CANCELLED)
Model-Driven Development of MAS in Service-Oriented Architectures
Presenters: Klaus Fischer and Arne-Jorgen Berre
- T5 (CANCELLED)
Introduction to Method Engineering
Presenters: Brian Henderson-Sellers, Cesar Gonzalez-Perez, and Magdy Serour
- T6 (CANCELLED)
Designing Communication in Multiagent Systems
- T7 (CANCELLED)
Trust and Reputation Systems for Agent Communities
Presenter: Audun Josang
- T8
Engineering Self-Organizing Applications
Presenter: H. Van Dyke Parunak
Monday May 8, morning, half day
- T9
Learning and Evolution in Agent-Based Systems
Presenter: Sandip Sen
Tuesday May 9, morning, half day
- T10
Multiagent Resource Allocation
Presenter: Ulle Endriss
Tuesday May 9, afternoon, half day
- T11 (CANCELLED)
Distributed Constraints - Algorithms, Performance, Communication
Presenter: Amnon Meisels
Cooperation sits at the heart of Multi-Agent Systems. It combines strategic (how to cooperate to achieve a certain goal?), social (with whom to cooperate?), competitive (can we join forces and achieve something against all others' behaviour?), dynamic (how to form a coalition now to achieve something later?), and informational (given the knowledge and ignorance of a coalition, how to proceed?) issues. Research in MAS has seen a flourishing of interest in formal approaches to cooperation, in which languages represent the reasoning of or about agents in coalitions, and models typically represent the effect of agents forming coalitions over time.
In the first half of our tutorial, we use Alternating-time Temporal Logic (ATL) as a unifying framework to present several issues in cooperation. ATL is an extension of CTL that allows one to express that a coalition G can guarantee some temporal property, no matter what the other agents do. We show how ATL can be used to represent the agents' information, social behaviour, and strategies and actions. We also explain how model checking can be used to verify properties of the overall system in game-like scenarios. In the second half, we connect the logical approach with another formal framework for cooperation: game theory. Two links between the logical and the game-theoretical approaches are drawn. First, we introduce Coalition Logic (CL), which, on the one hand, is equivalent to the next-time fragment of ATL and, on the other hand, is semantically based on a certain type of formal game. Second, we discuss how to express properties of several types of games, such as solution concepts, in formal logical languages.
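To give a flavour of the notation (this is a sketch in standard ATL syntax, not material taken from the tutorial itself), the cooperation modality combines with the usual temporal operators as follows:

```latex
% The ATL cooperation modality <<G>> combined with the temporal
% operators "next", "always", and "until":
\langle\!\langle G\rangle\!\rangle \bigcirc \varphi \qquad
\langle\!\langle G\rangle\!\rangle \Box \varphi \qquad
\langle\!\langle G\rangle\!\rangle\, \varphi_1 \,\mathcal{U}\, \varphi_2
% <<G>> phi reads: coalition G has a joint strategy that guarantees
% phi no matter what the agents outside G do. The CTL path quantifiers
% arise as the special cases <<emptyset>> (all paths, A) and
% <<Agt>>, the grand coalition (some path, E).
```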
With the significant advances in the area of autonomous agents and multi-agent systems in the last few years, promising technology has emerged as a sensible alternative for the design of systems that operate in complex and dynamic scenarios. However, for this technology to become accessible to multi-agent systems researchers and practitioners, the programming languages and tools that are appropriate for developing such systems must become widely known and thoroughly understood. This course aims to introduce novices, researchers, and developers, from both academia and industry, to some of the languages, techniques, and tools that are currently available to effectively support the implementation of multi-agent systems.
Teams of agents are more robust and potentially more efficient than single agents. However, coordinating teams of agents so that they can successfully complete their mission is a challenging task. This tutorial covers one way of coordinating teams of agents efficiently and effectively, namely with auctions. Coordination involves the allocation and execution of individual tasks through an efficient (preferably decentralized) mechanism. The tutorial on "Auction-Based Agent Coordination" covers empirical, algorithmic, and theoretical aspects of auction-based methods for agent coordination, in which agents bid on tasks and the tasks are then allocated to the agents by methods that resemble winner-determination methods in auctions. Auction-based methods balance the trade-off between purely centralized coordination methods, which require a central controller, and purely decentralized coordination methods, which use no communication between agents, in terms of communication efficiency, computation efficiency, and solution quality.
The tutorial uses the coordination of a team of mobile robots as a running example. Robot teams are increasingly becoming a popular alternative to single robots for a variety of difficult tasks, such as planetary exploration or planetary base assembly. The tutorial covers auction-based agent coordination using examples of multi-robot routing tasks, a class of problems in which a team of mobile robots must visit a given set of locations (for example, to deliver material to construction sites or to collect samples from Martian rocks) so that their routes are optimized with respect to certain criteria, for example, minimizing the consumed energy, the completion time, or the average latency. Examples of multi-robot routing tasks include search-and-rescue in areas hit by disasters, surveillance, sensor placement, material delivery, and localized measurements. We also discuss agent-coordination tasks from domains other than robotics. We give an overview of various auction-based methods for agent coordination, discuss their advantages and disadvantages, and compare them to each other and to other coordination methods. The tutorial also covers recent theoretical advances (including constant-factor performance guarantees) as well as experimental results and implementation issues.
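As a concrete illustration of the bidding scheme described above, consider a minimal sequential single-item auction for multi-robot routing. This is a sketch under simplifying assumptions of our own, not code from the tutorial: the function names, the Manhattan-distance cost model, and the append-to-route bidding rule are all illustrative.

```python
def manhattan(a, b):
    """Grid distance between two (x, y) points."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def sequential_auction(robot_positions, targets):
    """Allocate targets to robots with a sequential single-item auction.
    In each round, every robot bids its marginal travel cost for every
    unassigned target (the cost of appending the target to the end of
    its current route); winner determination then sells the target with
    the globally lowest bid."""
    routes = {r: [] for r in robot_positions}
    remaining = list(targets)
    while remaining:
        bids = []
        for r, start in robot_positions.items():
            end = routes[r][-1] if routes[r] else start
            for t in remaining:
                bids.append((manhattan(end, t), r, t))
        # Winner determination: the lowest bid wins this round.
        _, winner, task = min(bids)
        routes[winner].append(task)
        remaining.remove(task)
    return routes
```

For example, with robots at (0, 0) and (10, 10) and targets at (1, 0) and (9, 10), each robot wins the target adjacent to it; the auction thus allocates tasks sensibly without any central planner computing full routes.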
The success of many multi-agent systems (MASs) depends on the existence of some form of organization among the individual agents that allows the system as a whole to achieve more than any single agent. In many traditional MASs, organization is imposed by the system designer, achieved dynamically by centralized control, or constrained by existing organizations in the problem domain. These mechanisms face severe limitations as the number of agents increases and as the problem domain becomes increasingly distributed and dynamic.
An alternative approach is systems that organize themselves through local interactions. Self-organizing applications (SOAs) take inspiration from biology, the physical world, chemistry, and social systems. Typical examples of SOAs are systems that reproduce social insect behavior, such as robots or Artificial Life systems. The global structure and behavior of these complex systems emerge from local interactions between the different entities or agents that form the whole system, without explicit representation of these global patterns at the level of the individual.
The main characteristic of all these applications is their ability to achieve complex collective tasks with relatively simple individual behaviors, without central control or hierarchy.
Self-organization does not happen automatically in just any population of agents; it depends on certain basic principles and dynamics that have begun to be understood over the past few years. Some conventional methods, like hierarchical control, are alternatives to self-organization, while others, like distributed market schemes, depend implicitly on self-organizing dynamics and would be more effective if these dynamics were understood and taken into account.
This half-day tutorial focuses on the current state of practice in engineering SOAs and sets the stage for the AAMAS 2006 ESOA Workshop. The tutorial gives a structured synthesis of the field as it stands, including the needs that motivate the effort, the major classes of techniques, and examples of applications. Furthermore, the tutorial includes a group exercise in engineering a self-organizing application, based on domain problems proposed before or on the day of the tutorial.
Learning and adaptation are effective techniques by which agents can evolve coordination strategies that meet the demands of changing environments and the requirements of individual agents. On the one hand, these techniques reduce the burden of designing agents for complex environments by allowing designers to focus on key characteristics rather than on tuning parameters; on the other hand, they allow agents in open, dynamic societies to adapt to changing environmental conditions and performance requirements.
The tutorial surveys the state of the art in agent systems that learn on-line from their interactions with human users, other agents, and the environment. The range and scope of learning possibilities and opportunities in agent-based systems are highlighted by examples of desktop, internet-based, and physical agents. Pertinent machine learning techniques are illustrated, and new challenges for machine learning research in the context of agent systems are identified. We start by discussing the issues concerning single agents learning from environmental feedback, and then move on to specific techniques that assist multiple agents learning concurrently in a shared environment. The tutorial also briefly summarizes major approaches to designing agent behaviors, as well as co-adapted agent societies, using evolutionary techniques. In particular, the tutorial includes discussion of reinforcement learning classifier systems, co-evolving agent strategies, and solving social dilemma problems by evolutionary adaptation.
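As an illustration of a single agent learning from environmental feedback, here is a minimal tabular Q-learning sketch on a toy corridor world. This is our own illustrative example, not material from the tutorial; all names and parameter values are assumptions.

```python
import random

def q_learning(n_states=5, n_actions=2, episodes=500,
               alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a deterministic corridor: states are
    0..n_states-1, action 1 moves right, action 0 moves left, and the
    agent receives reward 1 for reaching the rightmost state, which
    ends the episode."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s < n_states - 1:
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2 = min(n_states - 1, s + 1) if a == 1 else max(0, s - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Standard Q-learning update toward the bootstrapped target.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

After training, the greedy policy prefers moving right in every non-terminal state: the reward obtained at the goal propagates backwards through the bootstrapped targets, one state per update.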
The allocation of resources within a system of autonomous agents that not only have preferences over alternative allocations of resources but also actively participate in computing an allocation is an exciting area of research at the interface of Computer Science and Economics. This tutorial will cover some of the most salient issues in Multiagent Resource Allocation. In particular, we are going to review various languages for representing the preferences of individual agents over alternative allocations of resources, as well as different measures of social welfare for assessing the overall quality of an allocation. We are also going to discuss pertinent issues regarding allocation procedures and present important complexity results. The tutorial will emphasise two important themes in Multiagent Resource Allocation: (1) properties of distributed negotiation schemes (as opposed to the centralised approach taken in recent work on combinatorial auctions); and (2) the consideration of fair division issues in multiagent systems research. This tutorial is based on the work of the AgentLink Technical Forum Group on Multiagent Resource Allocation (http://www.illc.uva.nl/~ulle/MARA/).
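To illustrate how different social-welfare measures can disagree about which allocation is best, here is a small hypothetical sketch with two agents, two indivisible resources, and additive utilities; the names and numbers are our own, not examples from the tutorial.

```python
from itertools import product

def best_allocation(resources, utilities, welfare):
    """Exhaustively search all assignments of indivisible resources to
    agents and return a bundle map maximising the given social-welfare
    measure, together with its welfare value. Utilities are additive:
    an agent's utility is the sum of its values for its bundle."""
    agents = list(utilities)
    best, best_val = None, float("-inf")
    for assignment in product(agents, repeat=len(resources)):
        bundles = {a: [] for a in agents}
        for res, owner in zip(resources, assignment):
            bundles[owner].append(res)
        u = {a: sum(utilities[a][r] for r in bundles[a]) for a in agents}
        val = welfare(u.values())
        if val > best_val:
            best, best_val = bundles, val
    return best, best_val

utilities = {"a1": {"r1": 9, "r2": 8}, "a2": {"r1": 1, "r2": 2}}
# Utilitarian welfare (the sum of utilities) gives everything to a1,
# who values both resources more highly ...
util_alloc, _ = best_allocation(["r1", "r2"], utilities, sum)
# ... while egalitarian welfare (the minimum utility) forces a split
# so that the worst-off agent is as well off as possible.
egal_alloc, _ = best_allocation(["r1", "r2"], utilities, min)
```

With these numbers the utilitarian optimum assigns both resources to a1, whereas the egalitarian optimum gives r2 to a2: the choice of social-welfare measure, not just the utilities, determines which allocation counts as best.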
IMPORTANT DATES for Tutorials
- February 5, 2006:
- Deadline for submitting tutorial materials
- May 8 - 9, 2006:
- AAMAS-2006 Tutorials
Submissions and Inquiries
Proposals and inquiries should be sent by email (in ASCII) to the tutorials chair:
Dept. of Computer Science & Engineering