
Multi-agent Systems applications and Intelligent Text Mining

Eugénio Oliveira, Luis Sarmento, António Castro, Rosaldo Rossetti, J. L. Pinto, Nuno Sousa, Sónia Rocha, Gabriel Pereira


 Eugénio Oliveira

Research direction:

Two years ago, we started a new line of research related to Natural Language Processing and Text Mining. This work includes a partnership with the Linguateca Project. Moreover, we also aim to apply agent and multi-agent architectures, negotiation protocols and learning algorithms to specific application domains.

Natural Language Processing and Text Mining

Research goal:

We intend to develop algorithms to mine very large databases of Portuguese text. This implies the following more precise research goals: (1) development of semantic analysis tools for information extraction; (2) development of lightly supervised machine learning methods for adapting semantic analysis tools from raw text and a small set of examples (bootstrapping techniques); (3) development of automatic question-answering systems.

Recent work (2006):
  • In 2006, most of the work was dedicated to the development of a named-entity recognition (NER) system, SIEMES. The system relies on a very large gazetteer for Portuguese (REPENTINO), a small domain-specific lexicon and a set of manually encoded rules. SIEMES is already able to identify and classify more than 100 different types of entities. With this system, we participated in the HAREM evaluation contest organized by Linguateca.
  • The SIEMES NER system was also used as the underlying semantic analysis tool of RAPOSA, an automatic question-answering system for Portuguese that was also developed during the last year. RAPOSA participated in the QA@CLEF evaluation track promoted by the Cross-Language Evaluation Forum.
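The gazetteer-plus-rules approach behind SIEMES can be illustrated with a minimal sketch. The gazetteer entries, the single contextual rule and the category labels below are invented for illustration; they are not the actual SIEMES or REPENTINO resources, which are far larger and richer.

```python
# Minimal sketch of gazetteer-plus-rules named-entity recognition.
# Gazetteer entries and the rule pattern are illustrative assumptions.
import re

GAZETTEER = {
    "Lisboa": "LOCAL",
    "Porto": "LOCAL",
    "Eugénio Oliveira": "PESSOA",
}

# A hand-coded contextual rule: a capitalized word after a title cue
# (Dr., Prof.) is classified as a person.
TITLE_RULE = re.compile(r"\b(?:Dr\.|Prof\.)\s+([A-ZÀ-Ú][\wà-ú]+)")

def recognize(text):
    entities = []
    # Gazetteer lookup: known surface forms get their stored category.
    for surface, category in GAZETTEER.items():
        if surface in text:
            entities.append((surface, category))
    # Rule application: context decides the category of unknown names.
    for match in TITLE_RULE.finditer(text):
        entities.append((match.group(1), "PESSOA"))
    return entities
```

In a real system the rules are many, ordered, and interact with the lexicon; the point here is only the division of labor between lookup and context.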
Current and future work:
  • In 2007 we are focusing on developing machine learning methods to be used in the construction of the key components of wide-scope semantic analysis systems, namely sets of rules, specialized lexicons and gazetteers. We are especially focusing on bootstrapping methods that try to expand and generalize a set of seed examples given by the user by searching vast amounts of text. For this purpose, we are using the Wikipedia collection, which allows us to test our machine learning methods in several natural languages. We hope that these machine learning methods will enable us to quickly build the resources required to instantiate semantic analysis systems specialized for different applications and languages.
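One round of the bootstrapping idea can be sketched as follows. The toy corpus, the seed name and the prefix-only context model are simplifying assumptions; real bootstrapping over Wikipedia-scale text needs richer context patterns and candidate scoring to keep semantic drift in check.

```python
# Sketch of one bootstrapping round: a seed name anchors left-context
# patterns in a toy corpus, and the patterns then harvest new candidates.
# Corpus and seed are made up for illustration.
import re

corpus = [
    "a cidade de Braga fica no norte",
    "a cidade de Faro fica no sul",
    "a cidade de Évora fica no Alentejo",
]

def bootstrap(seeds, corpus):
    # Step 1: learn the left context preceding each seed occurrence.
    prefixes = set()
    for sentence in corpus:
        for seed in seeds:
            idx = sentence.find(seed)
            if idx != -1:
                prefixes.add(sentence[:idx])
    # Step 2: reapply each learned prefix to harvest new candidate names.
    candidates = set()
    for prefix in prefixes:
        for sentence in corpus:
            m = re.match(re.escape(prefix) + r"(\w+)", sentence)
            if m:
                candidates.add(m.group(1))
    return candidates - set(seeds)

new_names = bootstrap({"Braga"}, corpus)
```

In practice the harvested candidates would be filtered and fed back as new seeds for the next round; that feedback loop is what makes the method "lightly supervised".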

A Multi-Agent System for Intelligent Monitoring of Airline Operations

Research goals:

We intend to specify and implement a multi-agent system for monitoring airline operations, including intelligent recovery from crew, aircraft and passenger problems.

Recent work (2006):
  • The Multi-Agent System deals with different operational bases, and all bases cooperate to find solutions to local problems. Robustness is a key feature, which we achieve through redundancy in finding possible solutions, using specialized agents that compete to find the best solution to be applied.
  • To be an "intelligent system", some kind of learning must be available. We are using learning to define each crew member's profile and to learn the use of standby crew members, and we include this learning in future crew scheduling and in suggesting new solutions based on previous decisions.
  • To foster cooperation between different airline companies, we are exploring the possibility of a "kind of electronic market" of crew members and aircraft, to be used in crew and aircraft recovery. This would work as a "market" of solutions to specific local problems, and these solutions would compete with the recommended local solutions.
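The redundancy-through-competition idea can be sketched in a few lines. The agent names, recovery actions and cost figures are entirely hypothetical; the real system evaluates proposals against operational constraints, not a single scalar.

```python
# Sketch of competing specialized recovery agents: each proposes a
# solution for the same disruption, and a supervisor picks the cheapest.
# Agents, actions and costs are invented for illustration.
from dataclasses import dataclass

@dataclass
class Proposal:
    agent: str
    action: str
    cost: float  # e.g. estimated delay or monetary cost

class CrewSwapAgent:
    def propose(self, disruption):
        return Proposal("crew-swap", f"swap crews on {disruption}", cost=120.0)

class StandbyAgent:
    def propose(self, disruption):
        return Proposal("standby", f"call standby crew for {disruption}", cost=80.0)

def best_recovery(disruption, agents):
    # Redundancy: every agent answers; competition: lowest cost wins.
    proposals = [a.propose(disruption) for a in agents]
    return min(proposals, key=lambda p: p.cost)

choice = best_recovery("flight TP123", [CrewSwapAgent(), StandbyAgent()])
```

The inter-company "market of solutions" would extend this by letting proposals from other bases, or other airlines, enter the same competition.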
Current and future work (2007):
  • A Master's thesis is being produced.
  • To specify and implement a prototype that will evaluate the hypotheses over relevant scenarios.

Control Strategies Characterization for Heterogeneous MAS

Research goals:

To extract good control strategies emerging from heterogeneous multi-agent interaction. The application domain is traffic control for metropolitan regions.

Recent work (2005 - 2006):
  • Development and implementation of prototype software for microscopic simulation models to assess the project requirements regarding the introduction of agent architectures for intelligent traffic control at different levels;
  • Evaluation of current microscopic traffic simulators supporting different traffic control models and offering facilities to integrate the concept of agents. Examples include SUMO, ITSUMO, MITSIM, DRACULA, Paramics and AIMSUN2;
  • Assessment of agent-based methodologies for multi-agent systems specification and development, in which case a combination of GAIA and AUML was adopted;
  • Adoption of a GIS package to support the implementation of the parametric data structures underlying the MASTTER Lab framework, offering adequate tools to handle and analyze spatial and geographically referenced information;
  • First meetings between the Portuguese and the Brazilian partners for mutual understanding of the project objectives and strategic planning of activities. An add-in component was implemented and introduced in the ITSUMO simulator to support further developments of the project, allowing integration of different agent controllers in the simulation loop. Dr. Rosaldo Rossetti visited Prof. Ana Bazzan's group in November 2005, in Porto Alegre, Brazil;
  • Follow-up meetings between the Portuguese and the Brazilian partners to re-evaluate the project objectives and assess its progress. Different simulation scenarios were proposed to test different approaches to traffic control at two basic levels, one local and one global. For the former, Q-Learning agents were implemented to control traffic lights at intersections, accounting for variable recurrent flows, whereas discussion started on possible alternatives for the global perspective of control at the second level. Profs. Ana Bazzan and Roberto Silva visited our LIACC group in February 2006.
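The local level of control mentioned above can be illustrated with a minimal tabular Q-learning sketch for a single traffic-light agent. The state abstraction (which approach has the longer queue), the two actions, the queue dynamics and the reward (negative total queue) are deliberate simplifications, not the project's actual simulation model.

```python
# Minimal tabular Q-learning sketch for one traffic-light agent.
# States, actions, dynamics and reward are illustrative assumptions.
import random

random.seed(0)
ACTIONS = ["ns_green", "ew_green"]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1
Q = {}  # (state, action) -> estimated value

def choose(state):
    # Epsilon-greedy action selection.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def update(state, action, reward, next_state):
    # Standard one-step Q-learning update.
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

def simulate(steps=500):
    ns, ew = 5, 5  # queue lengths on the two approaches
    for _ in range(steps):
        state = "ns_longer" if ns >= ew else "ew_longer"
        action = choose(state)
        # The green approach releases up to 3 cars; 1 new car arrives on each.
        if action == "ns_green":
            ns = max(0, ns - 3)
        else:
            ew = max(0, ew - 3)
        ns += 1
        ew += 1
        reward = -(ns + ew)  # penalize total queued vehicles
        next_state = "ns_longer" if ns >= ew else "ew_longer"
        update(state, action, reward, next_state)

simulate()
```

The second, global level of control would then coordinate many such local learners, which is exactly the open question the bullet above refers to.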
Current and future work:
  • Developing agents at the second level of control, to be integrated in the simulation framework and to cooperate with agents at the first level;
  • Building up different algorithms for traffic control agents and making them inter-operate in a dynamic environment, where different interactions, relying either on explicit communication or not, can be observed. Interactions can happen in different ways, ranging from cooperation to competition, with mechanisms for dynamic coalitions or team formation;
  • Making available a suitable agent-based traffic control simulator to test and assess different control strategies. It is also expected to support decisions of experts and practitioners in devising and selecting control policies to be applied in the real world;
  • Modeling different driver behaviors in simulated scenarios and their interactions (either implicit or explicit) with intelligent control devices and other intelligent transportation solutions. Future work in this direction is being proposed with the objective of studying how different groups of cooperating and competing multi-agent systems can learn in partially observed and highly dynamic and uncertain domains;
  • Inducing dynamic hierarchies of control, with dynamic placement of agents at each level of the hierarchy, and specifying learning agents to deal with control results at a lower level;
  • Modeling informational agents to overcome problems of inaccessibility so as to tailor information to foster agents’ learning in the transportation domain.

4-legged Robotic Surveillance: "Smart Guardian"

Research goals:

The main goal of the Smart Guardian project is to create an Agent that makes use of Learning techniques when patrolling and detecting intruders in a dynamic environment.

Past work (2005-2006):
  • To define the basic architecture that enables the robot to be sufficiently curious to investigate parts of the unknown world and to move to locations pointed out by the user. We are using the Tekkotsu framework (developed at CMU), a C++ layer on top of OPEN-R that provides a greater abstraction over the low-level details of the robot. The robot interface has been abstracted to a common model, allowing adaptation to other robots with minimal changes.
  • A BDI agent architecture, adapted for real-time operation, has been designed and implemented to control the robot.
  • A realistic simulator has been implemented that can be used to test agents before deploying the system onto the real robot.
  • The first steps of the world-model updating algorithm have been implemented, allowing the robot to map the environment within range of its sensors, although the robot does not yet move using this model.
  • Analysis of the path-planning algorithms has been completed, and they are ready to be implemented.
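The real-time BDI control cycle described above can be sketched schematically. The belief keys, the desire priorities (chasing an intruder beats curiosity-driven exploration, which beats routine patrol) and the time budget are hypothetical names for illustration; the actual architecture runs in C++ on top of Tekkotsu.

```python
# Schematic sketch of a real-time BDI control cycle for the guardian robot.
# Belief names, desire priorities and the deadline are invented assumptions.
import time

class GuardianAgent:
    def __init__(self):
        self.beliefs = {"intruder_seen": False, "unexplored": True}
        self.intention = None

    def perceive(self, sensor_reading):
        # Belief revision from a (simulated) sensor reading.
        self.beliefs["intruder_seen"] = sensor_reading.get("intruder", False)

    def deliberate(self):
        # Desire selection: intruder pursuit > curiosity > routine patrol.
        if self.beliefs["intruder_seen"]:
            self.intention = "chase_intruder"
        elif self.beliefs["unexplored"]:
            self.intention = "explore"
        else:
            self.intention = "patrol"

    def step(self, sensor_reading, deadline_s=0.05):
        # One perceive-deliberate-act cycle, bounded by the control period.
        start = time.monotonic()
        self.perceive(sensor_reading)
        self.deliberate()
        assert time.monotonic() - start < deadline_s
        return self.intention

agent = GuardianAgent()
```

The real-time adaptation amounts to bounding each cycle so that deliberation never starves the low-level motion control.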
Current and future work (2007):
  • To document the platform developed to use the AIBO as a robot, thus enabling it to be used in the classroom.
  • To specify the algorithms and techniques that need to be implemented to make the system a multi-agent system.

Agent-based Electrical Energy e-Market

Research goals:

To design a secure platform enabling trusted encounters between agents representing energy customers and suppliers in an Electronic Market. Current European efforts toward the establishment of both deregulated Electrical Energy Markets and Electronic Commerce platforms can be brought together through appropriate multi-agent platforms enabling autonomous agent interaction for automatic trading.

Recent work:
  • In specifying the multi-agent system encompassing the functionalities needed for the Electrical Energy e-market, we have so far emphasized security procedures, accountability of communications, good performance and software portability. Integration with legacy systems has also been privileged. In our Electricity e-Market, agents authenticate through digital certificates, while messages between the market operator and the market agents are digitally signed. We have selected the TLS/SSL protocol and, as far as message digital signatures are concerned, open standards are being used; they rely on the classical MAC and cryptography algorithms used in SSL. The market operator is seen as a trusted third party, responsible for registration, auctions, matching bids and other tasks.
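The message-authentication part of this design can be illustrated with a small sketch using a classical MAC (HMAC-SHA256 here). This is not the platform's actual implementation: the real system uses certificate-based authentication and keys established under TLS/SSL, and the shared key and bid fields below are illustrative only.

```python
# Sketch of MAC-based message authentication between the market operator
# and a market agent. Key and message fields are illustrative assumptions;
# the real platform derives keys from TLS/SSL and digital certificates.
import hashlib
import hmac
import json

SHARED_KEY = b"demo-session-key"  # in practice negotiated, never hard-coded

def sign_bid(bid):
    # Canonical serialization so both sides MAC the exact same bytes.
    payload = json.dumps(bid, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "mac": tag}

def verify_bid(message):
    expected = hmac.new(SHARED_KEY, message["payload"].encode(),
                        hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, message["mac"])

msg = sign_bid({"agent": "supplier-1", "price_eur_mwh": 42.5, "mwh": 10})
```

Any tampering with the bid after signing makes verification fail, which is the accountability property the bullet above refers to.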
Current and future work:
  • To apply and integrate all the developed algorithms in a single platform for an auction-based Energy-market simulation.

Multi-Agent System for Web searching

Research and Development goals:

We are designing a multi-agent system that tries to capitalize on the parallel Web-searching tasks of different agents to enhance the overall system's performance in finding relevant web pages for specific users.

Recent work (2006):
  • The final agent-based Information Retrieval system has been implemented and evaluated. Final results have shown significant improvements in finding relevant information when compared to traditional web search engines.
  • A Master’s thesis has been successfully submitted.
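The parallelism-and-merging idea behind the system can be sketched as follows. The two mock searcher functions, their result lists and the reciprocal-rank fusion scoring are illustrative assumptions, not the thesis's actual agents or ranking model.

```python
# Sketch of merging results from parallel searcher agents: pages found by
# several agents, or ranked highly by one, rise in the fused ranking.
# Searchers and their result lists are mock data for illustration.
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def searcher_a(query):
    return ["pageA", "pageB", "pageC"]

def searcher_b(query):
    return ["pageB", "pageD"]

def merged_search(query, searchers):
    # Run every searcher agent in parallel.
    with ThreadPoolExecutor() as pool:
        result_lists = list(pool.map(lambda s: s(query), searchers))
    # Reciprocal-rank fusion: each list votes 1/(rank+1) for its pages.
    scores = defaultdict(float)
    for results in result_lists:
        for rank, page in enumerate(results):
            scores[page] += 1.0 / (rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

ranking = merged_search("multi-agent systems", [searcher_a, searcher_b])
```

Here "pageB" wins because both agents return it, which is the kind of cross-agent agreement the system exploits.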