Team Description Paper TecRams

Carlos Reyes, Erwin Flores, José Tapia, Tayde Ascencio, Alejandro Mier, Edgar Velázquez, Erick Cruz, Erik Millán, Gisela Muciño, Guillermo Villareal, Marco Silva, Alberto Sobrino, Leonel Toledo, Jorge Ramírez

Proyecto Robots Cuadrúpedos, Tecnológico de Monterrey Campus Estado de México
Carretera Lago de Guadalupe Km 3.5, Atizapán, Estado de México, CP 52936, México
[email protected]
http://www.cem.itesm.mx/robocup

1-4244-0537-8/06/$20.00 ©2006 IEEE

Abstract— This paper describes work done at ITESM-CEM to develop a team for the RoboCup Four-Legged League. Our efforts have focused on adapting the core code of GT 2004 to our own vision, strategy, and communication systems. The result is a team that plays soccer in a unified manner and that serves as a test bed for further research on vision, collaboration, strategies, and communication between robots.

Index Terms— RoboCup Four-Legged League, AIBO, Team strategies, Vision.

I. INTRODUCTION

Tec Rams participated for the first time in RoboCup in Japan 2002. Since then the team has evolved in several ways.

First, we developed our own code from scratch to compete in Japan. Later we continued developing this code with considerable success in vision and in per-robot behavior, but we had many problems with the AIBO's locomotion. Finally, following the wrap-up meetings at RoboCup 2004, where the emphasis was on reusing code developed by other teams in the Four-Legged League in order to pursue new research issues rather than redevelop code already available, we decided to adapt a well-tested platform for the RoboCup competition. We selected the source code released by the German Team in 2004 as our core architecture. Starting from this code, we have been working on the following research issues:

Color segmentation invariant to illumination changes.

Manuscript received September 22, 2006. This work was supported by the "Sensor Based-Robotics" project at Tec de Monterrey CEM under Grant 2167-CCEM-0302-07.

Carlos Reyes is a Masters student in Computer Science at Tec de Monterrey, CEM.

Jorge Ramírez is a research lecturer in the Computer Science department at Tec de Monterrey, CEM.

All other authors are B.Eng. students at Tec de Monterrey, CEM.

Our color segmentation techniques have evolved from basic thresholding to implicit surfaces and statistical techniques for more accurate color detection. We are working on extending this approach to achieve unsupervised color segmentation, as well as autonomous online adaptation of color classes for dynamic environments. This work was a notable improvement for RoboCup 2004: in the Illumination Challenge our robot was able to see the ball, although our locomotion was not yet robust enough to complete the challenge.

Team strategy acquisition. We have developed communication among robots so they can exchange environmental and decisional information. We also developed an XML-based meta-language, similar to XABSL, that gives us the flexibility to program and test robot behaviors in a very short time [1]. This tool currently runs on a simulator that allows fast debugging of strategies, basic actions, and inter-robot communication. Robots make decisions using evaluation functions. At the moment we are porting this module to the AIBOs.
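The evaluation-function idea can be sketched as follows. The actions, features, and weights below are hypothetical illustrations, not the team's actual functions:

```python
# Hypothetical sketch of decision making with evaluation functions:
# each candidate action gets a score computed from world-state
# features, and the robot performs the best-scoring action. The
# feature names and weights are illustrative only.

WEIGHTS = {
    "kick":   {"bias": 0.0, "ball_near": 5.0, "facing_goal": 3.0},
    "search": {"bias": 1.0, "ball_near": -4.0, "facing_goal": 0.0},
}

def evaluate(action, state):
    """Weighted-sum evaluation function for one action."""
    w = WEIGHTS[action]
    return (w["bias"]
            + w["ball_near"] * state["ball_near"]
            + w["facing_goal"] * state["facing_goal"])

def decide(state, actions=("kick", "search")):
    """Pick the action whose evaluation function scores highest."""
    return max(actions, key=lambda a: evaluate(a, state))

print(decide({"ball_near": 1.0, "facing_goal": 1.0}))  # -> kick
print(decide({"ball_near": 0.0, "facing_goal": 0.0}))  # -> search
```

One advantage of this form is that tuning a strategy only means adjusting weights, not rewriting control flow.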

It is also worth mentioning that basing our code on the German Team's release has let us work faster on our research; this helped us win the First Mexican Open in August 2005 and the championship of the First Latin American RoboCup, held in Brazil in September 2005.

II. TEAM STRATEGY – DECISION MODULE

A. Background

Since 2005, the work of the decision module has centered on generating new players and on improving the communication protocols in use at the time. To achieve these goals, several strategies were followed, mainly concerning the correct use of our available tools. First, our efforts were


based on understanding the XABSL tool, from its basic functions to the generation of XML files, in order to create and modify new player behaviors and game strategies.

Another great challenge confronted by the decision module was the creation of a new simulation tool that satisfied our needs. This tool was built to test different behaviors and communication events from simple XML specifications. It also provides a friendly interface for evaluating how the robots respond to newly introduced behaviors.

B. Behavior definition

Behaviors are defined through XML files, chosen mainly for the easy interpretation this format provides and for the great number of tools available for handling it. These behaviors describe state machines whose actions may be basic skills or other behaviors designed in the same language. The resulting file is interpreted by an Automaton Execution Machine inside the robot.

The system links low-level programming with the behavior definition described in the previous paragraph through standard programming code that may be used in different fields. Basic robot functions, such as movements or image processing, are implemented in the robot's native language. These functions relate to the XML-defined behaviors through a system of events and actions.
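A minimal sketch of how such an Automaton Execution Machine might interpret an XML-defined state machine follows. The XML schema, tag names, states, and events are assumptions for illustration, since the paper does not show the actual format:

```python
import xml.etree.ElementTree as ET

# Hypothetical behavior file: a DFA whose transitions are guarded by
# perceived events and trigger actions (basic skills). The schema is
# illustrative, not the team's actual one.
BEHAVIOR = """
<behavior initial="searchBall">
  <state name="searchBall">
    <transition event="ballSeen" target="approachBall" action="walkToBall"/>
  </state>
  <state name="approachBall">
    <transition event="ballNear" target="kick" action="kickBall"/>
    <transition event="ballLost" target="searchBall" action="scanHead"/>
  </state>
  <state name="kick"/>
</behavior>
"""

class AutomatonExecutor:
    """Generic executor: runs any automaton described in the XML."""

    def __init__(self, xml_text):
        root = ET.fromstring(xml_text)
        self.state = root.get("initial")
        # transitions[state][event] = (next_state, action)
        self.transitions = {
            s.get("name"): {
                t.get("event"): (t.get("target"), t.get("action"))
                for t in s.findall("transition")
            }
            for s in root.findall("state")
        }

    def step(self, event):
        """Fire the transition guarded by `event`; return its action."""
        hit = self.transitions[self.state].get(event)
        if hit is None:
            return None          # event is irrelevant in this state
        self.state, action = hit
        return action

exe = AutomatonExecutor(BEHAVIOR)
print(exe.step("ballSeen"))   # -> walkToBall
print(exe.step("ballNear"))   # -> kickBall
print(exe.state)              # -> kick
```

Because the automaton is data, swapping the XML string swaps the behavior without touching the executor.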

The development environment includes a simulator, which can contain robots with different characteristics and a visual environment similar to the real area where the robots perform, with the possibility of modifying it to fit the current characteristics of the competition (Figure 1). The simulator offers different tools for detailed analysis; it can show, at any time, the current automaton execution state as well as the events each robot perceives [1].

C. Tools

C++ coding. Direct coding in the robot's native language was the first approach our team took toward defining solutions. It provided a total understanding of the functioning and interaction of the different modules involved in competition. However, it has an important drawback: the resulting code is complex to administer.

Fig. 1. Football agent simulation. The simulator provides a graphic way to analyze a behavior and to navigate through the actions with their associated behaviors or atomic actions, showing the transitions between them and the active states.

XML behavior control. Despite innovations in compilers and new programming languages, a robot's native language is still difficult to understand for people not involved in computer programming. This tool, developed entirely by the Tec Rams decision module team, intends to facilitate the task of defining an agent's behavior by providing the user with tools for programming, testing, and analysis, so that such people are not excluded. It is based on a strategy of creating a standard, intuitive, and simple platform in which to create an execution model.

To fulfill these expectations, the strategy definition platform should have the following characteristics:

1. Simple: users should be able to define players without knowing the robot's native programming.

2. Hierarchical: It should be divided into several levels to maintain low complexity on each of them.

3. Scalable: The main architecture should allow easy development of new modules.

4. Versatile: The platform should be applicable for different systems in order to define different tasks in other fields.

5. Standard: it should maintain a general strategy model and task definition so it can function across different robots and systems, serving as a base for future research on agents and multi-agent systems.

This tool bases its operation on behavior definitions in XML files. These files represent Deterministic Finite Automata (DFAs), on which the whole platform was developed. Through the strategy design module, the behavior of a robot is defined in XML instead of C++, the native language of the AIBOs. The robot then runs a generic machine that can execute any automaton defined in an XML document. This offers a number


of advantages, such as a standardized description, portability, and multiple development tools.

The agents may change their strategy without recompiling the code, since the behavior model is serialized in the XML document. Changing a strategy thus means modifying an XML file, not the agent's source code. This modularization makes the process more efficient and versatile by hiding implementation details from high-level strategies.

Using the scheme presented in Figure 2, changes in player strategy can be made rapidly without handling low-level implementation details. The XML tag system gives context to each of its elements, so the document is easily understandable even by people with no background in computer science.

Fig. 2. Data flow in decision system

The tool was developed under an event-oriented paradigm similar to the one used in Tekkotsu, the application development framework from Carnegie Mellon University, in which behavior changes depend on events in the external environment, such as finding the ball or falling. The difference is that Tekkotsu's state-machine programming is done directly in the robot's native language rather than in XML documents, as in the present tool.

Because of the inner structure of XML, object modeling in XML takes the form of a hierarchical tree, in which nodes represent states and edges represent transitions, as shown in Figure 3.

Fig. 3. Agent execution model: a set of behaviors represented as states joined by transitions, where each transition is guarded by events and triggers actions, and where an action may be a basic action, a compound action, or another behavior.

Currently there are two possible virtual soccer worlds, the 2003 field (Figure 4) and the 2005 field (Figure 5), which differ in size, borders, and poles. Either can be selected in the virtual world's environment section to simulate different behaviors. If new environment features are required, new objects of the types FootballSpace and VirtualWorld can be created and a setVirtualSpace method added to the world.Environment class; it is highly recommended to base new worlds on the existing ones. Taking such a world into XML requires the implementation of its different elements.

Fig. 4. 2003 Football field virtual space


Fig. 5. 2005 Football field virtual space

III. VISION – PERCEPTION

The perception module is in charge of the many tasks performed to acquire information about the robot's surroundings and deliver it to the decision module so that the best tactics can be performed. This involves image processing to recognize objects of interest in the environment and to extract important features about them, such as distance and orientation. The module is also in charge of moving the robot's head to track objects and of localizing the robot in the environment. The following sections give an overview of the techniques, algorithms, and considerations used in this perception module.

A. Using implicit surfaces to define color classes

Although approaches based on quadric surfaces, especially ellipsoids, demonstrated better results than simple thresholds, we still found some points for improvement:

Each surface is better suited to a particular color space, since each one adjusts more precisely to the distribution of color in a given signal. For example, cones adapt well to color samples in RGB, while ellipsoids work better in YUV.

Even in the preferred color space for a given quadric, it is still prone to misclassification, since the cloud of samples normally does not describe the same shape as the selected surface.

For these reasons, we devised a new classification technique that overcomes such difficulties while preserving the simplicity of the other approaches. The new technique uses implicit surfaces as the limiting contour for color classes.

Implicit surfaces were chosen because they can easily distinguish whether samples are inside or outside them, providing a natural interface for color classification. In addition, it is easy to blend different surfaces, so a set of disjoint samples can produce a continuous surface enclosing all the intermediate points of a color class.

Our approach starts from a set of images from which a user selects a set of color samples. Then a number of spherical primitives are distributed uniformly over the cloud of samples using the k-means algorithm. Once located, the radius of each primitive is obtained from the standard deviation of the samples it contains. Finally, these primitives are blended to produce the final surface.
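These steps can be sketched as follows, assuming Gaussian blending of the spherical primitives and an illustrative membership threshold (the paper does not specify the blending function or its parameters):

```python
import math
import random

# Hedged sketch: k-means places spherical primitives over the
# user-selected color samples, each radius comes from the spread of
# the samples in that primitive, and the blended field of all spheres
# decides class membership. Blending and threshold are assumptions.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(points):
    return tuple(sum(c) / len(points) for c in zip(*points))

def kmeans(samples, k, iters=20):
    random.seed(0)                       # deterministic for the sketch
    centers = random.sample(samples, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in samples:
            groups[min(range(k), key=lambda j: dist2(p, centers[j]))].append(p)
        centers = [mean(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers, groups

class ImplicitColorClass:
    def __init__(self, samples, k=2, threshold=0.5):
        self.threshold = threshold
        centers, groups = kmeans(samples, k)
        self.primitives = []
        for c, g in zip(centers, groups):
            # radius from the standard deviation of the contained samples
            r = math.sqrt(sum(dist2(p, c) for p in g) / len(g)) if g else 1.0
            self.primitives.append((c, max(r, 1.0)))

    def field(self, p):
        # blend the spherical primitives with a smooth Gaussian falloff
        return sum(math.exp(-dist2(p, c) / (2.0 * r * r))
                   for c, r in self.primitives)

    def contains(self, p):
        return self.field(p) >= self.threshold

samples = [(200, 180, 40), (210, 170, 50), (205, 175, 45), (190, 185, 55)]
yellow = ImplicitColorClass(samples, k=2)
print(yellow.contains((30, 30, 200)))   # -> False (far from the class)
```

In a real system the `contains` test would be evaluated once per color-space cell to export the look-up table mentioned below, so the robot never evaluates the surface online.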

Once the color classes are defined, a look-up table is exported to a text file that is loaded into the robot for efficient image processing. The configurations produced and the resulting implicit surfaces fit the point samples closely, producing an accurate representation, as depicted in Figure 6. A detailed description of this technique can be found in [2].

Very good classification results have been obtained with this method. In Figure 6, a classification is tested under different lighting conditions. The images processed by our algorithm identify colors very well even with extreme changes in illumination. The figure also shows the color subspace used to identify the color. A traditional approach bounds the samples in this subspace with a cube, leading to the misclassifications shown in the lower images. While this tolerance to lighting conditions is an improvement over traditional techniques, the procedure can be extended to adapt the color subspace automatically and dynamically, as required by the environment.

Fig. 6. Robustness to illumination changes. The yellow color is classified and replaced by blue in the images. Upper row: image segmentation using our approach under different light intensities. Lower row, center: color subspace used for the upper-row images. Lower row, edges: image segmentation using traditional cubes on the same images.

B. Segmentation routine

Our segmentation algorithm combines three techniques, organized in two stages. First, scan lines are used to extract a set of color seeds. The extraction process relies on the definition of color classes; since we want to identify pixels with a high probability of belonging to the class, these color classes are defined strictly with the


implicit-surface classification technique. From these seeds, a region-growth algorithm locates regions of interest using a more relaxed color class as the homogeneity criterion. Thus, two classes with different probabilities are defined for each color. The following sections describe each part of this algorithm; a detailed description is given in [3].
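The two-stage idea, strict seeds plus relaxed region growth, can be sketched on a toy single-channel image; the threshold predicates below stand in for the strict and relaxed implicit-surface color classes:

```python
from collections import deque

# Sketch: pixels passing a strict test become seeds; regions then grow
# from the seeds using a relaxed test as the homogeneity criterion.
# The 1-channel image and thresholds are illustrative stand-ins.

def segment(image, strict, relaxed):
    h, w = len(image), len(image[0])
    seeds = [(y, x) for y in range(h) for x in range(w)
             if strict(image[y][x])]
    region, queue = set(seeds), deque(seeds)
    while queue:                          # 4-connected breadth-first growth
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and relaxed(image[ny][nx])):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

img = [[0, 5, 6, 0],
       [0, 7, 9, 0],
       [0, 0, 5, 0]]
strict  = lambda v: v >= 9    # high-probability class -> seeds
relaxed = lambda v: v >= 5    # relaxed class -> growth criterion
print(sorted(segment(img, strict, relaxed)))
# -> [(0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
```

Note how the single seed at value 9 recruits the surrounding mid-confidence pixels, while the isolated zeros stay out.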

Scanlines and horizon detection

The use of scanlines is a simple way to segment an image without processing every single pixel. This technique relies on a sampling pattern that selects a small set of pixels for analysis, reducing processing time.

In particular, we use the approach of Jüngel et al. [4, 5], where a set of lines is used for processing. These lines are perpendicular to the horizon, and their density varies according to their distance from the camera, which is approximated by the proximity of each pixel to the horizon line. The closer a pixel is to the horizon, the more likely it belongs to a distant object, so the sampling density should be higher there than for pixels far from the horizon line. This density is exemplified in Figure 7.a.

However, this approach requires the camera's horizon for each picture. The horizon is obtained from the kinematic model of the camera, using the field plane as reference; a method to calculate it was proposed by Jüngel et al. [6]. In this method, the horizon line is defined as the set of pixels located at the same height above the field plane as the optical center of the camera. Hence, the horizon is the intersection between the camera projection plane P and a plane H that is parallel to the field plane and crosses the camera's center of projection c. This is illustrated in Figure 7.b.
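A hedged geometric sketch of this computation for a pinhole camera follows; the focal length, image center, and sign conventions are illustrative assumptions, not the AIBO's actual calibration:

```python
import math

# Sketch: for a pinhole camera whose pitch and roll relative to the
# field plane are known from the kinematic model, the horizon appears
# as a line offset vertically from the principal point by f*tan(pitch)
# and rotated by the roll angle. Parameter values are illustrative.

def horizon_line(pitch, roll, f=200.0, cx=104.0, cy=80.0, half_len=100.0):
    """Return two image points (x, y) that define the horizon line."""
    offset = f * math.tan(pitch)              # vertical shift of the horizon
    dx, dy = math.cos(roll), math.sin(roll)   # line direction from roll
    p0 = (cx - half_len * dx, cy + offset - half_len * dy)
    p1 = (cx + half_len * dx, cy + offset + half_len * dy)
    return p0, p1

# With the camera level (no pitch, no roll), the horizon is the
# horizontal line through the image center.
p0, p1 = horizon_line(pitch=0.0, roll=0.0)
print(p0, p1)   # -> (4.0, 80.0) (204.0, 80.0)
```

Tilting the camera down (positive pitch under this convention) pushes the horizon line upward or downward by f·tan(pitch) pixels, which is what lets the scanlines stay anchored to it frame after frame.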

Fig. 7. a) Horizon-oriented scanlines for a sample image. Higher detail is selected for farther areas, while lower detail is chosen for closer areas. b) Planes used in the extraction of the horizon line.

C. Localization system

We are currently working on Monte Carlo probabilistic techniques to obtain faster and more accurate localization, using all the data that can currently be obtained from the field's landmarks, and eventually using the lines and borders observed on the field. Our algorithms are based on the work presented in [7].

From the information produced by the image processing routines, it is necessary to estimate a position for the robot. This position is modeled as a distribution of hypotheses about the robot's pose and orientation, initially random, which eventually converges to the real robot position as the hypotheses are adjusted by a motion model and a perception model.

The motion model represents the effects of actions on the robot's pose, that is, odometry measurements together with an estimation of their error. The perception model, which has the largest influence on hypothesis adjustment, is based on the horizontal direction from the robot to the landmarks used for localization.

With these points of reference, each hypothesis is adjusted according to its probability of being correct: a hypothesis with a high probability of being correct receives a random but small adjustment (in pose and orientation), while one with a low probability receives a larger adjustment. Finally, the algorithm applies a re-sampling step in which bad hypotheses are eliminated and new, recalculated ones are inserted, with the intention of speeding up convergence.
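One iteration of this procedure can be sketched as follows; the noise levels, the Gaussian perception weighting, and the single-landmark bearing observation are illustrative assumptions, not the actual models:

```python
import math
import random

# Sketch of one Monte Carlo localization step: particles (pose
# hypotheses) are moved by a noisy motion model, re-weighted by how
# well the observed horizontal bearing to a landmark matches each
# hypothesis, then resampled so unlikely hypotheses are replaced.

def ang_diff(a, b):
    """Smallest signed difference between two angles."""
    return (a - b + math.pi) % (2.0 * math.pi) - math.pi

def motion_update(particles, dx, dy, dtheta, noise=0.05):
    return [(x + dx + random.gauss(0, noise),
             y + dy + random.gauss(0, noise),
             t + dtheta + random.gauss(0, noise))
            for (x, y, t) in particles]

def expected_bearing(particle, landmark):
    x, y, t = particle
    return ang_diff(math.atan2(landmark[1] - y, landmark[0] - x), t)

def perception_update(particles, observed, landmark, sigma=0.3):
    """Weight each hypothesis by agreement with the observed bearing."""
    w = [math.exp(-ang_diff(expected_bearing(p, landmark), observed) ** 2
                  / (2.0 * sigma ** 2)) for p in particles]
    s = sum(w) or 1.0
    return [wi / s for wi in w]

def resample(particles, weights):
    """Replace bad hypotheses: draw particles proportionally to weight."""
    return random.choices(particles, weights=weights, k=len(particles))

landmark = (0.0, 0.0)
particles = [(1.0, 1.0, 0.0), (1.0, 1.0, 2.5)]   # same spot, headings differ
observed = expected_bearing(particles[0], landmark)  # truth matches particle 0
weights = perception_update(particles, observed, landmark)
print(weights[0] > weights[1])   # -> True
```

After resampling, the correctly oriented hypothesis dominates the set, which is the convergence behavior described in the text.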

Some of the results achieved with the described algorithm are shown in Figure 8. As can be seen, the algorithm converges to the real robot position and orientation, although it sometimes takes several iterations.


Fig. 8. Distribution of pose hypotheses while the robot turns its head. Triangles show the position and orientation of the hypotheses: darker triangles indicate low probability, brighter ones high probability. The light-colored robot represents the estimated pose. a) Real position of the robot (blue goal). b) After 50 images processed.

IV. LOCOMOTION

For the last ten months, Tec Rams' locomotion team has carried out extensive research and analysis on alternative platforms that would help us generate completely new movements for the robots, obtain an optimal walk, and support a game strategy that lets us perform certain kicks in the most appropriate situations. This was done by comparing the following utilities:

1. Tec Rams: a completely new development by the ITESM-CEM team, not based on any other code. It was used in previous competitions but left us behind more experienced teams.

2. Tekkotsu: the platform developed by Carnegie Mellon University, considered as an alternative solution. The following results were obtained specifically for the locomotion module. For the AIBO's special movements and kicks, there are two types of files:

The first is for movements that require only one step to complete, for example opening the mouth or stretching a leg. These files have the extension *.POS.

The second controls a complete sequence of movements; in this case we must introduce the specific positions for each actuator and the time the movement takes to complete. These files have the extension *.MOT.

In addition, the robot's position can be manipulated manually and stored in a *.POS file. A *.MOT file can then call the position files that were created, yielding a faster and more efficient sequence of movements, which benefits our team.

Nevertheless, *.MOT files cannot be manipulated directly on the robot; we must modify them locally and then download them to the robot, which wastes time when generating new movements.

On another subject, Tekkotsu's walking system offers up to 50 modifiable parameters with which a walking gait can be fully designed to our needs. In Tekkotsu's walking style, the AIBO follows a rhythmic sequence that modifies the size of the stride rather than increasing its speed.

3. GT2004: the German Team's code, which is extremely convenient to use because of its modular structure and stability. Its walk is also excellent, handling the walking parameters well to achieve acceptable speed and direction; the AIBO goes from one point to another in a short time and without damaging the servomotors.

Based on the structure of GT2004, the locomotion team has generated new kicks as well as cheering and artistry moves. These kicks have been incorporated into the KickEditor, a Kick Selection Table that uses the AIBO's position on the field and the distance from the ball to the AIBO to choose a kick appropriate to the situation. All the files generated for this kick selection were created entirely by Tec Rams.
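The kick-selection idea can be sketched as a simple lookup table; the zones, distance bands, and kick names below are invented for illustration and are not the contents of the actual *.kst files:

```python
# Hypothetical kick selection table: each row maps a field zone and a
# ball-distance band to a kick, and the first matching row wins.

KICK_TABLE = [
    # (zone, min_dist_mm, max_dist_mm, kick)
    ("own_half",   0, 150, "clearForward"),
    ("own_half", 150, 400, "dribbleForward"),
    ("opp_half",   0, 150, "strongShot"),
    ("opp_half", 150, 400, "sideTap"),
]

def select_kick(zone, ball_dist_mm):
    for z, lo, hi, kick in KICK_TABLE:
        if z == zone and lo <= ball_dist_mm < hi:
            return kick
    return None   # no applicable kick in this situation

print(select_kick("opp_half", 100))   # -> strongShot
```

Keeping the table as data, like the *.kst files, means new kicks can be registered without touching the selection logic.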

All the kicks from GT2004 were tested and evaluated according to efficiency, functionality, speed, force of the kick, avoidance of damage to the servomotors, and jamming. Many kicks were discarded, and only a few were kept and improved. To do this, we experimentally determined the maximum and minimum values for each joint angle, so that we could compute the joint angles needed to reach a given position and perform a kick.

The new kicks developed by Tec Rams include a new block for the goalie; cheering and artistry for all the AIBOs whenever they score or are scored on; and dribbling without holding the ball, from our own field up to three quarters of the whole field. Our dribbling is based entirely on the *.kst files used by the Kick Selection Table.


The following functions were compared between the locomotion platforms investigated by the Tec Rams team (Tekkotsu and the German Team code):

- Position files
- Position sequence files
- Possibility of using position files to generate a position sequence of movements
- Possibility of working online with the robot to manipulate sequence files (real-time modifications)
- Handling of angles to control the motor position (scale in degrees vs. scale in milliradians)
- Count per time between one position and the next

Hence, we decided to use GT2004 because of its advantages over Carnegie Mellon's Tekkotsu platform, and since then we have had dramatically positive results that have encouraged our team to continue researching and developing new utilities.

Besides Tekkotsu and GT2004, we tried the Skitter AIBO Performance Editor because of its variety of features and effects and its ease of editing. We discarded it quickly, however, because its simulations did not model the physical ground, so it was uncertain whether the generated movements would actually execute as in the simulation.

V. COMMUNICATION

A. Previous work

The goal of the communication sub-module is to enhance the decisions the decision module can make by allowing the robots to exchange environmental and decisional information. This information is used both to increase the factors and conditions the players take into account when making a decision, and as a means for the robots to make agreements and coordinated moves.

Initially the communication sub-module focused only on the maintenance work implied by adapting our team's code to the changes the GameController underwent during 2005. However, it quickly evolved beyond that and saw other important early advances, such as enhancing the decisions our goalie could make by taking into account the position of the ball as reported by the defenders themselves.

It is therefore no surprise that the main use of this sub-module has been as a tool for the behavior module. However, with the advent of the passing challenge, it is clear that communication will become a bridge among many of our team's disciplines (including, but not limited to, vision, locomotion, and decision itself).

B. Current work

In particular, we have defined a stripped-down version of the Contract Net model. This version allows the dogs to make a pseudo-contract among themselves when they wish to pass the ball to a teammate. We achieve this by defining a passing state machine whose transitions are activated by the acknowledgments each robot sends throughout the contract process.

More specifically, the model we have defined can be represented as follows:

1. Two states for the robot that wishes to send the pass: waitPass and readyPass.

2. Two states for the robot that acknowledges being willing to receive the pass: waitReceipt and readyReceipt.

3. A null state.

As can be seen, the state machine is extremely simple, reflecting its merely informative nature. We have two series of states that transition among themselves depending on whether the robot is ready to perform the pass or is still getting ready. (We also have a timeout mechanism to perform the transitions as a backup device.)
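This state machine can be sketched as two transition tables; the message names and the exact transitions are assumptions beyond the state names given in the text:

```python
# Hedged sketch of the passing state machine: one table for the
# passer, one for the receiver. Events (message names) and the
# timeout transitions are illustrative.

PASSER_NEXT = {
    ("null", "startPass"):         "waitPass",   # decide to pass
    ("waitPass", "partnerReady"):  "readyPass",  # receiver acknowledged
    ("waitPass", "timeout"):       "null",       # backup device
    ("readyPass", "ballKicked"):   "null",
}

RECEIVER_NEXT = {
    ("null", "passProposed"):        "waitReceipt",   # pass offered
    ("waitReceipt", "inPosition"):   "readyReceipt",
    ("waitReceipt", "timeout"):      "null",          # backup device
    ("readyReceipt", "ballReceived"): "null",
}

def step(table, state, event):
    # unknown (state, event) pairs leave the state unchanged
    return table.get((state, event), state)

s = "null"
for e in ("startPass", "partnerReady", "ballKicked"):
    s = step(PASSER_NEXT, s, e)
print(s)   # -> null (pass completed)
```

Because every transition is driven by a received acknowledgment, the two robots only commit to the pass once both tables have reached their "ready" states, which mirrors the contract behavior described above.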

The preparations of the receiving robot include turning to face the robot that will pass the ball and getting into a position in which it can properly receive it. The passing robot's preparations are much the same: getting from the point where it received the ball to a point from which it can make an accurate pass. As expected, only when the passing robot detects that both its partner and itself are ready does it proceed to make the pass.

Moreover, this state machine also serves to tell a potential receiving robot that another robot is interested in initiating a passing transaction: the receiver gets a passingMessage indicating that another robot is planning to get ready for the pass. If it acknowledges and accepts the contract with the proposing player, it returns a message with the preparing-receipt state activated; otherwise it simply keeps replying with a message with the hiatus state activated. Although this has little meaning for the passing challenge itself, it prepares us for a more serious implementation of a passing system in a normal game.


VI. CONCLUSIONS AND FURTHER WORK

The work of learning and adapting GT 2004 is almost finished. We have successfully adapted our vision algorithms and are building a translator so that code generated from our own XML can be used with the GT 2004 locomotion module. This will give us all the functionality of GT 2004 plus the ease of use of our XML code and its simulation facilities.

All of this work has given us a better robotic team than the one we had when generating code from scratch. It has also allowed us to focus more on new research issues and less on re-implementing work already done by others. Finally, being able to work faster on our research helped us win the First Mexican Open in August 2005 and the championship of the First Latin American RoboCup, held in Brazil in September 2005.

Further work includes continuing with autonomous and illumination-invariant segmentation for our vision module, restarting research in locomotion, and continuing the generation of team strategies using communication and interaction protocols based on multi-agent systems.

ACKNOWLEDGMENT

The last author wishes to acknowledge the work of all the members of the "Robots Cuadrúpedos" project for their dedication, hard work, and effort, despite it being volunteer work.

REFERENCES

[1] J. L. Vega, M. A. Junco, J. A. Ramírez, "Major behavior definition of football agents through XML," in Proc. 9th International Conference on Intelligent Autonomous Systems (IAS-9), Tokyo, Japan, March 7-9, 2006, pp. 668-675.

[2] R. Álvarez, E. Millán, R. Swain, A. Aceves, "Color image classification through fitting of implicit surfaces," in Proc. 9th Ibero-American Conference on Artificial Intelligence (IBERAMIA), Cholula, México, Lecture Notes in Artificial Intelligence, Springer Verlag, 2004, pp. 677-686.

[3] R. Álvarez, E. Millán, R. Swain, "Multilevel seed region growth segmentation," in MICAI 2005: Advances in Artificial Intelligence, 4th Mexican International Conference on Artificial Intelligence, Monterrey, Mexico, Lecture Notes in Computer Science, 2005, pp. 359-368.

[4] J. Bach, M. Jüngel, "Using pattern matching on a flexible, horizon-aligned grid for robotic vision," in Concurrency, Specification and Programming, 2002, pp. 11-19.

[5] M. Jüngel, J. Hoffmann, M. Lötzsch, "A real-time auto-adjusting vision system for robotic soccer," in 7th International Workshop on RoboCup 2003 (Robot World Cup Soccer Games and Conferences), Lecture Notes in Artificial Intelligence, Springer Verlag, 2003.

[6] M. Jüngel, "A vision system for RoboCup," Diploma thesis, Institut für Informatik, Humboldt-Universität zu Berlin, 2004.

[7] T. Röfer, M. Jüngel, "Vision-based fast and reactive Monte-Carlo localization," in IEEE International Conference on Robotics and Automation, 2003.
