Visual Attention and Swarm Cognition Towards Fast and Robust Off-Road Robots

Pedro Santana∗†, Luís Correia∗, Magno Guedes‡†, José Barata†
∗LabMAg, Universidade de Lisboa, PORTUGAL

†UNINOVA, Universidade Nova de Lisboa, PORTUGAL
‡R&D Division, IntRoSys, S.A.

Email: {pedro.santana,luis.correia}@di.fc.ul.pt, [email protected], [email protected]

Abstract—This article argues in favour of using visual attention mechanisms on off-road robots and of exploiting the social insects metaphor for their robust implementation. Visual attention helps these robots focus their perceptual resources on a by-need basis when dealing with the complexity of unstructured environments. However, focusing perceptual resources is a hard problem, given the well-known speed-accuracy trade-off and the fact that several foci of attention may need to co-exist and interact with both memory and action selection. The similarity between the task of deploying parallel foci of attention and the foraging behaviour exhibited by army ants motivates the use of the social insects metaphor to solve the problem at hand in a self-organising, and consequently robust, way. All these arguments are phenomenologically supported by recently published experimental work on three foundational aspects of off-road mobility: obstacle detection, trail detection, and local navigation.

I. INTRODUCTION

The valuable scientific data generated by the Mars rovers in their missions [1], which could not be obtained by other means, clearly shows the importance of autonomous field robots for the future of humankind. Besides helping us find a new extra-terrestrial home, autonomous field robots can help us maintain our current one. Examples are wildlife monitoring [2] and water quality monitoring [3]. As in the extra-terrestrial case, these activities also take place in geographically inconvenient regions, which has so far limited their execution. In this sense, autonomous field robots, such as the one depicted in Fig. 1, may be key to the successful maintenance of Earth's ecosystems.

Field robots will also be useful in preventing natural disasters, as in the case of fire detection [4], and even in coping with them, as in Search & Rescue missions [5]. Human-centred activities can also benefit greatly from including these robots, such as agriculture [6] and humanitarian demining [7], [8]. Common to all these tasks is that they are, to different extents, dangerous and physically demanding. In sum, field robots are an instrument to solve ecological, social, and economic problems.

Fig. 1. The off-road robot [9] used in the experiments.

The cost of having a damaged robot in a remote environment is extremely high, as the whole mission may become compromised. A straightforward solution is to exploit redundancy so that there is no single point of failure in the system, making multi-robot systems a natural choice. To avoid raising the cost of the whole system, each robot must be cheap, and consequently provided with reduced energetic capacity and reduced computational resources. These two constraints are also present in situations where robot size and weight are highly constrained, such as miniature aerial robots [10]. It must also be taken into account that in most of the identified tasks robots must operate for long periods of time, which poses additional constraints on their efficiency; eventually, these robots will need to perform energy harvesting [11]. Finally, being energetically efficient is also relevant from an ecological perspective.

A. Visual Attention

An important sensory modality for these robots is vision. Being passive, vision requires little power to operate. Moreover, vision provides the robot with (multi-spectral) appearance, volumetric, and motion data. Also important, the acquisition of all this information is synchronised. Finally, vision sensors are small and lightweight. These characteristics are possibly the reason why vision is ubiquitous in Nature. However, the richness of vision comes along with complex processing. Due to their unstructured nature, off-road environments are particularly demanding in this regard: there is little predictable structure in those environments to leverage.

This complexity calls for a fine and contextualised focus of computational resources on the most relevant aspects of the environment. This is called visual attention; it is known to be widespread in the animal kingdom [12] and has been extensively studied in humans [13]. Under this line of reasoning, the main hypothesis defended in this article is that:

the parsimony, robustness, and performance of off-road robots are improved if visual attention mechanisms are employed in their control systems.

In other words, by focusing perception: (1) computation, and consequently energy, are used more efficiently; (2) the robot becomes less sensitive to noise and erroneous environmental cues (i.e., false positives); and, as a consequence of the previous two, (3) faster robot motion and reduced robot size are enabled.

B. Synergy Oriented Design

Motivated by the dynamical systems approach to human cognition [14], [15] and by the behaviour-based approach to robotics [16], an underpinning of the models presented in this paper is the extensive use of synergies, i.e., cross-modulation between the components of the control system, at both macro and micro levels. An example of a macro synergy considered in our work is the guidance provided by a visual saliency map to an obstacle detector, whose results are in turn used to modulate the visual saliency map itself (see Section II).

An example of a micro synergy considered in our work is the set of interactions in which abstract pixel-wise entities, i.e., virtual agents, engage to coordinate their actions when searching for obstacles in the robot's visual field (see Section IV). The use of multiple agents is motivated by the hypothesis in cognitive science supporting the existence of multiple foci of attention [17], [18]. With the purpose of attaining robustness, scalability, modularity, and cheap design, these agents perform collectively in a self-organising way.

Cross-modulation between action selection and perceptual processes is yet another exploited synergy (see Section IV). It aims at allowing obstacles to be detected on a by-need basis, i.e., according to their relevance to the action selection process, as conceived in the active vision paradigm [19], [20], [21].

Being sensorimotor-coordinated units, these agents can exploit the benefits of active perception and so actively shape their sensory input [22], use their sensorimotor history to induce long-range influences on other agents, and, in the limit, improve their own behaviour. In a sense, these agents are information particles moving through the system [23], or, from another perspective, they are carriers of information [24]. Therefore, these units are potentially more capable of encapsulating complexity than neurons, which makes the task of modelling behaviour from a bottom-up perspective more amenable.

The use of multiple agents allows the modeller to exploit biological knowledge obtained from processes with similar properties that can be found in Nature. An example is the self-organised behaviour exhibited by social insects, like bees, ants, and termites, whose collective intelligence [25] has considerable similarities with the neuronal processes underlying cognition [26], [27], [28], [29], [30], [31]. This leads to the second hypothesis addressed in this article:

the synthesis of self-organising robot cognitive behaviour is facilitated if the social insects collective behaviour metaphor is used as a design pattern.

This metaphor is particularly powerful due to the ease of tracking social insects' activity from the collective down to the individual, at least when compared to the difficulty of inspecting detailed nervous system activity. Hence, the metaphor enables a convenient way of transferring knowledge from Nature to engineering. Apart from relying on proven solutions, using biological inspiration has an additional methodological advantage: assumptions made in the course of a specific line of research are less prone to become deprecated as a result of parallel or subsequent research. This follows from the existence of a common underlying structure biasing the processes by which assumptions are devised, i.e., Nature's physical principles.

C. Self-Organisation Enabling Properties

Being self-organised, the overall system's behaviour emerges from the interaction of these units, which often occurs indirectly by changing and sensing the environment, a phenomenon known as stigmergy [32]. High-level structures emerge from the bottom-up system operation, meaning that the system is fully specified by the logic ruling the simple agents, which are to a great extent homogeneous. Consequently, the system's design space is small and fully grounded through the set of sensorimotor rules controlling the agents, which can be designed by hand, learned, or even evolved.

To push forward the knowledge regarding self-organising cognitive systems, it might be interesting to better explore the capabilities of emerging parallel computational paradigms, and more radically those considered in the amorphous computing literature [33]. The latter envisions the application of micro-fabricated particles or engineered cells for the implementation of massively distributed computation. The distributed, stochastic, and unreliable nature of these particles or cells demands new computational paradigms capable of coping with these characteristics, which are drastically different from the highly reliable and deterministic computational units currently used. As self-organising systems do need some level of randomness to operate and exhibit graceful degradation in the face of unreliable elements, swarm cognition is a natural candidate for the task at hand.

Finally, studying the synthesis of cognition under a swarm perspective takes a step further towards a unified methodology for the development of robot control systems and multi-robot coordination mechanisms. Basically, it is all about coupling sensorimotor coordination loops.

A critical point of swarm research, and adaptive behaviour research in general, is the potential for cross-breeding between the natural and artificial sciences. Evidence from the natural world can be used to seed the modelling process in engineering. Conversely, operational engineering models can be used to induce hypotheses regarding specific aspects of the natural world. The unexpected importance of a given variable in an engineered model may trigger interest in assessing the relevance of its natural counterpart. Interestingly, new developments, such as bacterial micro-robots [34], show that the future may bring part-natural, part-artificial "agents", whose collective cognitive operation will most probably rely on self-organising principles.

D. Validation

The phenomenological support for the two hypotheses addressed in this article is given by experimental results obtained in previous research on three foundational aspects of off-road mobility: obstacle detection, trail detection, and local navigation. This previous work, published elsewhere [23], [29], [35], [36], is briefly surveyed in the following sections. The particularities of each case study elicit different aspects of visual attention, and consequently enrich the overall argument supporting the two hypotheses herein defended.

First, in Section II, a way of using visual saliency to speed up and augment the robustness of obstacle detection in all-terrain environments is overviewed (details in [35]). Then, in Section III, visual saliency is shown to be also useful for trail detection (details in [36]). For this purpose, a swarm-based model is shown to enable a robust and fast-to-compute exploitation of top-down a priori knowledge about the trail's approximate layout. Finally, in Section IV, with the purpose of implementing a complete local navigation system, a swarm cognition model capable of integrating visual attention, action selection, and spatial memory is described (details in [23], [29]). Hence, these models encompass both macro and micro synergies and exploit self-organisation in a swarm cognition framework.

II. VISUAL ATTENTION FOR OBSTACLE DETECTION

Off-road terrains are highly uneven, which demands complex, and consequently computationally intensive, obstacle detection techniques. In order to make stereo-based obstacle detection fast and robust, we have proposed a hybrid architecture [35]. The proposed architecture (see Fig. 2) exploits the best of two complementary detection techniques. One of the techniques, which is fast and thus ideal for a first scan, considers as obstacles the 3-D points that stand out from an estimated ground plane by a given minimum height [37]. To avoid a large number of false positives in uneven terrain, where no dominant ground plane is assured to exist, the detection threshold must be set high so that only large obstacles are detected. Since, for local navigation purposes, the presence of a large obstacle subsumes the presence of a small one, it is only necessary to search for small obstacles in the regions where no large obstacle has been found. To relax the planar ground constraint, essential to enable small obstacle detection, these regions are analysed by a detector that operates on geometrical constraints between neighbouring 3-D points [38]. The high computational cost associated with the accuracy of this method is compensated by the focus of attention provided by the other, faster yet less accurate, detector.
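A minimal sketch of this two-stage cascade is given below, assuming a point cloud expressed in a frame where the z-axis is normal to the estimated ground plane; the function names, the threshold value, and the placeholder geometric test are illustrative assumptions, not the actual implementation of [35], [37], [38].

```python
import numpy as np

# Illustrative threshold, not the value used in [35].
LARGE_OBSTACLE_HEIGHT = 0.5  # metres above the estimated ground plane


def detect_large_obstacles(points, ground_height):
    """Fast first pass: flag 3-D points that stand out from the estimated
    ground plane by a conservative (large) height margin [37]."""
    return (points[:, 2] - ground_height) > LARGE_OBSTACLE_HEIGHT


def detect_small_obstacles(points, skip_mask):
    """Slow second pass (placeholder): geometry-based compatibility checks
    between neighbouring 3-D points [38], applied only where the first pass
    found no large obstacle (skip_mask == False)."""
    small = np.zeros(len(points), dtype=bool)
    for i in np.flatnonzero(~skip_mask):
        # ... neighbourhood slope/height compatibility test would go here ...
        pass
    return small


def hybrid_detection(points, ground_height):
    """Large obstacles focus (and thus spare) the expensive small-obstacle search."""
    large = detect_large_obstacles(points, ground_height)
    return large | detect_small_obstacles(points, large)
```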

Fig. 2. Hybrid obstacle detection architecture (from [35]).

Although the small obstacle detector is already focused by the large obstacle detector, its computational cost still remains too high. However, we have shown [35] that by adapting a well-known bio-inspired visual saliency model [39] to the problem at hand, and by using it to focus the obstacle detector, computation can be reduced by a factor of about 20. In short, the computed visual saliency map allows the obstacle detector to focus its operation more strongly on the regions of the environment that stand out more from the background, and are consequently more likely to belong to an object. Besides reducing computational cost, this approach also helps reduce the false positive rate. Fig. 3 illustrates the model's operation in a typical off-road situation. Refer to [35] for details.
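The sketch below illustrates the gating idea only: a toy centre-surround map stands in for the multi-scale, multi-feature saliency model of Itti et al. [39] used in [35], and a fixed quantile selects the pixels the small-obstacle detector is allowed to analyse. The keep fraction is an arbitrary assumption, and the roughly 20-fold reduction reported in [35] is a property of the full system, not of this toy.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def toy_saliency(gray):
    """Centre-surround contrast: difference between a fine and a coarse
    Gaussian blur, normalised to [0, 1] (a crude stand-in for [39])."""
    centre = gaussian_filter(gray.astype(float), sigma=2)
    surround = gaussian_filter(gray.astype(float), sigma=16)
    s = np.abs(centre - surround)
    return s / (s.max() + 1e-9)


def saliency_gate(saliency, keep_fraction=0.05):
    """Binary mask selecting the top `keep_fraction` most salient pixels,
    i.e., the only pixels handed to the expensive small-obstacle detector."""
    threshold = np.quantile(saliency, 1.0 - keep_fraction)
    return saliency >= threshold
```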

III. VISUAL ATTENTION FOR TRAIL DETECTION

Most of the challenges in trail detection are related to the lack of a well-defined morphology or appearance of trails. This limits the application of learning and model-based approaches.


Fig. 3. Obstacle detection example. The saliency map (top-right) is brighter on regions whose corresponding pixels in the image obtained from the left camera of the employed stereo-vision sensor (top-left) stand out more from the background. Note the sparse number of pixels where the small obstacle detector has actually been applied (white pixels in bottom-right), as a result of being guided by the saliency map. The output of the system is overlaid on the input image in the bottom-left, where white pixels correspond to regions further than 20 m or where 3-D information is missing.

As an alternative, we have exploited the observation that trails are usually salient structures in the robot's visual input [36]. Having experimentally demonstrated this, we have also shown that a straightforward application of visual saliency models to this task lacks robustness. This owes mostly to the frequent presence of distractors in the environment, which means that the most salient region is not necessarily at the actual trail location. To overcome this limitation, we have proposed a novel use of top-down knowledge about the typical trail's overall layout in the computation of the saliency map. The result was the ability to produce saliency maps that robustly localise the trail in 91% of the cases [36].

In short, the method starts by deploying a set of virtual agents (hereafter p-ants) on two previously computed conspicuity maps, one for colour, CC, and another for intensity, CI. A conspicuity map is like a saliency map but confined to a single visual feature. In typical computational saliency models (e.g., [39]), conspicuity maps are blended to generate the final saliency map, S. Alternatively, in our model the saliency map is the blend of the pheromone maps, PC and PI, generated by the set of p-ants according to the ant foraging metaphor. A p-ant starts its motion in the bottom region of a given conspicuity map and moves on it, while depositing pheromone on the pheromone maps, according to: (1) the fusion of a set of behaviours; (2) a random factor; and (3) a pheromone-attraction factor. The behaviours allow p-ants to exploit local conspicuity information in order to approximate their trajectory to the trail's skeleton. The pheromone-based interactions allow p-ants to implicitly help each other in building a consensus on the best approximation.
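A possible reading of this update rule is sketched below; the move set (one row up, column offset in {-1, 0, +1}), the weights, and the equal-weight blend of the pheromone maps are assumptions made for illustration and do not reproduce the behaviour set of [36].

```python
import numpy as np

rng = np.random.default_rng(0)


def p_ant_step(pos, conspicuity, pheromone,
               w_behaviour=1.0, w_pheromone=0.5, w_random=0.2, deposit=1.0):
    """One illustrative p-ant step on a conspicuity map: move one row up,
    picking the column offset that maximises a weighted sum of local
    conspicuity (behaviour term), pheromone attraction, and a random term,
    then deposit pheromone at the new position."""
    r, c = pos
    rows, cols = conspicuity.shape
    candidates = [(r - 1, min(max(c + dc, 0), cols - 1)) for dc in (-1, 0, 1)]
    candidates = [p for p in candidates if p[0] >= 0]
    if not candidates:  # reached the top of the map
        return pos
    scores = [w_behaviour * conspicuity[p] + w_pheromone * pheromone[p]
              + w_random * rng.random() for p in candidates]
    new_pos = candidates[int(np.argmax(scores))]
    pheromone[new_pos] += deposit
    return new_pos


def blend_saliency(pheromone_colour, pheromone_intensity):
    """Saliency map S as an (equally weighted) blend of the pheromone maps
    built on the colour and intensity conspicuities."""
    s = 0.5 * pheromone_colour + 0.5 * pheromone_intensity
    return s / (s.max() + 1e-9)
```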

The final saliency map is integrated across time in a dynamic neural field, F, which feeds back into the conspicuity maps of the subsequent frame. This process endows p-ants with historical influence, which is key for tracking the trail. Fig. 4 overviews and illustrates key aspects of the model. Refer to [36] for details.
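For illustration only, the temporal integration and feedback can be approximated by a leaky integrator whose activity biases the next frame's conspicuity maps; the actual field dynamics in [36] are richer than this, so the time constant and gain below are purely illustrative.

```python
import numpy as np


def update_field(field, saliency, tau=0.3):
    """Leaky temporal integration of the per-frame saliency map into F."""
    return (1.0 - tau) * field + tau * saliency


def feed_back(conspicuity, field, gain=0.5):
    """Bias the next frame's conspicuity map towards recently salient
    regions, giving the p-ants their historical influence."""
    biased = conspicuity * (1.0 + gain * field)
    return biased / (biased.max() + 1e-9)
```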

Fig. 4. Trail detection architecture (from [36]). The brighter the pixels on the conspicuity and pheromone maps, the higher the conspicuity and pheromone levels, respectively. The red overlays in both pheromone maps are illustrative paths of two p-ants. Note that the activity on the neural field corresponds accurately to the location of the trail in the input image, I.

IV. SWARM COGNITION FOR LOCAL NAVIGATION

To solve the local navigation problem we have proposed a model encompassing two interacting processes [23], [29], one for perception and another for action selection (see Fig. 5). Basically, after a new frame is received, the perceptual and action selection processes interact for several iterations before a final motor action decision is reached and eventually engaged. The interactions occurring between the two processes allow them to progressively unfold in parallel, and consequently enable accurate deployment of visual attention. Experimental results have shown the ability of the model to robustly control the robot with less than 1% of its visual input being analysed [23], [29].
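The per-frame coupling can be pictured as the skeleton below, where `compute_utilities` and `perception_step` are placeholders standing in for the action selection and p-ant based perceptual processes of [23], [29]; the fixed iteration count and the 15-action discretisation are assumptions.

```python
import numpy as np


def process_frame(frame, compute_utilities, perception_step,
                  n_iterations=10, n_actions=15):
    """Alternate action selection and perception for several iterations on
    the same frame before committing to a motor action (illustrative only)."""
    free_space = np.ones(n_actions)   # optimistic initial free-space estimate
    utilities = np.zeros(n_actions)
    for _ in range(n_iterations):
        # Action selection states its preferences given current free-space info.
        utilities = compute_utilities(free_space)
        # Perception focuses its effort where utilities make it worthwhile and
        # returns an updated free-space connectivity estimate.
        free_space = perception_step(frame, utilities)
    return int(np.argmax(utilities))  # index of the selected motor action
```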

Fig. 5. Local navigation architecture (from [23]).

At each interaction, the action selection process sends a message to the perceptual process with an action utility vector, which states the preference the action selection process has over the set of possible actions. The action utility vector is computed according to a desired heading of motion and constrained by information about the free-space connectivity of the local environment, which has been sent by the perceptual process in the previous iteration. In turn, the perceptual process uses the just-received utility vector to iterate its search for obstacles on a by-need basis, and thus in a parsimonious way.
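One hypothetical instantiation of `compute_utilities` from the sketch above is given below: a preference profile centred on the desired heading, masked by the free-space connectivity reported by the perceptual process. The Gaussian shape and its width are assumptions, not the rule used in [23].

```python
import numpy as np


def compute_utilities(free_space, desired_idx=7, width=2.0):
    """Illustrative action utility vector: Gaussian preference around the
    desired heading index, attenuated or zeroed where the perceptual
    process reports no free space."""
    idx = np.arange(len(free_space))
    preference = np.exp(-0.5 * ((idx - desired_idx) / width) ** 2)
    return preference * free_space
```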

A perceptual process iteration starts by appending new p-ants to the system. Then, each p-ant moves according to its set of behaviours (see Fig. 6), updating its 2-D position in the visual input. These behaviours include the ability of p-ants to interact via pheromones in order to improve their search for obstacles. Possibly, some of the p-ants will find an obstacle to track across iterations and even frames. In between frames, the position of every p-ant is updated to compensate for any robot motion that has occurred. This way, its position is kept relative to the environment and is consequently invariant to robot motion. This update is done by applying to the p-ant's 3-D position, obtained with stereoscopy from its 2-D position in the visual input, a transformation matrix representing the robot motion. The transformed 3-D point is then projected back onto the visual input with a projection matrix (obtained from calibration). In the process, due to robot motion, this projection may fall outside the visual input, which means that the p-ant has moved out of the robot's visual field of view. If at that moment the p-ant was tracking an obstacle, it is now said to constitute part of a body-centric local map. To keep the local map updated (i.e., body-centric), p-ants in that situation are also motion compensated in between frames. If a given part of the environment is revisited, the projected positions of some of these p-ants will again be within the visual field of view. Then, these p-ants are reactivated in order to track their associated obstacles in the visual input. At the end of each iteration, the 3-D positions of all p-ants that are tracking an obstacle, either in or out of the robot's visual field of view, may be used to inform the action selection process regarding the free-space configuration. Refer to [23], [29] for details.
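The motion-compensation step for a single p-ant might look like the sketch below, assuming its 3-D position is expressed in the previous body frame, that odometry provides a 4x4 rigid transform into the current body frame, and that a 3x4 projection matrix comes from calibration; the matrix names and the visibility test are illustrative.

```python
import numpy as np


def compensate_p_ant(p3d_prev, T_prev_to_curr, P_cam, image_shape):
    """Keep a p-ant anchored to the environment between frames:
    transform its stereo-derived 3-D position by the robot motion and
    reproject it onto the current image. Returns the new 3-D position and,
    if still inside the field of view, its pixel coordinates."""
    p3d_curr = (T_prev_to_curr @ np.append(p3d_prev, 1.0))[:3]
    uvw = P_cam @ np.append(p3d_curr, 1.0)  # project back onto the image
    if uvw[2] <= 0:                         # point is behind the camera
        return p3d_curr, None
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    h, w = image_shape
    visible = (0 <= u < w) and (0 <= v < h)
    # If not visible, the p-ant left the field of view: if it was tracking an
    # obstacle, it now contributes to the body-centric local map and keeps
    # being motion compensated until it possibly re-enters the view.
    return p3d_curr, ((u, v) if visible else None)
```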

Fig. 6. Local navigation example (from [23]) at four instants t0–t3, shown in panels (a)–(d). The example refers to a situation where the robot has to move towards a goal location 30 m ahead of its start position. To reach its goal (orange dot), the robot circumnavigates the large obstacle area visible in (a) and (b), which, as a consequence of the robot motion, leaves the sensor's field of view in (c) and (d). Left: p-ants overlaid on the image obtained from the left camera of the stereo-vision sensor, with vertical lines corresponding to the projection of possible robot motions (the darker, the higher their utility; absent lines represent zero-utility actions). Right: body-centric top-view, with the robot's path depicted in blue. Red dots represent p-ants that are leaving the bottom region of the input image to search for obstacles. Green dots represent p-ants that have found an obstacle and are consequently tracking it. Some of them are out of the field of view and so are only represented in the top-view figure. Yellow dots represent p-ants that are in a too cluttered region, i.e., with too many other p-ants, or that for some reason have lost the obstacle they were tracking. When in this mode, p-ants locally search for a non-cluttered obstacle region. The level of clutter is measured by the level of pheromone in the corresponding region (not represented).

V. CONCLUSION

Previous work on the use of visual saliency and swarm cognition to enable fast and robust off-road robots was overviewed. The purpose of this survey is to provide phenomenological support to the two main hypotheses raised in this article: (1) the parsimony, robustness, and performance of off-road robots are improved if visual attention mechanisms are employed in their control systems; and (2) the synthesis of self-organising robot cognitive behaviour is facilitated if the social insects collective behaviour metaphor is used as a design pattern.

The positive results obtained with visual attention for obstacle and trail detection, as well as for local navigation, confirm the first hypothesis. The positive results obtained from the extensive use of swarm models for trail detection and local navigation also lend support to the second hypothesis. These models rely heavily on self-organising properties, which endow the system with robustness in the face of unforeseen situations and allow complex solutions to emerge from the interaction of simple, and thus fast-to-compute, elements.

We have also discussed the importance of considering swarm-based solutions to better exploit emerging technologies where parallelism and randomness are prominent aspects. That is, since the computational models are already parallel by design, ineffective ad-hoc adaptations are not needed. Furthermore, being bottom-up, the system's overall behaviour emerges from the interaction of simple, mostly homogeneous, elements. Hence, this approach allows learning, if required, to occur in a well-specified, low-dimensional space.

As future work we envision the integration of all these contributions into a single framework capable of running on a parallel machine, such as an FPGA or a GPU. Motivated by the good results obtained with this pioneering work on the synthesis of embodied swarm cognition, we plan to further explore the theme in both theoretical and experimental domains.

ACKNOWLEDGMENT

This work was partially supported by FCT/MCTES grant No. SFRH/BD/27305/2006 and CTS multi-annual funding, through the PIDDAC Program funds.

REFERENCES

[1] S. Squyres, A. Knoll, R. Arvidson, J. Ashley, J. Bell III, W. Calvin, P. Christensen, B. Clark, B. Cohen, P. de Souza Jr et al., "Exploration of Victoria crater by the Mars rover Opportunity," Science, vol. 324, no. 5930, p. 1058, 2009.

[2] P. Tokekar, D. Bhadauria, A. Studenski, and V. Isler, "A robotic system for monitoring carp in Minnesota lakes," Journal of Field Robotics, vol. 27, no. 6, pp. 779–789, 2010.

[3] G. Sukhatme, A. Dhariwal, B. Zhang, C. Oberg, B. Stauffer, and D. Caron, "Design and development of a wireless robotic networked aquatic microbial observing system," Environmental Engineering Science, vol. 24, no. 2, pp. 205–215, 2007.

[4] L. Merino, F. Caballero, J. Martínez-de Dios, J. Ferruz, and A. Ollero, "A cooperative perception system for multiple UAVs: Application to automatic detection of forest fires," Journal of Field Robotics, vol. 23, no. 3-4, pp. 165–184, 2006.

[5] R. Murphy and S. Stover, "Rescue robots for mudslides: A descriptive study of the 2005 La Conchita mudslide response," Journal of Field Robotics, vol. 25, no. 1-2, pp. 3–16, 2008.

[6] D. Johnson, D. Naffin, J. Puhalla, J. Sanchez, and C. Wellington, "Development and implementation of a team of robotic tractors for autonomous peat moss harvesting," Journal of Field Robotics, vol. 26, no. 6-7, pp. 549–571, 2009.

[7] P. Santana, J. Barata, and L. Correia, "Sustainable robots for humanitarian demining," International Journal of Advanced Robotic Systems, vol. 4, no. 2, pp. 207–218, June 2007.

[8] M. K. Habib, "Humanitarian demining: Reality and the challenge of technology - the state of the arts," International Journal of Advanced Robotic Systems, vol. 4, no. 2, pp. 151–172, 2007.

[9] P. Santana, C. Candido, P. Santos, L. Almeida, L. Correia, and J. Barata, "The Ares robot: case study of an affordable service robot," in Proceedings of the European Robotics Symposium (EUROS 2008). Prague: Springer, March 2008, pp. 33–42.

[10] A. Beyeler, J. Zufferey, and D. Floreano, “Vision-based control of near-obstacle flight,” Autonomous Robots, vol. 27, no. 3, pp. 201–219, 2009.

[11] K. Low, G. Podnar, S. Stancliff, J. Dolan, and A. Elfes, "Robot boats as a mobile aquatic sensor network," in Proceedings of ESSA Workshop, 2009.

[12] M. Land, "Motion and vision: why animals move their eyes," Journal of Comparative Physiology A, vol. 185, pp. 341–352, 1999.

[13] A. Oliva and A. Torralba, “The role of context in object recognition,”Trends in Cognitive Sciences, vol. 11, no. 12, pp. 520–527, 2007.

[14] R. D. Beer, "A dynamical systems perspective on agent-environment interaction," Artificial Intelligence, vol. 72, no. 1-2, pp. 173–215, 1995.

[15] E. Thelen and L. B. Smith, A dynamic systems approach to the development of cognition and action. The MIT Press, 1996.

[16] R. C. Arkin, Behavior-Based Robotics. The MIT Press, May 1998.

[17] Z. W. Pylyshyn and R. W. Storm, "Tracking multiple independent targets: evidence for a parallel tracking mechanism," Spatial Vision, vol. 3, no. 3, p. 179, 1988.

[18] M. M. Doran, J. E. Hoffman, and B. J. Scholl, "The role of eye fixations in concentration and amplification effects during multiple object tracking," Visual Cognition, vol. 17, no. 4, pp. 574–597, 2009.

[19] R. Bajcsy, "Active perception," Proceedings of the IEEE, vol. 76, no. 8, pp. 996–1005, 1988.

[20] J. Aloimonos, I. Weiss, and A. Bandyopadhyay, "Active vision," International Journal of Computer Vision, vol. 1, no. 4, pp. 333–356, 1988.

[21] D. H. Ballard, "Animate vision," Artificial Intelligence, vol. 48, no. 1, pp. 57–86, 1991.

[22] O. Sporns and M. Lungarella, "Evolving coordinated behavior by maximizing information structure," in Proceedings of ALife X. Bloomington: MIT Press, 2006, pp. 3–7.

[23] P. Santana and L. Correia, "Swarm cognition on off-road autonomous robots," Swarm Intelligence, vol. 5, no. 1, pp. 45–72, 2011.

[24] L. Michael, "Ant-based computing," Artificial Life, vol. 15, no. 3, pp. 337–349, 2009.

[25] N. R. Franks, "Army ants: a collective intelligence," American Scientist, vol. 77, no. 2, pp. 138–145, 1989.

[26] K. M. Passino, T. D. Seeley, and P. K. Visscher, "Swarm cognition in honey bees," Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401–414, 2008.

[27] I. Couzin, "Collective cognition in animal groups," Trends in Cognitive Sciences, vol. 13, no. 1, pp. 36–43, 2009.

[28] J. A. R. Marshall and N. R. Franks, "Colony-level cognition," Current Biology, vol. 19, no. 10, pp. 395–396, 2009.

[29] P. Santana and L. Correia, "A swarm cognition realization of attention, action selection and spatial memory," Adaptive Behavior, vol. 18, no. 5, pp. 428–447, 2010.

[30] V. Trianni and E. Tuci, "Swarm cognition and artificial life," in Proceedings of the European Conference on Artificial Life (ECAL), 2009.

[31] J. Turner, "Termites as models of swarm cognition," Swarm Intelligence, vol. 5, no. 1, pp. 19–43, 2011.

[32] P.-P. Grassé, "La reconstruction du nid et les coordinations interindividuelles chez Bellicositermes et Cubitermes sp. La théorie de la stigmergie: Essai d'interprétation du comportement des termites constructeurs," Insectes Sociaux, vol. 6, pp. 41–80, 1959.

[33] H. Abelson, D. Allen, D. Coore, C. Hanson, G. Homsy, T. Knight Jr, R. Nagpal, E. Rauch, G. Sussman, and R. Weiss, "Amorphous computing," Communications of the ACM, vol. 43, no. 5, pp. 74–82, 2000.

[34] S. Martel and M. Mohammadi, "Using a swarm of self-propelled natural microrobots in the form of flagellated bacteria to perform complex micro-assembly tasks," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2010, pp. 500–505.

[35] P. Santana, M. Guedes, L. Correia, and J. Barata, "Stereo-based all-terrain obstacle detection using visual saliency," Journal of Field Robotics, vol. 28, no. 2, pp. 241–263, 2011.

[36] P. Santana, N. Alves, L. Correia, and J. Barata, "Fast trail detection: A saliency-based approach," in Proceedings of the International Conference on Robotics and Automation (ICRA 2010), Anchorage, Alaska, 2010.

[37] K. Konolige, M. Agrawal, M. R. Blas, R. C. Bolles, B. Gerkey, J. Sola, and A. Sundaresan, "Mapping, navigation, and learning for off-road traversal," Journal of Field Robotics, vol. 26, no. 1, pp. 88–113, 2009.

[38] R. Manduchi, A. Castano, A. Talukder, and L. Matthies, "Obstacle detection and terrain classification for autonomous off-road navigation," Autonomous Robots, vol. 18, no. 1, pp. 81–102, 2005.

[39] L. Itti, C. Koch, and E. Niebur, "A model of saliency-based visual attention for rapid scene analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 11, pp. 1254–1259, 1998.
