Abstract
A novel distributed hunting approach for multiple autonomous robots in unstructured model-free environments, based on effective sectors and local sensing, is proposed in this paper. Visual, encoder and sonar data are integrated in each robot's local frame, and the notion of an effective sector is introduced. The hunting task is modelled as three states: a search state, a round-obstacle state and a hunting state, with the corresponding switching conditions and control strategies given for each. Cooperation emerges although the robots interact only locally with one another. The evader, whose motion is a priori unknown to the robots, adopts an escape strategy to avoid capture. The approach is scalable and can cope with communication problems and wheel slippage. Its effectiveness is verified through experiments with a team of wheeled robots.
1. Introduction
Inspired by natural distributed multi-agent systems, with their parallelism, adaptability and fault tolerance, multiple-robot systems have attracted considerable interest [1–4]. Such systems require the robots to work cooperatively, without conflict, for better overall performance. As the demand grows for multiple robots working in unstructured and dynamic environments, organizing and coordinating them becomes more difficult. Robotic systems may also suffer from communication problems. In this situation, making full use of local sensing offers a better solution.
As a representative yet challenging test-bed for multiple robots, the hunting problem has received particular attention owing to the inherently dynamic character of competitive environments. The objective of hunting is to enable a team of robots to tactically search for and capture an evader that may react adversarially. Potential applications include hostile capture operations as well as security or search-and-rescue scenarios. In this paper, we are interested in distributed multi-robot hunting based on local sensing in unstructured model-free environments. In such a scenario, common sensors such as CCD cameras, sonar sensors and encoders are used to acquire information, and a practicable approach is proposed that can be readily implemented on ordinary mobile robots.
The hunting problem has been widely studied. Two classes of approaches have been investigated: those that rely on an environment model and those that operate without one. The former build a model of the environment as a grid or graph, either off-line or on-line. In [5], multiple robots pursue a non-adversarial mobile evader in indoor environments using map discretization, and simulation results are presented. In [6,7], the hunting and map-building problems are combined: a team of unmanned air and ground vehicles completes the task, with the air vehicle acting as a supervisory agent that can detect the evader but not capture it. In [8], a hunting algorithm based on a grid map is given. The case of one or more hunters pursuing an evading prey on a graph is presented in [9]. Maintaining a pursuer's visibility of an evader is investigated in [10,11].
Many approaches also work without an environment model. Yamaguchi presents a feedback control law, driven by formation vectors, for coordinating the motion of multiple mobile robots to capture/enclose a target by making troop formations [12]. Cao et al. study the hunting of an intelligent evader by multiple mobile robots, with the proposed approaches verified in simulation [13,14]. In [15], the prey is hunted by robots operating in four modes (navigation-tracking, obstacle avoidance, cooperative collision avoidance, and circle formation). In [16], pursuit-evasion games are considered with the aid of a sensor network. Biologically inspired approaches have also been introduced: Weitzenfeld discusses hunting inspired by wolf packs [17,18].
Other related work includes target tracking, which may offer helpful solutions. Multi-robot tracking of a moving object using directional sensors with limited range was carried out in [19]. Tracking objects with a sensor network consisting of distributed cameras and laser range finders is addressed in [20]. Liu et al. study multi-robot tracking of a mobile target [21] and give a three-layer framework (monitoring layer, target-tracking layer and motor-actuation layer).
The main contribution of this paper is an effective-sector-based distributed hunting approach for multiple autonomous robots in unstructured model-free environments. Cooperation emerges through local interaction using simple, specific individual behaviours. The approach avoids dependence on communication, and the long-term influence of wheel slippage is also eliminated.
The rest of the paper is organized as follows. Section 2 gives the distributed approach for the hunting system based on local sensing and effective sectors. Section 3 describes the escape strategy of the evader. Experimental results are presented in Section 4, and Section 5 concludes the paper.
2. The distributed approach for the hunting system
2.1. Control structure
The hunting control structure for multiple autonomous robots with a smart evader is shown in Fig. 1. Each robot acquires information about its surroundings through local sensing. The vision system recognizes and localizes objects of interest, including teammates and the evader, that are within its sight. Because the vision system cannot always provide valid data, encoder information is combined with it to estimate relative positions. The sonar data are used to detect potential hazards. The effective sector, which represents possible collision-free motion regions, is then introduced. Given the local sensory information and the effective sectors, the robot selects the task state suited to the current situation from the search, round-obstacle and hunting states, which provides the basis for effective hunting. The decision results are then sent to the actuators. The evader is endowed with a certain intelligence and tries to escape using an effective-sector-based strategy driven by its own sonar data.
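As a rough illustration of the state-selection step, the sketch below encodes the three task states and a hypothetical switching rule. The flag names and conditions are our own assumptions, not the paper's exact switching conditions, which depend on effective sectors and relative positions.

```python
from enum import Enum

class State(Enum):
    SEARCH = 1          # no information about the evader
    ROUND_OBSTACLE = 2  # an obstacle blocks the way to the evader
    HUNTING = 3         # a clear approach to the evader exists

def select_state(evader_visible, evader_predicted, obstacle_blocking):
    """Choose a task state from local-sensing flags.

    Hypothetical conditions: the robot searches when it has neither a
    sighting nor a valid prediction of the evader, rounds obstacles
    when its path is blocked, and otherwise hunts.
    """
    if not (evader_visible or evader_predicted):
        return State.SEARCH
    if obstacle_blocking:
        return State.ROUND_OBSTACLE
    return State.HUNTING
```

In this simplified form the decision is purely reactive: it is re-evaluated every control cycle from the latest local sensing.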

Fig. 1. Control structure for the hunting system
2.2. Local sensing
Each robot is described in a local polar coordinate frame whose pole is at the robot's centre and whose polar axis is aligned with its heading. The vision system of an individual robot consists of three cameras Sv(i) (i = 1, 2, 3) with limited fields of view, as shown in Fig. 2, where the arrow indicates the robot's heading.

Fig. 2. Vision system of an individual robot
Each robot carries a unique column marker, colour-coded with upper and lower parts drawn from a predefined finite set of distinctive colour combinations. A robot identifies objects of interest, including teammates and the evader, through visual recognition, and the relative information in its local frame can then be approximately calculated. When an object of interest is out of sight, its relative position is estimated for a certain time by integrating the historical observations with encoder information.
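The out-of-sight estimation can be sketched as simple dead reckoning in the robot's local frame: the last observed relative position is propagated through the robot's own encoder-reported motion, assuming the object is momentarily static. The function and frame conventions below are our own, not the paper's.

```python
import math

def propagate_relative_position(p, trans, dtheta):
    """Propagate a last-seen relative position through the robot's own
    motion, as reported by the encoders.

    p:      (x, y) of the object in the robot's previous local frame.
    trans:  (dx, dy) robot translation expressed in that same frame.
    dtheta: robot rotation in radians, counter-clockwise positive.
    The object is assumed static over the propagation interval.
    """
    # Shift by the robot's translation, then rotate into the new frame.
    rx, ry = p[0] - trans[0], p[1] - trans[1]
    c, s = math.cos(-dtheta), math.sin(-dtheta)
    return (c * rx - s * ry, s * rx + c * ry)
```

For example, if the robot drives 1 m straight ahead, an object last seen 2 m ahead is now estimated 1 m ahead; if the robot turns 90° counter-clockwise, an object last seen to its left is now estimated straight ahead.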
An array of sonar sensors Sk (k = 0, 1, …, ks-1) is used to detect the surrounding environment; the layout is shown in Fig. 3 with ks = 16. Each sonar sensor covers a bounded sector, and we denote the offset angle of sensor Sk as

Fig. 3. Sonar sensor array
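Assuming the ks = 16 sonars are evenly spaced around the robot with S0 aligned to the heading (an assumption on our part; the exact layout is that of Fig. 3), the offset angle of sensor Sk could be computed as:

```python
import math

def sonar_offset_angle(k, ks=16):
    """Offset angle (rad) of sonar S_k in the robot's local frame,
    assuming ks evenly spaced sensors with S_0 on the heading."""
    return 2 * math.pi * k / ks
```

Under this assumption, S8 points directly backwards and S4 points 90° to one side.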
To avoid treating the detected evader as an obstacle, the evader-related information must be eliminated from the sonar data. Assume that the robots and the evader have the same size, with radius r. We denote with (

Fig. 4. Filtering of evader-related information
The sensor numbers
Thus the sensor set corresponding to Ψ is given as follows:
Project St in Ω with
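A heavily simplified sketch of this filtering step, under our own assumptions (a nominal beam half-width and a range-matching threshold of r; the paper's actual sets Ψ and Ω are defined by the elided formulas above), might look like:

```python
import math

def filter_evader_readings(readings, offsets, evader_bearing,
                           evader_dist, r, beam_half=math.radians(15)):
    """Discard sonar returns likely caused by the evader.

    readings:  ranges from sensors S_0..S_{ks-1}
    offsets:   their offset angles (rad, robot local frame)
    evader_bearing, evader_dist: evader position from vision
    r:         common radius of robots and evader
    A reading is dropped when its sensor points toward the evader
    (within the beam plus the angle subtended by the evader) and its
    range agrees with the evader distance to within r.
    """
    half_subtended = math.asin(min(1.0, r / max(evader_dist, r)))
    kept = []
    for d, a in zip(readings, offsets):
        # Wrapped angular difference between beam axis and evader bearing.
        diff = math.atan2(math.sin(a - evader_bearing),
                          math.cos(a - evader_bearing))
        toward = abs(diff) <= beam_half + half_subtended
        if toward and abs(d - evader_dist) <= r:
            kept.append(None)   # evader-related: treat as no obstacle
        else:
            kept.append(d)
    return kept
```

A return from the rear-facing sensor at the same range is kept, since it cannot have come from the evader seen ahead.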
2.3. Effective sector
The effective sector is introduced to represent possible collision-free regions for an individual robot. We label as
There exists an effective sector

Fig. 5. Effective sector
c1)
c2)
c3)
If no sonar sensor has detected an object, or all detected distances are greater than
If the central line
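Under the simplifying assumption that an effective sector is a maximal circular run of sonar beams reporting no obstacle within a safe distance (the paper's conditions c1–c3 are elided above), the sectors could be extracted as follows; index pairs are inclusive and may wrap around the sensor ring.

```python
def effective_sectors(readings, d_safe):
    """Return (start, end) sonar-index pairs of maximal circular runs
    of free beams. A beam is free when it detected nothing (None) or
    its range exceeds d_safe. Sketch under our own assumptions."""
    n = len(readings)
    free = [(r is None) or (r > d_safe) for r in readings]
    if all(free):
        return [(0, n - 1)]           # the whole circle is free
    # Start the scan at a blocked beam so wrap-around runs stay whole.
    first_blocked = free.index(False)
    sectors, start = [], None
    for i in range(first_blocked, first_blocked + n):
        k = i % n
        if free[k]:
            if start is None:
                start = k
            end = k
        elif start is not None:
            sectors.append((start, end))
            start = None
    if start is not None:
        sectors.append((start, end))
    return sectors
```

For instance, with four beams and only beam 1 blocked, beams 2, 3 and 0 form a single sector (2, 0) that wraps past index 0.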
2.4. Hunting task model
The individual robot acquires information on the evader, teammates and obstacles in its local frame by local sensing, and the hunting task is modelled as three states: search state, round-obstacle state and hunting state, as shown in Fig. 6.

Fig. 6. Modelling of the hunting task
When the robot has no information about the evader, including the failed prediction of the evader (
Before describing each state in detail, we first introduce the directional passageway DP, which indicates whether the corresponding direction is safe. DP is described as a directional rectangle whose length and width are

Fig. 7. The directional passageway DP
For DP whose orientation angle is
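The safety test for a DP can be sketched as a point-in-rectangle check: rotate each sensed obstacle point into the passageway's frame and test it against a rectangle of the given length and width. The rectangle geometry follows the description above; the concrete test below is our own sketch.

```python
import math

def dp_is_safe(obstacle_points, phi, length, width):
    """Check a directional passageway leaving the robot at orientation
    angle phi (rad, robot local frame). The DP is safe when no sensed
    obstacle point (x, y) falls inside its rectangle."""
    c, s = math.cos(-phi), math.sin(-phi)
    for (x, y) in obstacle_points:
        # Coordinates of the point in the DP's own frame:
        # u along the passageway axis, v across it.
        u = c * x - s * y
        v = s * x + c * y
        if 0.0 <= u <= length and abs(v) <= width / 2.0:
            return False
    return True
```

An obstacle directly ahead blocks the forward DP (phi = 0) but leaves a sideways DP (phi = π/2) safe, and vice versa.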
2.4.1. Search state
In this state, the robot wanders around to find the evader. If a prediction has just failed, the robot first rotates for a certain time based on the evader's historical observation data.
2.4.2. Round-obstacle state
First, the robot should determine the preferred one of the two sides separated by
How to select the preferred side is the problem to be addressed. We denote with
Once the preferred side is obtained, the robot simply watches the evader when no effective sectors exist. Otherwise, starting from the starting edge of the first effective sector on the preferred side, the safety of directional passageways is judged at an angle interval of
2.4.3. Hunting state
Each robot must decide when to coordinate according to the distribution of nearby teammates and the evader. Robot

Fig. 8. Coordination based on local interaction
Let
When there is no coordination with other teammates,
2.5. Motion control
Based on these three states, if the robot needs to watch the evader, it rotates to bring the evader within a small angular range of its heading; otherwise, an ideal direction is generated and
Once a robot finds the evader visually and
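The rotate-to-watch behaviour can be sketched as a saturated proportional controller on the evader's bearing. The gain, tolerance and saturation values below are illustrative, not from the paper.

```python
def watch_control(bearing, angle_tol, w_max):
    """Turn-rate command that brings the evader's bearing (rad, in the
    robot's local frame) within +/- angle_tol of the heading.

    bearing:   evader bearing relative to the heading
    angle_tol: acceptable angular range around the heading
    w_max:     saturation limit on the turn rate
    """
    if abs(bearing) <= angle_tol:
        return 0.0                 # evader already near the heading
    k_w = 1.5                      # illustrative proportional gain
    w = k_w * bearing
    return max(-w_max, min(w_max, w))
```

The dead zone around zero bearing keeps the robot from chattering once the evader sits within the acceptable angular range.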
3. Escape strategy for the evader
Consider a situation in which the evader, using its sonar data, tries to avoid being captured by the robots. The motion of the evader is not known to the robots a priori. The evader adopts the same sonar model as the robots. When there is no danger within a virtual circle around the evader with a given radius of
The evader finds all effective sectors clockwise from sensor S0. Similar to the effective sector shown in Fig. 5(a), let
If
If the sector angle of
As soon as the evader repeatedly finds that there is no effective sector or safe directional passageway, or
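A minimal sketch of the escape-direction choice, simplifying the paper's selection criteria to "head for the centre of the widest effective sector" on a ring of ks evenly spaced sonars (both simplifications are ours):

```python
import math

def escape_direction(sectors, ks=16):
    """Pick an escape heading: the central direction of the widest
    effective sector. Sectors are inclusive (start, end) sonar-index
    pairs, possibly wrapping around the ring of ks sensors."""
    if not sectors:
        return None                # no effective sector available

    def width(sec):
        a, b = sec
        return (b - a) % ks + 1    # number of free beams, circularly

    a, b = max(sectors, key=width)
    mid_index = (a + ((b - a) % ks) / 2.0) % ks
    return 2 * math.pi * mid_index / ks
```

Wrap-around sectors are handled by the modular width, so a sector spanning the rear sensors 14 through 1 is correctly four beams wide and its centre lies between sensors 15 and 0.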
4. Experiments and results
In this section, the proposed approach is experimentally evaluated by a team of wheeled robots. The evader is also a robot.
Some parameters in the proposed approach are set as follows:
Several representative experiments are conducted. Besides the motion trajectories based on encoder information, a state diagram showing the transitions among states is also presented. For clarity, the three original states are subdivided into four: search, round-obstacle, hunting with coordination, and hunting without coordination, corresponding to m_state = 1, 2, 3, 4, respectively.
Experiment 1 adopts two robots, R1 and R2, to pursue a static evader. The initial positions of R1, R2 are S1 and S2, respectively. The motion trajectories of the two robots are shown in Fig. 9(a) and the state diagram of the robots is depicted in Fig. 9(b). The task is completed smoothly by local coordination between these two robots.

Fig. 9. A hunting experiment with two robots and a static evader
Experiment 2 requires three robots, R1, R2 and R3, to pursue a moving evader; the initial positions of the robots and the evader are S1, S2, S3 and SE, respectively. The experimental result is shown in Fig. 10. It can be seen that the hunting task is accomplished through the efforts of all the robots.

Fig. 10. Trajectories of the robots and the evader for experiment 2
Experiment 3 tests the robustness of the proposed approach. Two pursuer robots, R1 and R2, and an evader, with initial positions S1, S2 and SE, are involved in this scenario. R2 is assumed to stop suddenly because of a fault. The motion trajectories are depicted in Fig. 11, and Fig. 12 gives the state diagram of the robots. Initially, only R1 sees the evader and pursues it directly. After the evader is detected by R2, R2 also pursues it. A little later, R2 begins to coordinate with R1 and its m_state becomes 3, until it stops at location G2. R1 continues to execute the task, and the evader is finally captured at location GE.

Fig. 11. Motion trajectories for experiment 3

Fig. 12. State diagram of R1 and R2
5. Conclusions
To complete hunting tasks in dynamic and unstructured environments, while reducing communication needs and providing better scalability, this paper proposes a novel and practical hunting approach for a group of autonomous mobile robots based on effective sectors and local coordination. Teammates, the evader and obstacles are represented in each robot's local frame. The hunting task is modelled as three states, and coordination emerges through the local interactions of individual robots. The experimental results demonstrate the effectiveness of the proposed approach.
6. Acknowledgments
This work was supported in part by the National Natural Science Foundation of China under Grants 61273352, 61175111, 61227804 and 60805038.
