
Control of car-like (wheeled) multi robots for following and hunting a moving target


Scientia Iranica B (2011) 18 (4), 950–965

Sharif University of Technology Scientia Iranica
Transactions B: Mechanical Engineering www.sciencedirect.com

H. Sayyaadi*, H. Kouhi, H. Salarieh
School of Mechanical Engineering, Sharif University of Technology, Tehran, P.O. Box 11155-9567, Iran Received 22 December 2010; revised 8 May 2011; accepted 12 June 2011

KEYWORDS Multi robot; Multi agents; Following and hunting target; Car-like; Nonholonomic; Kalman filter.

Abstract The main purpose of this paper is to design a decentralized controller for car-like (wheeled) multi robots to follow and hunt a moving target. Considering geometric dimensions, mass and moment of inertia, the robots are very similar to actual cars, in which the outputs of the controller are the steering and driving wheel torques. All robots are equipped with range and bearing sensors, along with an antenna to communicate radio wave signals. A Kalman filter is implemented to estimate the relative positions, the state variables of the target and the state variables of the other robots. The controller is designed to carry out the group maneuver of the system, based on the system dynamics analysis of inertial agents, as well as minimizing the norm of the error between the desired and actual accelerations. A simulation study considering the group maneuver of four robots has been carried out, and the achieved results for positions, velocities and other relevant state variables of the agents are depicted in the paper. The control torques of the steering system and driving wheels are derived and depicted appropriately. These results confirm the performance of the addressed controller coupled with the Kalman filter, despite the existence of nonholonomic constraints in the dynamics, inertial behaviors, etc. © 2011 Sharif University of Technology. Production and hosting by Elsevier B.V. All rights reserved.

1. Introduction Over the last few years, multi robot control has attracted a great deal of attention from engineers and researchers [1], the main reason being the fact that multi robots are able to perform tasks that a single robot cannot. To be more specific, they are robust to single agent failures, each agent needs much less information than a single robot would, and they are capable of covering a region, flocking, group searching, following and hunting moving targets, etc. For example, previous research work on fundamentals of multi agents and networking includes:

1. The proportional task allocation problem, a combination of deployment and consensus problems, undertaken by Sayyaadi and Moarref [2];
2. Finite time consensus in directed switching network topologies, based on a time-delayed algorithm and communications, undertaken by Sayyaadi and Dostmohammadian [3].

Considering the desired mission of a group operation, a proper robot should be chosen for each task. Among different types of robot, the predominant role of wheeled robots is undeniable, due to the fact that they can travel long distances while carrying equipment needed for group operations. Considering practical applications in industry, communications and transportation, car-like robots are known to be the most predominant among wheeled robots. Easy manufacturing, low maintenance costs, the ability to carry loads and heavy equipment, having adequate energy on board, better efficiency, the ability to maneuver at high speed, proper suspension systems, etc. are the main reasons for the superiority of car-like robots. The importance of car-like robots essentially corresponds to the features of following and hunting moving targets. These features have a great deal of importance in military applications, security implementations and other scientific and practical fields. For example, they can be considered as security coverage operations for the leader of a group of agents.

* Corresponding author. E-mail address: sayyaadi@sharif.edu (H. Sayyaadi).

1026-3098 © 2011 Sharif University of Technology. Production and hosting by Elsevier B.V. All rights reserved. Peer review under responsibility of Sharif University of Technology. doi:10.1016/j.scient.2011.07.006


Nomenclature

Φ  Steering angle of robot
v  Front wheel velocity
IC  Moment of inertia relative to geometric center of robot
L  Robot's length
rw  Radius of robot's wheel
τ1  Steering torque of robot
Index d  Desired
kΦ  Constant of steering controller
q = [x y]T  Geometric center of robot
C(θ), S(θ)  Symbols of cosine and sine
τmax, τmin  Maximum and minimum of robot's actuator torque
Index t  Symbol of moving target
kt  Coefficient of target escape force
Ft  Escape force of moving target
Bt  Input matrix of target dynamics
Xi  Vector of robot i's variables
Xij  Vector of relative variables between two robots
Xit  Vector of relative variables between robot and target
Rij  Relative distance between two agents
p0  Signal strength attenuation at one meter distance
EKF  Extended Kalman Filter
w^i_p  Noise added to the signal strength sensor of robot i
w^i_R  Noise added to the distance sensor of robot i
w^it_yr  Noise added to the y coordinate of the tracer sensor
ei  Gaussian white noise
F̂^i_t  Estimation of target escape force by robot i
kp  Proportional control coefficient between two robots
k^t_p  Proportional control coefficient between a robot and target
Rt  Equilibrium distance between robot and target
Zij  Output of robot i's sensor containing agent j data
Θij  Output matrix of the sensor of robot i containing agent j data
R1^ij  Covariance matrix of the noise exerted on agent j, used in the estimation algorithm of robot i
kij(t)  Kalman filter gain used to estimate the state variables of agent j by robot i
θ  Body angle of robot
m  Mass of robot
d  Geometric distance of robot center from front and rear wheels
β  Angle between steering direction and x axis
cs  Constant of steering damping
τ2  Driving torque of robot
Φd  Desired steering angle
dt  Step time for discretization
q̇d, q̈d  Desired velocity and acceleration of robot's center
J  Cost function of robot performance
ρ  Coefficient of actuator filter
mt  Mass of moving target
Ct  Damping of target dynamics
xt, yt  Coordinates of target center
At  State matrix of target dynamics
XCi  Vector of robot i center variables
Xt  Vector of target center variables
xij, yij  Relative position between robots i and j
Ψij  Viewing angle of agent j by robot i
α  Coefficient of signal strength attenuation
E  Expected value
w^i_Ψ  Noise added to the view angle sensor of robot i
w^it_xr  Noise added to the x coordinate of the tracer sensor
σ  Standard deviation of noise w
φj(t)  State transition matrix of robot j in the inertial coordinate system
F^i_error  Estimation error of target escape force
kd  Derivative control coefficient between two robots
k^t_d  Derivative control coefficient between a robot and target
RR  Equilibrium distance between two robots
Ẑij  Output estimation of the sensor of robot i containing agent j data
φij(t)  State transition matrix of agent j in the coordinate system attached to robot i
R2^ij  Covariance matrix of the sensor of robot i containing agent j data
pij(t)  Covariance matrix of the state variables of agent j used in robot i's estimation algorithm

In this paper, the robots are considered to behave nonholonomically in dynamics and to require vigorous control, while their geometries and inertial effects are considered too. Also, the steering and driving torques are the outputs of the controllers, and the saturation and filtering effects of the actuators are considered. The robots in the group are controlled in such a way that each robot responds with an appropriate reaction, based on the control algorithm and the information passed down from other robots and the target (decentralized control). The force exerted on the moving target keeps the target away from the group. In order to estimate the relative positions and state variables of the other robots, each robot is equipped with an antenna, which measures the direction and strength of the signals transmitted by other robots; range and bearing sensors are used for estimating the state variables and relative position of the moving target. Using the output of these sensors and invoking the Kalman filter, each robot estimates the needed state variables of the other agents and uses them in its control algorithm to operate the group maneuver. In the following, some work related to following and hunting a moving target is presented. A control algorithm is designed for a unicycle-type robot in order to hunt a moving target, supposing that all information is at hand [4]. A task allocation method for multi-robot cooperative hunting behavior is proposed based on an improved auction algorithm [5]. The problem of navigation and guidance of a unicycle-type mobile robot towards a maneuvering target, based only on measurements of the distance from the robot to the target, is considered in [6]. A hybrid control approach to the problem of steering a group of unicycle-type mobile robots to reach desired relative positions and orientations, with respect to a specific target and other group-mates, referred to as the cooperative target enclosing problem, is presented in [7].
By using a hybrid control approach, some unicycle-type robots have been controlled to follow and hunt a moving target. In this approach,


basic assumptions are that all information, such as speeds and relative variables, is accessible [8]. Unicycle robots are controlled to enclose and capture a target; that work only uses local information gathered by on-board relative-position sensors assumed to be noisy, anisotropic, and unable to detect the identity of the measured object, while communication between robots is provided by limited-range transceivers [9]. The approach of cooperative hunting for multiple mobile targets by a multi-robot system is addressed in [10], which divides the pursuing process into forming pursuing groups and capturing the targets. Some drawbacks of these works are as follows. Robots are modeled as unicycles or as point masses in the plane. The kinematics of the robot is used for designing the controller. The target moves on a specific path and does not escape the robots. Positioning by image processing is not applicable at far distances, and some of these works suppose all information is accessible. Some studies related to car-like robot group maneuvers are addressed here. The formation of car-like multi robots and their transition to another formation have been studied in [11]. The most important work regarding car-like robots was described in a doctoral dissertation [12]. In that research, the main goal was to control multiple car-like robots following a leader, with the whole system moving in a certain geometrical shape. It was presumed that each robot had a leader, and this configuration continued chain-like to the main leader. Therefore, the control algorithm for each robot was defined such that it followed its leader at a certain relative distance and angle [13]. The main leader, ahead of the group, was responsible for guidance. It was assumed that the main leader had a constant velocity; meanwhile, the design of the controller for accelerated inertial agents has its own specific rules.
The control algorithm was based on an optimization procedure in which the cost function was minimized using numerical methods; this consumes a lot of CPU time because of the huge amount of calculation. Moving through obstacles and changing the formation is another research topic [14]. Integration of wireless network signal strength data with vehicle sensor information, by means of a Kalman filter, is proposed to estimate the relative position of each vehicle in a robot formation [15]. The WiFi (Wireless Fidelity) data consist of the Reception Signal Strength (RSS) and the angle of the maximal RSS with respect to robot orientation [16]. This estimator was not incorporated into the group movement algorithm and was defined for just one robot. In order to represent the control algorithm and design the estimator, only kinematic equations were used, and mass, moment of inertia and the main inputs, which are torques, were neglected. Based on the type of operation, each car-like robot needs a specific controller. Work undertaken in regard to controlling a single robot includes: parking a robot in a specific place, finding the best path for movement [17,18], and robot movement on a path with obstacles [19], etc. There are few studies about hunting a moving target, and performing this operation using car-like robots is the goal of the present article. The target escapes from the robots and does not have any geometric or kinematic constraints; it moves freely, and it is not assumed that all necessary information is available. Therefore, sensors and estimators are used. The design of the estimator is based on the dynamics equations of the robot. The control algorithm is designed in such a way that a series of robots or obstacles can be added to the group without interrupting the group maneuver. Because the behavior of a group of inertial agents is different from that of a group of non-inertial agents, in order to design the controller, the dynamics of a group

of inertial agents is explicated for the group of robots, and the controller is designed based upon it. In multi agent systems and multi robot applications, it is very difficult to predict the path each agent takes. The control algorithm used in this article is easy to apply and, compared to other methods, does not need a huge amount of calculation. Using a decentralized controller with only local information, the assumptions in this paper are more compatible with real conditions than those of previous work. It is not necessary to design a controller for the robot body angle directly; it is controlled indirectly, and the algorithm is scalable. In this paper, the term agent is used for both the robot and the target. This article includes an explanation of the kinematics and kinetics of the robot, the design of the controller for the robot, an explanation of the dynamics of the moving target, an explanation of the sensors, the design of the estimator for the robot and moving target, the design of the controller for the group of robots, an explanation of the indirect control of the body angle of the robot, simulation results, nomenclature and, finally, concluding remarks. 2. Kinematics and kinetics of the car-like robot As seen in Figure 1(a), the robot used here is a car-like robot equipped with four wheels, considered with its geometry, mass and moment of inertia. The steering and driving torques on the front wheels are the outputs of the controller. In addition to operating control algorithms and defining torques, the robots are equipped with sensors and are capable of measuring the steering angle, the body angle of the robot, the wheel speed and, as a result, the velocity and acceleration of each point of themselves. Because each robot measures its body angle, it is aware of its orientation in the inertial coordinate system in all positions. Measuring the body angle is done by using an INS [20] or a compass-based sensory system.
Based on the theorems developed in the kinetics and kinematics of car vehicles, the steering angles of the two front wheels have a slight difference, in order to allow an instantaneous axis of rotation. These angle differences do not cause noticeable discrepancies between the two and four wheel robot model equations. In order to obtain the motion equations, the robot is assumed to be two wheeled [19,21]. In Figure 1, θ is the body angle, Φ is the steering angle, v is the front wheel speed, m is the robot mass, c is the geometrical center of the robot, d is the distance between the geometrical center and the two wheels, Ic is the moment of inertia with respect to the geometrical center, xy is the symbol of the inertial coordinate system, F1 is the robot driving force (the reaction of the force that the wheels exert on the ground surface), F2 is the force exerted on the robot from the robot inertia and the velocity constraint of the front wheel, and F3 is the force exerted on the robot from the robot inertia and the velocity constraint of the rear wheel. It is also assumed that the steering angle constraint is −π/2 ≤ Φ ≤ π/2. The mass and inertia of every wheel are neglected, and the vehicle moves on the horizontal plane. There is also no slippage between the wheels and the surface. The kinematic equations are derived based on the above presumptions, and also assuming the robot to be rigid. Eqs. (1) and (2) define velocity constraints on the front and rear wheels; this means that the linear and angular velocities of the robot must satisfy these two constraints:

ẋc sin β − ẏc cos β − dθ̇ cos Φ = 0, β = θ + Φ, (1)
ẋc sin θ − ẏc cos θ + dθ̇ = 0. (2)

xc and yc are the Cartesian coordinates of the robot center. In this article, x, y, C and S are abbreviations used instead of xc, yc,


Figure 1: Schematic diagram of a four wheeled car-like robot (a), and simplified to two wheeled robot (b).

cos, sin. Using Eqs. (1) and (2), and using the newly mentioned abbreviations, the following equations are derived [22,23]:

a1 = CΦ Cθ − (SΦ Sθ)/2, a2 = CΦ Sθ + (SΦ Cθ)/2, a3 = SΦ/(2d), (3)
ẋ = v a1, (4)
ẏ = v a2, (5)
θ̇ = v a3. (6)

Eqs. (4) and (5) are the velocity components of the robot center and Eq. (6) is the angular velocity of the robot, expressed as functions of the front wheel speed (v) and the angles Φ and θ. v and Φ are kinematic inputs, and ẋ, ẏ and θ̇ are kinematic outputs. Based on these equations, one cannot find any value of v that simultaneously satisfies the three equations ẋd = v a1, ẏd = v a2, θ̇d = v a3, where d stands for the desired condition.

In Eqs. (7) and (8), τ1′ and τ2′ are the main torques of steering and driving, rw is the radius of the wheel and cs is the damping factor of the steering system:

Φ̇ = τ1′/cs = τ1, (7)
F1 = τ2′/rw = τ2. (8)

In order to simplify these equations, normalization of the parameters has been carried out: the inputs are Φ̇ = τ1 and F1 = τ2. Finally, by multiplying and dividing the results by rw and cs, the main torques are derived. The damping effects of the steering system dominate the inertia effects, and therefore in Eq. (7) only the damping coefficient is included. Using the variables defined in Eq. (9) and differentiating Eqs. (4)–(6), the acceleration equations, Eqs. (10)–(12), are derived. Note that because Eqs. (4)–(6) are valid at all times, they may be differentiated with respect to time:

a4 = SΦ Cθ + (Sθ CΦ)/2, a5 = Sθ SΦ − (Cθ CΦ)/2, (9)
ẍ = v̇ a1 − τ1 v a4 − v² a3 a2, (10)
ÿ = v̇ a2 − τ1 v a5 + v² a3 a1, (11)
θ̈ = v̇ a3 + τ1 v CΦ/(2d). (12)

Eqs. (10)–(12) express the relation between the derivative of the wheel speed and the steering torque on the one hand, and the robot acceleration on the other. These equations are used to derive the kinetic motion equations. The Newton–Euler method is used to derive the dynamics equations of the robot [23,20]. By defining the new parameters in Eq. (13) and using the Newton–Euler equations, Eq. (14) is obtained:

k1 = IC/(dm), a6 = a1² + a2² + k1 d a3², a7 = (k1 CΦ/2) a3 − a5 a2 − a4 a1, (13)

[Cβ  Sβ   Sθ ] [F1]     [ẍ    ]
[Sβ  −Cβ  −Cθ] [F2] = m [ÿ    ]. (14)
[SΦ  −CΦ   1 ] [F3]     [k1 θ̈]

Eq. (14) expresses the relation between the forces and the acceleration of the robot. By substituting Eqs. (10)–(13) into Eq. (14), solving the system of equations and deriving F1, Eq. (15) is obtained. Eq. (15) expresses the relation between the driving torque exerted on the wheel, the steering torque and the derivative of the wheel speed:

τ2 = F1 = m(v̇ a6 + τ1 v a7). (15)

The robot governing equations can be written in state space format as follows:

x1 = x, x2 = ẋ, x3 = y, x4 = ẏ, x5 = θ, x6 = θ̇, x7 = v, x8 = Φ. (16)

And finally it is summarized as:

ẋ1 = x2,
ẋ2 = −x7² a2 a3 − x7(a1 a7/a6 + a4)τ1 + a1/(m a6) τ2,
ẋ3 = x4,
ẋ4 = x7² a1 a3 − x7(a2 a7/a6 + a5)τ1 + a2/(m a6) τ2,
ẋ5 = x6,
ẋ6 = x7(CΦ/(2d) − a3 a7/a6)τ1 + a3/(m a6) τ2,
ẋ7 = −x7 a7/a6 τ1 + 1/(m a6) τ2,
ẋ8 = τ1, (17)

which is written compactly as:

Ẋ = f(X, τ), X = [x1 x2 ··· x8]T, τ = [τ1 τ2]T. (18)

Eq. (18) is a continuous equation in the time domain. After discretization, the state space update is derived, as shown in Eq. (19):

X(t + 1) = f(X(t), τ(t))dt + X(t) = F(X(t), τ(t)). (19)
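To make the discretization concrete, the following is a minimal Python sketch of the continuous dynamics f(X, τ) of Eqs. (10)–(13) and (15)–(17) together with the Euler update of Eq. (19). The mass, half-length and inertia values are illustrative placeholders, not parameters taken from the paper:

```python
import math

# Illustrative robot parameters (placeholders, not values from the paper)
m, d, I_C = 10.0, 0.5, 2.0
k1 = I_C / (d * m)          # Eq. (13)

def f(X, tau):
    """Continuous-time dynamics Xdot = f(X, tau), Eqs. (10)-(13), (15)-(17)."""
    x, xd, y, yd, th, thd, v, phi = X
    t1, t2 = tau
    C, S = math.cos, math.sin
    a1 = C(phi)*C(th) - S(phi)*S(th)/2
    a2 = C(phi)*S(th) + S(phi)*C(th)/2
    a3 = S(phi)/(2*d)
    a4 = S(phi)*C(th) + S(th)*C(phi)/2
    a5 = S(th)*S(phi) - C(th)*C(phi)/2
    a6 = a1**2 + a2**2 + k1*d*a3**2
    a7 = (k1*C(phi)/2)*a3 - a5*a2 - a4*a1
    vdot = -t1*v*a7/a6 + t2/(m*a6)           # solved from Eq. (15)
    return [xd,
            vdot*a1 - t1*v*a4 - v**2*a3*a2,  # Eq. (10)
            yd,
            vdot*a2 - t1*v*a5 + v**2*a3*a1,  # Eq. (11)
            thd,
            vdot*a3 + t1*v*C(phi)/(2*d),     # Eq. (12)
            vdot,
            t1]                              # steering: Phi_dot = tau1

def euler_step(X, tau, dt):
    """Discrete update of Eq. (19): X(t+1) = f(X(t), tau(t))*dt + X(t)."""
    return [xi + fi*dt for xi, fi in zip(X, f(X, tau))]

# Robot heading along x with v = 1 and zero torques simply translates forward:
X = [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0]
X1 = euler_step(X, (0.0, 0.0), 0.01)
```

As the text notes, a Runge–Kutta step could replace `euler_step` for better precision; the Euler form is kept here because it matches the filter equations.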


The main reason why Eq. (19) is defined is that it is used in the Kalman filter that estimates the state variables. The discretization method used in Eq. (19) is the Euler method; the Runge–Kutta discretization method, which has better precision, could be used instead. Although the discretization error of the Euler method is greater, because an estimator is used, the error is corrected continuously. Eq. (19) is only used in the discretized Kalman filter, and the equations of the real model are solved using a better method.

3. Representing a control algorithm for the robot

A nonholonomic system can achieve any configuration; however, unlike holonomic controllable systems, there is no continuous, differentiable state feedback that changes the configuration of a nonholonomic system. This means a feedback depending on both time and the state variables is required. Feedback linearization of a nonholonomic system is almost impossible. The robot kinematics equations can be transformed into a ''chain form'', which is controllable, but the reverse mapping, which transforms the ''chain form'' equations back into the robot kinematics equations, is generally not an easy task [24]. The ''chain form'' brings a very complex control algorithm, involving a huge amount of calculation. In this paper, a simple and transparent control strategy is addressed, such that there is no need to use the ''chain form''. Considering the fact that this article aims to control a number of robots in order to hunt and loop around a target, control of the center coordinates of each robot, q = [x y]T, is adequate. The robot body angle controller is not designed directly; its control is indirect and spontaneous. It is observed in the following that the body angle of the robot converges to a desired value that is physically acceptable, because of the type of maneuver and the governing control equations. In [25], the control of nonholonomic systems is explicitly explained.
Consider the fact that the robot constraints are not geometric, which means that there is no position limitation: each point of the robot can travel to any arbitrary position (the system is reachable) but, because of the velocity constraint, no point can travel in any arbitrary direction (the system is uncontrollable in this sense). Movement in an arbitrary direction is only possible when the desired movement agrees with the kinematic constraint equation, and these special cases rarely happen. For nonholonomic robots, it is neither possible to define the velocity of the robot in such a way that q̇ = q̇d, nor to define the torques in such a way that q̈ = q̈d. Here q̈d is the desired acceleration. As an example, in order to track qt and q̇t by q and q̇, the desired acceleration is designed in the form q̈d = −kd(q̇ − q̇t) − kp(q − qt) + q̈t, and the control signal is a function of q̈d, q̇ and q. The design of the desired acceleration is explained in Section 8 of this article. In [19], the Euclidean norm of the difference between real and desired velocities is optimized with a special control signal. In this article, the Euclidean norm of the difference between the real and desired accelerations is optimized to obtain the control action. There is another point of view, in which the Euclidean norm of the difference between the desired and real coordinates is optimized. Considering the fact that the control torque at any time interval affects the subsequent coordinates, optimizing just one term of the Euclidean norm of the coordinates does not lead to a satisfactory answer. A complete form of this algorithm is described in the control literature as MPC (Model Predictive Control).

Figure 2: Qualitative view of cost function at two different steering angles.

For a non-linear model, in order to operate the MPC algorithm, future prediction, numerical optimization and a huge amount of calculation are needed. On the other hand, because the torque at any time interval defines the acceleration at that time, it is possible to design a suitable controller merely by optimizing the Euclidean norm of the difference between the desired and real accelerations. Generally, it cannot be deduced that this method is suitable for all systems. This algorithm involves only a ''parameter optimization'', which is easy and has an analytical answer. Eq. (20) defines the cost function that needs to be optimized:

J = ‖q̈ − q̈d‖. (20)

Figure 2 qualitatively shows the cost function (Eq. (20)) at two different steering angles. When the steering angle Φ is in a proper position, Φ2, a suitable driving torque, τ2, can be found that even makes the cost function zero. Therefore, an intuitive approach enables this algorithm to be used in the best possible way. The design of the controller in this section is done under the presumption that the variables ẍd, ÿd, ẋd and ẏd are known. The first two are the desired accelerations of the robot center; the method used to derive them is explained later, in the multi robot control strategy. The other two are the desired velocities, which are derived by integrating the accelerations. By replacing Eqs. (10) and (11) into Eq. (20), Eqs. (21) and (22) are obtained:

J² = (ẍ − ẍd)² + (ÿ − ÿd)², (21)
J² = (v̇ a1 + v ȧ1 − ẍd)² + (v̇ a2 + v ȧ2 − ÿd)². (22)
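The desired accelerations ẍd, ÿd entering the cost function come from the tracking law q̈d = −kd(q̇ − q̇t) − kp(q − qt) + q̈t mentioned earlier. A minimal sketch of that law follows; the gains and the sample states are illustrative placeholders:

```python
# Desired-acceleration tracking law from the text:
#   qdd_d = -kd*(qd - qd_t) - kp*(q - q_t) + qdd_t
# Gains kp, kd and the sample states below are illustrative placeholders.
kp, kd = 4.0, 2.0

def desired_acceleration(q, qd, q_t, qd_t, qdd_t):
    """PD tracking of target position/velocity plus target acceleration feedforward."""
    return [-kd*(qd[i] - qd_t[i]) - kp*(q[i] - q_t[i]) + qdd_t[i] for i in range(2)]

# Robot at origin at rest, target at (1, 0) moving with unit x-velocity:
qdd_d = desired_acceleration([0.0, 0.0], [0.0, 0.0], [1.0, 0.0], [1.0, 0.0], [0.0, 0.0])
# qdd_d = [-2.0*(0-1) - 4.0*(0-1) + 0, 0.0] = [6.0, 0.0]
```

With positive kp and kd, the tracking error dynamics are stable, which is the property the group controller relies on.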

Finally, the cost function takes a form in which the two input variables, τ1 and τ2, appear: τ1 appears in the equations describing the ȧj terms, and τ2 is derived from v̇ and τ1 (Eq. (15)). First, v̇ is derived from Eq. (23), and then τ2, presuming that τ1 is known:

∂J²/∂v̇ = 0, (23)
∂J²/∂τ1 = 0. (24)

τ1 is obtained using an intuitive method. An equation such as Eq. (24) could be used to evaluate τ1, but then there would be no control over the angle Φ, because Φ̇ = τ1, and the steering angle should not exceed the predefined upper and lower limits. Therefore, τ1 is obtained by a different method, in order not to violate the steering angle


constraint. Using Eq. (22) and solving Eq. (23), the optimized value of v̇ is obtained as follows:

v̇ = (ẍd a1 + ÿd a2)/a8 + v(a4 a1 + a5 a2)τ1/a8, a8 = a1² + a2². (25)

Using Eqs. (15) and (25), the optimized value of τ2 is also obtained:

τ2 = m a6 (ẍd a1 + ÿd a2)/a8 + τ1 m v [(a6/a8)(a4 a1 + a5 a2) + a7]. (26)
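The closed forms of Eqs. (25) and (26) can be checked numerically. The sketch below evaluates them and verifies that the resulting v̇ is no worse on the cost of Eq. (22) than a perturbed value; since J² is quadratic in v̇ with positive coefficient a8, the stationary point is the minimizer. The robot parameters and test state are illustrative placeholders:

```python
import math

# Illustrative parameters (placeholders, not values from the paper)
m, d, I_C = 10.0, 0.5, 2.0
k1 = I_C / (d * m)

def coeffs(th, phi):
    """The a1..a8 parameters of Eqs. (3), (9), (13) and (25)."""
    C, S = math.cos, math.sin
    a1 = C(phi)*C(th) - S(phi)*S(th)/2
    a2 = C(phi)*S(th) + S(phi)*C(th)/2
    a3 = S(phi)/(2*d)
    a4 = S(phi)*C(th) + S(th)*C(phi)/2
    a5 = S(th)*S(phi) - C(th)*C(phi)/2
    a6 = a1**2 + a2**2 + k1*d*a3**2
    a7 = (k1*C(phi)/2)*a3 - a5*a2 - a4*a1
    a8 = a1**2 + a2**2
    return a1, a2, a3, a4, a5, a6, a7, a8

def optimal_inputs(th, phi, v, tau1, xdd_d, ydd_d):
    a1, a2, a3, a4, a5, a6, a7, a8 = coeffs(th, phi)
    vdot = (xdd_d*a1 + ydd_d*a2)/a8 + v*(a4*a1 + a5*a2)*tau1/a8      # Eq. (25)
    tau2 = m*a6*(xdd_d*a1 + ydd_d*a2)/a8 \
         + tau1*m*v*((a6/a8)*(a4*a1 + a5*a2) + a7)                   # Eq. (26)
    return vdot, tau2

def J2(th, phi, v, tau1, vdot, xdd_d, ydd_d):
    """Cost of Eq. (22) via the accelerations of Eqs. (10) and (11)."""
    a1, a2, a3, a4, a5, _, _, _ = coeffs(th, phi)
    xdd = vdot*a1 - tau1*v*a4 - v**2*a3*a2
    ydd = vdot*a2 - tau1*v*a5 + v**2*a3*a1
    return (xdd - xdd_d)**2 + (ydd - ydd_d)**2

th, phi, v, tau1, xdd_d, ydd_d = 0.3, 0.2, 1.5, 0.1, 0.5, -0.2
vdot, tau2 = optimal_inputs(th, phi, v, tau1, xdd_d, ydd_d)
assert J2(th, phi, v, tau1, vdot, xdd_d, ydd_d) <= J2(th, phi, v, tau1, vdot + 0.05, xdd_d, ydd_d)
```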

In Eq. (26), τ2 is defined in an optimized manner, as a function of τ1 . In order to obtain τ1 , it is just needed to define the desired Φ . ˙ . Eq. (27) shows As a result, τ1 is obtained from equation τ1 = Φ ˙ , in which the terms Φd (desired the desired dynamics for Φ ˙ d will be calculated. kΦ is constant gain and should angle) and Φ be tuned. In order to calculate τ1 , it is needed to determine the desired dynamic of Φ . As a result, τ1 is calculated from equation ˙. τ1 = Φ

tracks Φd without overshoot and the steering angle constraint is not violated. Because of the properties of tan?1 , angle Φd discontinuously changes from ±π /2 to ?π /2 and due to the filter defined by Eq. (35), angle Φ converges to angle Φd mildly. ˙ d , and Sudden changes of Φd do not lead to sudden changes in Φ ˙ d is still a suitable input for Eq. (35). Φ Considering the fact that the goal of this article is to control a group of robots, this algorithm is applied to a group of robots. One of the advantages of this algorithm is that it performs group robot control easily and the results are appropriate. Therefore, complicated algorithms are not required to achieve this goal. There is a question that needs to be answered here, and that is, ‘‘why it is endeavored to make a control signal written in ˙ and v the form of a torque instead of Φ ˙ , which are kinematics parameters that can be used as the control signal?’’ There are two main answers to this question, which are: 1. Because of robot inertia, it is adequate to analyze torque, and a more clear meaning is observed from this method rather ˙ and v than using Φ ˙. 2. The saturation and filtering effects of the actuators on ideal torques should be considered and then the actual torque is applied to the model; this makes the problem look more realistic. In much work done earlier using kinematic controllers, the abilities of actuators to supply control signals were not investigated; this can be a drawback. When the desired torque is very large, the real actuator can only apply its maximum output, or when the input signal has many oscillations, a mechanical system has a limited capability to follow these commands because of its inertia. In this article, the effects of filtering and saturation of the actuator are investigated to show that although the actuator is not ideal, it can be controlled properly. Eq. (37) explains the saturation effects in actuators. Eq. 
(38) shows the inability of a non-ideal actuator to create control signals with high frequency.

˙ = ?kΦ (Φ ? Φd ) + Φ ˙ d. Φ

(27)

In order to calculate Φd , some efforts are made to find the best steering angle. The best angle is the angle by which the center ˙ d and y ˙d. of the robot is moved toward an arbitrary direction, x By solving Eqs. (28) and (29) simultaneously and defining the new variables into Eq. (30), the best steering angle is obtained.

˙ d = v a1 , x ˙ d = v a2 , y ˙dC θ ? x ˙dSθ , a9 = y ˙dSθ + x ˙dC θ . a10 = y

(28) (29) (30)

Eq. (31) shows the desired steering angle, and the current one should converge to it.

Φd = tan

?1

?
2

a9 a10

?

.

(31)

τ=

?

u

τmax sign(u)
u(s) ρs + 1

|u| ≤ τmax |u| ≥ τmax

(37) (38)

As seen, because tan?1 function is used, the angle varies between ?π /2, π /2 and satisfies the steering constraint. The steering angle should correctly follow the desired angle and ˙ d should appear in the dynamics angle equation, as therefore Φ stated in Eq. (27). By defining new variables into Eq. (32) and ˙ d is calculated by taking derivatives from Eqs. (30) and (31), Φ Eq. (34).

τ (s) =

u is an ideal signal, τ is the output of a non-ideal actuator and ρ in Eq. (38) is the reverse of the filter pole, and when it is increased, the output ability to follow step input decreases. Eq. (39) shows the discrete form of the filter. dt is the step time for discretization.

¨dC θ ? x ¨d S θ , b1 = y ¨dSθ + x ¨d C θ , b3 = y ˙ 9 = b 1 + v a3 b 2 , a ˙d = 2 Φ ˙ 9 a10 ? a ˙ 10 a9 a
a2 10

˙dC θ , b2 = ?˙ yd S θ ? x ˙dC θ ? x ˙dSθ , b4 = y ˙ 10 = b3 + v a3 b4 , a .
(32) (33)

τ (t + 1) = τ (t )e

?dt ρ

? ? dt dt + u(t + 1) + u(t )e? ρ . 2ρ

(39)

4. Kinematics and kinetics of the moving target

Figure 3 shows that the moving target has mass and size, and it is assumed that it can move in any direction without any geometric constraint. Eq. (40) shows the dynamics of the target, in which mt is the mass, kt is the escaping force coefficient and Ct is the damping constant; the index t stands for the target:

mt [ẍt   ÿt]^T = kt Ft − Ct [ẋt   ẏt]^T.  (40)

The escaping force increases as the distance between the target and a robot decreases, so the target escapes with a stronger force as a robot approaches. When a robot


H. Sayyaadi et al. / Scientia Iranica, Transactions B: Mechanical Engineering 18 (2011) 950–965

Figure 3: Target position relative to the two following robots.

Figure 4: Schematic diagram showing signal transmission between transmitter robot j and receiving robot i.

is closer to the target than the other robots, it exerts a stronger repulsive force on the target. The damping coefficient is a factor that does not allow the velocity to grow without bound; this property is usually observed in natural and artificial systems. Eq. (41) shows how the target escape force is evaluated:

Ft = Σ_{i=1}^{n} ēit / Rit = Σ_{i=1}^{n} r̄it / ‖r̄it‖².  (41)
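The escape force of Eq. (41) sums, over all robots, unit vectors pointing from each robot to the target, weighted by the inverse distance; a small sketch (function name illustrative):

```python
def escape_force(target, robots):
    """Ft of Eq. (41): sum of r_it / |r_it|^2, where r_it is the vector
    from robot i to the target (Eq. (42))."""
    fx = fy = 0.0
    for (xi, yi) in robots:
        rx, ry = target[0] - xi, target[1] - yi
        r2 = rx * rx + ry * ry
        fx += rx / r2
        fy += ry / r2
    return fx, fy
```

A single robot one meter behind the target pushes it forward with unit strength, and a symmetric pair of robots produces no net force on the target.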

In Eq. (42), r̄it is the relative position vector of robot i with respect to the target. In this article, it is presumed that the target's physical properties are pre-identified:

r̄it = [xt − xi   yt − yi]^T = [xit   yit]^T.  (42)

5. Governing presumptions about sensors

The moving target can identify its position relative to the centers of the robots and calculate the escape force from the group based on it. The robots include sensors that can measure the steering angle, body angle and front wheel speeds precisely. The robot torque values are also known from the control algorithm. Robots transmit their own data, such as torques, steering angles, body angles and front wheel speeds, to the other robots using radio transmission facilities [16]. In order to determine large relative distances, radio signals are used instead of image processing, and the relative positions of the robots are obtained from these signals. The transmitted signal of each robot specifically belongs to that robot, and each has its own frequency, different from the other robots [15,26]. The receiving robot measures the direction and strength of the signal, both with some added noise, and identifies the robot that sent it. Positioning using signal strength and direction is explained in [27]. The receiving robot also receives information such as torques, steering angles, body angles and front wheel speeds with some small noise, but most of this noise is in the signal strength rather than in the other transmitted information. The further two robots are from each other, the more the RSS (received signal strength) decreases between them; this is a good reference for calculating their relative positions [28]. Robots can measure the view angle and relative distance of the moving target using image processing on noisy captured images [29,30]. However, this estimation suffers from two major difficulties: far distances and disrupted images, for example, when another robot is in between the target and the observer robot. It is possible to fuse the information obtained by the other sensors to solve such problems, or to use the method proposed in this article. In this method, it is presumed that a tracer, such as flying equipment, is used to find the relative distance between the target and the robots. The noise added to the tracer information is presumed to be more dominant than the noise entering the image processing algorithm used by the robots themselves; however, both of them are used to find the best state estimation.

6. Design of estimators to estimate robot state variables

In this section, the estimation of relative positions and state variables is investigated. In Figure 4, an arbitrary robot, j, and an estimator robot, i, are shown. The estimator robot should estimate the state variables and the relative position of the center of robot j, using the signal received from robot j's antenna. In this article, the Kalman filter is used as an optimum estimator. In Figure 4, the distance between the centers of the two robots, Rij, and the receiving angle of the signal, Ψij, are shown. Robot j transmits its own information to robot i, including some noise. Robot i distinguishes the incoming signal from robot j by its frequency and then operates the estimation algorithm for robot j. The velocity and position variables of robot j are measured with respect to the xi-yi coordinate system. In Eq. (43), the first element expresses the attenuation of the strength (dB) of the signal transmitted by robot j [16,28]: p0 is the attenuation of the signal strength one meter away from the source and α is the strength attenuation coefficient. Robot i receives the transmitted signal mixed with noise wP^i. The noise in the received signal depends on many factors, which are explained in [31]; wP^i is white Gaussian noise with (0, σp²) characteristics. The second element in Eq. (43) is the angle of the received signal, which is mixed with white Gaussian noise with (0, σΨ²) characteristics:

z1^ij = [ p0 + α ln(Rij) + wP^i ;  Ψij + wΨ^i ].  (43)

The second entry in Eq. (43) is an angle, and its partial derivatives are needed in the Extended Kalman Filter (EKF) algorithm; if used as stated above, there would be many difficulties in the numerical calculation. In such a situation, it is possible to transform the measured data, using the representation proposed in Eq. (44):

z1^ij = [ cos(Ψij + wΨ^i)(p0 + α ln(Rij) + wP^i) ;  sin(Ψij + wΨ^i)(p0 + α ln(Rij) + wP^i) ].  (44)
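The change of representation from Eq. (43) to Eq. (44) is a direct polar-to-Cartesian-like mapping of the received (strength, angle) pair; a minimal sketch, with illustrative names:

```python
import math

def transform_measurement(strength, angle):
    """Map the raw pair (p0 + alpha*ln(Rij) + noise, Psi_ij + noise)
    of Eq. (43) into the form of Eq. (44)."""
    return (math.cos(angle) * strength, math.sin(angle) * strength)

# Noise-free example with p0 = 1 (dB) and alpha = 5/ln(10) (dB), as in Table 1:
p0, alpha = 1.0, 5.0 / math.log(10.0)
z = transform_measurement(p0 + alpha * math.log(10.0), 0.0)
```

A robot at range 10 m and bearing 0 then maps to (p0 + 5, 0), i.e. (6, 0).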

By these changes, the physical concept of the problem is undoubtedly unaltered. This does not mean that the characteristics of the sensor are changed; only the mathematical representation of the measurement is changed. Eq. (44) is the final output of the sensor, which is used in the EKF algorithm. However, in order to calculate the output matrix and sensor covariance, it is assumed that the noise exerted on the angle is so small that its sine can be approximated linearly by its argument (Eq. (45)):

cos(Ψij + wΨ^i) ≈ cos(Ψij) − sin(Ψij) wΨ^i,
sin(Ψij + wΨ^i) ≈ sin(Ψij) + cos(Ψij) wΨ^i.  (45)

These procedures are only carried out to calculate the output matrix and covariance. If the angle noise is not small enough, the calculated noise covariance and output matrix may have a small error; in this case, the Kalman filter is no longer optimal, but it is still a proper estimator. The covariance of the noise in the actual sensor is thus different from the covariance calculated in theory. Efforts have been made to relate the sine and cosine values to the relative position coordinates. In Eq. (46), xij and yij are the relative position coordinates between robots i and j:

cos(Ψij) = xij / Rij,   sin(Ψij) = yij / Rij,   Rij = norm(xij, yij).  (46)

By defining the new variables in Eq. (47) and using them, Eq. (44) is expressed as Eq. (48):

H^ij = [ (xij/Rij)(p0 + α ln(Rij)) ;
         (yij/Rij)(p0 + α ln(Rij)) ],

Q1^ij = [ −(yij/Rij)(p0 + α ln(Rij)) σΨ     (xij/Rij) σp ;
           (xij/Rij)(p0 + α ln(Rij)) σΨ     (yij/Rij) σp ],

Q2^ij = [ −yij/Rij ;  xij/Rij ] σΨ σp,  (47)

z1^ij = H^ij + Q1^ij [e1   e2]^T + Q2^ij e1 e2.  (48)

In Eq. (48), e1 and e2 are white Gaussian noises with (0, 1) characteristics, i.e. wΨ^i = σΨ e1 and wP^i = σp e2. In order to calculate the output matrix and use it in the EKF, the relative variables between the two robots should be identified. The state variables of robot j are given in the following equation:

Xj = [xj   ẋj   yj   ẏj   θj   θ̇j   vj   Φj]^T.  (49)

In this article, the relative variables are the position and velocity of the geometrical centers of the two robots with respect to each other. It is not necessary to consider the relative form of the last four terms in Eq. (49); the last four terms are used in an absolute way (that is, robot j is observed from the coordinate system attached to robot i). As a result, using Eqs. (49) and (50), the relative variables of robot j with respect to robot i are defined in Eq. (51). The relative variables are the state variables of the estimator (X_estimator = Xij):

XCi = [xi   ẋi   yi   ẏi   0   0   0   0]^T,  (50)

Xij = Xj − XCi.  (51)

After defining the relative variables between two robots, the output matrix of the first sensor is calculated (Eq. (52)):

Θ1^ij = ∂H^ij/∂Xij |_Xij = (1/Rij³) ×
  [ yij²(p0 + α ln(Rij)) + α xij²      −xij yij (p0 + α(ln(Rij) − 1))     0   0_{1×5} ;
    −xij yij (p0 + α(ln(Rij) − 1))      xij²(p0 + α ln(Rij)) + α yij²     0   0_{1×5} ].  (52)

Considering the fact that the body angle, steering angle and wheel speed of robot j are transmitted to robot i, the output of the second sensor has the form of Eq. (53), in which ei is white Gaussian noise with (0, 1) characteristics. This information is still mixed with some small noise, but efforts have been made in this article to filter this noise away in an optimized manner:

z2^ij = [θj   vj   Φj]^T + σθvΦ [e3   e4   e5]^T,

σθvΦ = [ σθ   0    0 ;
         0    σv   0 ;
         0    0    σΦ ].  (53)

The noise in the transmitted data is presumed to be very small compared to the noise in the signal strength and received signal direction. Using Eq. (53), the output matrix for the EKF is obtained by Eq. (54):

ΘθvΦ = [ 1   0   0   0 ;
         0   0   1   0 ;
         0   0   0   1 ],

Θ2^ij = [ 0_{3×4}   ΘθvΦ ].  (54)

In Eq. (55), the outputs of both sensors are expressed simultaneously in order to calculate the covariance of the sensor:

Z^ij = [ H^ij ;  Θ2^ij Xij ] + [ Q1^ij    0_{2×3} ;
                                 0_{3×2}  σθvΦ ] e_{5×1} + [ Q2^ij ;  0_{3×1} ] e1 e2.  (55)

Eq. (56) is the final output matrix:

Θ^ij = [ Θ1^ij ;  Θ2^ij ].  (56)

Whenever it is multiplied by Xij, the output calculated in the Kalman algorithm is in linearized form. Eq. (57) expresses the governing rules for the random variables used in this paper:

ei ~ (0, 1),   E(ei²) = σi² = 1,   E(ei² ej²) = σi² σj² = 1,   E(ei ej) = 0,   i ≠ j.  (57)

By using Eq. (57), the covariance matrix of the sensor noise is transformed into Eq. (58):

R2^ij = [ Q1^ij Q1^ijT   0 ;              [ Q2^ij Q2^ijT   0 ;
          0   σθvΦ σθvΦ^T ]       +         0              0 ].  (58)

It should also be noticed that robot i uses the estimated values to calculate this covariance. Considering the fact that the disturbance on the robot dynamics is presumed to be zero, the noise covariance matrix of the model is zero (Eq. (59)):

R1^ij = [0_{8×8}].  (59)
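The 2×2 position block of Θ1^ij in Eq. (52) is the Jacobian of the noise-free sensor output H^ij with respect to (xij, yij). The sketch below (all names illustrative, parameters as in Table 1) checks that block against a finite-difference approximation:

```python
import math

P0, ALPHA = 1.0, 5.0 / math.log(10.0)   # attenuation parameters, as in Table 1

def h(x, y):
    """Noise-free sensor output H^ij of Eq. (47)."""
    r = math.hypot(x, y)
    p = P0 + ALPHA * math.log(r)
    return (x / r * p, y / r * p)

def jacobian_block(x, y):
    """2x2 position block of Theta1^ij from Eq. (52); note that
    p0 + alpha*(ln R - 1) equals p - alpha."""
    r = math.hypot(x, y)
    p = P0 + ALPHA * math.log(r)
    c = 1.0 / r**3
    return ((c * (y * y * p + ALPHA * x * x), c * (-x * y * (p - ALPHA))),
            (c * (-x * y * (p - ALPHA)), c * (x * x * p + ALPHA * y * y)))
```

A central-difference check at (xij, yij) = (3, 4) agrees with the analytic block to numerical precision, confirming the entries of Eq. (52).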


In order to perform the EKF algorithm, an estimate of the sensor output is needed; Eq. (60) is the estimate of the output, in which the estimated relative variables are used:

Ẑ^ij = [ Ĥ^ij ;  Θ2^ij X̂ij ].  (60)

Using Eq. (19), the Jacobian matrix of robot j is obtained:

φj(t) = ∂F(Xj(t), τ(t))/∂Xj(t) |_{Xj(t)} = ∂f(Xj(t), τ(t))/∂Xj(t) |_{Xj(t)} dt + I_{8×8}.  (61)
(61)
Figure 5: Schematic diagram showing range and bearing between robot i and the target.

This matrix is based on the assumption that robot j is observed from an inertial coordinate system. The point that should be noted here is that the Jacobian matrix of robot j observed from the local coordinate system attached to robot i differs from the one observed from the global coordinate system; there is, however, a relation between the two, which is derived below. From Eqs. (19), (50) and (51), Eq. (62) is obtained:

Xj(t + 1) = Xij(t + 1) + XCi(t + 1),
Xij(t + 1) = f(Xij(t) + XCi(t), τ(t)) dt + Xij(t) + XCi(t) − XCi(t + 1).  (62)

The above equation reveals the governing dynamics of the relative variables (Eq. (51)). Eq. (63) is obtained from Eqs. (19) and (51):

f(Xij(t) + XCi(t), τ(t)) = f(Xij(t), τ(t)) + f(XCi(t), τ(t)).  (63)

From Eqs. (19) and (51) and using the Eulerian discretization, Eq. (64) is derived:

f(XCi(t), τ(t)) = [ẋi   0   ẏi   0   0_{1×4}]^T,
f(XCi(t), τ(t)) dt = [xi(t + 1) − xi(t)   0   yi(t + 1) − yi(t)   0   0_{1×4}]^T.  (64)

By substituting Eq. (63) into Eq. (62) and using Eq. (64), Eq. (65) is derived:

f(XCi(t), τ(t)) dt + XCi(t) − XCi(t + 1)
  = −[0   ẋi(t + 1) − ẋi(t)   0   ẏi(t + 1) − ẏi(t)   0_{1×4}]^T
  = −[0   ẍi(t)   0   ÿi(t)   0_{1×4}]^T dt = G(ẍi(t), ÿi(t)).  (65)

Using Eqs. (62) and (65), Eq. (66), which is another form of Eq. (62), is derived; it shows that the dynamics of the relative variables are not a function of the absolute position of any robot:

Xij(t + 1) = f(Xij(t), τ(t)) dt + Xij(t) + G(ẍi(t), ÿi(t)).  (66)

By knowing the dynamics of the relative variables, the Jacobian matrix of robot j, from the viewpoint of robot i, is calculated by the following equation:

φij(t) = ∂Xij(t + 1)/∂Xij(t) |_Xij
       = ∂[f(Xij(t), τ(t)) dt + Xij(t) + G(ẍi(t), ÿi(t))]/∂Xij(t) |_Xij
       = ∂f(Xij(t), τ(t))/∂Xij(t) |_Xij dt + I_{8×8}
       = φj(t) |_{Xj = X̂ij}.  (67)

As a result, in order to calculate the Jacobian matrix, it is enough to substitute the relative variables (Eq. (51)) for the absolute variables in the absolute Jacobian matrix (Eq. (61)). Eq. (68) gives the noise on the received torque signal; this noise is small, like the noise on the steering angle, body angle and front wheel speed of robot j:

[τ̂1   τ̂2]^T = [τ1   τ2]^T + ντ.  (68)

Since that noise is small, when the torque noise passes through the actuator filter, it is almost completely filtered out, and the output of the actuator is as if the torque had entered the actuator without any noise. Eqs. (69)–(72) are the Kalman filter equations for estimating the relative vector defined in Eq. (51); robot i implements this algorithm to estimate the relative variables between robot j and itself:

k^ij(t) = φij(t) p^ij(t) Θ^ijT(t) [R2^ij(t) + Θ^ij(t) p^ij(t) Θ^ijT(t)]^{−1},  (69)

p^ij(t + 1) = [φij(t) − k^ij(t) Θ^ij(t)] p^ij(t) φij^T(t) + R1^ij(t),  (70)

X̂ij(t + 1) = [f(X̂ij(t) + XCi(t), τ(t)) − ẊCi(t)] dt + X̂ij(t) + k^ij(t) [Z^ij(t) − Ẑ^ij(t)],  (71)

Z^ij = [z1^ij   z2^ij]^T.  (72)

As seen earlier, it is not necessary to bring the XCi position terms into Eq. (71); f in the above equation is the same as f in Eq. (18). The outputs used in Eq. (72) are the real sensor outputs, introduced in Eqs. (44) and (53).

7. Design of the estimator to estimate velocity and relative position of the target

Estimation of the relative position and velocity of the moving target by a robot is explained here. As expressed in the previous section, using image processing, robots are able to measure the distance and viewing angle of the target, mixed with noise [29,30]. It is also possible to measure the relative distance between a robot and the target, with greater noise, using another tracer, such as flying equipment; this method is useful when the view of the target is obstructed. In normal situations, these two data sources are mixed to find an optimized estimate using a Kalman filter (Figure 5).
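The recursions of Eqs. (69)–(71), and likewise Eqs. (94)–(96) below, have the standard Kalman filter structure: gain, covariance update, then state correction. A minimal scalar sketch of that structure (not the 8-state robot filter itself; all names illustrative):

```python
def kalman_step(x_hat, p, z, phi=1.0, theta=1.0, r2=1.0, r1=0.0):
    """One gain/covariance/state step mirroring Eqs. (69)-(71) for a
    scalar system x(t+1) = phi*x(t) with measurement z = theta*x + noise."""
    k = phi * p * theta / (r2 + theta * p * theta)      # Eq. (69)
    p_next = (phi - k * theta) * p * phi + r1           # Eq. (70)
    x_next = phi * x_hat + k * (z - theta * x_hat)      # Eq. (71)
    return x_next, p_next

# Estimating a constant (phi = 1) from exact measurements z = 5:
x_hat, p = 0.0, 10.0
for _ in range(200):
    x_hat, p = kalman_step(x_hat, p, 5.0)
```

With zero process noise (r1 = 0, matching Eq. (59)), the covariance shrinks at every step and the estimate converges to the measured constant.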


Eq. (73) is the real sensor output, which is transformed into Eq. (74) in order to avoid computational problems; the same procedure was undertaken in the previous section of the article:

z1^it = [ Ψit + wΨ^i ;  Rit + wRit^i ],  (73)

z1^it = [ cos(Ψit + wΨ^i)(Rit + wRit^i) ;  sin(Ψit + wΨ^i)(Rit + wRit^i) ].  (74)

Eq. (74) is just a mathematical representation of the sensor output and does not change the nature of the sensor. Eq. (75) gives the state variables of the moving target and Eq. (76) the relative variables between the target and robot i:

Xt = [xt   ẋt   yt   ẏt]^T,  (75)

Xit = Xt − XCi^{1:4} = [xit   ẋit   yit   ẏit]^T.  (76)

Eq. (77) is the tracer sensor output. It is assumed that this information is sent to the robots by the flying equipment; the noise in this data is greater than the noise of the distance and view angle sensor of the robot. As discussed in the previous section, when the view of the target is obstructed, robots can also use the information obtained from robots that can view the target; in that case, robot communication would be needed. The case studied here is different and is based on the tracer data:

z2^it = [ xit + wxr ;  yit + wyr ].  (77)

In Eq. (78), the noise characteristics and their relations are given; these are used to calculate the covariance matrix:

wΨ^i ~ (0, σΨ²),   wRit^i ~ (0, σR²),   wxr ~ (0, σxr²),   wyr ~ (0, σyr²),
ws^i = σs es,   es ~ (0, 1),   E(e1 e2) = 0,   E(e1² e2²) = 1.  (78)

By defining the new variables in Eq. (79), the outputs of the sensors transform into Eq. (80):

H^it = [xit   yit   xit   yit]^T,

Q1^it = [ −yit σΨ   (xit/Rit) σR   0    0 ;
           xit σΨ   (yit/Rit) σR   0    0 ;
           0        0              σx   0 ;
           0        0              0    σy ],

Q2^it = [ −(yit/Rit) σΨ σR ;  (xit/Rit) σΨ σR ;  0 ;  0 ],

e = [e1   e2   e3   e4]^T,  (79)

Z^it = [z1^it   z2^it]^T = H^it + Q1^it e + Q2^it e1 e2.  (80)

Eq. (81) is the output matrix to be used in the EKF and Eq. (82) is the noise covariance matrix:

Θ1^it = [ 1   0   0   0 ;
          0   0   1   0 ],   Θ^it = [ Θ1^it ;  Θ1^it ],  (81)

R2^it = Q̂1^it Q̂1^itT + Q̂2^it Q̂2^itT.  (82)

Using Eqs. (40) and (75) and defining the matrices in Eq. (83), the state space form of the moving target dynamics is written as Eq. (84):

At = [ 0   1        0   0 ;
       0   −Ct/mt   0   0 ;
       0   0        0   1 ;
       0   0        0   −Ct/mt ],

Bt = [ 0       0 ;
       kt/mt   0 ;
       0       0 ;
       0       kt/mt ],  (83)

Ẋt = At Xt + Bt Ft.  (84)

In Eq. (85), the state transition matrix is given, and Eq. (86) is the discretized form of the moving target dynamics:

φt = At dt + I_{4×4},  (85)

Xt(t + 1) = φt Xt(t) + Bt Ft dt.  (86)

Eq. (87) is the dynamics of the variables of the moving target relative to robot i:

Xit(t + 1) = φt (Xit(t) + XCi(t)) − XCi(t + 1) + Bt Ft dt.  (87)

In Eq. (87), it seems that the dynamics of the relative variables are a function of the position variables, XCi. However, because of the structure of matrix At (having two zero columns), the relative variables are not a function of the absolute position of the robot; Eq. (88) shows how this is proven:

φt XCi(t) − XCi(t + 1) = At XCi(t) dt + XCi(t) − XCi(t + 1) = At XCi(t) dt − ẊCi(t) dt
                       = [h(ẋi, ẏi) − ẊCi(t)] dt,
Xit(t + 1) = φt Xit(t) + Bt Ft dt + [h(ẋi, ẏi) − ẊCi(t)] dt.  (88)

Eq. (87) is thus changed to Eq. (88). Considering Eq. (88), it is deduced that the φt matrix is the same as in Eq. (85), no matter what the coordinate system is. Eq. (89) shows the output estimate:

Ẑ^it = Ĥ^it.  (89)

By presuming that the moving target knows the exact relative distances, it is able to calculate the escape force; the robots, however, use relative position estimates, hence Ft is different from F̂t^i. Eq. (90) shows how F̂t^i is calculated by robot i, in which the estimated values, which contain errors, are used:

F̂t^i = Σ_{j=1}^{n} êjt / R̂jt = Σ_{j=1}^{n} r̂jt / ‖r̂jt‖²,   r̂it = [x̂it ;  ŷit],  (90)

Ft = F̂t^i + Ferror^i.

The difference between Ft and F̂t^i is Ferror^i. On the other hand, Ferror^i is completely unknown and there is no specific model applicable to it. According to Eq. (91), a white Gaussian noise can be considered in such a way that its effect on the state variables is like the Ferror^i effect (N means normal):

Ferror^i dt ~ N(0, σt σt^T dt).  (91)

If it is possible to replace the disturbance with a white Gaussian noise with the characteristics given in Relation (91), then it is possible to use the Kalman filter equations to estimate position; the simulation results confirm this. After this presumption,
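The target model of Eqs. (83)–(86) can be stepped directly; the sketch below applies one Euler step of Eq. (86) per call (state transition φt = At dt + I) and verifies that, with no escape force, the damping Ct makes the target slow down. Parameter values follow Table 1; the function name is illustrative:

```python
def step_target(state, force, mt=1.0, ct=1.0, kt=1.0, dt=0.05):
    """One step of Eq. (86), X(t+1) = (At*dt + I) X(t) + Bt Ft dt,
    with state = [x, x_dot, y, y_dot] and force = (Ftx, Fty)."""
    x, vx, y, vy = state
    ax = (kt * force[0] - ct * vx) / mt    # rows of At, Bt from Eq. (83)
    ay = (kt * force[1] - ct * vy) / mt
    return [x + vx * dt, vx + ax * dt, y + vy * dt, vy + ay * dt]

# Free motion: an initial velocity decays under the damping Ct.
s = [0.0, 1.0, 0.0, 0.0]
for _ in range(100):
    s = step_target(s, (0.0, 0.0))
```

After five seconds the velocity has decayed almost to zero while the target has coasted forward, consistent with a first-order damped model.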


it is important to have a correct estimate of the standard deviation σt of the noise, obtained in an intuitive and empirical way; it depends on the amplitude and frequency of the disturbance force (or, equivalently, on the errors of the relative position estimation). Eq. (92) explains the dynamics of the moving target from the viewpoint of robot i:

mt [ẍt   ÿt]^T = kt (F̂t^i + Ferror^i) − Ct [ẋt   ẏt]^T.  (92)

In Eq. (93), the process noise covariance matrix is calculated. As a result, the Kalman algorithm operates using only F̂t^i and R1^it, without needing Ft:

R1^it = Bt σt σt^T Bt^T dt.  (93)
Figure 6: Physical descriptions of the control algorithm stated by mechanical elements.

Eqs. (94)–(96) are the Kalman filter equations that robot i uses to estimate the relative position and velocity of the moving target:

k^it(t) = φt(t) p^it(t) Θ^itT(t) [R2^it(t) + Θ^it(t) p^it(t) Θ^itT(t)]^{−1},  (94)

p^it(t + 1) = [φt(t) − k^it(t) Θ^it(t)] p^it(t) φt^T(t) + R1^it(t),  (95)

X̂it(t + 1) = [At (X̂it(t) + XCi(t)) − ẊCi(t)] dt + X̂it(t) + Bt F̂t^i dt + k^it(t) [Z^it(t) − Ẑ^it(t)],
Z^it = [z1^it   z2^it]^T.  (96)

As mentioned earlier, it is not necessary to enter the position terms XCi into Eq. (96). The outputs Z^it used in Eq. (96) are the real outputs of the sensors (Eqs. (74) and (77)). By having adequate information about the moving target and its escape force, it is possible to calculate its absolute acceleration for use in the next section.

8. Design of the controller for the group of robots

In this section of the article, the desired accelerations, ẍd and ÿd, for the robots are calculated. The aim of this study is to control a group of robots to follow a moving target and loop around it. Looping can be done either in a manner where the moving target is completely confined by the group of robots, or in a way where the robots make coverage around the target while moving. The aim is for the robots to follow the moving target and to arrange themselves at given relative distances from it, in such a way that each robot also keeps a certain relative distance from the other robots. By dynamical analysis of the inertial agents, the desired acceleration for each robot is calculated. The governing dynamics of massless agents are different from those of inertial agents: each inertial agent tends to keep its velocity. The control rules derived for a massless group of robots cannot be applied directly to an inertial group of agents; they need to be combined with other algorithms to achieve the goal. If a group of inertial agents is not controlled correctly, the group may become chaotic. As a result, it is important to know the dynamical behavior of the agents. There is a suitable algorithm that is simple and compatible with an intuitive point of view and with dynamical analysis; in [32], the desired accelerations are calculated using that method, and in this section of the article a more complete approach is used to determine the desired acceleration. Figure 6 shows a group of inertial agents. This group includes a moving target and two tracking robots. Each of the agents has mass, and the controller output is a force. It is necessary to calculate the desired acceleration of each robot in order to calculate the forces or torques exerted on it. In Section 3 of the article, the relationship between the desired acceleration and the torques is defined; in this section, the appropriate desired acceleration is calculated. In Figure 6, ks, the elasticity coefficient, is the symbol of proportional control, and CL and CR are linear and rotational damping coefficients, respectively, which are symbols of derivative control. This point of view can be useful for designing the controller and tuning its coefficients. The controller for one car-like robot is explained in Section 3. In Figure 6, the goal for the robots is to follow the target using a group maneuver and to be at specified relative distances from the target. The desired acceleration of each robot is the summation of the desired accelerations applied by the other robots and by the target. Eq. (97) shows the moving target acceleration and Eq. (98) the desired acceleration of robot i:

at = [ẍt   ÿt]^T,  (97)

q̈di = [ẍdi   ÿdi]^T.  (98)

Eq. (99) expresses the relative vector, the distance and the direction between two agents:

r̄ij = [xj − xi   yj − yi]^T,   Rij = ‖r̄ij‖,   ēij = r̄ij / Rij.  (99)

Eq. (100) expresses the desired acceleration of robot i, in which n is the number of robots, Rij is the distance between two agents, RR is the equilibrium distance between robots, Rt is the equilibrium distance between the robots and the target, kp is the proportional control coefficient for adjusting distances, kd is the derivative coefficient that reduces the relative speed between robots, and kp^t and kd^t are the proportional and derivative coefficients between the robots and the target:

q̈di = Σ_{j=1, j≠i}^{n} ( kp (Rij − RR) ēij + kd [ẋj − ẋi ;  ẏj − ẏi] )
      + kp^t (Rit − Rt) ēit + kd^t [ẋt − ẋi ;  ẏt − ẏi] + at.  (100)
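Eq. (100) can be sketched directly; the helper below (illustrative name) returns the desired acceleration of robot i given the positions and velocities of the robots and the target, with gains and equilibrium distances taken from Table 1:

```python
import math

def desired_acceleration(i, pos, vel, t_pos, t_vel, t_acc,
                         kp=6.0, kd=1.0, kpt=0.5, kdt=4.0,
                         RR=5.0 * math.sqrt(2.0), Rt=5.0):
    """q_dd_i of Eq. (100) for robot i."""
    ax = ay = 0.0
    xi, yi = pos[i]
    vxi, vyi = vel[i]
    for j, ((xj, yj), (vxj, vyj)) in enumerate(zip(pos, vel)):
        if j == i:
            continue
        rx, ry = xj - xi, yj - yi
        rij = math.hypot(rx, ry)
        ax += kp * (rij - RR) * rx / rij + kd * (vxj - vxi)
        ay += kp * (rij - RR) * ry / rij + kd * (vyj - vyi)
    rx, ry = t_pos[0] - xi, t_pos[1] - yi
    rit = math.hypot(rx, ry)
    ax += kpt * (rit - Rt) * rx / rit + kdt * (t_vel[0] - vxi) + t_acc[0]
    ay += kpt * (rit - Rt) * ry / rit + kdt * (t_vel[1] - vyi) + t_acc[1]
    return ax, ay
```

At the equilibrium configuration, with all agents at their equilibrium distances, zero velocities and a non-accelerating target, the desired acceleration vanishes, as the analysis in the text requires.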

Equilibrium distances are adjusted based on the type of maneuver and safety margins to prevent any collision. As an example, in order to hunt a moving target, the equilibrium


distance between the robots and the moving target should be such that, in the equilibrium situation, the moving target is placed at the geometrical center of the group of robots. In order to make coverage around the moving target, the equilibrium distance between the robots should be chosen such that the target can move freely while the robots move around it harmonically. According to Eq. (100), the moving target acceleration is the only acceleration term that appears in the desired acceleration of each robot. It should be noticed that, for a group of agents, it is meaningless to have two different accelerations in the expression of the desired value, and it is dynamically wrong. Here, the moving target provides the acceleration term, and the robots provide the proportional and derivative control terms. According to this design, the group is in equilibrium within itself while following the moving target properly. By installing a coordinate system on the moving target and looking at the other robots (omitting the moving target acceleration term), only the proportional and derivative control rules remain, and the aim would be to reach static stability with no tracking. According to the goal of the article, the equilibrium distance between the robots and the target is important, and the proportional control coefficient needs to be large enough. This choice, however, leads to high accelerations when the relative distance between the target and the robots is large, which causes the robots to reach high speeds and makes the system unstable and chaotic. Hence, using constant control coefficients is not suggested. In order to improve the response quality, two pairs of coefficients are used, for short and far distances, and the suitable pair is chosen when necessary: for far distances, the proportional coefficient should be small and the derivative coefficient large; for short distances, the opposite is needed. This is done automatically by a continuous mathematical expression:

D = Rit − (Rt + δ).  (101)

Figure 7: Robot tendency to a specified orientation.

β = [1 + tanh(D a)] / 2,  (102)

[kp^t   kd^t]^T = β [kp1^t   kd1^t]^T + (1 − β) [kp2^t   kd2^t]^T.  (103)

In Eq. (101), δ decides at which distance from the equilibrium distance the change of coefficients starts. In Eq. (102), β is a coefficient that varies between zero and one: when the difference between the relative distance and the equilibrium distance is greater than δ, β is approximately 1, and when it is smaller, β tends to zero; that is, β decides which pair of coefficients should be used. Finally, a sets the changing rate of β. In order to achieve the final goal, the derivative control of the robots should always operate in order to control their relative speeds, but it is not necessary for the proportional control between robots to work permanently. Robots move towards the target, and when the relative distance between two robots is less than their equilibrium distance, the proportional control operates and prevents their collision. If the robots achieve the equilibrium distance between themselves and the moving target, the final goal, which is to follow and hunt the moving target, is achieved.

9. Indirect control of robot body angle

Considering the fact that the control equations are written only for the center of the robot, the body angles, which change during this control, need to be known precisely. Different simulations show an interesting phenomenon, which is described here. When the robot group is controlled, each robot's body angle tends to reach the angle of the desired acceleration, as shown in Figure 7. Figure 7 shows the situation in which the robot angle converges to the desired acceleration angle of the robot center. This acceleration is the summation of the different desired accelerations explained earlier, and its angle is the direction towards which the agent tends to move. This phenomenon is illustrated further in the simulation of coverage around the target. As seen in Figure 7, the steering angle converges to zero too; so the robots always follow the target with their front sides towards the tracking direction. Reaching this state consumes a limited time span.

10. Simulation results

In this section, a group of four robots is controlled to hunt or follow a target using the control algorithms and estimators described above. The simulation results include: following the target by one robot and the target reaction; hunting the moving target by four robots; following and making coverage around the target; the distances between the robots and the target during hunting; the real escape force and its estimation error; the relative distance between two robots and its estimation error; the relative distances of the robots to the target and their estimation errors; the velocities of the centers of the robots and of the moving target; the driving and steering torques of a robot; and the simulation results for the indirect control of the robot body angle. The parameters used in this section are tabulated in Table 1. The equilibrium distance between the robots and the target is Rt = 5 (m); therefore, RR = 5√2 (m), since there are four robots and, during hunting, the target should be placed at the geometrical center of these four robots. In order to follow and make coverage around the target, the distance between the robots is set to the shorter value RR = 0.7 × 5√2 (m). In the simulation figures, the triangle is the symbol of the car-like robot and the star is the symbol of the moving target. Figure 8 shows a robot following the moving target and the way the target escapes. The diagrams related to hunting a moving target and the corresponding simulation results are shown in Figures 9–16. In Figure 9, four car-like robots are used to follow and hunt a moving target: although the moving target escapes from the group, the robots reach it and circle around it. The simulation is run for 120 s, and the group treads steadily from 60 to 110 s. After 110 s, due to a small asymmetry in the group formation and the existence of the nonholonomic constraints in the robot dynamics, the moving target tries to escape from the enclosed region; however, the group configuration changes slightly to react to the target's new behavior (Figure 10).
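The gain scheduling of Eqs. (101)–(103) blends the two gain pairs of Table 1 with a tanh switch; a small sketch (function and parameter names illustrative):

```python
import math

def scheduled_gains(R_it, Rt=5.0, delta=2.0, a=2.0,
                    far=(0.5, 4.0), near=(3.0, 3.0)):
    """kp_t, kd_t of Eq. (103), blended by beta of Eqs. (101)-(102).
    `far` = (kp1, kd1) for large distances, `near` = (kp2, kd2)."""
    D = R_it - (Rt + delta)                 # Eq. (101)
    beta = (1.0 + math.tanh(D * a)) / 2.0   # Eq. (102)
    kp = beta * far[0] + (1.0 - beta) * near[0]   # Eq. (103)
    kd = beta * far[1] + (1.0 - beta) * near[1]
    return kp, kd
```

Far from the target the gains approach (0.5, 4); at the equilibrium distance Rt they approach (3, 3), matching the two rows of Table 1.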


H. Sayyaadi et al. / Scientia Iranica, Transactions B: Mechanical Engineering 18 (2011) 950–965

Table 1: Geometric, physical and control parameters and properties used in simulations.

Discretization step time: dt = 0.05 (s)
Robot mass and geometries (length and mid-length): m = 5 (kg), d = 1 (m), L = 2d
Actuator parameters: ρ = 0.2, τ1max = 1 (N m), τ2max = 5 (N m)
Damping and spring (repulsive) coefficients between robots: kp = 6 (N/m), kd = 1 (N s/m)
Damping and spring coefficients between robots and target at long distances: k^t_p1 = 0.5 (N/m), k^t_d1 = 4 (N s/m)
Damping and spring coefficients between robots and target at small distances: k^t_p2 = 3 (N/m), k^t_d2 = 3 (N s/m)
Parameters of controller coefficient variation: δ = 2 (m), a = 2
Coefficient of steering controller: kΦ = 0.5 (1/s)
Mass, damping and coefficient of target escape force: mt = 1 (kg), ct = 1 (N s/m), kt = 1
Applied noise to target dynamics: σt = 0.05 (N s)
Equilibrium distance between robot and target: Rt = 5 (m)
Equilibrium distance between two robots (minimum allowable distance) during hunting the target: RR = 5√2 (m)
Equilibrium distance between two robots during covering around the target: RR = 0.7 × 5√2 (m)
Signal strength attenuation parameters: p0 = 1 (dB), α = 5/ln(10) (dB)
Added noise to signal strength and direction: σp = 0.2 (dB), σΨ = 0.05 (rad)
Added noise to received data sent by robot: σvθΦ = 0.01 I3×3
Added noise to target distance and view angle sensor: σR = 0.1 (m), σΨ = 0.05 (rad)
Added noise to tracer (flying equipment) data: σx = 1 (m), σy = 1 (m)
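The gain-scheduling rule of Eqs. (102) and (103) can be sketched as follows, using the values from Table 1; the exact argument of the tanh in Eq. (102) is an assumption reconstructed from the surrounding text, with the distance error measured relative to the equilibrium distance:

```python
import math

DELTA, A = 2.0, 2.0   # delta = 2 m, a = 2 (Table 1)

def beta(distance_error):
    # Eq. (102): a smooth switch. distance_error is the gap between the
    # relative distance and the equilibrium distance; beta -> 1 when it
    # exceeds DELTA and -> 0 below it. The exact tanh argument is an
    # assumption recovered from the surrounding text.
    return 0.5 * (1.0 + math.tanh(A * (distance_error - DELTA)))

def target_gains(distance_error):
    # Eq. (103): blend the long-distance set (0.5 N/m, 4 N s/m) with the
    # short-distance set (3 N/m, 3 N s/m), both taken from Table 1.
    b = beta(distance_error)
    return (b * 0.5 + (1 - b) * 3.0,   # proportional gain
            b * 4.0 + (1 - b) * 3.0)   # derivative gain

print(target_gains(10.0))  # well beyond delta: close to (0.5, 4.0)
print(target_gains(0.0))   # within delta: close to (3.0, 3.0)
```

Because tanh is smooth, the gains vary continuously between the two sets instead of switching abruptly, which avoids chattering in the commanded torques.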

Figure 8: Following target with a robot.

Figure 10: Robots try to stop target from escaping after it is hunted.

Figure 9: Following and hunting the target with four robots.

Figure 11: Distances between the target and robots (a), and distances between robots (b).


The target dynamics are holonomic, and the target can move in any arbitrary direction; meanwhile, the car-like robots have movement limitations and nonholonomic constraints. The phenomenon shown in Figure 10 happens only when the group reaches steady-state behavior. Whenever the group moves in a determined direction, the robots hunt the target and the robots' velocities converge to zero. However, it is practically impossible for a group of car-like robots to hunt a target in a perfectly steady situation, because whenever the target decides to change its direction after being steady for a while, the robots follow it with a delay, which is natural (because of asymmetry and the nonholonomic constraints). The problem can also be viewed in another way. In other words, it can be presumed that whenever

Figure 12: Relative position (x) between two robots and its estimation error.
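As a minimal illustration of the kind of estimation behind Figure 12 (a one-dimensional sketch under simplifying assumptions, not the paper's full Kalman filter), a scalar filter can track a relative position from noisy range readings, treating the unknown driving force as white process noise, as the text assumes for Ferror:

```python
import random

Q = 0.05 ** 2   # process-noise variance (white-noise force assumption)
R = 0.1 ** 2    # measurement-noise variance (sigma_R = 0.1 m, Table 1)

def kf_step(x, P, z):
    # Predict with a constant-position model driven by white noise...
    P = P + Q
    # ...then correct with the noisy range measurement z.
    K = P / (P + R)
    return x + K * (z - x), (1.0 - K) * P

random.seed(1)
true_pos, x, P = 5.0, 0.0, 1.0      # deliberately poor initial guess
for _ in range(200):
    z = true_pos + random.gauss(0.0, 0.1)
    x, P = kf_step(x, P, z)

print(round(x, 2))   # settles near the true 5.0 m
```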


Figure 13: Real escape force of target and its estimation error.

Figure 14: Relative position (x) between robot and target with its estimation error.

Figure 15: Velocity of robots' center (a), and velocity of target (b).

Figure 16: Driving torque (a), and steering torque (b).

Figure 17: How the target is followed and how coverage is made around it.
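The nonholonomic limitation discussed above can be made concrete with a minimal kinematic bicycle model (an illustrative sketch, not the paper's full dynamic model): the rear axle cannot slip sideways, so the velocity is always aligned with the body, i.e. x_dot·sin(θ) − y_dot·cos(θ) = 0, and lateral repositioning requires a maneuver:

```python
import math

L, DT = 2.0, 0.05   # wheelbase L = 2d and step time dt from Table 1

def step(x, y, theta, v, phi):
    # Kinematic bicycle model: the velocity (x_dot, y_dot) is forced to
    # point along the heading theta, which is exactly the nonholonomic
    # constraint x_dot*sin(theta) - y_dot*cos(theta) = 0.
    x_dot, y_dot = v * math.cos(theta), v * math.sin(theta)
    lateral_slip = x_dot * math.sin(theta) - y_dot * math.cos(theta)
    assert abs(lateral_slip) < 1e-12          # constraint holds at every step
    return x + x_dot * DT, y + y_dot * DT, theta + v * math.tan(phi) / L * DT

x, y, theta = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, theta = step(x, y, theta, v=1.0, phi=0.3)

# The robot turned while moving forward; it can never translate
# sideways directly, which is why the group reacts to the target's
# sudden direction changes with a delay.
print(x, y, theta)
```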

the target is hunted, the cars' velocities become zero and the target no longer tends to escape. The system behavior under these conditions is observed in Figure 9.

Figure 11(a) shows that the distance between the robots and the moving target converges to the equilibrium distance, Rt = 5 (m), and Figure 11(b) shows that the distances between robots remain greater than the minimum allowable distance, L = 2d = 2 (m); Figure 11 thus guarantees that no collision will happen. This distance is two times greater than the minimum distance between the centers of two robots, and the minimum equilibrium distance between robots in the simulation is RR = 5√2 (m). Figure 12 shows the relative distance between two robots and its estimation error; the results show that the estimator works properly, and the estimation error is quite small. Figure 13 shows the real target escape force and its estimation error; this estimation is performed by all four robots. At the beginning, the increase in the escape force means that the robots are approaching the target, and correspondingly, the subsequent decrease means that they are circling around it. Obviously, when the four robots loop around the target symmetrically, the resultant force converges to zero. But after the target is hunted and the velocities converge to zero, the target tends to change its direction, which makes the group more asymmetrical; this phenomenon was illustrated in Figure 10. As seen from the estimation error diagram of Figure 13, the assumption that the disturbance force Ferror is white noise is acceptable. Figure 14 shows the relative distance between a robot and the target and its estimation error; this estimation is done by a single robot. Figure 15 shows the velocities of the robots and the moving target; the decrease in speed over time shows that the target is hunted and the robots are circling around it. Figure 16 shows the torques exerted on the robots, including the effects of filtering and actuator saturation; the torque diagrams show suitable and reasonable results. Figure 17 shows the following and surrounding of the target.

This maneuver is performed whenever the goal is not to hunt the target but to make adequate coverage around it; such coverage can be used to protect a leader. Under the initial conditions, one of the robots (robot 1) does not have a suitable orientation, but under a specific maneuver it turns and then follows the target with its front side. This phenomenon was explained in Section 9. Figure 18 shows the steering angle during the maneuver (Figure 18(a)) and the body angle of robot 1 during the maneuver (Figure 18(b)). As seen, the body angle of robot 1 changes from 120° to the desired acceleration angle (the group direction); meanwhile, it follows the group with its front side.

11. Conclusions

In this article, the following and hunting of a moving target by a group of car-like robots were investigated. The tracking

Figure 18: Steering angle (a) and body angle of robot 1 (b).

robots have nonholonomic constraints, while the target has none. It is assumed that the robots' measurements of state variables and control parameters are incomplete, and that the target moves away from the robots under a repulsive force. In spite of these presumptions, the simulation results prove that the presented control algorithm is properly applicable to nonholonomic robots. The results show that complicated algorithms are not needed for this kind of team work; rather, by knowing the dynamic behavior of the robots, simpler and more robust algorithms can be designed. The two main reasons behind a correct maneuver are the dynamics analysis of the inertial agents and the varying controller coefficients. The results also show that the estimation algorithms were successful, and combining them with the car-like robot group maneuvers produced the results of this article. Using the estimation algorithm, proper estimation of the target movement is achieved despite the target escape force being unknown. The output of the controllers is torque, which can be used for checking the feasibility of the maneuver. The control algorithm makes the robots always move with their front side forward; this behavior is physically important and reasonable. By varying the controller coefficients and equilibrium distances, both the following and covering operations were performed successfully. As seen from the results of the covering operation, this maneuver can be used to make a security coverage around a leader. With slight changes, the proposed algorithm can perform other maneuvers as well. This is a step towards solving real problems and real applications.
Future work in this field mostly concerns improving the indirect control of body angles, estimating environmental disturbances and using them in the control procedure, controlling car-like robots equipped with special accessories, preventing robots from rolling during group maneuvers, estimating surrounding obstacles, performing different kinds of maneuvers in the presence of obstacles, and performing practical operations, for example in traffic engineering.

Hassan Sayyaadi received a B.S. degree from Amirkabir University of Technology, Tehran, Iran, in 1987, and an M.S. degree from Sharif University of Technology, Tehran, Iran, in 1990. He received his Ph.D. degree from the University of Tokyo, Japan, in 2001, all in Mechanical Engineering. From 1990 to 1996, and from 2001 until now, he has been with the Mechanical Engineering Department of Sharif University of Technology. He is now Associate Professor, and his research interests include: Dynamics and Control, Robotics and Mechanisms, Artificial Intelligence and Neural Networks.

Hamed Kouhi Gilvan was born in Masal, Guilan, in 1986. He received a B.S. degree in Mechanical Engineering from the University of Guilan in 2008 and, in 2010, he received his M.S. degree in Mechanical Engineering from Sharif University of Technology, Tehran, Iran, on the topic of the "Control of Car-Like Multi Robot for Following and Hunting a Moving Target". His current research interests include: Control, Sensor Fusion, Multi Agent Systems, Dynamics, Robotics, Optimization and Neural Networks.

Hassan Salarieh was born in Tehran, in 1980. He received his B.S., M.S. and Ph.D. degrees, all in Mechanical Engineering, from Sharif University of Technology, Tehran, Iran, in 2002, 2004 and 2007, respectively, and is now Assistant Professor at the same university. He is working in research fields such as Dynamics and Control.

