Trajectory Control for Groups of Humans by Deploying a Team of Mobile Robots
Edgar Martinez-Garcia, Ohya Akihisa and Shinichi Yuta
Intelligent Robot Laboratory, University of Tsukuba, Tennodai 1-1-1, Tsukuba, Ibaraki 305-8573. Email: {eamartin, ohya, yuta}@roboken.esys.tsukuba.ac.jp
Abstract— This paper proposes a trajectory control scheme in which a multi-robot system (MRS) conducts a group of humans. Its architecture, its implementation and the strategy used by the team of robots to conduct people are discussed, together with the robots' motion planning methodology. Experimental results on people localization by a vision system are also presented, showing its use as the sensory input for people trajectory control. A social model that simulates human motion is included in this investigation as a means to verify the guidance mechanism and the crowd dynamics induced by the team of robots, whose motion control is based on deliberate changes of position and speed.

I. INTRODUCTION

The main focus of this paper is an architecture for a motion planner, and its elements, that allows a team of mobile robots to steer the trajectory of a group of people. A major endeavor in this research is to investigate how guidance can be fulfilled despite the difficulty of having no explicit communication between robots and humans. The system is analogous to the job done by a human guide in companies, tours, exhibitions, and so on. In the proposed context, people trajectory control is performed by deploying a team of mobile robots surrounding a limited group of persons, as depicted in Fig. 1. A front-end robot (called Ra) provides guidance, while the robots at the back (Rb, Rc) observe the group in order to obtain a global view of the situation and to crowd the group when required. The team is able to sense the environment dynamically from distributed locations (stereo vision), and the robots share their sensory information in a central host to build a range-based model of their surroundings (the group of people).
Fig. 1. Guiding a group of people by a team of mobile robots.

Successful contributions concerning robotic tour-guide tasks have been presented in [1]-[4]. In those works the task is accomplished with a single robot, some of them including interaction with people, communication between humans and robots, and even all the functions a tour guide needs (providing information, showing interesting routes, dealing with people's behavior, speaking, showing feelings and even joking). Beyond the aims mentioned above, the present system attempts to steer the trajectory of a group without performing any kind of explicit communication to accomplish conduction. The model can be likened to sheepdogs flocking and conducting herds of sheep (the only known work on a robotic sheepdog was first proposed by Vaughan et al. in their work on flock control with animals [5]). However, animal behavioral patterns in natural environments differ greatly from human behavior and other human factors, and the strategy for trajectory control and cooperation among the robots is fairly different from the dogs' behavioral conduction.

II. AIM AND STRATEGY

In this first approach, the paper considers basic aspects of human behavior simulated with a social force model [18] to represent human motion, which exhibits repulsive and attractive (magnet-like) effects. The paper only covers the simple assumption of people following the front-end robot; it does not tackle special cases (e.g. a robot approaching a person who leaves the group, dealing with human behavior, other human factors, ethological aspects, etc.). The set of problems found in people conduction was divided into three main items, each involving particular behavioral patterns:
1) Conduction. The simplest case, defined as the conduction of the group of people guided by Ra, which is easily followed by the group.
2) Crowding (group-size control). The process of grouping the people while moving along. In this context an undesirable situation arises if the group spreads out beyond a desired size.
3) Interception. A person attempts to leave the group, moving away from its scope, so that one of the robots approaches him/her to make the person return to the group. This situation is considered a special case, which implies dealing with other challenging problems such as behavioral patterns and human factors, ethological aspects, people identification/tracking, human-robot interaction and so forth.
The scope of the paper covers a trajectory control strategy only for case 1) and, to a certain extent, case 2). Item 3) is a special case that for now remains out of scope and will be discussed in future work. From a technical point of view, the requirements for people conduction in the present context lead to a general strategy itemized as follows:
1) A vision system for people localization.
2) An MRS architecture framework.
3) People trajectory control and a motion planner.
Items 1) and 2) were introduced and discussed by the authors in [9], [10] and [11], while item 3) is the present matter of discussion. The importance of this work lies in the proposal of this type of guidance; to give a clear idea of the steering process, Fig. 2 roughly describes the task.

In addition, the social force model was used to simulate scenarios with people behaving as a group. The original model presented by Helbing and Molnar in [18] establishes a sum of forces involving the desired direction and velocity of each person, a territorial effect which produces repulsion from other pedestrians, repulsive effects against obstacles, and attractive effects towards other pedestrians (e.g. when conversing) or objects. The model was adapted according to several considerations arising from conduction by the robots, as shown in Fig. 3:
• People assumptions (they follow Ra, and/or just follow the crowd).
• The philosophy is a leader-based robot formation.
• The robots' motion plan depends on the group's center of gravity (cog).
• Three robots surround the group of people.
• Walking people feel the approach of the back robots.
• The robots crowd the group depending on positions and speeds.
• The direction of navigation is determined by Ra.
Besides, to represent the scope of a group we established a circular model that encompasses all the members together, as previously depicted in Fig. 1. So far, we have restricted the experiments to groups of 1 to 5 persons in hallways of the University of Tsukuba.

III. PEOPLE SOCIAL MODEL

It has been suggested that the motion of pedestrians can be described as if they were subject to social forces. The corresponding Social Force Model (SFM) can be applied to several behaviors: it describes the acceleration towards a desired velocity of motion, it contains terms reflecting that a pedestrian keeps a certain distance from other pedestrians and from borders, and it contains a term modeling attractive effects. In [19], an attempt to simulate crowd dynamics (using the SFM) in which pedestrians are affected by the presence and the introduction of mobile robots was presented. That context considers a large number of pedestrians and a few robots, in order to study and understand their impact on people's behavior over wide areas. In the present work the SFM has a different application: we adapted it to simulate a reduced number of pedestrians behaving as a group, following the leader robot Ra and affected by the presence of the robots Rb and Rc. The equations of the SFM involve: 1) a model for the desired direction of each pedestrian; 2) repulsive effects (avoiding obstacles and/or other members of the group); 3) attractive effects (pursuing Ra, or chatting with other members); and 4) some random variations of the behavior. From the original social force model only the direction-velocity forces and the repulsive and attractive effects were implemented, as these are enough to realistically produce the behavioral effects required of people in the presence of the team of robots during the conduction task. The direction, velocity and acceleration force vectors of each member are determined towards Ra. The repulsive effects between pedestrians in the group follow the rules established by the SFM, where a particular ellipse-shaped territorial effect prevents collisions with the other pedestrians. Similarly, part of the adaptation of the model consisted of applying the same repulsion rules to let the back-end robots affect the crowd dynamics: members experience repulsive effects from Rb and Rc, but not the opposite way. Finally, some small random fluctuations of the people's behavior were added, slightly affecting the members' velocity vectors. Indeed, many different social groups are needed to realistically evaluate the reactiveness of the MRS, as each group of people exhibits a different behavior. People behave according to how comfortable they feel within the scope of the surrounding robots while being conducted. Some members of the group walk while conversing instead of only paying attention to following Ra, and they may be attracted by other people for social interaction; despite such situations, conduction must be accomplished. A minimal simulation sketch of this adapted model is given below.
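For illustration, the following minimal Python sketch reproduces the adapted behavior described above: each member is driven toward Ra, repelled by the other members and by the back robots Rb and Rc, and perturbed by a small random fluctuation. All parameter values and the circular (rather than elliptical) repulsion term are simplifying assumptions of this sketch, not values or choices taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
DT = 0.5           # simulation step [s] (placeholder)
V_DESIRED = 1.34   # mean desired walking speed [m/s]
TAU = 0.5          # relaxation time toward the desired velocity (placeholder)
A_REP, B_REP = 2.1, 0.3   # repulsion strength / decay range (placeholders)

def step_member(pos, vel, ra_pos, others, back_robots):
    """One social-force update for a single group member (2-D numpy arrays)."""
    # Driving force: accelerate toward the leader robot Ra at the desired speed
    to_ra = ra_pos - pos
    direction = to_ra / (np.linalg.norm(to_ra) + 1e-9)
    force = (V_DESIRED * direction - vel) / TAU
    # Repulsion from the other members and from the back robots Rb, Rc
    for q in list(others) + list(back_robots):
        diff = pos - q
        dist = np.linalg.norm(diff) + 1e-9
        force += A_REP * np.exp(-dist / B_REP) * (diff / dist)
    # Small random fluctuation of the behavior
    force += rng.normal(scale=0.05, size=2)
    vel = vel + force * DT
    return pos + vel * DT, vel
```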

Fig. 2. The team of robots in formation conducting the group of people.

As a preamble to analyzing how a team of robots could affect and/or control crowd dynamics during conduction, the authors propose a simulation model that provides: (a) a good means to prove the effectiveness of the proposed trajectory control model; (b) verification of the method and the strategy; (c) confirmation of the control; (d) the MRS motion planning; and (e) many simulated experiments of human-motion modeling.

Fig. 4. MRS architecture and communication flow.

Fig. 3. Considerations taken for adapting the SFM to our guiding context.

IV. ARCHITECTURE AND PREVIOUS RESULTS

The MRS was developed within a framework that included only the requirements to conduct people reliably. Results on localizing people from distributed robot locations were obtained in [9], [10] and [11]. Fig. 9 shows the configuration of one of the experiments, with 4 persons and 3 robots (indoors). The purpose was to localize each human in the group reliably, distinguishing humans from other objects in the environment. Localization accuracy was measured by matching the real environment configuration against the results computed by the MRS; the error in people localization averaged 10 cm (for each person's cog), as depicted in Fig. 9-(d),(g). The circles in Fig. 9-(g) represent the members' occupancy areas; they have different radii because each one was drawn with the standard deviation (σ) of the points in its cluster (Fig. 9-(f)). Multiple-human localization was accomplished by sharing sensory information from each robot, since a single robot is unable to observe the whole scenario. The results of this process were merged into a common coordinate system in the central host, where segmentation, human detection and localization were then carried out as the critical steps for people localization. The MRS communication architecture is depicted in Fig. 4.

The central host and the team of robots carry out the following process for people localization (sketched in code below):
1) The robots receive a synchronization signal from the central host.
2) Sensing and data filtering are performed by each robot.
3) The robots cooperatively self-localize by using an internal relative Cartesian coordinate system (CCCS) [11], [14].
4) The robots transmit their sensor data and pose (x, z, θ) to the central host.
5) The central host computes the algorithms for the people's positions.
6) Based on the people's positions, the central host generates a new motion plan for the robots.
7) The cycle repeats from step 1).
The average time spent on data transmission (sensor data and robot positions) was about 8.5 ms per 100 Kb, although less than 10 Kb are actually transferred between the robots and the central host. The approach in this development is a centralized MRS architecture, in which decisions are taken by a central host that remains in charge for the entire mission, similar to the architecture presented in [13] and described in [6], [7], [8]. In this development, inter-robot communication, centralization, synchronization and coordination are critical for the MRS to control the humans' course. Likewise, determining the robots' poses is a key issue for overcoming some of those problems, as well as for calibrating the distributed moving sensors so that the MRS can share sensor data (sensor fusion). The architecture comprises a team of 3 self-contained mobile robots, depicted in Fig. 5, and a central host. A Pentium-III laptop with IEEE 802.11b wireless technology was fitted on board each robot. The communication system is based on message-spreading functions and a group-communication philosophy, similar to the communication system used in [13]. Network data transactions are managed under Linux on a TCP/IP network. In addition, for localization each robot autonomously runs a background routine called CCCS, whereby the robots' poses are obtained cooperatively. For the guiding task, robot localization is a critical issue, and the CCCS eases the problem by using a relative coordinate system. Only the leader (Ra) uses an extra element called the Pose Estimator Module, which merges sonar ranging data and odometry estimates to provide an accurate positioning system. These measurements are used to correct the CCCS calculations in the central host, as a way to improve the next motion plan.
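The seven-step cycle can be summarized by the host-side loop below; every interface name (send_sync, sense_and_filter, cooperative_pose, send_goal, etc.) is a hypothetical placeholder used only to show the control flow, not part of the actual implementation.

```python
def host_cycle(robots, localize_people, plan_motion):
    """One iteration of the centralized cycle (steps 1-7); interfaces are hypothetical."""
    scans, poses = [], []
    for r in robots:
        r.send_sync()                       # 1) synchronization signal from the host
        scans.append(r.sense_and_filter())  # 2) on-board sensing and data filtering
        poses.append(r.cooperative_pose())  # 3)-4) CCCS self-localization, pose (x, z, theta)
    people = localize_people(scans, poses)  # 5) host computes the people's positions
    goals = plan_motion(people)             # 6) host generates a new motion plan
    for r, g in zip(robots, goals):
        r.send_goal(g)
    return people, goals                    # 7) the cycle then repeats from step 1
```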

V. TRAJECTORY CONTROL OF CENTER OF GRAVITY (cog)

The framework for controlling the trajectory of the group's cog is itemized as follows:
1) Observation of the cog over time.
2) Estimation of the noisy cog measurements (Kalman filter).
3) A people trajectory control model.
4) A motion model for prediction of the next desired cog position.
Fig. 6 depicts a block diagram of the vision-based feedback control.

Fig. 5. The team of Yamabico self-contained robotic platforms and their configuration.

A. Estimation of cog

Since the cog is one of the key elements of the mechanism that provides human guidance, one of the central host's functions is to compute it, expressed as cog(x, z, θ, v, w): located at (x, z), with heading angle θ, linear displacement v_k in the XZ-space and angular velocity w_k at discrete time k. As the sensory information is not a perfect noiseless model, we implemented a Kalman filter to estimate (filter) the observations of the cog trajectory. For the Kalman filtering [15], [16], [17], the parameters considered are the state n-vector of the process x_k = (x, z, θ, v, w) at discrete time k, comprising the group's pose and its linear and angular velocities. The observation of the system, which relates the sensory information, is expressed as z_k = (x, z) (the cog) and can be modeled by equation (1):

z_k = H x_k + u_k    (1)

where H (2×5) is the stationary, noiseless matrix connecting the vectors x_k and z_k, and u_k is a Gaussian white noise sequence. The Kalman gain is then expressed by

K_k = P_k H^T (H P_k H^T + R_k)^(-1)    (2)

In equation (2) the Kalman gain is updated at every time step k; it involves the error covariance matrix P (5×5, non-stationary) given in (3) and the measurement noise covariance R (2×2), whose values arise from the noise in the x and z measurement components.

P_k = [ c_x  0    0    0    0
        0    c_z  0    0    0
        0    0    c_θ  0    0
        0    0    0    c_v  0
        0    0    0    0    c_w ]    (3)

With the first part of a traditional Kalman filter defined by the previous equations, the cog estimation process can be established. The new estimate is obtained by an update equation that combines the old estimate x̂_k with the measurement data z_k, as in (4):

x̂_k = x̂_k + K_k (z_k − H x̂_k)    (4)

The innovation term in (4) is derived from expression (1). A subsequent part of the estimation process updates the covariance P̂_k, defined as

P̂_k = P_k − K_k H P_k    (5)

Fig. 6. MRS trajectory control.

Equations (2), (4) and (5) yield an estimate of the state vector x_k and of the error covariance matrix P_k. What remains is the projection of the estimate x̂_{k+1} and of the error covariance matrix P̂_{k+1}. Equation (6) expresses the projection of the previous estimate x̂_k into k+1; it involves the state transition matrix of the process Φ (also non-stationary) and an n-vector noise sequence q_k:

x̂_{k+1} = Φ x̂_k + q_k    (6)

The transition matrix of the process is expressed in (7):

Φ = [ 1  0  0  cosθ_k·Δt  0
      0  1  0  sinθ_k·Δt  0
      0  0  1  0           Δt
      0  0  0  1           0
      0  0  0  0           1 ]    (7)

Likewise, the projection of the covariance into k+1 is expressed by (8):

P̂_{k+1} = P̂_k + (A + A^T) P̂_k Δt + (A P̂_k A^T + Σ_w) Δt²    (8)

where A (5×5) is the Jacobian matrix (9) of partial derivatives of the state transition matrix Φ with respect to x_k (consequently a non-stationary matrix), Δt is the time interval between measurements, and Σ_w (5×5) is a matrix containing the covariance arising from the sensor measurement error.

A = [ 0  0  −sinθ_k  cosθ_k  0
      0  0   cosθ_k  sinθ_k  0
      0  0   0        0       1
      0  0   0        0       0
      0  0   0        0       0 ]    (9)

A general representation of the Kalman filter implementation, integrated within the trajectory control model, is depicted in Fig. 8-(a).
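A minimal Python sketch of the estimator defined by equations (1)-(9) is shown below; the sampling interval and the covariance values R and Σ_w are illustrative placeholders, not the values used on the robots.

```python
import numpy as np

DT = 0.5   # time interval between measurements (placeholder)

H = np.array([[1., 0., 0., 0., 0.],
              [0., 1., 0., 0., 0.]])         # observation matrix of eq. (1)
R = np.diag([0.01, 0.01])                     # measurement noise covariance (placeholder)
Q = np.diag([0.01, 0.01, 0.01, 0.05, 0.05])   # Sigma_w, sensor error covariance (placeholder)

def phi(theta):
    """State transition matrix of eq. (7); the state is (x, z, theta, v, w)."""
    return np.array([
        [1., 0., 0., np.cos(theta) * DT, 0.],
        [0., 1., 0., np.sin(theta) * DT, 0.],
        [0., 0., 1., 0., DT],
        [0., 0., 0., 1., 0.],
        [0., 0., 0., 0., 1.]])

def jacobian_A(theta):
    """Jacobian of eq. (9)."""
    return np.array([
        [0., 0., -np.sin(theta), np.cos(theta), 0.],
        [0., 0.,  np.cos(theta), np.sin(theta), 0.],
        [0., 0., 0., 0., 1.],
        [0., 0., 0., 0., 0.],
        [0., 0., 0., 0., 0.]])

def kf_update(x, P, z):
    """Measurement update, eqs. (2), (4) and (5)."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain, eq. (2)
    x = x + K @ (z - H @ x)                        # state update, eq. (4)
    P = P - K @ H @ P                              # covariance update, eq. (5)
    return x, P

def kf_predict(x, P):
    """Projection into k+1, eqs. (6)-(8)."""
    x_next = phi(x[2]) @ x                                          # eq. (6)
    A = jacobian_A(x[2])
    P_next = P + (A + A.T) @ P * DT + (A @ P @ A.T + Q) * DT**2     # eq. (8)
    return x_next, P_next
```

A filtering cycle would call kf_update with the measured cog (x, z) and then kf_predict to project the estimate to the next control step.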

B. Trajectory Control Model

A basic principle of our method is that the team of robots must steer the cog towards a desired path. Equation (10) expresses a model of the cog angular acceleration α_k and yields a trajectory from the current cog location towards the desired pathway, with a remaining distance Δx to be covered. The team of mobile robots cannot explicitly control α_k, but it can to some extent affect the reaction of the cog heading angle θ, as a way of heading control while navigating. The equation also requires the group's angular velocity w_k as an input. In our case, equation (10) is a linear feedback control law:

α_k = −k_1 Δx_k − k_2 θ_k − k_3 w_k    (10)

The gains are set by the constants k_1, k_2 and k_3, which were determined by trial and error for the robots. The desired cog location at time k+1 is then calculated by substituting the α_k given by the control law into equations (11), (12), (13) and (14). The effect of the control must be as depicted in Fig. 8-(a).
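As a small illustration, the feedback law (10) can be written as follows; the gain values are arbitrary placeholders, since the paper only states that k_1, k_2 and k_3 were tuned by trial and error.

```python
K1, K2, K3 = 0.8, 1.5, 0.6   # illustrative gains; the paper tunes them by trial and error

def angular_acceleration(dx_k, theta_k, w_k):
    """Linear feedback law of eq. (10): alpha_k = -k1*dx_k - k2*theta_k - k3*w_k."""
    return -K1 * dx_k - K2 * theta_k - K3 * w_k
```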

C. Motion Model

A projection of the group's angular velocity w_{k+1} is given by equation (11), involving the measured w_k and the α_k previously computed from (10):

w_{k+1} = w_k + α_k Δt    (11)

This result allows the measured current angle θ_k to be propagated to the angle at the next discrete time, θ_{k+1}:

θ_{k+1} = θ_k + w_{k+1} Δt    (12)

The previous result for the angle is fundamental to obtain the cog linear velocity, which is representative of the cog motion behavior. The linear velocity in the XZ-space, decomposed into its components, is

v^x_{k+1} = v_k cosθ_{k+1} + γ_x (v_ref − v_k cosθ_{k+1})
v^z_{k+1} = v_k sinθ_{k+1} + γ_z (v_ref − v_k sinθ_{k+1})    (13)

where γ_x and γ_z are the gains of the XZ-velocities and v_ref is a desired (reference) velocity. Finally, our model for position calculation is given by equation (14), which determines in advance the next possible position from the linear velocity (see Fig. 8-(b)):

p^x_{k+1} = p^x_k + v^x_{k+1} Δt
p^z_{k+1} = p^z_k + v^z_{k+1} Δt    (14)
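A compact sketch of the motion model (11)-(14) is given below; Δt, the gains γ_x, γ_z and the reference velocity v_ref are placeholders chosen only for illustration.

```python
import numpy as np

DT = 0.5                      # update interval (placeholder)
GAMMA_X, GAMMA_Z = 0.4, 0.4   # velocity gains (placeholders)
V_REF = 1.0                   # reference velocity [m/s] (placeholder)

def predict_cog(px, pz, theta, v, w, alpha):
    """One step of the motion model, eqs. (11)-(14); returns the next cog state."""
    w_next = w + alpha * DT                    # eq. (11)
    theta_next = theta + w_next * DT           # eq. (12)
    vx = v * np.cos(theta_next) + GAMMA_X * (V_REF - v * np.cos(theta_next))  # eq. (13)
    vz = v * np.sin(theta_next) + GAMMA_Z * (V_REF - v * np.sin(theta_next))
    px_next = px + vx * DT                     # eq. (14)
    pz_next = pz + vz * DT
    return px_next, pz_next, theta_next, np.hypot(vx, vz), w_next
```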

Fig. 7 shows the simulated cog motion behavior from 100 cm heading to 60° (displacement, angle behavior and XZ-velocity performance). Subsequently, the results are fed into the Kalman filter, which is integrated as a prediction module of the feedback control system, performing cog estimation in real time during the conduction task.

Fig. 7. cog motion behavior (Gaussian error included).

VI. ROBOTS MOTION PLANNING

A. Conduction (easily following)

Real conduction of multiple people by multiple mobile robots can be, to some extent, an intractable task when dealing with the problem of implicit communication. It is certainly a difficult problem to overcome, since the only communication between robots and humans is based on motion reactions. The MRS reacts expecting a favorable human motion behavior, tracking Ra, or acting according to the motion reactions produced by the robots through a special formation-keeping scheme, as shown in Fig. 8-(c). The basic principle relies on the fact that only one robot (Ra) provides guidance rather than control, while the robots at the back observe and control the motion and the size of the people's dispersion.

B. Crowding

Fig. 8-(d) depicts the circular group model. Its main element for crowding is to reduce the actual radius r_k until a desired radius r_ref is reached by means of the robots' positions and speeds (r_ref is established a priori). If the condition r_k > r_ref holds, the crowding process expressed by equation (15), with gain β, is performed. The core idea is that the smaller r_k, the tighter the crowd. Since there is no explicit communication, the strategy is that the team of robots gets closer to or farther from the cog, forcing the members to reduce their inter-personal space.

r_{k+1} = r_k + β (r_ref − r_k),   if r_k > r_ref
r_{k+1} = r_ref,                   if r_k ≤ r_ref    (15)
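The crowding rule (15) reduces to a few lines of code; the values of β and r_ref below are illustrative placeholders, as the paper only states that r_ref is fixed a priori.

```python
BETA = 0.3    # crowding gain (placeholder)
R_REF = 1.5   # desired group radius in metres (placeholder)

def next_group_radius(r_k):
    """Target group radius for the next step, eq. (15)."""
    if r_k > R_REF:
        return r_k + BETA * (R_REF - r_k)   # shrink the group toward r_ref
    return R_REF                            # group already within the desired size
```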
Fig. 8. Configuration for the team of robots formation.

C. Robots Motion

The poses of the robots Ri are determined from the cog location, as depicted in Fig. 8-(c). Once the cog has been predicted by the motion model of section V-C, Ra's pose is established according to the predicted heading angle of the cog. Likewise, Rb and Rc are planned with the angles δ = 0° for Ra, δ = 150° for Rb and δ = 210° for Rc substituted into equation (16):

Ri_{k+1} = cog_{k+1} + Δs sin(θ_{k+1} + δ)    (16)

Δs is the distance required by the field of view of the vision sensors, set as a constant. Whether the crowding process is required or not, the model always verifies the value of r_{k+1} over time. Likewise, the team of robots always heads in the direction established by the motion plan, pursuing the desired pathway.

Thus, the robots increase or decrease their speed in order to reach the locations Ri_k, affecting the people's positions at the next update time. The robot speed V_{k+1} is an important factor for affecting the crowding of the group, so people must stop if they get too close to Ra, or accelerate smoothly if the leader speeds up. It is expressed by equation (17):

V_{k+1} = |Ri_{k+1} − Ri_k| / Δt    (17)
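The formation update of equations (16) and (17) can be sketched as follows. Note one assumption: equation (16) is written with a single sine term, while the sketch places each robot with a planar (cos, sin) offset around the cog so that a 2-D pose is obtained; Δs and Δt are placeholders.

```python
import numpy as np

DT = 0.5        # update interval (placeholder)
DELTA_S = 2.0   # field-of-view distance of the vision sensors (placeholder)
DELTA = {"Ra": np.deg2rad(0.0), "Rb": np.deg2rad(150.0), "Rc": np.deg2rad(210.0)}

def plan_robot(cog_next, theta_next, name, prev_pos):
    """Next pose (eq. 16) and commanded speed (eq. 17) for one robot of the formation."""
    d = DELTA[name]
    # Offset around the predicted cog; a planar (cos, sin) offset is assumed here,
    # whereas the paper writes the offset compactly as Delta_s * sin(theta + delta).
    pos = np.asarray(cog_next) + DELTA_S * np.array([np.cos(theta_next + d),
                                                     np.sin(theta_next + d)])
    speed = np.linalg.norm(pos - np.asarray(prev_pos)) / DT   # eq. (17)
    return pos, speed
```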

VII. SIMULATION RESULTS

A. Methodology

Up to this stage we have obtained experimental results in the laboratory with the team of robots and their sensory data, as well as results from our simulation model, which gave us: (a) a good means to prove the effectiveness of the proposed trajectory control model; (b) verification of the method and the strategy; (c) confirmation of the control; (d) the MRS motion planning; and (e) many samples of human-motion modeling. The most important parameters for the people-motion simulation were the sampling time τ = 0.5 s, a mean desired walking speed v° = 1.34 m·s⁻¹, a maximal acceptable speed V_max = 1.3 v°, a repulsive-effect magnitude among pedestrians α and β of V°_αβ = 2.1 m·s⁻¹, a variance for the exponentially decreasing repulsive effects σ = 0.3, and a pedestrian step space s_v = 0.9 m. The methodology for the simulation is explained in the following steps:
1) The members are initially placed at random among the team of robots.
2) The cog is measured.
3) The cog is filtered by the Kalman filter.
4) The next cog pose is predicted together with the next robots' motion.
5) The members pursue Ra.
6) Fluctuations are added to the members' positions and velocities.
7) The group's size is determined (radius = farthest member).
8) The robots move towards the next desired positions given by the motion equations.
9) Repeat from step 2).
With this general methodology, Fig. 10 depicts the simulation results obtained by merging all the models proposed above; a sketch of the loop is given below.
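The nine simulation steps can be tied together in a loop such as the one below; every object and method name (group, robots, host, etc.) is hypothetical and merely stands in for the models sketched in the previous sections.

```python
def simulate(group, robots, host, steps=200):
    """Simulation loop following steps 1)-9); every interface name here is hypothetical."""
    group.place_randomly_among(robots)                   # 1) initial placement of the members
    for _ in range(steps):
        cog_meas = group.measure_cog()                   # 2) measure the cog
        cog_est = host.kalman_filter(cog_meas)           # 3) filter the cog
        cog_next, plan = host.predict_and_plan(cog_est)  # 4) predict next cog and robot motion
        group.pursue(robots[0])                          # 5) members pursue Ra
        group.add_fluctuations()                         # 6) random position/velocity fluctuations
        radius = group.radius()                          # 7) group size = farthest member
        for robot, goal in zip(robots, plan):            # 8) robots move to the planned positions
            robot.move_to(goal, radius)
        # 9) repeat from step 2)
```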

VIII. CONCLUSION

We have introduced an MRS architecture intended to guide a group of people along a desired pathway. Our endeavor in this article has been to discuss a trajectory control model and a robot motion planning system from an architectural and technical implementation viewpoint, rather than to establish a deep study of the ethology of the entities involved (humans or animals). The model presented in this paper was established from previous experimental results, taking the observation of the group's center (cog) as the key issue. Likewise, we briefly explained experimental results on multiple-people localization, the communication system and its architecture. The people-robot interaction covers only the simplest case (people willing to be guided; more complex situations will be investigated in the future), and people's behavior was produced by an adaptation of the social force model. We may summarize the features of the MRS conduction methodology as follows:
1) The proposal of this type of people conduction by three robots.
2) The cog observation.
3) The cog estimation by Extended Kalman Filtering.
4) Prediction of the cog by a proposed motion model.
5) A motion planner for the robots' behavior.
6) Integration and adaptation of the Social Force Model to simulate groups of people.
7) The merging of all the models in a simulation process.
In addition, some of the assumptions about the people's behavioral patterns are: the people follow Ra, as it is the conductor; the behavioral patterns correspond to adults and include neither children nor elderly persons; Rb and Rc create a certain atmosphere, or sense of guiding control, in people's feelings as they surround them. Note that this proposal is only a first approach, intended as the base on which more complex functionalities dealing with human factors and people behavior will be built. This initial contribution must be considered a first basic model, a preamble to more complex tasks; some potential applications are: guiding refugees towards safe places in case of military actions or disasters; guided tours for visitors in companies; escorting important and/or famous people by bodyguard robots; conducting herds of animals by farm robots; and so forth. Hereafter, the next step is the implementation of the social force model as an alternative to improve the human-motion behavior. Likewise, we are currently undertaking real experiments in simple situations with groups of people and the MRS, which has been very challenging and has involved many technical issues.

ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers whose valuable comments helped to improve this manuscript. This work was partly supported by the Japanese Grant-in-Aid for Scientific Research.

REFERENCES
[1] R. Thrapp, C. Westbrook and D. Subramanian, Robust localization algorithms for an autonomous campus tour guide, in Proc. of the IEEE International Conference on Robotics and Automation, Vol. 2, pp. 2065-2071, 2001.

[2] I. R. Nourbakhsh, J. Bobenage, S. Grange, R. Lutz, R. Meyer and A. Soto, An Affective Mobile Robot Educator with a Full-time Job, Artificial Intelligence, Vol. 114, No. 1-2, pp. 95-124, October 1999.
[3] J. Schulte, C. Rosenberg and S. Thrun, Spontaneous, Short-term Interaction with Mobile Robots, in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 1999.
[4] W. Burgard, A. B. Cremers, D. Fox, D. Haehnel, G. Lakemeyer, D. Schulz, W. Steiner and S. Thrun, Experiences with an Interactive Museum Tour-Guide Robot, Artificial Intelligence, Vol. 114, No. 1-2, pp. 3-55, 1999.
[5] R. T. Vaughan, N. Sumpter, J. Henderson, A. Frost and S. Cameron, Experiments in Automatic Flock Control, Robotics and Autonomous Systems, 31, pp. 109-117, 2000.
[6] Y. U. Cao, A. S. Fukunaga, A. B. Kahng and F. Meng, Cooperative Mobile Robotics: Antecedents and Directions, Autonomous Robots, 4, pp. 1-23, 1997.
[7] G. Dudek, M. R. Jenkin, E. Milios and D. Wilkes, A Taxonomy for Multi-Agent Robotics, Autonomous Robots, 3, pp. 375-397, 1996.
[8] L. Iocchi, D. Nardi and M. Salerno, Reactivity and Deliberation: A Survey on Multi-Robot Systems, in M. Hannebauer and J. Wendler (Eds.), Balancing Reactivity and Deliberation in Multi-Agent Systems (LNAI 2103), pp. 9-32, Springer-Verlag, Berlin Heidelberg, 2001.
[9] E. Martinez, A. Ohya and S. Yuta, Recognition of People's Positioning by Multiple Mobile Robots for Human Groups Steering, in Proc. Computational Intelligence in Robotics and Automation, Kobe, Japan, pp. 758-763, 2003.
[10] E. Martinez-Garcia, A. Ohya and S. Yuta, Multi-people Localization by Multiple Mobile Robots: First Approach for Guiding a Group of Humans, International Journal of Advanced Robotic Systems, Vol. 1, No. 3, pp. 171-182, Sep. 2004.
[11] E. Martinez, A Multi-Robot System Architecture Communication for Human-Guiding, to appear in the Journal of Engineering Manufacture, Part B1, Vol. 256, 2005.
[12] T. Munekata and A. Ohya, A Walk Support System for Two Distant Persons using Mobile Robots, in Proceedings of the 2003 IEEE International Symposium on Computational Intelligence in Robotics and Automation, pp. 45-49, 2003.
[13] L. Iocchi, D. Nardi, M. Piaggio and A. Sgorbissa, Distributed Coordination in Heterogeneous Multi-Robot Systems, Autonomous Robots, 15, pp. 155-168, 2003.
[14] T. Yoshida, A. Ohya and S. Yuta, Cooperative Self-Positioning System for Multiple Mobile Robots, in Proc. IEEE International Conference on Advanced Intelligent Mechatronics, pp. 223-227, 2003.
[15] R. E. Kalman, A New Approach to Linear Filtering and Prediction Problems, Transactions of the ASME - Journal of Basic Engineering, 82 (Series D), pp. 35-45, 1960.
[16] G. Welch and G. Bishop, An Introduction to the Kalman Filter, UNC-Chapel Hill, TR 95-041, March 2002.
[17] P. S. Maybeck, Stochastic Models, Estimation, and Control, Vol. 1, Academic Press, 1979.
[18] D. Helbing and P. Molnar, Social Force Model for Pedestrian Dynamics, Physical Review E, Vol. 51, No. 5, pp. 4282-4286, May 1995.
[19] J. A. Kirkland and A. A. Maciejewski, A Simulation of Attempts to Influence Crowd Dynamics, in IEEE International Conference on Systems, Man, and Cybernetics, pp. 4328-4333, Washington, DC, Oct. 2003.

Fig. 9. Multi-people localization experimental results.

Fig. 10. Trajectory control simulation results.

