


Micro Sensor Based Eye Movement Detection and Neural Network Based Sensor Fusion and Fault Detection and Recovery
J. Gu and M. Meng
Dept. of Electrical Engineering University of Alberta Canada T6G 2G7 Jason@ee.ualberta.ca, Max.Meng@ualberta.ca

A. Cook
Faculty of Rehabilitation Medicine University of Alberta Canada T6G 2G4 Al.Cook@ualberta.ca

M.G. Faulkner
Dept. of Mechanical Engineering University of Alberta Canada T6G 2G8 Gary.Faulkner@ualberta.ca

Abstract
A person missing an eye, for whatever reason, may suffer psychologically as well as physically. Cosmetically, the loss of an eye can be addressed with an ocular implant. Such an artificial eye appears natural, but it is static. To give the artificial eye the same natural movement as the real eye, an ocular system is developed in which the artificial eye is mounted on a tiny servomotor. The system senses the real eye movement and controls the motor to drive the artificial eye to the desired position. A tiny infrared sensor array is used for this study. This paper describes an approach that uses an artificial neural network to perform the sensor fusion needed to detect the eye movement. Two types of neural networks are used: one for sensor fusion and one for sensor fault detection and recovery. Sensor fusion usually relies on a model of the system; however, an accurate model is sometimes unavailable, or one or more system parameters may be unknown or only partially known. In addition, there may be measurement inaccuracies associated with the sensors. In such cases, conventional methods may not perform well. An artificial neural network can learn the characteristics of a nonlinear, unmodeled system from training samples; then, in operation, the sensor signals are fed to the network to obtain the desired output. An experimental study of eye-movement detection with the micro sensor array is carried out. The sensor data are amplified, digitized, and sent to the computer. Two-layer neural networks are trained on the data samples. The first trained network performs the sensor fusion; two further networks detect sensor failures and recover the faulty data, respectively. Experimental studies with both soft and hard sensor failures are included. The main part of this paper deals with the network training method and further considerations.

1

Introduction

Multi-sensor integration and fusion have received much attention in recent years; [1] surveys the main techniques. The Bayesian method is one of the classic approaches to sensor fusion. Durrant-Whyte started uncertainty modeling for multi-sensor systems in 1985; he assumed Gaussian distributions and applied Bayesian inference with minimum-variance estimation to fuse linearly structured multi-sensor systems. This method had its shortcomings: it lacked flexibility and could not discriminate between uncertainty and ignorance. Dempster-Shafer theory was used to remedy this shortcoming [2, 3]. It provided a way to fuse information in the presence of uncertain elements: instead of placing an exact probability on a given event, as Bayesian theory does, upper and lower probabilities are used as likelihood bounds. It was applied to image processing and signal classification. Building on Bayesian and Dempster-Shafer theory, [4] presented a new strategy for statistical decision and evidence combination called double bound testing (DBT), which increased the flexibility of the decision. Fuzzy sets and neural networks are also used in sensor fusion [5, 6, 7, 8, 9, 10]. A fuzzy approach [5] was used for classification. Neural networks have been used for motion detection [6], object detection [7], speech perception [10], and signal processing [11]. [8] presented a perception-action network that embeds feasible system behaviors at various levels of abstraction, so that the system can replan and control its behaviors toward the set goals. This paper presents a neural-network-based approach to sensor fusion. An artificial neural network can learn the characteristics of a nonlinear, unmodeled system from training samples; then, in operation, the sensor signals are fed to the network to obtain the desired output.


2 Neural Network Approach

2.1 Two-Layer Neural Network

An artificial neural network can learn the characteristics of a nonlinear, unmodeled system from training samples. Assume there are n inputs X = [x1, x2, ..., xn] and m outputs Y = [y1, y2, ..., ym], related by an unknown nonlinear function Y = F(X). The neural network sketched in figure 1 is able to learn the relationship between X and Y.

Figure 2: Supervised learning

Figure 1: Two-layer neural network

This is a two-layer neural network. The first layer is the hidden layer; its neurons are fed with the sensor measurements x_i, i = 1 to n, and have activation function F_in and biases B_in. The second layer is the output layer; its neurons are fed with the outputs of the hidden layer and have activation function F_out and biases B_out. A set of weights is connected to each layer: W_in for the hidden layer and W_out for the output layer. In this network, the output Y can be expressed as

Figure 3: Fusion and sensor failure detection

row of data and the time interval are fed into the neural network to obtain the weights, and the estimated sensor output can be acquired. The last column holds the newest data, which is used as input to the trained neural network to perform the fusion.
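The data arrangement of figure 3 can be sketched as a sliding window: rows follow one sensor over time, columns hold all sensor readings from the same instant, and the newest column feeds the fusion network. The sketch below is an illustrative Python/NumPy reconstruction (the paper used Matlab); the window length and the sample values are assumptions.

```python
import numpy as np

def data_matrix(readings, window=4):
    """Build D, where D[i, j] is the j-th reading of sensor i.

    Each row is the successive data from one sensor; each column is
    the data from all sensors at the same time instant.  Only the
    most recent `window` samples are kept.
    """
    n_sensors = len(readings[0])
    D = np.zeros((n_sensors, window))
    for sample in readings:
        D[:, :-1] = D[:, 1:]      # shift older columns to the left
        D[:, -1] = sample         # newest data goes in the last column
    return D

# Five successive readings from a hypothetical 3-cell sensor array.
stream = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12], [13, 14, 15]]
D = data_matrix(stream, window=4)
newest = D[:, -1]                 # used as input for the trained network
```

The last column of `D` is what the trained fusion network consumes at each step.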

Y = F_out(W_out * (F_in(W_in * X) + B_in)) + B_out    (1)
3 Experimental Study

3.1 Sensor Data Space Creation

2.2 Learning of the Neural Network

Supervised learning is used for this two-layer network. The outputs of the network are compared to the desired outputs, and the error is used to adjust the weights and biases. In this way the network is trained by minimizing this error term. The block diagram of the learning method is shown in figure 2.
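The error-driven update of figure 2 can be sketched as one step of gradient descent on the squared error. This is an illustrative Python/NumPy reconstruction, not the paper's Matlab code; the tanh/linear activations, the learning rate, and the toy targets (angles scaled to [-1, 1]) are all assumptions.

```python
import numpy as np

def train_step(x, t, W_in, B_in, W_out, B_out, lr=0.05):
    """One supervised update: compare the output to the desired output
    t and adjust weights and biases by gradient descent on 0.5*||y-t||^2."""
    z = W_in @ x
    h = np.tanh(z) + B_in          # hidden output, bias outside tanh as in Eq. (1)
    y = W_out @ h + B_out
    e = y - t                      # error term to be minimized
    dh = W_out.T @ e               # gradient reaching the hidden layer
    W_out -= lr * np.outer(e, h)
    B_out -= lr * e
    B_in -= lr * dh                # bias sits outside the activation
    W_in -= lr * np.outer(dh * (1.0 - np.tanh(z) ** 2), x)
    return float(0.5 * e @ e)

# Train on a toy sensor-to-angle mapping and watch the error fall.
rng = np.random.default_rng(1)
W_in, B_in = 0.1 * rng.standard_normal((4, 3)), np.zeros(4)
W_out, B_out = 0.1 * rng.standard_normal((1, 4)), np.zeros(1)
samples = [(np.array([0.2, 0.7, 0.1]), np.array([-0.5])),
           (np.array([0.1, 0.7, 0.3]), np.array([0.25]))]
losses = [sum(train_step(x, t, W_in, B_in, W_out, B_out) for x, t in samples)
          for _ in range(300)]
```

The updates mutate the weight and bias arrays in place, so repeated calls continue training the same network.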

Multiple sensors are used to detect the eye movement [12]. An infrared sensor array of nine emitter-detector cell pairs is used to detect the eye movement; in this paper, only a three-cell sensor array is used for the detection. The experimental study was first carried out using the artificial eye model shown in figure 4. Two artificial eyeballs are mounted inside the eye socket model, which has the same volume as the real eye pit. The eyeballs are linked to servomotors controlled by a microcontroller, which drives the servomotor to move the eyeball using a predefined eye-movement signal. The infrared emitter sends out infrared light to illuminate the artificial eye, and the infrared detector receives the reflected light. The relationship between the infrared array output

2.3 Sensor Fusion and Sensor Failure Detection

Figure 3 is the block diagram for sensor fusion and failure detection. D_ij is the j-th datum of sensor i. Each row is the successive data from the same sensor, and each column is the data from all sensors at the same time instant. Each


Figure 5: The eye movement record for a tracking target

Figure 4: The artificial eye model

Figure 6: The fusion block diagram

and the eye position is nonlinear. Training the neural network completes the nonlinear mapping between the input and the output, which is divided into the following three steps:

Calibration: First, let the eye move following the predefined eye-movement signal and record the infrared sensor output simultaneously. Repeat this calibration procedure enough times that the expected squared error is minimized.

Training: Use the recorded sensor output as input to the multi-layer neural network, and the predefined eye-movement signal as the desired output, to train the supervised network. The weights and biases of the network are obtained as the information for the mapping between the sensors and the eye position.

Experiment: During the experiment, feed the recorded sensor output to the neural network; the output of the network is the eye-movement signal.
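The three steps above can be sketched end to end. Everything in this Python/NumPy sketch is a stand-in for illustration: the Gaussian sensor response model is invented, and a linear least-squares fit replaces the paper's two-layer neural network trained on real infrared data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1 -- Calibration: drive the eye through known angles many times
# and record the (noisy) infrared sensor outputs.  The 3-cell sensor
# response model below is hypothetical.
angles = np.linspace(-20, 20, 41)                  # degrees, 1-degree steps
def sensors(a):
    return np.array([np.exp(-((a - c) / 15.0) ** 2) for c in (-15, 0, 15)])
X = np.array([sensors(a) + 0.01 * rng.standard_normal(3)
              for _ in range(20) for a in angles])
t = np.tile(angles, 20)

# Step 2 -- Training: fit the sensor-to-angle mapping.  A linear
# least-squares fit stands in for the trained neural network.
A = np.hstack([X, np.ones((len(X), 1))])           # add a bias column
w, *_ = np.linalg.lstsq(A, t, rcond=None)

# Step 3 -- Experiment: feed new sensor readings through the mapping.
est = np.hstack([sensors(5.0), 1.0]) @ w           # estimate for a 5-degree pose
```

A linear fit cannot match a trained network's accuracy on this nonlinear mapping, but it preserves the calibrate/train/estimate structure of the procedure.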

3.3 Experimental Results for Fault-Free Sensor Data

Matlab is used for the neural network training and simulation. Figure 7 shows the experimental result for fusion. The left panel of figure 7 shows the infrared array data; it is clear that the artificial eye moves back and forth three times during the experiment. The right panel of figure 7 shows the eye position output. The result verifies the fusion algorithm.

Figure 7: Fusion with fault-free sensor data

As shown in figure 5, the artificial eyeball movement range is 40 degrees, from minus 20 degrees to plus 20 degrees. The servomotor drives the artificial eyeball from the left end to the right end at a slow speed; the resolution is 1 degree for the time being. The infrared sensor then records the data and sends it to the computer for analysis. The three-cell sensor array data are shown in the left panel of figure 5. Using the three steps described above, the trained neural network is obtained.

3.4 Experimental Results for Soft Sensor Failure

3.2 Experimental Results for Fusion

For the fusion experiment, a periodic eye-movement signal is sent to the controller to drive the artificial eyeball. The recorded sensor array data, with noise, is fed into the trained neural network to obtain the fusion output. Figure 6 shows the process.

Soft sensor failure means that the sensors are still working but there is some noise in the sensor data, such as bias, drift, and precision degradation. The amplitude of the sensor noise is very low, and the network can tolerate it. Figure 8 shows the experimental results with soft sensor failure. In the experiment, random noise is added to the sensor output, with amplitude ranging from 1 percent to 10 percent of the maximum sensor output amplitude. The relationship between the noise amplitude in percent and the position error in degrees is shown in the figure.
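The soft-failure experiment can be sketched as injecting uniform noise scaled from 1 to 10 percent of full scale into an otherwise clean reading. This Python/NumPy sketch is illustrative only; the reading values and the [0, 1] full-scale amplitude are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
clean = np.array([0.3, 0.8, 0.5])     # hypothetical clean 3-cell reading
full_scale = 1.0                      # assumed maximum sensor output amplitude

worst_error = {}
for pct in range(1, 11):              # noise at 1% .. 10% of full scale
    noise = (pct / 100.0) * full_scale * rng.uniform(-1.0, 1.0, size=3)
    noisy = clean + noise             # soft failure: bias/drift-like noise
    worst_error[pct] = float(np.abs(noisy - clean).max())
```

Feeding `noisy` rather than `clean` to the trained fusion network reproduces the degradation curve of figure 8; by construction the per-cell perturbation is bounded by the chosen percentage of full scale.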


sensors and a failed sensor whose readings are to be recovered.

Training for the failure detection network: Use the normally working sensor data and the faulty sensor data as input to the multi-layer neural network, with the maximum and minimum values assigned as outputs for the two types of data respectively, to train the supervised network.

Training for the failure recovery network: Use the normally working sensor outputs as input to the multi-layer neural network, and the failed sensors to be corrected as output, to train the supervised network. The weights and biases of the network are obtained as the information for the mapping between the normal sensors and the abnormal sensors.
Figure 8: Fusion with soft sensor failure (position error versus noise amplitude in percent)

3.5 Experimental Results for Hard Sensor Failure

Hard sensor failure means that the sensor does not work at all; in electronics it is usually defined as a stuck-at failure, where the sensor is stuck at one extreme of its signal range. In practice, this is likely to be an open (stuck-at 0) or short-circuit (stuck-at 1) sensor. Figure 9 shows the results when sensor cell one, cell two, and cell three are stuck at 0, respectively. It is clear that with the trained network alone, without failure detection and recovery, the result is totally out of order.
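A stuck-at failure can be simulated by pinning one cell to a range extreme before the reading reaches the fusion network. This Python/NumPy sketch is illustrative; the reading values and the [0, 1] signal range are assumptions.

```python
import numpy as np

def apply_hard_failure(reading, cell, mode):
    """Simulate a stuck-at sensor: the failed cell is pinned to one
    extreme of the signal range (stuck-at 0 = open, stuck-at 1 = short)."""
    faulty = reading.copy()
    faulty[cell] = 0.0 if mode == "stuck-at-0" else 1.0
    return faulty

reading = np.array([0.3, 0.8, 0.5])       # hypothetical healthy reading
for cell in range(3):                     # fail cell one, two, three in turn
    broken = apply_hard_failure(reading, cell, "stuck-at-0")
```

Passing each `broken` vector through the fusion network without detection and recovery reproduces the out-of-order outputs of figure 9.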

Completion: Adding more sets of normal-versus-abnormal sensor data samples, and thereby exploring the data more thoroughly, enables the network to deal with different sensor failure configurations.

By using the trained neural networks, the hard-failure problem above can be solved easily. As shown in figure 10, the left panel is the detection result: a threshold distinguishes the normal sensor data from the abnormal sensor data. The right panel shows the recovered eye position signal.
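The detection-then-recovery pipeline can be sketched with simple stand-ins: a threshold at the signal-range extremes replaces the detection network, and a least-squares fit from the healthy cells replaces the recovery network. All data in this Python/NumPy sketch are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented correlated 3-cell readings standing in for healthy sensor data.
base = rng.uniform(0.2, 0.8, size=(200, 1))
healthy = np.hstack([base,
                     0.8 * base + 0.1,
                     0.5 * base + 0.2 + 0.01 * rng.standard_normal((200, 1))])

# "Recovery network": a least-squares fit predicting cell 0 from cells 1
# and 2 stands in for the paper's trained neural network.
A = np.hstack([healthy[:, 1:], np.ones((200, 1))])
w, *_ = np.linalg.lstsq(A, healthy[:, 0], rcond=None)

# "Detection network": flag a cell pinned at a range extreme
# (a fixed threshold stands in for the trained classifier).
sample = healthy[0].copy()
sample[0] = 0.0                                  # hard stuck-at-0 failure
is_faulty = sample[0] < 0.05 or sample[0] > 0.95

if is_faulty:                                    # recover the failed cell
    sample[0] = float(np.array([sample[1], sample[2], 1.0]) @ w)
```

Because the healthy cells are correlated, the recovered value closely tracks the reading the failed cell would have produced, mirroring the recovered signal in figure 10.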


Figure 10: Failure detection and recovery

4 Experimental Study with Real Eye Movement

Figure 9: Hard sensor failure without failure detection and recovery


3.6 Experimental Results for Failure Detection and Recovery

To deal with sensor failure, two types of neural networks are generated. The training steps are as follows:

Sample selection: Select a set of data samples for normally working

The emitter and the detectors are mounted on the frame of eyeglasses that the subject wears. The emitter sends out infrared light to illuminate the eye, and the reflected light is detected by the detector array; the reflected light intensity changes with the eye position. The obtained sensor data is processed through the sensor fusion algorithm to determine the eye position. The real eye-movement measurement procedure is the same as the one for the artificial eye. The subject is asked to stabilize his or her head, to look to the left, to the right, and to the center, and then to move the eye horizontally. A calibration curve is generated. The subject is then


asked to track the moving target with his or her eye. The whole procedure is recorded: the eye positions as measured by the target location, and the infrared sensor data. Currently we are going through ethics approval to recruit subjects for the experiment. The authors first carried out a pilot study, and promising results were obtained.

Figure 11: The real eye calibration setup

5 Conclusion and Future Considerations

Sensor fusion is a key issue in many systems, particularly robotics-related systems. This paper proposes a neural network method for fusing infrared reflection data to estimate the position of the eye. The experiments show that the network performs well in the presence of noise and sensor faults. The approach needs to be further validated with real eye-movement experiments. Future work will be this experimentation and the extension of the method to related fields.

References

[1] Jay K. Hackett and Mubarak Shah, "Multi-sensor Fusion: A Perspective", Proceedings of the 1990 IEEE International Conference on Robotics and Automation, 1990, pp. 1324-1330.

[2] Jung-Jae Chao, Kuo-Chih and Lain-Wen Jang, "Uncertain Information Fusion using Belief Measure and Its Application to Signal Classification", Proceedings of the 1996 IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems, 1996, pp. 151-157.

[3] Thomas D. Garvey, John D. Lowrance, and Martin A. Fischler, "An Inference Technique for Integrating Knowledge from Disparate Sources", 1981, pp. 319-325.

[4] Xiao-Gang Wang, Wen-Han Qian, Enrico Pagello and Ren-Qing Pei, "On the Uncertainty and Ignorance of Statistical Decision and Evidence Combination", Proceedings of the 1996 IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems, 1996, pp. 166-173.

[5] Peter Wide and Dimiter Driankov, "A Fuzzy Approach to Multi-sensor Data Fusion for Quality Profile", Proceedings of the 1996 IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems, 1996, pp. 215-221.

[6] Aiqun Wang, Nanning Zheng, LiXing Yuan and Xiaodong Fu, "Multiplicative Inhibitory Velocity Detector (MIVD) and Multi-velocity Motion Detection Neural Network Model", Proceedings of the 1996 IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems, 1996, pp. 476-483.

[7] Yong-Jian Zheng and Bir Bhanu, "Adaptive Object Detection From Multisensor Data", Proceedings of the 1996 IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems, 1996, pp. 633-640.

[8] Sukhan Lee, "Sensor Fusion and Planning with Perception-Action Network", Proceedings of the 1996 IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems, 1996, pp. 687-696.

[9] Joris W. M. van Dam, Ben J. A. Krose and Franciscus C. A. Groen, "Adaptive Sensor Models", Proceedings of the 1996 IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems, 1996, pp. 705-712.

[10] Harouna Kabre, "On the Active Perception of Speech by Robots", Proceedings of the 1996 IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems, 1996, pp. 765-774.

[11] Dukki Chung and Francis L. Merat, "Neural Network Based Sensor Array Signal Processing", Proceedings of the 1996 IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems, 1996, pp. 757-764.

[12] J. Gu, M. Meng, A. Cook and M. G. Faulkner, "Sensing and Control System for Ocular Implant", Proceedings of the 1999 IEEE Canadian Conference on Electrical and Computer Engineering, 1999, pp. 1408-1412.


