"Clean" Background Scenes

We illustrate our approach with a video sequence captured at a traffic intersection by a fixed camera, as shown in Figure 1 (a). To accurately represent the density distribution of each pixel in the background scene, training inputs are collected to train a support vector regression for function estimation. At a given spatial position, each training input consists of two components: the probability that the pixel belongs to the background and the pixel intensity. Training samples that very likely belong to the background are manually assigned a probability of 1. Their intensities are taken from a "clean" background scene, free of moving vehicles and pedestrians, which is formed by median filtering the captured video frames, as shown in Figure 1 (b).

Figure 1. (a) Captured video sequence
Figure 1. (b) "Clean" background
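The following Python sketch illustrates, under stated assumptions, the two preprocessing steps described above: forming the "clean" background as the per-pixel temporal median of sampled frames, and assembling (intensity, probability) training pairs for the support vector regression at one spatial position. It is not the project code; the file name intersection.avi, the 200-frame window, the example pixel position, and the SVR parameters are illustrative choices, and OpenCV, NumPy, and scikit-learn are assumed.

import cv2
import numpy as np
from sklearn.svm import SVR

# (1) "Clean" background via per-pixel temporal median filtering.
cap = cv2.VideoCapture("intersection.avi")      # hypothetical file name
frames = []
while len(frames) < 200:                        # sample a window of frames
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
cap.release()

stack = np.stack(frames, axis=0)                # shape (T, H, W)
clean_background = np.median(stack, axis=0).astype(np.uint8)
cv2.imwrite("clean_background.png", clean_background)

# (2) Training inputs for the support vector regression at one pixel.
# Intensities observed at (row, col) in the clean-background window are
# treated as background samples and manually assigned probability 1.
# (The full method would also include samples with lower background
# probability; only the background samples described above are shown.)
row, col = 120, 240                             # example spatial position
bg_intensity = stack[:, row, col].astype(np.float64).reshape(-1, 1)
bg_probability = np.ones(len(bg_intensity))     # label: background

svr = SVR(kernel="rbf", C=1.0, epsilon=0.05)    # illustrative parameters
svr.fit(bg_intensity, bg_probability)

# At run time, the regression output for a newly observed intensity is used
# as an estimate of the probability that the pixel still shows background.
new_intensity = np.array([[130.0]])
print("estimated background probability:", svr.predict(new_intensity)[0])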

Detection Results Based on Visible Sequences

The proposed target detection algorithm has been tested on two data sets. Data set 1 is a visible-light image sequence captured at a traffic intersection; a total of two hours of video was collected at a sample rate of 4 frames/second. Data set 2 is a thermal image sequence captured at a university campus walkway intersection and street over several days (morning and afternoon), using a Raytheon 300D thermal sensor core with a 75 mm lens mounted on an 8-story building [4][5]. In the following, we give preliminary detection results of the proposed algorithm on these two data sets. Figure 2 shows the detection results on the visible images of Data set 1 (1st column). In the 2nd column of Figure 2, detected vehicles are labeled in white. The corresponding spatial positions of the detected vehicles are marked with yellow rectangles in the 3rd column of Figure 2.

Figure 2. Detection results based on visible images captured at a traffic intersection. 1st column: captured traffic scene; 2nd column: background subtraction based on the estimated probability image; and 3rd column: spatial positions of detected vehicles (yellow rectangles).
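Below is a minimal sketch, not the project code, of how the 2nd and 3rd columns of Figure 2 can be produced from a per-pixel background-probability image: pixels with low background probability are labeled white (background subtraction), and connected components larger than a size threshold are boxed in yellow. The function name detect_vehicles, the probability threshold of 0.5, and the minimum area of 150 pixels are assumptions; OpenCV and NumPy are assumed.

import cv2
import numpy as np

def detect_vehicles(frame_bgr, prob_image, prob_thresh=0.5, min_area=150):
    """Return (foreground mask, frame annotated with yellow rectangles)."""
    # Background subtraction: mark pixels whose estimated background
    # probability is low as foreground (white).
    foreground = (prob_image < prob_thresh).astype(np.uint8) * 255

    # Group foreground pixels into connected components (candidate vehicles);
    # default 8-connectivity.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(foreground)

    annotated = frame_bgr.copy()
    for i in range(1, num):                     # component 0 is the background
        x, y, w, h, area = [int(v) for v in stats[i]]
        if area >= min_area:                    # discard small noise blobs
            cv2.rectangle(annotated, (x, y), (x + w, y + h),
                          (0, 255, 255), 2)     # yellow box in BGR
    return foreground, annotated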

Detection Results Based on Infrared Sequences

Compared to visible image sensor-based vehicle detection, infrared image sensor-based vehicle detection may enhance system performance for nighttime surveillance and offers relatively higher resistance to poor weather (snow, rain, and fog) because of the high contrast of infrared imagery. Infrared image sensors exploit a combination of temperature differences, emissivity differences, and "cold sky" reflections that together produce imagery with high contrast between the target and the background clutter. In many cases this contrast is superior to that attained in visible imagery [4]. Therefore, in the following we also demonstrate detection results based on thermal images and compare them with the AdaBoosted classifier algorithm [5] in Figures 3 and 4, respectively.

Figure 3. Detection results based on thermal images.
(a) Detection result of the AdaBoosted classifier algorithm. (b) Detection result of the proposed algorithm.
Figure 4. Comparison between the AdaBoosted classifier algorithm [5] and the proposed support vector regression-based algorithm.
Figure 5. Video sequences for human detection based on thermal images
