Image Processing For Automatically Guided Vehicle Navigation System

Project brief:
The design of an autonomous platform capable of intelligent visual sensing and control through the specialization of perceptual processes and behavioural networks.
Requirements:
• A plug-and-play real-time vision application (minimum 15 fps) for tracking and depth perception
• The identification of static and dynamic obstacles within the environment (scene parsing)
• Real-time reactance and path planning (control)
• Three-dimensional tracking
Design work:
A stereopsis (stereo vision) application was scripted to enable depth perception for localisation and target tracking. Traditionally, stereo vision tasks tend to rely on full calibration, in which both intrinsic and extrinsic parameters are fully defined. To better simulate a generic stereo setup, the vision application in this instance utilises two auto-focusing cameras and has been written to provide 'plug-and-play' functionality for a variety of stereo applications; as a result, certain parameters (such as baseline and focal length) have been treated as undefined. The advantage of this approach is that weak calibration was found to be adequate for the matching process, where image registration and depth reconstruction (distance to a given target) were achieved from a single pair of binocular images through the use of a suitable number of point correspondences.
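By way of illustration only (this is not the project's own code), the sketch below follows such a weakly calibrated pipeline in Python/OpenCV: point correspondences are used to estimate the fundamental matrix, the image pair is rectified without known intrinsics, and a dense disparity map is computed. With baseline B and focal length f left undefined, the relation Z = f·B/d recovers depth only up to scale, which suffices for relative ranging to a matched target. The feature detector, matcher settings and parameter values are all assumptions.

```python
# Sketch of weak-calibration stereo: correspondences -> fundamental matrix ->
# uncalibrated rectification -> disparity. Inputs are assumed to be BGR frames
# from the left and right cameras.
import cv2
import numpy as np

def weak_calibration_disparity(img_left, img_right):
    # 1. Point correspondences from ORB features (any detector would do).
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 2. Fundamental matrix (weak calibration) estimated robustly with RANSAC.
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    pts1, pts2 = pts1[inliers.ravel() == 1], pts2[inliers.ravel() == 1]

    # 3. Rectify without intrinsics so epipolar lines become horizontal.
    h, w = img_left.shape[:2]
    _, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (w, h))
    rect_l = cv2.warpPerspective(img_left, H1, (w, h))
    rect_r = cv2.warpPerspective(img_right, H2, (w, h))

    # 4. Dense disparity d. Metric depth would be Z = f * B / d, but with an
    #    undefined baseline B and focal length f only relative depth is known.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    disparity = matcher.compute(cv2.cvtColor(rect_l, cv2.COLOR_BGR2GRAY),
                                cv2.cvtColor(rect_r, cv2.COLOR_BGR2GRAY))
    return disparity.astype(np.float32) / 16.0  # SGBM output is fixed-point x16
```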
Determination of unobstructed regions (free space) within the scene was achieved through probability-based mapping: an appearance-based obstacle detection algorithm utilising colour segmentation within the HSV colour space. The HSV colour space affords partial exclusion of both localised and ambient variations in light intensity between disparate scenes, variations which can otherwise prove a considerable hurdle in the design of vision applications. A further benefit of this methodology is the ability to provide an estimate of distance in untextured scenes, since stereopsis requires sufficient texture or feature points in the matching process.
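The sketch below is one plausible reading of this approach (Python/OpenCV; the project's exact formulation is not given): a strip of pixels assumed to be unobstructed ground is used to build a hue-saturation histogram, which is back-projected over the frame to yield a probability map of free space, with the value channel ignored to lessen sensitivity to illumination. The reference-strip location, histogram sizes and thresholds are assumptions.

```python
# Sketch of appearance-based free-space detection via HSV colour segmentation.
# Pixels whose hue/saturation match the assumed-ground reference strip are
# treated as free space; poorly matching pixels are flagged as obstacles.
import cv2
import numpy as np

def free_space_mask(frame_bgr, ref_rows=slice(-60, None)):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    # Hue/saturation histogram of the reference strip (assumed unobstructed
    # ground directly in front of the vehicle); value channel is ignored.
    ref = hsv[ref_rows, :, :]
    hist = cv2.calcHist([ref], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

    # Back-projection: high values = pixels whose colour matches the ground model.
    prob = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
    prob = cv2.GaussianBlur(prob, (9, 9), 0)

    # Threshold the probability map into free space (255) vs obstacle (0).
    _, mask = cv2.threshold(prob, 50, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask
```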
Position monitoring, goal achievement and real-time reactance (collision avoidance) were achieved through the use of behavioural control.
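As an assumed illustration of how such a behavioural network might be arbitrated (the project's actual behaviour set and arbitration scheme are not described here), the sketch below uses a simple priority ordering in which collision avoidance pre-empts goal seeking, which in turn pre-empts a default wander behaviour.

```python
# Sketch of priority-based behavioural arbitration. Each behaviour inspects the
# latest perception results and may propose a command; the arbiter takes the
# highest-priority proposal on each control cycle.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Command:
    linear: float   # forward speed (normalised)
    angular: float  # turn rate (normalised, positive = left)

Behaviour = Callable[[dict], Optional[Command]]

def avoid_obstacle(percepts: dict) -> Optional[Command]:
    # React if the free-space mask shows an obstruction directly ahead.
    if percepts.get("obstacle_ahead", False):
        return Command(linear=0.0, angular=0.8)   # stop and turn away
    return None

def seek_target(percepts: dict) -> Optional[Command]:
    # Steer toward a tracked target in proportion to its bearing (radians,
    # positive = target to the left).
    bearing = percepts.get("target_bearing")
    if bearing is not None:
        return Command(linear=0.5, angular=0.5 * bearing)
    return None

def wander(_: dict) -> Optional[Command]:
    return Command(linear=0.3, angular=0.0)       # default: creep forward

def arbitrate(percepts: dict,
              behaviours=(avoid_obstacle, seek_target, wander)) -> Command:
    # Behaviours are ordered by priority; the first one that fires wins.
    for behaviour in behaviours:
        command = behaviour(percepts)
        if command is not None:
            return command
    return Command(0.0, 0.0)
```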
Outcome:
The outcome of this project was the design of a robust, lightweight, real-time vision application capable of reactance, three-dimensional localisation and scene interpretation within both dynamic and disparate environments.
Key Features:
• Plug-and-play stereo vision application
• Appearance-based obstacle detection
• Target tracking
• Behavioural control for real-time reactance and control
• A front-end processor, low-level arbitration and implementation of the behavioural network

As a requisite of the project, a mobile platform was also constructed to test the vision system within a real-time environment.
