Enabling Multimodal Sensing, Real-time Onboard Detection and Adaptive Control for Fully Autonomous Unmanned Aerial Systems


The goal of this proposed research project is to achieve true onboard autonomy in real time for small UAVs in the absence of remote control and external navigation aids. Three major areas have been explored. In the area of UAV flight control, an automatic trajectory generation framework is developed. It consists of waypoint planning at the upper level and LQR-based trajectory generation at the lower level. The deep reinforcement learning based framework reduces the control thrust by more than 15% with much lower computational complexity than state-of-the-art approaches. In the area of obstacle sensing, 3D object detection and classification using Capsule Networks is investigated. Compared to other existing approaches, it achieves higher accuracy with fewer training samples. Finally, in the area of system integration, an FPGA-based implementation of YOLO is developed, with the model compressed using block-circulant weight matrices. Compared to a GTX 1070 GPU, it achieves similar throughput with 6x higher energy efficiency. Compared to the NVIDIA Jetson TX2, it improves throughput and energy efficiency by 7x and 3.5x, respectively.
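
The lower-level trajectory generation is LQR based. As a rough illustration only, the sketch below computes an LQR state-feedback gain for a per-axis double-integrator model; the actual vehicle model, cost weights, and the deep-reinforcement-learning waypoint planner used in the project are not specified here and are assumed for the example.

```python
# Minimal sketch of LQR-based trajectory tracking, assuming a per-axis
# double-integrator model (hypothetical; not the project's actual UAV model).
import numpy as np
from scipy.linalg import solve_continuous_are

# State x = [position, velocity], input u = acceleration command.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# Hypothetical cost weights: penalize position error more than control effort.
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])

# Solve the continuous-time algebraic Riccati equation and form the gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P  # optimal state-feedback gain

def track_reference(x, x_ref):
    """Acceleration command steering the state toward a waypoint-derived reference."""
    return -K @ (x - x_ref)

# Example: 1 m position error, zero velocity error.
print(track_reference(np.array([1.0, 0.0]), np.zeros(2)))
```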
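The block-circulant compression of the YOLO weights exploits the fact that each k x k circulant block is defined by a single length-k vector, so a block-times-vector product becomes a circular convolution computable with FFTs. The following sketch shows that idea in NumPy; the block size, layer shapes, and function name are illustrative assumptions, not the project's FPGA implementation.

```python
# Sketch of a block-circulant matrix-vector product via FFT (hypothetical
# shapes; illustrates the compression idea, not the FPGA design itself).
import numpy as np

def block_circulant_matvec(w, x, k):
    """y = W x, where W is (p*k) x (q*k) and w[i, j] is the length-k defining
    vector (first column) of circulant block (i, j)."""
    p, q, _ = w.shape
    xb = x.reshape(q, k)               # split input into q blocks of length k
    Xf = np.fft.fft(xb, axis=1)        # FFT of each input block
    Wf = np.fft.fft(w, axis=2)         # FFT of each defining vector
    # Circular convolution per block, accumulated over the q input blocks.
    Yf = np.einsum('ijk,jk->ik', Wf, Xf)
    return np.real(np.fft.ifft(Yf, axis=1)).reshape(p * k)

# Tiny example: an 8x8 weight matrix stored as 2x2 circulant blocks of size
# k = 4 needs only 2*2*4 = 16 parameters instead of 64.
k, p, q = 4, 2, 2
w = np.random.randn(p, q, k)
x = np.random.randn(q * k)
print(block_circulant_matvec(w, x, k).shape)   # (8,)
```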

License: CC-2.5
Submitted by Qinru Qiu on