Landing is the last unsolved problem in autonomous multi-rotor drone flight. Many other tasks, such as takeoff, waypoint-to-waypoint flight, and various mission tasks (e.g. collection of images and video), have been reliably automated. Yet autonomous landing remains primarily a manual task because of its inherently risky, sensitive nature. The critical result is that fully autonomous mission cycles remain just out of reach with current technology in many contexts. Prior work on autonomous multi-rotor landing suffers from at least one of several disadvantages: it depends principally on GPS and is therefore subject to possibly fatal inaccuracy (especially in Iceland); it relies on detecting special markers (known a priori) with a downward-facing camera, which can easily lose sight of the markers during approach and descent; it uses differences in pixel speed to deduce terrain topology and therefore depends on motion; or it relies on sophisticated ground stations to carry out the computationally expensive processing required for terrain analysis. The proposed research targets the problem of autonomous landing, aiming to create an algorithm that reliably lands multi-rotor drones while: 1. having no prior knowledge of the landing site or GPS position, 2. executing in real time with a critical deadline, and 3. using only the limited computational environment onboard the drone.
Thus far, we have created and tested several drone platforms with three different flight control software stacks (ArduPilot, PX4, and DJI), revealing many challenges of operating drones in Iceland: low GPS accuracy, unpredictable winds, and frequent rain. While a robust, weatherproof drone system can withstand wind and rain, low GPS accuracy has proven to be a real obstacle. DJI drones and flight controllers have been shown to vastly outperform others in this regard and thus provide a good base moving forward. Further, we have tested landing algorithms based on fiducial markers in simulation, differing from previous work in that we use a gimbal-mounted camera to track the marker continuously over time. This has involved further developing existing fiducial systems (AprilTag and WhyCode), optimizing them for execution on embedded hardware and testing the accuracy of their orientation estimation. Finally, we have created a proof of concept of such algorithms on a physical drone.
Moving forward, we plan to address the following research questions:
- RQ1. With what methods can a drone autonomously identify a safe, previously unknown landing site?
- RQ2. What data do such methods require?
- RQ3. How can such methods execute in real time within the power-limited environment of a drone?

We hypothesize that:
- H1: A U-net, or a variant such as Residual U-net, with pre-/post-processing steps for image rectification and communication with flight control software, will be able to recognize landing sites from drone sensor data.
- H2: This data can be point clouds from LIDAR units or RGBD (image + depth) cameras, both of which are small enough to be embedded in a drone.
- H3: An embedded TPU, GPU, or FPGA can execute the method onboard a drone in real time.
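To make H1 concrete, the segmentation idea can be sketched as a minimal two-level U-net that maps multi-channel sensor input (e.g. RGB + depth) to a per-pixel landing-suitability logit. This is an illustrative sketch assuming PyTorch; the channel counts, depth, and input size are placeholder assumptions, not the architecture under evaluation.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3x3 convolutions with ReLU, the basic U-net building block
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    """Toy two-level U-net: encoder, decoder, and one skip connection."""
    def __init__(self, in_ch=4, base=16):  # in_ch=4: e.g. RGB + depth (assumed)
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)  # skip connection doubles channels
        self.head = nn.Conv2d(base, 1, 1)       # per-pixel "safe to land" logit

    def forward(self, x):
        e1 = self.enc1(x)                            # full resolution
        e2 = self.enc2(self.pool(e1))                # half resolution
        d1 = self.up(e2)                             # upsample back
        d1 = self.dec1(torch.cat([d1, e1], dim=1))   # concatenate skip features
        return self.head(d1)

model = MiniUNet()
x = torch.randn(1, 4, 64, 64)  # one synthetic 64x64 frame with 4 channels
y = model(x)                   # logits with the same spatial size as the input
```

A Residual U-net variant would replace `conv_block` with residual blocks; the overall encoder/skip/decoder shape stays the same.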
We plan to generate training and testing data sets of LIDAR/RGBD point clouds in AirSim, and to collect further test data from the real world using our existing drone platforms. This data will serve as the basis for training several neural network models based on the U-net architecture. Promising models will be optimized using pruning and tested for execution speed and power consumption on physical hardware. We will test any sufficiently fast and accurate methods in real landing scenarios.
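The pruning step mentioned above can be sketched with PyTorch's built-in pruning utilities, which zero out a chosen fraction of a layer's weights (here, the smallest by L1 magnitude). The layer shape and the 50% sparsity target are illustrative assumptions, not our measured operating point.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Example layer standing in for one convolution of a trained U-net
layer = nn.Conv2d(16, 32, 3)

# Remove the 50% of weights with the smallest absolute value
prune.l1_unstructured(layer, name="weight", amount=0.5)

# Measure achieved sparsity: fraction of weights that are now exactly zero
sparsity = float((layer.weight == 0).sum()) / layer.weight.numel()

# Make the pruning permanent (folds the mask into the weight tensor)
prune.remove(layer, "weight")
```

Unstructured sparsity alone mainly shrinks the model; realizing speed and power gains on an embedded TPU, GPU, or FPGA typically also requires structured pruning or hardware-aware compilation, which is what the on-device benchmarks are meant to evaluate.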
People
Joshua D. Springer
Marcel Kyas
Publications
2022
Autonomous Multirotor Landing on Landing Pads and Lava Flows Proceedings Article
In: IEEE International Conference on Robotic Computing (IRC 2022), Naples, Italy, 2022.
Autonomous Precision Drone Landing with Fiducial Markers and a Gimbal-Mounted Camera for Active Tracking Proceedings Article
In: IEEE International Conference on Robotic Computing (IRC 2022), Naples, Italy, 2022.
Evaluation of Orientation Ambiguity and Detection Rate in April Tag and WhyCode Proceedings Article
In: IEEE International Conference on Robotic Computing (IRC 2022), Naples, Italy, 2022.