Open MULTIDRONE Software

This webpage is dedicated to publicly available software developed within the MULTIDRONE project. To access this software, please contact Prof. Ioannis Pitas.

The following sections describe the core functionalities of the MULTIDRONE system, including Visual Analysis Modules (both onboard & ground station), Gimbal & Camera control, as well as functionalities regarding the setup of autonomous flight missions.

Visual Analysis Modules

Object Detection & Tracking

The system allows for autonomous target tracking based on object detection. Specifically, the Master Visual Analysis node provides a FollowTarget service. When called, a Detect service call is made to the detector node, which returns candidate bounding boxes. After choosing which candidate to follow, the tracker is initialized and the tracking process begins.

  • Detector: Can be any object detector, so long as it provides bounding boxes. Implemented are YOLO (based on the original Darknet implementation), SSD (based on the TensorFlow Object Detection API), and our lightweight pyramid-based detector.
  • Tracker: Can be any generic visual object tracker. A bounding box and video frame are required for initialization, and a prediction of the target bounding box is made for each subsequent frame. Currently, the MULTIDRONE system offers implementations of the following state-of-the-art 2D visual target tracking algorithms:
    • KCF
    • STAPLE
    • SiamFC
    • SiamRPN
  • Verifier: The verifier periodically checks whether the tracked target is correct, e.g., by comparing against the initialization box or classifying the depicted target. Implemented is a tracker-specific MLP classifier that classifies a bounding box as corresponding to the target or not.
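
The detect/select/track flow described above can be sketched as follows. The function names (select_candidate, follow_target) and the largest-box selection rule are illustrative only, not the actual MULTIDRONE interface:

```python
# Illustrative sketch of the detect -> select -> track flow.
# All names here are hypothetical, not the real MULTIDRONE API.

def box_area(box):
    """box = (x, y, w, h) in pixels."""
    return box[2] * box[3]

def select_candidate(candidates):
    """Pick which candidate to follow; here, simply the largest box."""
    if not candidates:
        return None
    return max(candidates, key=box_area)

def follow_target(detect, make_tracker, frames):
    """detect(frame) -> list of boxes; make_tracker(frame, box) -> callable
    that predicts the target box for each subsequent frame."""
    first = frames[0]
    box = select_candidate(detect(first))   # Detect service call
    if box is None:
        return []
    tracker = make_tracker(first, box)      # tracker initialization
    return [tracker(f) for f in frames[1:]] # tracking process
```

In the real system the detector and tracker are ROS nodes behind service calls; the sketch only shows the control flow between them.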

    How to launch:

    • Provide an image stream on /drone_n/shooting_camera
    • Launch the Visual Analysis Modules: roslaunch drone_visual_analysis face_detection_tracking.launch
    • Call the /drone_n/follow_target service to begin tracking.

Heatmap-based Crowd Detection

The MULTIDRONE system offers heatmap-based crowd detection, the results of which can be used for safety purposes during UAV missions (e.g., crowd avoidance). The crowd detection node is meant to be executed on a ground computer (NVIDIA 1080 or better is recommended) and utilizes a fully convolutional neural network (requires Caffe) to detect crowds in the images provided by the UAV streams.
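
A crowd heatmap is typically turned into usable detections by thresholding the per-cell scores. A minimal sketch, assuming a heatmap of values in [0, 1] (the function name and default threshold are illustrative assumptions):

```python
# Illustrative sketch (not the actual MULTIDRONE code): threshold a
# crowd heatmap and return the grid cells considered crowded.

def crowded_cells(heatmap, threshold=0.5):
    """heatmap: 2D list of floats in [0, 1]; returns (row, col) cells
    whose score meets the threshold."""
    cells = []
    for r, row in enumerate(heatmap):
        for c, value in enumerate(row):
            if value >= threshold:
                cells.append((r, c))
    return cells
```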

How to launch:

  • Provide an image stream on /drone_n/ground_shooting_camera
  • Launch the crowd detection node: roslaunch ground_visual_analysis crowd_detection.launch

Semantic Map Manager

The MULTIDRONE system uses the heatmaps generated by the crowd detection node to determine the location of the detected crowd, and can create an octomap with the crowd location marked appropriately.
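
Marking crowd locations in a map amounts to projecting detected crowd points into map cells. In the sketch below a plain dictionary stands in for the octomap, and all names and the resolution are hypothetical:

```python
# Hypothetical sketch of marking crowd locations in a coarse 2D grid,
# as a stand-in for the actual octomap update.

def mark_crowd(grid, crowd_points, resolution=1.0):
    """grid: dict mapping (ix, iy) -> state;
    crowd_points: ground-plane (x, y) positions in metres."""
    for x, y in crowd_points:
        key = (int(x // resolution), int(y // resolution))
        grid[key] = "crowd"
    return grid
```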

How to launch:

  • Launch the crowd detection node: roslaunch ground_visual_analysis crowd_detection.launch

Visualization tools

  • Object detection and tracking visualization: rosrun drone_visual_analysis
  • Auto-focus assist (focus peaking)
    • Begin video streaming on drone_n
    • roslaunch ground_visual_analysis focus_n.launch
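
Focus peaking highlights pixels with strong local contrast, since sharp (in-focus) regions produce strong intensity gradients. A minimal grayscale sketch of the idea, not the shipped implementation:

```python
# Illustrative focus-peaking sketch: mark pixels whose local gradient
# magnitude exceeds a threshold (in-focus edges are high-contrast).

def focus_peaking_mask(image, threshold=30):
    """image: 2D list of grayscale ints; returns a same-sized 0/1 mask."""
    h, w = len(image), len(image[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = image[y][x + 1] - image[y][x]  # horizontal gradient
            gy = image[y + 1][x] - image[y][x]  # vertical gradient
            if abs(gx) + abs(gy) >= threshold:
                mask[y][x] = 1
    return mask
```

In practice the mask would be overlaid on the live video stream so the camera operator can judge focus at a glance.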

Autonomous Mission Planning Modules

Mission Controller

This is the core of the MULTIDRONE system, receiving the missions from the Director’s Dashboard, planning the mission and sending the corresponding tasks to each drone while monitoring the execution. It can be divided into different parts:

Onboard Scheduler

The Onboard Scheduler receives the list of actions corresponding to the drone from the Mission Controller. It is then in charge of executing them sequentially, synchronizing their start and end, and calling the Action Executer for the actual execution of the different navigation or shooting actions. This module reacts to different alarms and emergencies, such as low drone battery, and can command a safe path to a landing site thanks to an included path planner. It also reports the drone status to the Mission Controller.
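
The sequential execution with alarm handling can be sketched as a loop that checks the battery before each action. The function names, return values, and reserve threshold below are illustrative assumptions, not the real scheduler interface:

```python
# Hypothetical sketch of a scheduler loop with a battery alarm check.

def run_actions(actions, execute, battery_level, reserve=0.2):
    """Execute actions in order; abort to an emergency landing when the
    reported battery fraction drops below the reserve."""
    done = []
    for action in actions:
        if battery_level() < reserve:
            return done, "EMERGENCY_LAND"  # hand off to the path planner
        execute(action)
        done.append(action)
    return done, "MISSION_COMPLETE"
```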

How to launch:

UAV Abstraction Layer

The UAL (UAV Abstraction Layer) abstracts the user from the drone hardware by interfacing with the autopilot. This module provides drone positioning in a standard format regardless of the underlying autopilot, and receives and executes navigation commands such as land, take off, go to waypoint, set velocity, etc.
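
The idea of such an abstraction layer can be illustrated as a common interface implemented by autopilot-specific backends. The class and method names here are illustrative and do not match the actual UAL API:

```python
# Illustrative sketch: one interface, many autopilot backends.
from abc import ABC, abstractmethod

class AutopilotBackend(ABC):
    """Common command set, regardless of the underlying autopilot."""
    @abstractmethod
    def take_off(self, height): ...
    @abstractmethod
    def go_to_waypoint(self, x, y, z): ...
    @abstractmethod
    def land(self): ...

class SimBackend(AutopilotBackend):
    """Toy backend that just records the commands it receives."""
    def __init__(self):
        self.log = []
    def take_off(self, height):
        self.log.append(("take_off", height))
    def go_to_waypoint(self, x, y, z):
        self.log.append(("go_to_waypoint", x, y, z))
    def land(self):
        self.log.append(("land",))
```

Mission code written against AutopilotBackend runs unchanged whether the backend talks to a simulator or a real autopilot.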

This module is available in a separate public repository.

How to launch:

Drone, Gimbal and Camera Control

The modules listed below are executed on-board each drone of the MULTIDRONE system during a shooting mission. They are responsible for controlling the drone, the gimbal and the camera according to the individual shooting mission specifications received from the Onboard Scheduler. Drone control relies on the underlying UAL module.

Action Executer
The Action Executer runs onboard each drone and is responsible for executing the navigation and shooting actions as they are received from the Onboard Scheduler. It can be divided into three parts:

Gimbal Camera Interface
The Gimbal Camera Interface implements the communication bridge between the Action Executer and the gimbal hardware. It connects to the gimbal over a serial link (UART), sending velocity commands and retrieving its status: motor angles and speeds, gimbal orientation, and other low-level parameters. For compactness, it comprises the interactions with both the gimbal and the camera; if desired, the two nodes can also be launched separately. At any moment during the mission, the gimbal backup pilot can take control over the gimbal to change any camera setting or to manually steer the gimbal, with the possibility of switching back to automatic mode.
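
Sending velocity commands over UART implies some binary framing. The sketch below is purely illustrative: the frame layout (header byte, three little-endian int16 rates, XOR checksum) is an assumption, not the real gimbal protocol:

```python
# Hypothetical UART frame packer for a gimbal velocity command.
# Frame layout (ASSUMED, not the real protocol):
#   [header][pan int16][tilt int16][roll int16][xor checksum]
import struct

HEADER = 0xA5  # assumed start-of-frame marker

def pack_velocity_cmd(pan, tilt, roll):
    """Rates as signed 16-bit integers (units are protocol-defined)."""
    payload = struct.pack("<hhh", pan, tilt, roll)
    checksum = HEADER
    for b in payload:
        checksum ^= b
    return bytes([HEADER]) + payload + bytes([checksum & 0xFF])
```

The receiving side would validate the header and checksum before applying the rates to the gimbal motors.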

Microcontroller Board for Gimbal Camera Interface
A Teensy LC board provides the core of the hardware pipeline for the camera and gimbal, channeling the commands originating from the onboard computer and the RC transmitter to the gimbal and camera. It is also responsible for switching between manual (pilot) and automatic control.
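
The manual/automatic arbitration the board performs can be sketched as selecting the command source from an RC override switch. The PWM threshold and channel semantics below are assumptions for illustration:

```python
# Illustrative manual/auto arbitration: forward the pilot's RC command
# when the override switch is active, else the onboard computer's.

RC_OVERRIDE_THRESHOLD = 1700  # assumed PWM value in microseconds

def select_command(rc_switch_pwm, rc_cmd, onboard_cmd):
    """Return (source, command) to forward to the gimbal."""
    if rc_switch_pwm >= RC_OVERRIDE_THRESHOLD:
        return "manual", rc_cmd
    return "auto", onboard_cmd
```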

How to launch:
