The conference aims to bring together academia and industry to exchange visions and ideas on the state of the art and practice of Unmanned Aerial Vehicle technologies and applications. We invite you to share your scientific findings, newest research projects, as well as industry and line-of-business solutions.
DATE, TIME & VENUE
- Tuesday, 26 May, 2020: from 10:00 to 18:00
- Wednesday, 27 May, 2020: from 10:00 to 17:00
- Adlershof con.vent. Rudower Chaussee 17, 12489 Berlin, Germany.
SPEAKER RATES & RULES
|Developer/Researcher/Academia|Free of charge|
|Exhibitor|Speakers will be charged 400 € for a presentation of up to 30 minutes (price excl. VAT)|
|Other|Speakers will be charged 800 € for a presentation of up to 30 minutes (price excl. VAT)|
- There is no fee to submit a proposal.
- The duration of most presentations is 15-30 minutes, including time for a question and answer period.
- DRONE Berlin reserves the sole right to accept or reject any proposal received without liability.
- DRONE Berlin does not pay a speaking fee. Travel-related expenses, meals, and accommodations are the responsibility of the speaker.
- Speakers who wish to charge attendees a fee must submit a request prior to the event for review and approval by DRONE Berlin.
Conference language is English.
There is no formal deadline for speaker submissions; submissions will close once all speaking slots have been filled. We encourage all applicants to submit as early as possible.
CHOOSE YOUR METHOD OF SUBMISSION
If you are selected to speak at the event, you will be notified approximately 6-8 weeks prior to the event.
INVITE YOUR AUDIENCES
Your presentation is a meeting place for your professional connections. As a presenter, you may wish to invite guests and give them complimentary tickets. Our guest ticket policy offers a marketing tool you can use to invite guests to attend your presentation and meet each other without paying admission.
All programs are free to attend for general admission ticket holders.
The organizer of DRONE Berlin invites proposals for special sessions to be held during the main conference from 26 to 27 May, 2020 in Berlin.
To submit a proposal, please send a summary containing the following information:
- Special session title
- Organizers (complete address, phone, and email)
- Abstract (up to 300 words)
- List of special sub-topics, if any
For submissions or inquiries please email firstname.lastname@example.org.
The information contained herein is subject to change without prior notice.
We will keep updating the program over the coming days; please check back for updates.
How to boost infrastructure defect detection using a streamlined annotation pipeline and efficient training strategy on drone images
AI-based defect detection on electrical grids requires massive amounts of quality training data to reach industrial-grade performance and ROI. Sterblue and Ingedata collaborated closely to boost Sterblue’s AI models with Ingedata-labelled drone images. This success story offers a deep understanding of how to set up an efficient pipeline for annotating defects on drone images, and presents defect detection results for electrical grids. From image backlog management to quality control, Sterblue and Ingedata jointly present how they built an image labelling process and trained an annotation team for objective defect recognition. This real-life use case demonstrates the path from quality labelling to unlocking added-value levers for large utility companies.
Semantic Segmentation of UAV Aerial Videos using Convolutional Neural Networks
UNIFIED AI LAB
Semantic segmentation of complex aerial videos enables a better understanding of scene and context. This enhances the performance of automated video processing techniques such as anomaly detection, object detection, event detection, and other applications. However, semantic segmentation in aerial videos has seen only limited study due to the non-availability of relevant datasets. To address this, an aerial video dataset was captured using a DJI Phantom 3 Professional drone and annotated manually. In addition, the proposed research investigates the performance of semantic segmentation algorithms for aerial videos implemented using Fully Convolutional Network (FCN) and U-Net architectures.
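As a hedged illustration (not taken from the talk), the final step shared by FCN- and U-Net-style models, turning per-pixel class scores into a segmentation mask via an argmax, can be sketched as:

```python
def label_map(scores):
    """Convert a per-pixel class-score map into a segmentation mask by
    taking the argmax class at each pixel - the last step of any FCN-
    or U-Net-style segmentation head. Toy sketch only; the score grid
    here stands in for a real network's output tensor.

    scores: 2D grid (list of rows) of per-class score tuples.
    """
    return [[max(range(len(px)), key=lambda c: px[c]) for px in row]
            for row in scores]
```

For a 2×2 "image" with two classes, each output entry is simply the index of the highest-scoring class at that pixel.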
How Drone AI can accelerate enterprises' knowledge of their existing field assets and operations
VHIVE TECH LTD.
In this session, we introduce how drones have revolutionized site inspections in various industries. The use of AI as part of autonomous drone flight has brought enterprises to new heights by enabling them to obtain accurate, high-quality data about their assets while reducing cost and minimizing risk. We will present a case study showing how a Tier 1 enterprise has benefited from vHive’s software solution using autonomous drone hives, and the challenges it faced (data accuracy, data quality, time, cost, and putting employees in danger). We will share the results of using this cutting-edge technology and show how AI dramatically shortened the time and cost of data acquisition and analysis while providing better actionable intelligence and guaranteeing employee safety. www.vhive.ai
Mission and Path Planning for Drones
BRANDENBURG UNIVERSITY OF TECHNOLOGY
The mission and path planning problem for an inhomogeneous fleet of unmanned aerial vehicles (UAVs) asks for optimal trajectories that together visit the largest possible subset of a list of desired targets. When selected, each target must be traversed within a certain maximal distance and within a certain time interval. The UAVs differ with respect to their sensor properties, speeds, and operating ranges. The UAVs’ trajectories must avoid "forbidden" areas, and fuel consumption rates during cruise, climb, and descent are also considered. We formulate the mission and path planning problem for UAVs as a mixed-integer nonlinear control problem and solve it numerically using available software tools for different scenarios with varying numbers of potential targets, fleet sizes, and restricted areas.
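The talk's formulation is a mixed-integer nonlinear control problem; for contrast, the target-selection part alone can be sketched with a deliberately simple greedy heuristic under a range budget (all names and values below are illustrative, not from the talk):

```python
import math

def greedy_plan(start, targets, max_range):
    """Greedily visit the nearest feasible target until the UAV's
    operating range is exhausted. Toy heuristic only; it ignores time
    windows, sensor properties, and forbidden areas, all of which the
    full MINLP formulation handles.

    start: (x, y) launch position
    targets: list of (x, y) waypoints
    max_range: total distance budget
    """
    pos, used, route = start, 0.0, []
    remaining = list(targets)
    while remaining:
        # pick the nearest remaining target
        nxt = min(remaining, key=lambda t: math.dist(pos, t))
        d = math.dist(pos, nxt)
        if used + d > max_range:
            break  # range budget exhausted
        used += d
        pos = nxt
        route.append(nxt)
        remaining.remove(nxt)
    return route, used
```

Greedy routes like this are typically far from optimal, which is precisely why the exact mixed-integer formulation is worth the extra solver effort.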
New technologies and materials for Unmanned Aerial Vehicles (UAVs)
WAR STUDIES UNIVERSITY
The aim of the work is to develop new technologies used in the construction of unmanned aerial systems. New construction materials - fiber composites - will be presented. The results of strength tests of these materials will be presented in the context of their adhesive and mechanical connections - rivets and rivet nuts. Displacements will be determined using the DIC (Digital Image Correlation) method and used to assess the suitability of fasteners depending on the materials.
Dynamic task allocation in an autonomous multi-UAV mission
MASSACHUSETTS INSTITUTE OF TECHNOLOGY (MIT)
A mechanism for dynamically allocating tasks among multiple UAVs operating autonomously during a mission is discussed. Task assignment is adjusted by each UAV dynamically during the mission based on criteria related to the individual UAV's operational status and/or mission parameters. Task allocation is determined independently, without group communication between the UAVs actively taking part in the mission and without direct communication to a ground-based controller. A communication UAV provides a shared memory space that may be utilized by each UAV in determining its own task allocation.
Increasing the flight time of unmanned aircraft using renewable energy sources
VILNIUS GEDIMINAS TECHNICAL UNIVERSITY
Pointly - an AI Platform for Point Cloud Data and how Drone Companies can benefit
SUPPER & SUPPER GMBH
Drone-based photogrammetry and laser scanning have vastly expanded the applications of 3D point clouds. Many companies already gather point cloud data of their physical assets on a regular basis. But in order to fully leverage the potential of this data for a digital transformation, advanced and automated analysis tools are needed. This is where Pointly comes in as a cloud-based SaaS. Its AI-assisted labelling tools allow you to assign classifications to large point clouds faster and more precisely than ever before. Point cloud storage and acquisition platforms can be interfaced with Pointly through its API, giving you and your customers access to an advanced point cloud toolset. www.supperundsupper.com
Knowledge-driven Point Cloud Data Structuration
UNIVERSITY OF LIÈGE
The presentation primarily aims at providing all the necessary information for the development of an infrastructure: the Smart Point Cloud (SPC). It permits handling point cloud data, managing heterogeneity, and processing and grouping points that retain a relationship with respect to a specific domain ontology, allowing querying and reasoning for decision-making tools, including smart modelling. The resulting implementation of the SPC is based on new meta-models that structure the information (3D geometry and semantics) and leverage available knowledge to provide access to decision-making support tools and reasoning capabilities.
Unsupervised semantic interpretation of 3D point clouds of vineyards for precision agriculture
DISAFA - UNIVERSITY OF TURIN
In addition to 2D georeferenced field maps, 3D models are of growing importance in the effective and site-specific management of crops. A new unsupervised algorithm to semantically interpret raw dense 3D point clouds of vineyards will be presented. The algorithm automatically classifies portions of the model as terrain, vine canopy, or other. In addition, it provides a 3D mesh surface describing the external envelope of the vine row canopy with a limited number of instances. Vineyards can thus be spatially modelled by a dataset up to 400 times lighter than the original, while assuring a negligible loss of information. The methodology does not require vine rows to be rectilinear and is robust to inter-row grassing and hilly regions.
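The unsupervised algorithm itself is not spelled out in the abstract; as a crude, hypothetical stand-in, the terrain/canopy/other split can be pictured as a height-above-ground classification (thresholds are assumed values, and the real method must be far more robust on hilly terrain):

```python
def classify_points(points, terrain_max=0.2, canopy_min=0.8):
    """Label raw 3D points as 'terrain', 'canopy', or 'other' by their
    height above a crude ground estimate. A deliberately simplified,
    hypothetical stand-in for the talk's unsupervised algorithm, which
    additionally fits a mesh envelope around the vine rows.

    points: list of (x, y, z) tuples; thresholds in metres (assumed).
    """
    ground = min(z for _, _, z in points)  # crude flat-ground estimate
    labels = []
    for _, _, z in points:
        h = z - ground
        if h <= terrain_max:
            labels.append("terrain")
        elif h >= canopy_min:
            labels.append("canopy")
        else:
            labels.append("other")
    return labels
```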
3D Semantic Segmentation of Large-Scale Point-Clouds in Urban Areas Using Deep Learning
A point cloud is a set of points in 3D space, typically produced by a 3D scanner to capture the 3D representation of a scene. Accurate 3D segmentation results can be used for constructing 3D scenes for robotic navigation and for assessing city expansion. Point cloud data poses the challenge of an irregular format, as points are distributed irregularly, unlike the 2D pixels of an image or the 3D voxels of a 3D model. A number of deep learning architectures have been proposed to model 3D point clouds and perform semantic segmentation. In this presentation, we present a new case study applying three novel deep learning architectures, PointNet, PointCNN, and SPGraph, to an outdoor aerial survey point cloud dataset whose features include intensity and spectral information (RGB). We then compare the results of 3D semantic segmentation from these networks in terms of overall accuracy.
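The key trick PointNet uses to cope with this irregular, orderless format is a symmetric aggregation over per-point features, which in miniature looks like the following (a toy sketch, not the actual network):

```python
def pointnet_style_pool(points, per_point_fn):
    """Permutation-invariant aggregation in the spirit of PointNet:
    apply a shared function to every point, then max-pool each feature
    dimension, so the result does not depend on point ordering. Toy
    sketch; a real network learns per_point_fn as shared MLP layers.

    points: iterable of point tuples
    per_point_fn: maps one point to a feature tuple
    """
    feats = [per_point_fn(p) for p in points]
    return tuple(max(f[i] for f in feats) for i in range(len(feats[0])))
```

Because max is symmetric, shuffling the input points leaves the pooled feature vector unchanged, which is exactly the property an orderless point set requires.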
UAV-Based Situational Awareness System Using Deep Learning
THE UNIVERSITY OF SYDNEY
Situational awareness by Unmanned Aerial Vehicles (UAVs) is important for many applications such as surveillance, search and rescue, and disaster response. We developed the Person-Action-Locator (PAL), a novel UAV-based situational awareness system. The PAL system analyzes the video feed onboard the UAV, powered by a supercomputer-on-a-module. Specifically, as a support for human operators, the PAL system relies on Deep Learning models to automatically detect people and recognize their actions in near real-time. In addition, we developed a Pixel2GPS converter that estimates the location of people from the video feed. The result - icons representing detected people, labeled by their actions - is visualized on the map interface of the PAL system.
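The Pixel2GPS converter is not described in detail; the geometry underlying such a converter for a straight-down camera can be sketched as follows (hypothetical function and parameter names, assumed by this sketch):

```python
def pixel_to_ground_offset(px, py, cx, cy, altitude, focal_px):
    """Estimate the ground-plane offset in metres of a detection from
    the point directly below the drone, assuming a nadir-pointing
    pinhole camera at a known altitude. Hypothetical sketch of the
    geometry behind a Pixel2GPS-style converter; the real system must
    also handle camera tilt and convert offsets to GPS coordinates.

    (px, py): pixel of the detection; (cx, cy): image centre;
    focal_px: focal length expressed in pixels.
    """
    # similar triangles: metres on the ground per pixel = altitude / focal
    metres_per_px = altitude / focal_px
    return (px - cx) * metres_per_px, (py - cy) * metres_per_px
```

Adding the resulting offset to the drone's own GNSS fix then yields an approximate position for each detected person.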
AeroInspekt - Automated Aerial Surveying of Rail Infrastructure
TU BRAUNSCHWEIG - INSTITUTE OF FLIGHT GUIDANCE
Manual routine inspection of infrastructure is often cost- and time-intensive due to manual labor and survey-related downtime. The research project AeroInspekt by HHLA and TU Braunschweig developed and evaluated an approach based on drone photogrammetry to survey crane rail infrastructure in Hamburg's container terminal. The approach aims to build an automated workflow to survey the rails with millimetre resolution even during crane operation. To this end, an adaptive mission planner, specialized ground control points, and an automated data extraction workflow have been established and tested in operation.
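Whether millimetre resolution is achievable at a given flight altitude follows from the standard photogrammetric ground sampling distance formula (the parameter values in the example are illustrative, not taken from the project):

```python
def ground_sampling_distance(altitude_m, focal_mm, pixel_size_um):
    """Ground sampling distance (metres per pixel) of a nadir photo:
    the quantity that determines how low the drone must fly to resolve
    the rails at millimetre level. Standard photogrammetry formula;
    GSD = altitude * physical pixel size / focal length.
    """
    return altitude_m * (pixel_size_um * 1e-6) / (focal_mm * 1e-3)
```

For example, a camera with a 20 mm lens and 4 µm pixels flown at 10 m yields a GSD of about 2 mm per pixel.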
Is there an afterlife for drone images?
AARG (AERIAL ARCHAEOLOGY RESEARCH GROUP)
Germany & UK
Images from any above-ground platform may be taken for certain projects but, as has been apparent from uses of aerial and satellite images, they can serve many additional purposes of which identification of archaeological features is one. Aerial photographs and satellite images can be consulted or purchased from on-line databases or analogue libraries and we encourage drone operators to consider the use of an accessible platform so that their use may extend beyond the project for which they were captured. As an outlook such a database/platform can give the basis for drone-based AI applications and reference data sets.
Camera-based Navigation for UAVs in GNSS-denied environments
Unmanned aerial vehicles (UAVs) rely on global navigation satellite systems (GNSS) such as the Global Positioning System (GPS) for navigation, but GNSS signals can be easily jammed. We therefore propose a visual localization method that uses a camera and data from OpenStreetMap to replace GNSS. First, the aerial imagery from the onboard camera is translated into a map-like representation. Then we match it against a reference map to infer the vehicle’s position. An experiment over a typically sized mission area shows localization accuracy close to commercial GPS. Compared to previous methods, ours is applicable to a broader range of scenarios: it can incorporate multiple types of landmarks, such as roads and buildings, it outputs absolute positions with higher frequency and confidence, and it can be used at altitudes typical for commercial UAVs. Our results show that the proposed method can serve as a backup to GNSS wherever suitable landmarks are available.
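A heavily simplified picture of the matching step, sliding a rasterized observation over a rasterized reference map and scoring the overlap, can be given in a few lines (this is generic template matching, not the authors' actual method):

```python
def best_match(reference, patch):
    """Slide a small binary patch (e.g. a rasterized road mask derived
    from the onboard camera) over a reference map rasterized from
    OpenStreetMap data, and return the offset with the highest overlap
    score. Minimal template-matching sketch of the map-matching idea.

    reference, patch: 2D lists of 0/1 values.
    """
    ph, pw = len(patch), len(patch[0])
    rh, rw = len(reference), len(reference[0])
    best, best_score = (0, 0), -1
    for r in range(rh - ph + 1):
        for c in range(rw - pw + 1):
            score = sum(reference[r + i][c + j] * patch[i][j]
                        for i in range(ph) for j in range(pw))
            if score > best_score:
                best, best_score = (r, c), score
    return best, best_score
```

The winning offset, combined with the map's georeferencing, gives an absolute position estimate without any GNSS signal.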
Swarm Information Gathering with Autonomously Flying UAVs
GERMAN AEROSPACE CENTER (DLR)
In this talk I will introduce a drone swarm system that we developed at the German Aerospace Center (DLR). Our system allows an operator to monitor an area of interest with a swarm of autonomous drones. First, I will present an overview of the system. This will be followed by results of a measurement campaign carried out within the framework of the H2020 HEIMDALL project, in which we used our system to search an area of interest for potential wildfire hotspots. I will close the talk with an outline of our latest research on using deep reinforcement learning methods to monitor a wildfire front with a swarm of drones.
Intelligent video processing for unmanned aerial systems
The huge amount of produced image data poses a great challenge for using UAS as a tool in many application fields. In this context, Fraunhofer IOSB is working on methods and applications that are aimed to assist the exploitation process and to disburden the operator during critical missions. The goal is to provide real-time optimized features for live surveillance and tactical reconnaissance as well as functionalities valuable for offline reconnaissance tasks. In combination with a video archive and retrieval functionality, this offers a large added value for UAS users – especially in the context of search and rescue tasks as well as infrastructure inspection tasks.
Autonomous Flight in the Wild: Progress and Challenges from Skydio
The technology for intelligent and trustworthy navigation of autonomous UAVs is just reaching the inflection point to provide enormous value across video capture, inspection, mapping, monitoring, and delivery. At Skydio we believe the ability to handle difficult unknown scenarios onboard and in real-time based on visual sensing is the key to making that happen, within a tightly integrated system from pixels to propellers. I will discuss our learnings from shipping a fully autonomous drone, the algorithms that make it work, and challenges beyond.
Mapping multispectral Digital Images using a Cloud Computing software: applications from UAV images
UNIVERSIDAD DE EXTREMADURA
This presentation reports an experience in analyzing a vineyard with multispectral photogrammetry technology and UAVs, and demonstrates its great potential for deriving the Normalized Difference Vegetation Index (NDVI), Near-Infrared Spectroscopy (NIRS) data, and the Digital Elevation Model (DEM) in an agricultural framework to collect information on the vegetative state of the crop, soil and plant moisture, and biomass density maps. In addition, the collected information is analyzed with the PIX4D cloud computing software, and its advantages over software based on other data processing approaches are highlighted.
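For reference, NDVI is computed per pixel from the red and near-infrared bands with the standard formula (a minimal sketch; band values may be reflectances or raw digital numbers):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for a single pixel:
    (NIR - Red) / (NIR + Red), ranging from -1 to 1, with healthy
    vegetation typically well above 0. Standard definition.
    """
    if nir + red == 0:
        return 0.0  # guard against division by zero on no-signal pixels
    return (nir - red) / (nir + red)
```

Applying this to every pixel of a co-registered red/NIR orthomosaic yields the vegetation-vigour maps discussed in the talk.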
COLIBRI: A robotic hummingbird
UNIVERSITÉ LIBRE DE BRUXELLES ULB
COLIBRI is a tailless robotic hummingbird capable of hovering flight. The robot weighs 22 g including the onboard battery and control board; it has a wingspan of 21 cm and a flapping frequency of 22 Hz. The lift force is produced by compliant membrane wings. The robot is actively stabilized in pitch and roll by changing the wing camber with a mechanism known as wing twist modulation.
Design and Development of a Cargo Unmanned Aerial Vehicle (UAV)
This presentation focuses on our journey from research and development all the way to manufacturing of the Black Swan UAV, an unmanned cargo aircraft made of composite materials. We are currently in the detailed design stage of a full-scale prototype of the UAV. A process for designing, analyzing, and optimizing complex aerodynamic surfaces with varying curvature will be presented, as well as a process for designing and analyzing complex structural elements that are part of the internal structure of the UAV. The presentation will also include an in-depth look at the manufacturing processes involved in the creation of one of our composite parts.