Real-time Neural MPC is a framework that combines deep learning with model predictive control to control agile robots such as quadrotors.

Over the past few years, computer scientists have made significant progress in developing advanced algorithms that can control the movements of robotic agents. One such technique is model predictive control (MPC), which employs a model of the agent's dynamics to optimize its future behavior towards a given objective while satisfying certain constraints (e.g., avoiding obstacles).
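To make the idea concrete, here is a minimal, self-contained MPC sketch in Python. It is not the authors' controller: a 1-D point mass and hand-picked cost weights stand in for a real robot model, and a generic optimizer replaces a dedicated MPC solver.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative MPC sketch: drive a 1-D point mass (double integrator) to a
# target while respecting input constraints. All values are made up.
dt, horizon = 0.1, 10
target = np.array([1.0, 0.0])           # desired position and velocity
A = np.array([[1.0, dt], [0.0, 1.0]])   # discrete-time dynamics model
B = np.array([0.5 * dt**2, dt])

def rollout_cost(u, x0):
    """Simulate the model over the horizon and accumulate tracking cost."""
    x, cost = x0.copy(), 0.0
    for uk in u:
        x = A @ x + B * uk
        cost += np.sum((x - target) ** 2) + 0.01 * uk**2
    return cost

x0 = np.zeros(2)                         # start at rest at the origin
result = minimize(rollout_cost, np.zeros(horizon), args=(x0,),
                  bounds=[(-2.0, 2.0)] * horizon)  # actuator limits
u_opt = result.x
# In receding-horizon MPC, only the first input u_opt[0] is applied;
# the optimization is then re-solved from the newly measured state.
```

The `bounds` argument plays the role of the constraints mentioned above (here actuator limits rather than obstacles), and re-solving at every time step is what makes the scheme "predictive" rather than a one-shot trajectory optimization.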

A team of researchers from the Technical University of Munich and the University of Zurich has recently introduced a framework called Real-time Neural MPC. This framework integrates complex model architectures based on artificial neural networks (ANNs) into an MPC pipeline designed explicitly for agile robots like quadrotors (i.e., drones with four rotors). Their approach, presented in IEEE Robotics and Automation Letters, builds on previous work by the University of Zurich's Robotics and Perception Group.

Tim Salzmann and Markus Ryll, researchers at the Autonomous Aerial Systems Group of the Technical University of Munich, say they were inspired by the work of the Robotics and Perception Group, led by Davide Scaramuzza. The group's core idea of using data-driven components to enhance traditional control algorithms caught their attention, and they wanted to build on it.

After developing a proof of concept that generalized the group's Gaussian process (GP) approach to deep learning models, they shared the idea with the Robotics and Perception Group at the University of Zurich, and the two labs went on to collaborate on the technical work and experiments.

The latest framework proposed by Salzmann, Ryll, and their colleagues merges the online optimization of MPC with deep learning models. Deep learning models are expensive to evaluate inside an optimization loop, but by approximating them online in real time, the framework can offload their evaluation to dedicated hardware (GPUs). As a result, the system can compute optimal robot actions in real time.
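One common way to make a learned model usable by a classical MPC solver is to approximate it locally around the current operating point. The sketch below illustrates this pattern with a first-order Taylor expansion obtained by finite differences; the tiny random MLP and all names are illustrative stand-ins, not the authors' implementation (which uses automatic differentiation and GPU inference).

```python
import numpy as np

# A tiny random MLP stands in for a trained neural dynamics model:
# 2-D state, 1-D input -> next 2-D state.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 3)) * 0.1, np.zeros(16)
W2, b2 = rng.normal(size=(2, 16)) * 0.1, np.zeros(2)

def learned_dynamics(x, u):
    """Neural next-state prediction f(x, u)."""
    z = np.concatenate([x, [u]])
    return W2 @ np.tanh(W1 @ z + b1) + b2

def linearize(x0, u0, eps=1e-5):
    """Local approximation f(x, u) ~ f0 + A (x - x0) + B (u - u0),
    with Jacobians A, B estimated by finite differences."""
    f0 = learned_dynamics(x0, u0)
    A = np.column_stack([
        (learned_dynamics(x0 + eps * e, u0) - f0) / eps
        for e in np.eye(len(x0))])
    B = (learned_dynamics(x0, u0 + eps) - f0) / eps
    return f0, A, B

f0, A, B = linearize(np.zeros(2), 0.0)
# The matrices A (2x2) and B (2,) can now be consumed by a compiled MPC
# solver on the CPU, while the network evaluations that produce them
# could run in PyTorch/TensorFlow on a GPU.
```

Re-computing this local approximation at every control step keeps the optimization problem cheap and convex-friendly while still tracking the full nonlinear network.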

Salzmann and Ryll stated that the Real-time Neural MPC framework combines two fields, optimal control and deep learning, while allowing each to take advantage of its own optimized software frameworks and computational devices. They explained that deep learning computations can be performed in PyTorch/TensorFlow on a GPU, while the control optimization can run as compiled C code on a CPU. This makes the power of deep learning available in previously infeasible applications, such as onboard optimal control of a quadrotor.

The researchers tested their framework in simulated and real-world environments and used it to control a highly agile quadrotor in real time. The results of their experiments are encouraging, as they were able to use neural network architectures with a parametric capacity more than 4,000 times larger than those previously used for real-time control of agile robots. Additionally, they discovered that their framework can decrease positional tracking errors by up to 82% compared to conventional MPC methods without a deep learning component.

Salzmann and Ryll explained that modeling the dynamics of controlled systems and their environment is a long-standing challenge in robotics, because such models are difficult to formulate analytically. Learning-based approaches, particularly neural networks, can capture dynamics and interaction effects directly from data. The accuracy of these models tends to grow with the size of the network, but so does the computational cost. Real-time Neural MPC addresses this trade-off by making it feasible to use large deep learning models inside model predictive control, improving the controller's predictive power.
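A common data-driven pattern behind such systems is to learn only the residual between an analytic nominal model and the observed dynamics. The toy sketch below illustrates this with a linear least-squares regressor standing in for a neural network; the drag model, features, and values are invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def nominal_model(v):
    """Analytic guess: linear drag only."""
    return -0.1 * v

def true_dynamics(v):
    """'Real' system with an unmodeled quadratic drag term."""
    return -0.1 * v - 0.05 * v * np.abs(v)

# Collect (state, residual) pairs, as a robot would during flight.
v = rng.uniform(-5.0, 5.0, size=200)
residual = true_dynamics(v) - nominal_model(v)

# Fit residual ~ theta . phi(v) by least squares; a neural network would
# replace phi and theta in a real system.
phi = np.column_stack([v, v * np.abs(v)])
theta, *_ = np.linalg.lstsq(phi, residual, rcond=None)

def corrected_model(v):
    """Nominal model plus the learned residual correction."""
    return nominal_model(v) + np.column_stack([v, v * np.abs(v)]) @ theta
```

The corrected model can then serve as the prediction model inside MPC; with a neural residual instead of a linear one, model capacity (and cost) grows with network size, which is exactly the trade-off the framework targets.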

The team's framework, which leverages the growing presence of GPU chips in embedded systems, has the potential to help developers model robot dynamics and interactions with their environment more accurately. This could reduce the risk of accidents and improve navigation capabilities. The researchers also noted that future work could focus on detecting situations where the output of deep learning models is erratic and providing a fallback to stabilize the system and improve its robustness in such situations.

Additional details are available in the following publications:

1. Tim Salzmann et al, "Real-time Neural MPC: Deep Learning Model Predictive Control for Quadrotors and Agile Robotic Platforms," IEEE Robotics and Automation Letters (2023), DOI: 10.1109/LRA.2023.3246839

2. Guillem Torrente et al, "Data-Driven MPC for Quadrotors," IEEE Robotics and Automation Letters (2021), DOI: 10.1109/LRA.2021.3061307
