June 7, 2021

AI Model May Lead to Moodier, More Cinematic Drones

Drones are already pretty smart: most modern mid- to high-end consumer models offer in-flight obstacle avoidance, subject tracking and following, and pre-programmed flight paths, thanks to a bevy of onboard sensors and increasingly robust processing power. Even so, mastering these tools to create streamlined, cinematic videos requires hundreds of hours of practice with delicate, expensive equipment and tremendous effort on the day of a shoot.

“Sometimes you just want to tell the drone to make an exciting video,” said Rogerio Bonatti, a doctoral candidate in the Robotics Institute at Carnegie Mellon University (CMU), in an interview with CMU’s Aaron Aupperlee. Bonatti, along with researchers from CMU, the University of São Paulo and Facebook AI Research, is working on an AI model to do just that: enable drone flights that automatically conduct cinematography in response to a desired mood.

“We are learning how to map semantics, like a word or emotion, to the motion of the camera,” Bonatti said. To do that, the team gathered hundreds of drone videos representing a variety of camera angles, speeds and flight paths. Then, they asked thousands of viewers to describe how the videos made them feel.
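In concrete terms, that data-gathering step amounts to pairing each clip's camera-motion characteristics with aggregated viewer ratings. The sketch below is purely illustrative; the class names, fields, and averaging scheme are assumptions made for the sake of the example, not the team's actual pipeline:

```python
# Hypothetical sketch of pairing drone clips with crowd-sourced emotion
# ratings; names and structures are illustrative, not the authors' code.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class LabeledClip:
    poses: List[Tuple[float, float, float, float]]  # (x, y, z, yaw) over time
    mean_speed: float                               # average camera speed, m/s
    emotion_scores: Dict[str, float]                # e.g. {"exciting": 0.8}

def aggregate_ratings(responses: List[Dict[str, float]]) -> Dict[str, float]:
    """Average many viewers' per-emotion scores into one label vector."""
    totals: Dict[str, float] = {}
    for response in responses:
        for emotion, score in response.items():
            totals[emotion] = totals.get(emotion, 0.0) + score
    return {emotion: total / len(responses) for emotion, total in totals.items()}

# Example: three viewers rate the same clip.
ratings = aggregate_ratings([
    {"exciting": 0.9, "calm": 0.1},
    {"exciting": 0.7, "calm": 0.2},
    {"exciting": 0.8, "calm": 0.0},
])
# ratings ≈ {"exciting": 0.8, "calm": 0.1}
```

A label vector like this, attached to each clip's motion data, is what lets a model learn which camera behaviors tend to evoke which feelings.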

All of this data was fed to the AI model, which in turn learned what kinds of shots evoked what kinds of emotions in viewers, and could then direct a drone to use those techniques on command. Commands, in this case, weren’t “follow this person” or “fly along this path”; instead, users could direct the drone to make, for example, an “interesting,” “nervous,” or “enjoyable” video.
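One way to picture that command interface: once each candidate shot style has a learned emotion profile, a mood word (and, optionally, a desired intensity) can be matched against those profiles. The toy lookup below is an assumption-laden stand-in for the paper's learned, continuous semantics-to-motion mapping; the shot names and scores are invented for illustration:

```python
# Hypothetical sketch: score candidate shot styles against a requested mood.
# Shot names and emotion scores are made up; the real model learns a
# continuous mapping from semantics to camera motion.
CANDIDATE_SHOTS = {
    "slow_orbit":     {"calm": 0.9, "exciting": 0.2, "nervous": 0.1},
    "fast_chase":     {"calm": 0.1, "exciting": 0.9, "nervous": 0.6},
    "low_flythrough": {"calm": 0.2, "exciting": 0.7, "nervous": 0.8},
}

def choose_shot(mood: str, intensity: float = 1.0) -> str:
    """Pick the shot whose learned score for `mood` is closest to the
    requested intensity (0..1)."""
    return min(
        CANDIDATE_SHOTS,
        key=lambda shot: abs(CANDIDATE_SHOTS[shot].get(mood, 0.0) - intensity),
    )

print(choose_shot("exciting"))       # -> fast_chase
print(choose_shot("nervous", 0.5))   # -> fast_chase (score 0.6 is closest)
```

The intensity parameter hints at how such a system could dial a mood up or down rather than merely switching it on, which is the "different degrees" result described below.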

And it worked: viewers rated sample videos shot by drones running the model as having the desired moods, and the model could even induce those moods to different degrees. “I was surprised that this worked,” Bonatti said. “We were trying to learn something incredibly subjective, and I was surprised that we obtained good quality data.”

“This opens the door to many other applications, even outside filming or photography,” Bonatti continued. Beyond working to deliver this technology to consumers through avenues like on-screen guidance for drone users and new drone capabilities, the research team is exploring whether the same methods can direct robot behavior, and has already designed a model for that purpose.

To learn more, watch the research team’s presentation on this work on YouTube.
