Adobe will continue its adoption of artificial intelligence with a new generative AI-powered video creation and editing tool coming “later this year”.
In what the company described on Wednesday (September 11) as “a new era of video editing,” the upcoming Firefly Video Model will be available in beta and is said to have been “designed to be commercially safe.”
“At Adobe, we’re leveraging the power of AI to help editors expand their creative toolset so they can work in these other disciplines, delivering high-quality results on the timelines their clients require,” the team writes in their announcement.
The generative AI will help with common editorial tasks like filling gaps in footage, removing unwanted objects from a scene, smoothing jump cut transitions, and generating b-roll.
Adobe’s AI Firefly Video Model can generate videos from image prompts
The AI video tool will be able to generate new videos from text prompts in under two minutes, and reference images can be added for even greater control.
“With the Firefly Video Model you can leverage rich camera controls, like angle, motion and zoom to create the perfect perspective on your generated video.”
The tool is said to support a variety of use cases, including generating atmospheric elements like fire, smoke, dust particles, and water against a black or green background, which can then be layered over existing content in Premiere Pro and After Effects.
Adobe says there is an “ever-increasing demand for fresh, short-form video content” which means “editors, filmmakers and content creators are being asked to do more and in less time.”
The aim is for the generative AI to help with additional tasks too, like color correction, titling, visual effects, animation, audio mixing, and more.
To create a tool that will actually be of value to its users, the American software company says it has spent months working “closely with the video editing community to advance the Firefly Video Model.
“Guided by their feedback and built with creators’ rights in mind, we’re developing new workflows leveraging the model to help editors ideate and explore their creative vision, fill gaps in their timeline and add new elements to existing footage.”
An exact release date hasn’t been announced, but the beta version can be expected at some point over the next few months.