While there have been concerns that generative AI will replace artists and writers, I take a more optimistic view — and I’m not alone. I see a future where humans use generative AI to increase productivity, automating the boring parts of their jobs so they can focus on the creative process.
In addition to increasing creative output, harnessing the power of AI can translate into lower budgets and shorter post-production times for the filmmaking industry, a huge win for filmmakers, especially those leading small productions such as Everything Everywhere All at Once.
The film was the big winner of this year’s awards season, taking home Screen Actors Guild, BAFTA, and Golden Globe awards, as well as seven Oscars, including Best Picture, Best Director, and Best Actress. While the film is said to herald a new dawn in Hollywood, one that celebrates diversity and the Asian community, it also ushered in another major change in the film industry: the use of AI to deliver better, more cost-effective visual effects.
While recent developments in AI chatbots have taken the internet by storm, another class of generative AI model is quietly changing filmmaking. Generative diffusion models are unlocking powerful image creation and editing tools, enhancing the creativity of visual effects artists, and ushering in a new era of cinematic magic. Diffusion models observe billions of images and learn their underlying patterns, allowing them to generate new images, extend existing images beyond their borders, transfer styles, and create entirely new images from simple text prompts.
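For readers curious about the mechanics, the core idea can be illustrated with a deliberately simplified sketch: gradually add noise to an image (the "forward" process), then generate by starting from pure noise and repeatedly denoising. The `toy_denoiser` below is a hypothetical stand-in for the learned neural network that real systems like Stable Diffusion train on billions of images; nothing here reflects any production model's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_noise(image, t, num_steps=100):
    """Forward process: blend the image with Gaussian noise.
    At t == num_steps nothing of the original signal remains."""
    alpha = 1.0 - t / num_steps
    return alpha * image + (1.0 - alpha) * rng.normal(size=image.shape)

def toy_denoiser(noisy, target):
    """Hypothetical stand-in for the learned network: in a real
    diffusion model this denoising step is predicted by a neural
    net trained on billions of images, not given the target."""
    return noisy + 0.1 * (target - noisy)

# "Training data": a tiny 4x4 grayscale gradient.
clean = np.linspace(0.0, 1.0, 16).reshape(4, 4)

# Fully noised version of the image (end of the forward process).
pure_noise = forward_noise(clean, t=100)

# Generation: start from fresh noise and iteratively denoise.
sample = rng.normal(size=clean.shape)
for _ in range(200):
    sample = toy_denoiser(sample, clean)

# After enough denoising steps the sample converges to clean data.
print(np.abs(sample - clean).max() < 1e-6)  # True
```

The sketch cheats by handing the denoiser the answer; the remarkable feat of real diffusion models is learning that denoising direction from data alone, conditioned on a text prompt rather than a target image.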
In the case of Everything Everywhere All at Once, a small team of visual effects artists was tasked with creating a multiverse under tight deadlines, which led them to rely on AI tools to automate tedious editing tasks. The editors used a popular suite of AI “magic tools” from Runway, an AI content creation startup and one of the research collaborators behind Stable Diffusion, to create effects that would have been too expensive and time-consuming to produce with traditional CGI on a film set. For one scene in particular, a VFX artist used a Runway tool to quickly and cleanly rotoscope rocks cutting through the sand as dust swirled around the shot, reducing days of painstaking work to mere minutes. The result? Oscar-quality filmmaking magic.
The field is home to a slew of innovative startups helping filmmakers bring their visions to life in new and exciting ways. Metaphysic leverages generative AI to create photorealistic video, and its de-aging technology will soon help Tom Hanks and Robin Wright portray younger versions of their characters at higher fidelity than previous attempts, like Harrison Ford in the latest Indiana Jones film or Jeff Bridges in the Tron sequel a few years ago. Synthesia helps anyone with a computer create professional videos (for corporate training, product marketing, and educational purposes) from simple text prompts, in 120 languages, no film degree required.
Krikey, a startup led by a sister duo, uses generative AI to make it easier for creators to breathe life into animation by automating character movements. One of the best things about the tool is that artists can either create videos using the custom 3D avatars it provides (complete with body and hand movements, facial expressions, 3D backgrounds, and camera angles) or export a “skeletal animation” file and apply it to their own characters with just a few clicks. This ensures that studios and game companies can protect their intellectual property, which is never shared with Krikey. The company also offers a “Canva-like” app that lets anyone easily create animated movies in a few clicks, a welcome breakthrough for corporate and educational video makers.
The possibilities are endless. Compositing, stylization, restoration, motion tracking, you name it: AI makes these tasks easier, faster, and cheaper for creators, allowing them to focus on ideas and concepts and to iterate more quickly. Existing footage of a train pulling out of a station can be converted to clay animation. An image of a person running on snow can be recomposed to look like he is running on the surface of Mars. Aerial footage of a city model built with LEGO bricks can be rendered as a realistic, vibrant cityscape at dawn. A model walking the runway can have her hair recolored to match her clothes. All of this can now be generated in seconds from simple text or image prompts, while maintaining high quality and flexibility.
As more models and refinements enter the market and interest grows, enormous computing power will be needed to sustain and scale them, a classic application for cloud computing. The first version of Stable Diffusion was trained on 100,000 GB of images and labels and generated an image in 5.6 seconds. Today, the newest version has cut that time to 0.9 seconds while adding the ability to upscale image resolution and infer depth information.
We can all rejoice in the triumph of Everything Everywhere All at Once. As more studios, editors, and artists adopt AI tools, those tools will be democratized, helping unlock the potential of amateur filmmakers around the world. One thing’s for sure: the internet’s favorite cat videos are about to get even funnier.
Howard Wright is AWS Vice President and Global Head of Startups.
The opinions expressed in Fortune commentary pieces are solely those of their authors and do not necessarily reflect the opinions and beliefs of Fortune.