
How Artificial Intelligence is Reinventing Visual Effects
Posted Jun 15, 2020

Artificial Intelligence (AI) is a wide-ranging tool that allows people to improve the way we collect information, analyze data, and use the resulting insights to improve decision making. Some applications of AI include speech recognition, expert systems, and machine vision. AI can be used in casino gaming to improve the overall gaming experience. It is also used in the healthcare sector to uncover links in genetic code, power surgical robots, and so on.

As in other sectors across the globe, AI is having a major impact on computer graphics research, with the potential to transform VFX production.

Keep reading to learn how AI is reinventing visual effects.

AI has automated many repetitive tasks

In Marvel’s Avengers: Endgame, Josh Brolin’s performance was flawlessly rendered into the 9-ft Thanos by a team of animators at Digital Domain. The character was produced using AI and machine learning (ML) tools to automate parts of the process. This demonstrates that AI and ML can not only transform VFX creation for movies but also make sophisticated VFX techniques more accessible.

In recent years, 3D animation and simulation have reached a level of fidelity and art direction that looks near-perfect to audiences. Today, there are very few effects that cannot be created, although challenges remain, such as crossing the uncanny valley for photorealistic faces.

Over the past few years, the VFX industry has placed a major emphasis on creating more effective, efficient, and flexible pipelines in order to meet the requirements of VFX film production.

For a while, most of the repetitive and arduous tasks like compositing, rotoscoping, and animation were outsourced to foreign studios. But with recent advancements in AI, many of these tasks can now be fully automated and performed much faster.

Manual to automatic

Matchmoving, for example, is a technique that allows the insertion of computer graphics into live-action footage with correct position, orientation, scale, and motion relative to the photographed objects in the shot. It can be a frustrating process: tracking camera placement within a scene is typically done manually and can consume more than 5% of the total time spent on the entire pipeline.

Recently, software developer Foundry created a new method that uses algorithms to accurately track camera placement from camera metadata. This improved the matchmoving process by 20%, and the company proved the concept by training the algorithm on data from DNEG.
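Foundry's metadata-driven solver is proprietary, but the core of camera tracking can be illustrated with a classic feature-based approach. The sketch below (Python with OpenCV; the focal length and principal point are assumed to be known) recovers the relative camera rotation and translation between two frames, which is the kind of repetitive solve that AI-assisted pipelines now accelerate.

```python
# A minimal sketch of feature-based camera tracking between two frames.
# This is NOT Foundry's metadata-driven method; it is a classic OpenCV
# baseline, shown only to illustrate what a matchmove solve produces.
import cv2
import numpy as np

def estimate_camera_motion(frame_a, frame_b, focal_px, principal_point):
    """Recover relative camera rotation R and translation t between two frames."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Detect and match ORB features between the two frames.
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

    # Essential matrix + pose recovery give rotation and translation
    # (translation is up to scale; real pipelines resolve scale separately).
    E, _ = cv2.findEssentialMat(pts_a, pts_b, focal=focal_px,
                                pp=principal_point, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b,
                                 focal=focal_px, pp=principal_point)
    return R, t
```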

Rotoscoping, another labour-intensive task, is being tackled by Kognat’s Rotobot. Using its AI, the company claims a frame can be processed within 5-20 seconds. The accuracy of the work is limited by the quality of the deep learning model behind Rotobot, but it should improve dramatically as it is fed new data in the near future.
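Rotobot's deep learning model is not public, but the general idea of AI rotoscoping can be sketched with an off-the-shelf segmentation network. The example below uses a pretrained DeepLabV3 model from torchvision purely as a stand-in, producing a per-frame "person" matte of the kind a compositor would otherwise draw by hand.

```python
# A minimal sketch of AI-assisted rotoscoping: a pretrained segmentation
# network produces a per-frame person matte. Rotobot's actual model is
# proprietary; DeepLabV3 here is only a stand-in to show the idea.
import torch
import torchvision
from torchvision import transforms
from PIL import Image

model = torchvision.models.segmentation.deeplabv3_resnet101(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def person_matte(frame_path):
    """Return a binary matte (H x W tensor) isolating people in one frame."""
    frame = Image.open(frame_path).convert("RGB")
    batch = preprocess(frame).unsqueeze(0)
    with torch.no_grad():
        logits = model(batch)["out"][0]   # (num_classes, H, W)
    labels = logits.argmax(0)             # per-pixel class ids
    return (labels == 15).float()          # class 15 = "person" in the VOC label set
```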

Democratising mocap

AI is also transforming motion capture, another traditionally expensive exercise requiring specialized hardware, suits, trackers, controlled environments, and a team of experts to make it all work.

RADiCAL is planning to create an AI-driven motion capture solution with no physical hardware at all. It aims to make the process as easy as recording a video, even on a mobile device, and uploading it to the cloud, where the firm’s AI sends back motion-captured animation of the recorded movements.
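RADiCAL's cloud pipeline is proprietary, but a markerless, video-in/skeleton-out workflow can be sketched with an open-source pose estimator. The example below uses MediaPipe Pose as a stand-in to pull per-frame 3D joint positions out of an ordinary video file, which is the raw material a mocap animation would be built from.

```python
# A minimal sketch of markerless motion capture from plain video, in the
# spirit of RADiCAL's video-in / animation-out idea. RADiCAL's pipeline is
# proprietary; MediaPipe Pose is used here purely to illustrate extracting
# a 3D skeleton from a phone recording, with no suits or trackers.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def extract_pose_track(video_path):
    """Return a list of per-frame lists of 3D landmark coordinates."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False) as pose:
        while True:
            ok, frame_bgr = cap.read()
            if not ok:
                break
            result = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
            if result.pose_world_landmarks:
                frames.append([(lm.x, lm.y, lm.z)
                               for lm in result.pose_world_landmarks.landmark])
    cap.release()
    return frames
```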
