Lifelike avatars are one step closer

Anyone who uses cyberspace may soon encounter animated avatars. ETH researchers have developed new algorithms that make creating virtual people much easier than before. […]

Since the coronavirus pandemic at the latest, we have been staring at screens more often than ever. Meetings, conversations with colleagues and conferences take place via video call. According to the big tech companies, from next year we will be able to meet in a virtual environment, the so-called metaverse, using 3D glasses and specialized computer programs.

The key to the most natural possible user experience in virtual reality applications is the avatar: a computer-generated, three-dimensional representation of a person. The more realistically avatars look and behave, the more readily a sense of genuine social interaction arises.

However, modeling a person in detail and in motion still challenges the developers of such applications. Graphics programs can already create photorealistic static avatars. But to animate a laughing face, for example, graphic designers must edit almost every single frame by hand on the computer, refining nuances such as wrinkles and cast shadows.

Researchers led by Otmar Hilliges, Professor of Computer Science at ETH Zurich, demonstrated a simpler approach in a new study presented at the International Conference on Computer Vision in autumn 2021. Instead of modeling every detail, the scientists use intelligent algorithms that learn, from 3D images of people in just a few poses, to automatically render animated full-body avatars in any imaginable pose.

Computer model can even perform a cartwheel

Computer programs that use artificial intelligence (AI) to create lifelike virtual people have existed for only a few years. To enable these programs to depict the various body positions realistically, they are trained on so-called 3D scans of a real person, recorded beforehand by a complex camera system.

Dancing avatar (Image: Xu Chen / ETH Zurich)

The AI algorithms process the scans by evaluating countless points inside and outside the body, defining its contours as a mathematical function. In this way, they create a first representation of the person in a basic pose. The algorithms then calculate the path from a deformed pose back to this basic pose, gradually building a computer model that can set avatars in motion.
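
To make the idea of "defining contours as a mathematical function" concrete, here is a minimal sketch of an implicit occupancy network: a small neural network that answers, for any 3D point, whether it lies inside the body, so that the body surface is the 0.5 level set. The architecture, sizes and names are illustrative assumptions, not the researchers' actual model.

```python
import torch
import torch.nn as nn

class OccupancyNet(nn.Module):
    """Maps a 3D point to the probability that it lies inside the body."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # one logit: inside vs. outside
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (N, 3) query locations in the basic (canonical) pose
        return torch.sigmoid(self.mlp(points))  # (N, 1) occupancy in [0, 1]

# Training would pair each query point with a label derived from the 3D
# scans: 1.0 if the point lies inside the scanned body, 0.0 otherwise.
net = OccupancyNet()
pts = torch.randn(1024, 3)   # hypothetical sample points
occ = net(pts)               # predicted occupancy per point
```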

However, extreme poses outside the known repertoire of movements overwhelm such models, producing clearly visible errors: arms detach from the body, or joints end up in the wrong place. Today's models are therefore trained on as many different poses as possible, which entails a huge effort for image capture and requires enormous computing power.

Cartwheeling avatar (Image: Xu Chen / ETH Zurich)

For interactive applications in particular, AI avatars have therefore been of little practical use so far. "It is impossible, and above all inefficient, to capture the entire repertoire of movements on camera," says Xu Chen, a doctoral student at ETH and first author of the study.

The new method developed by Chen takes the opposite approach: starting from the basic pose, the AI algorithms calculate the path to a deformed pose. Because the starting point of the calculation always remains the same, the algorithms learn to generalize across movements better.
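
The general mechanism for carrying points from the basic pose into a new pose is known as forward skinning: each point follows a weighted blend of the skeleton's bone transforms. The sketch below illustrates this with standard linear blend skinning; the weights and transforms are assumed inputs, and it is a simplified stand-in for the idea described in the article, not the study's implementation.

```python
import torch

def forward_lbs(x_canonical, weights, bone_transforms):
    """
    x_canonical:     (N, 3)    points in the basic (canonical) pose
    weights:         (N, B)    per-point skinning weights over B bones (rows sum to 1)
    bone_transforms: (B, 4, 4) rigid transform of each bone for the target pose
    returns:         (N, 3)    the same points deformed into the target pose
    """
    n = x_canonical.shape[0]
    x_h = torch.cat([x_canonical, torch.ones(n, 1)], dim=1)  # homogeneous coords (N, 4)
    # Blend the bone transforms per point, then apply the blended transform.
    blended = torch.einsum('nb,bij->nij', weights, bone_transforms)  # (N, 4, 4)
    x_posed = torch.einsum('nij,nj->ni', blended, x_h)               # (N, 4)
    return x_posed[:, :3]

# Sanity check: with identity transforms for every bone, points stay put.
pts = torch.randn(100, 3)
w = torch.softmax(torch.randn(100, 2), dim=1)   # 2 hypothetical bones
T = torch.eye(4).expand(2, 4, 4)                # identity: no movement
assert torch.allclose(forward_lbs(pts, w, T), pts, atol=1e-5)
```

Keeping the canonical pose as the fixed starting point, as in this forward direction, is what lets the model reuse what it has learned when it is asked for a pose it has never seen.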

For the first time, such a computer model can render new movement patterns reliably. It can even produce acrobatic movements such as a cartwheel or a backbend bridge.

Any number of new faces with just one image

The new full-body avatars cannot yet be personalized; the representations are limited to the person from whom the 3D scans originate. Chen and his colleagues therefore want to develop the computer model further so that it can create new identities at will.

One face, different views (Image: Marcel Bühler / ETH Zurich)

Marcel Bühler, also a doctoral student in Hilliges's group, has already found a solution for personalizing the faces of avatars and modifying them at will. As Chen did for full-body models, Bühler used intelligent algorithms to create new animated faces, combining a 3D face model with a large collection of portrait photos.
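
As a toy illustration of this separation of controls, a generator can take two independent inputs: parameters of a 3D face model (governing pose and expression) and an identity code (governing who the face resembles), so that resampling the identity code yields a new face in the same pose. Everything below, including the names and dimensions, is a hypothetical sketch and makes no claim about the actual model's architecture.

```python
import torch
import torch.nn as nn

class FaceGenerator(nn.Module):
    """Toy generator: 3D-face-model parameters + identity code -> RGB image."""
    def __init__(self, n_shape: int = 50, n_id: int = 64):
        super().__init__()
        self.fc = nn.Linear(n_shape + n_id, 256 * 4 * 4)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, shape_params, identity_code):
        z = torch.cat([shape_params, identity_code], dim=1)
        x = self.fc(z).view(-1, 256, 4, 4)
        return self.deconv(x)  # (batch, 3, 32, 32) RGB face image

gen = FaceGenerator()
pose = torch.randn(1, 50)   # expression/pose from the 3D face model
who = torch.randn(1, 64)    # identity latent: resample for a new face
img = gen(pose, who)
```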

While previous computer programs already produce good animations of faces seen from the front, Bühler's model can also render faces realistically from the side, as well as from above and below.

If you look closely, you can expose deepfakes


Is there a danger that the new technology will soon be used to circulate even more realistic deepfake videos, for example to fake a speech by an important politician? "Deepfake videos are far from perfect," explains Bühler. Most computer programs deliver good results only for a specific setting. The new face model, for example, cannot yet realistically render details such as hair.

"If you look closely, you will still find mistakes," says the ETH doctoral student. He considers it more important to inform the public about the current state of the technology and to raise awareness. If research on 3D rendering techniques, as well as their weak points, is publicly available, cybersecurity experts will find it easier to detect deepfake videos on the web, says Bühler.

For interactive virtual reality applications, the work of the ETH researchers marks great progress. It is quite possible that tech companies such as Facebook and Microsoft will implement the techniques newly developed by the two doctoral students in their avatars.

This article first appeared on ETH-News.
