How are virtual digital humans in the metaverse made?


Virtual humans have recently seen explosive growth, with industries as different as banking, media, and beauty launching their own. Almost overnight, AYAYI, LING, Liu Yexi, and other virtual humans with distinctive styles entered the public eye. Virtual digital humans fall into two broad categories, 2D and 3D, and their appearance styles range from cartoon to realistic to hyper-realistic. Hyper-realism means the character is simulated to a lifelike degree, which requires a face (polygon) count of 10,000 or above; at that precision the model can withstand 360-degree shots without dead angles. Liu Yexi is a typical 3D hyper-realistic virtual digital human: zoom in and her skin, facial features, hair, and limbs look remarkably close to a real person's.

Put simply, producing a 3D virtual digital human takes three steps: image generation, animation generation, and voice generation. Image generation determines her appearance, animation generation lets her move fluidly, and voice generation lets her speak, perform, and interact. The core of image generation is modeling. Common approaches include manual modeling, AI modeling, and scan-based modeling; as the technology has matured, the more efficient scanning and AI methods have gradually become the mainstream ways to model characters. After modeling, rigging (binding) and driving are still needed to bring the lifeless model to life.
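To make the scan-based route concrete, here is a minimal sketch using the open-source Open3D library, which can turn a scanner's 3D point cloud into a triangle mesh via Poisson surface reconstruction. The file names and the depth parameter are illustrative assumptions, not a production pipeline:

```python
# A minimal sketch of scan-based modeling: turning a captured 3D point
# cloud into a triangle mesh with Open3D's Poisson surface reconstruction.
# The file name "face_scan.ply" is a placeholder for real scanner output.
import open3d as o3d

# Load the raw scan (per-point normals are needed for Poisson).
pcd = o3d.io.read_point_cloud("face_scan.ply")
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30)
)

# Reconstruct a surface; a higher depth gives finer detail and more faces.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)

# A hyper-realistic head needs a high face count; check what the scan yields.
print(f"reconstructed mesh has {len(mesh.triangles)} triangles")
o3d.io.write_triangle_mesh("face_model.obj", mesh)
```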

Skeleton and muscle rigging determines how natural and fluid the model's subsequent body movements and facial expressions will be. Two methods are currently mainstream: skeletal binding and blend shapes (morph targets). Driving, in turn, is divided into real-person driving and intelligent (AI) driving. Real-person driving means capturing a real actor's movements and facial-expression data with motion-capture technology, then retargeting and synthesizing those data onto the virtual digital human.
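The blend-shape idea is easy to state: the deformed face is the neutral face plus a weighted sum of per-expression vertex offsets. A minimal NumPy sketch, with toy shapes standing in for a real rig:

```python
# A minimal sketch of blend-shape (morph-target) deformation with NumPy.
# Each target stores vertex offsets from the neutral face; sliding the
# weights between 0 and 1 blends expressions. Shapes here are toy data.
import numpy as np

num_vertices = 4  # a real face mesh has tens of thousands of vertices
neutral = np.zeros((num_vertices, 3))  # neutral-face vertex positions

# Offsets of each expression target from the neutral face.
targets = {
    "smile":      np.random.randn(num_vertices, 3) * 0.01,
    "brow_raise": np.random.randn(num_vertices, 3) * 0.01,
}

def blend(neutral, targets, weights):
    """Deformed mesh = neutral + sum of weighted expression offsets."""
    out = neutral.copy()
    for name, w in weights.items():
        out += w * targets[name]
    return out

# 70% smile mixed with 30% brow raise.
deformed = blend(neutral, targets, {"smile": 0.7, "brow_raise": 0.3})
print(deformed.shape)  # (4, 3)
```

In practice, skeletal binding handles large body motion, while blend shapes like these are typically reserved for the face, where dozens of targets are mixed every frame.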

In recent years, capture technology based on computer vision has advanced rapidly. The facial expressions of most virtual digital humans are collected with a depth camera, which records a 3D point cloud of the real actor's face; the facial movements and expressions are then retargeted to the digital human in real time. With rigging and driving in place, the animation is produced through rendering, which comes in two forms: real-time and offline. To achieve real-time control of and interaction with virtual digital humans, the major rendering engines keep making algorithmic breakthroughs to improve real-time rendering efficiency, hoping to find the best trade-off among picture quality, rendering speed, and computing resources.
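One common way to retarget captured expressions like those described above is to solve, frame by frame, for the blend-shape weights that best reproduce the captured 3D landmarks. A least-squares sketch in NumPy, with random arrays standing in for real capture and rig data:

```python
# A minimal sketch of expression retargeting: solve for the blend-shape
# weights that best explain the 3D landmarks captured by a depth camera.
# All arrays here are random stand-ins for real capture and rig data.
import numpy as np

num_landmarks, num_shapes = 68, 5
rng = np.random.default_rng(0)

neutral = rng.standard_normal((num_landmarks, 3))            # rig's neutral landmarks
basis = rng.standard_normal((num_shapes, num_landmarks, 3))  # per-shape offsets
captured = neutral + 0.6 * basis[0] + 0.2 * basis[3]         # simulated capture frame

# Stack everything into a linear system  B w ~ (captured - neutral).
B = basis.reshape(num_shapes, -1).T  # (num_landmarks*3, num_shapes)
d = (captured - neutral).ravel()
weights, *_ = np.linalg.lstsq(B, d, rcond=None)
weights = np.clip(weights, 0.0, 1.0)  # keep weights in a valid range

print(np.round(weights, 2))  # about [0.6, 0, 0, 0.2, 0]; these drive the rig
```

Production solvers usually add temporal smoothing and non-negativity constraints on top, but the linear core is the same.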

A virtual digital human's voice can be real or synthetic. With continuous training by artificial-intelligence techniques, synthetic voices now come ever closer to the timbre, rhythm, and cadence of a real voice, and can be matched to lip shapes in real time. The virtual digital human is a marriage of technology and art; in the imaginative virtual spaces of the future, she is also a more idealized, freer projection of ourselves, through which we can keep experiencing the charm of the virtual world.
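The real-time lip matching mentioned above is often built by mapping the timed phonemes a speech synthesizer emits to mouth-shape (viseme) keyframes. The phoneme-to-viseme table and timings below are simplified, hypothetical examples:

```python
# A minimal sketch of lip sync: map timed phonemes (as a TTS engine might
# emit) to viseme keyframes that drive the mouth blend shapes. The phoneme
# stream and viseme table are simplified, hypothetical examples.

PHONEME_TO_VISEME = {
    "HH": "open", "AH": "open", "L": "tongue_up",
    "OW": "round", "M": "closed", "EH": "wide",
}

def visemes_from_phonemes(phonemes):
    """phonemes: list of (phoneme, start_sec, duration_sec) tuples."""
    keyframes = []
    for phoneme, start, duration in phonemes:
        viseme = PHONEME_TO_VISEME.get(phoneme, "closed")
        keyframes.append({"time": start, "viseme": viseme, "hold": duration})
    return keyframes

# "Hello" as HH-AH-L-OW with rough timings in seconds.
timed = [("HH", 0.00, 0.08), ("AH", 0.08, 0.10),
         ("L", 0.18, 0.07), ("OW", 0.25, 0.15)]
for kf in visemes_from_phonemes(timed):
    print(kf)
```

Each viseme keyframe would then set the weights of the corresponding mouth blend shapes, tying the voice pipeline back into the same rig that the motion capture drives.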

