Don't want to show up to the next meeting? Send your 4K AI-based "metahuman" clone instead

Interesting Engineering met up with Ploonet at CES 2023 to create a language-agnostic digital clone.
Sade Agard
Creating a clone

The virtual world is one of the most highly anticipated emerging technologies on the planet. In particular, virtual humans are predicted to represent a $527.58 billion market by 2030 as their use grows across several industries, including entertainment, business, and retail.

One company that caught our eye at the Consumer Electronics Show (CES) 2023 was Ploonet, a subsidiary of the Korean artificial intelligence (AI) firm Saltlux.

Interesting Engineering (IE) spoke with CEO Young Sun Bae to learn more about his 'metahuman' technology and what we can expect regarding future (or current) real-life applications.

 

So, what does one have to do to be 'cloned?'

The procedure was straightforward. First, you step into a green-screen booth and position yourself with the guidance of two marked areas on the floor. You're then recorded for 40 seconds and told to gesture (upper body only) as if delivering the pre-written script you provided beforehand. The script was about two to three sentences long.

Critically, you have to record the video with your mouth slightly open but without moving your lips; this part was a little awkward. Some slight movement of the head is allowed.

The following question and answer (Q&A) session has been edited for length and clarity.

Could you describe what we have here today? What is this service you've developed for making metahumans?

We provide the experience of cloning yourself using our AI-powered platform.

Typically, at a professional level at 4K resolution, it takes about three to four hours of face and voice scanning. But today, we only need your 40-second video. With that, you can clone yourself and make your avatar, and then that avatar can speak in any language from a given script.

We support about six or seven languages and will keep expanding. The interesting part is that our technology is language-agnostic. For example, two gentlemen asked if we could add Norwegian. Yes, of course, we can.

Could you tell us more about the technology behind your metahumans?

Yes, you can imagine something similar to deepfake technology, but we have our own technology. And very importantly, it's not only about the video. Sure, people watch and say, 'Oh, that looks natural.'

Another very important layer understands the person's gestures and their vowel sounds. Synchronizing vowels with lip movement is where our technology comes in.
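For readers curious what "synchronizing vowels and lip movement" can look like in practice, here is a minimal, purely illustrative Python sketch of phoneme-to-viseme scheduling. It is not Ploonet's pipeline; every name, mapping, and timing in it is hypothetical.

```python
# Illustrative sketch only: map a script's vowel sounds to mouth shapes ("visemes")
# over time, one shape per video frame. All tables and durations are made up.

# Hypothetical phoneme-to-viseme table for a handful of vowels.
VOWEL_TO_VISEME = {
    "AA": "open",      # as in "father"
    "IY": "spread",    # as in "see"
    "UW": "rounded",   # as in "you"
    "EH": "mid-open",  # as in "bed"
    "OW": "rounded",   # as in "go"
}

def schedule_visemes(phonemes, frame_rate=30):
    """Turn a list of (phoneme, duration_in_seconds) pairs into per-frame mouth shapes."""
    frames = []
    for phoneme, duration in phonemes:
        # Consonants and unknown sounds fall back to a neutral mouth shape.
        shape = VOWEL_TO_VISEME.get(phoneme, "neutral")
        frames.extend([shape] * max(1, round(duration * frame_rate)))
    return frames

if __name__ == "__main__":
    # A short utterance ("hello") with made-up phoneme durations.
    utterance = [("HH", 0.05), ("EH", 0.12), ("L", 0.06), ("OW", 0.15)]
    print(schedule_visemes(utterance)[:10])
```

A real system would derive the phoneme timings from a text-to-speech engine and drive a rendered face model rather than printing labels, but the core idea of aligning vowel sounds to mouth shapes frame by frame is the same.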

What kind of real-life applications could we expect with this? 

Let's imagine the CEO of a global company. Creating messages for employees in multiple languages could take at least one or two days. Using our solution, it would take a couple of hours to generate those messages.

You can use this for explainer videos and product instructions. Additionally, many business clients found that 40 percent of the calls to customer or technical support centers could be eliminated if customers had proper instructions.

So, we can not only create this virtual avatar, we can also make it interactive. It will talk to your customers and answer based on the training it has received.
