DeepBrain AI and the Future of Virtual People

Image credit: DeepBrain AI

DeepBrain AI is in the business of creating high-fidelity, high-quality human digital twins, mostly used to supplement on-air talent on news programs. In that implementation, the original talent is (depending on the contract) compensated whenever their digital twin is used for programming. The company also creates derived avatars that aren't connected to any one person but are amalgams of many people; these can be used without compensating any related talent.

The avatars (example here) are very difficult to distinguish from real people, and their uses include short news spots when a reporter isn't in the studio, when talent is otherwise unavailable, or when production budgets force a lower-cost casting decision.

Let’s talk about virtual people this week and how they might be used in the future.

The perfect stand-in

If you are like me, it isn't unusual to be double-booked and expected to be in two places at once. That's a problem in a video conferencing session, where people see your face and expect you to be engaged, which is hard to pull off if you try to attend both meetings remotely at the same time.

But with your own personal avatar, you could have the avatar attend the meeting in your stead and even ask questions you set up beforehand based on the meeting's agenda. You could then move back and forth between the conflicting meetings while appearing fully focused on each, and use the transcription tools in the latest video conferencing platforms to capture what was said and what you committed to, letting you attend, and benefit from, overlapping meetings.

Another use would simply be to make your avatar your face in meetings: never aging, always well dressed, and seemingly very focused on the content of the meeting, even if you are anything but.

Add generative AI or PowerPoint

If tied to a generative AI tool like ChatGPT, the avatar could naturally answer a wide range of questions based on the model's training and what is posted on the web. DeepBrain AI is already using this approach in kiosks, giving people seeking information a friendly, seemingly human face to interact with.
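As a rough illustration only, here is a minimal Python sketch of how a kiosk-style avatar might be wired to a generative model. The OpenAI chat API call is real; `speak_through_avatar` is a hypothetical placeholder for whatever rendering call a platform like DeepBrain's would actually expose.

```python
# Minimal sketch of a kiosk Q&A loop backed by a generative model.
# Assumes the OpenAI Python SDK; speak_through_avatar() is a hypothetical
# stand-in for an avatar platform's text-to-speech/video call.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def speak_through_avatar(text: str) -> None:
    # Placeholder: a real integration would send `text` to the avatar
    # rendering service so the on-screen virtual person delivers it.
    print(f"[avatar says] {text}")


def answer_visitor(question: str) -> None:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a friendly information-kiosk assistant."},
            {"role": "user", "content": question},
        ],
    )
    speak_through_avatar(reply.choices[0].message.content)


answer_visitor("Where is the nearest registration desk?")
```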

For those who use PowerPoint as a script, generative AI can expand that script into a full speech, and an avatar based on your image could then give the presentation on demand and answer questions on your behalf, drawing on the presentation notes or any related paper you provide to the system.
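To make that workflow concrete, here is a hedged sketch assuming the deck's speaker notes serve as the script: it pulls the notes with `python-pptx`, asks a generative model to expand them into a full speech, and hands the result to `render_avatar_presentation`, a hypothetical placeholder for an avatar video-generation service.

```python
# Sketch: expand PowerPoint speaker notes into a full speech for an avatar.
# python-pptx and the OpenAI SDK are real; render_avatar_presentation() is
# a hypothetical placeholder for an avatar video-generation service.
from pptx import Presentation
from openai import OpenAI

client = OpenAI()


def collect_notes(path: str) -> str:
    """Gather the speaker notes from every slide in the deck."""
    prs = Presentation(path)
    notes = []
    for i, slide in enumerate(prs.slides, start=1):
        if slide.has_notes_slide:
            text = slide.notes_slide.notes_text_frame.text.strip()
            if text:
                notes.append(f"Slide {i}: {text}")
    return "\n".join(notes)


def expand_to_speech(notes: str) -> str:
    """Ask a generative model to turn terse notes into a spoken script."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Expand these slide notes into a natural, spoken "
                        "presentation script, one section per slide."},
            {"role": "user", "content": notes},
        ],
    )
    return reply.choices[0].message.content


def render_avatar_presentation(script: str) -> None:
    # Placeholder: a real pipeline would submit `script` to the avatar
    # platform so your digital twin delivers the presentation on video.
    print(script[:500])


render_avatar_presentation(expand_to_speech(collect_notes("quarterly_update.pptx")))
```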

Even if you back up the presentation in person, the savings are significant: producing such a video in a typical studio costs around $4K per video, versus roughly $100 using DeepBrain's technology instead.

So, the technology offers huge savings in both time and money.

Fan support

Celebrities often want to engage with fans but can't scale to large numbers. This technology is already being used to let celebrities interact with fans virtually and to scale out celebrity advocacy, so related videos are cheaper to create and can be produced on demand even when your chosen celebrity advocate is unavailable. And you don't have to use real people, either. Imagine Disney using this technology to have Mickey Mouse interact with kids who are planning a Disneyland vacation or are already at the park.

Kids could interact with the characters in the park or from home, ask questions, and be as annoying as they like, but the virtual character would never run out of patience, get angry, or behave inappropriately with them. You could do the same thing with company CEOs. For instance, chatting with a virtual Elon Musk might be an interesting Twitter or Tesla feature for paying customers of either company, and a digital Musk could be programmed to avoid some of the mistakes that have plagued the human one.

Wrapping up

The creation of human digital twins is a potential game changer: these twins can be present for many interactions that don't, or can't, use live people, and they sometimes exceed what can be done with a live person.

My personal wish, and I know this is coming, is to see this technology applied to game NPCs (non-player characters) and movies, so that games feel more realistic and you sometimes have the option of putting yourself and your family into movies you enjoy, digitally replacing an actor's appearance and voice with your own (or your kid's), much like what this guy did with his cat (OwlKitty). And, as noted, you wouldn't be limited to people.

I think this technology is not only going to improve our productivity but also let us create amazing content, both professionally and for fun. And we are only at the very start of this capability. Imagine what we'll be able to do ten years from now.

Rob Enderle: As President and Principal Analyst of the Enderle Group, Rob provides regional and global companies with guidance in how to create credible dialogue with the market, target customer needs, create new business opportunities, anticipate technology changes, select vendors and products, and practice zero dollar marketing. For over 20 years Rob has worked for and with companies like Microsoft, HP, IBM, Dell, Toshiba, Gateway, Sony, USAA, Texas Instruments, AMD, Intel, Credit Suisse First Boston, ROLM, and Siemens.