Qualcomm Moves to Provide Generative AI On Smartphones and Other Personal Devices

Image from Pixabay

Generative AI is all the rage at the moment. Ethan Mollick out of Wharton has been showcasing just how amazing and disruptive these tools are. For those who haven’t seen what Generative AI can do, the 10-minute video he created is an eye-opener, and it only shows current tools, not the even more capable tools coming to market this year. It’s worth watching. And while I was writing this, I ran into a video on Facebook featuring an ex-President that looks real but was created by Generative AI. It is amazing how fast this technology is advancing; some of these videos are provocative and frankly frightening.

There are several issues with Generative AI, including bias, accuracy, and ethics, which are getting focused attention. But one big issue is that the technology uses a lot of processing power, creating huge scalability and cost problems as it gains in popularity and becomes far more common. To address this problem, Qualcomm is stepping up to provide AI capabilities in smartphones, PCs, and other personal devices (like a coming competitor to Apple’s Vision Pro).

Let’s talk about why this capability is critical to the future of Generative AI.

Qualcomm’s Move To Fix Generative AI

There are several problems with running Generative AI in the cloud. One is the loading issue I mentioned above; another is that as we depend more and more on this technology, we increasingly need it to work even when the network is unavailable. With Generative AI expanding from tools that create things to becoming our interface in products like the Apple Vision Pro (which will depend heavily on voice input and response), the limitations of wireless networks, even 5G networks, will become pronounced. If you are depending on that Generative AI, it will need to work even when the network is unavailable; otherwise, the help and interaction the user needs will be missing and the related device unusable.

This week Qualcomm demonstrated the fastest ControlNet demo running on a phone using Qualcomm’s embedded AI capability. This is a 1.5B-parameter Generative AI model with performance comparable to the cloud. The demo showcased how their technology could supplement, and even replace, cloud-based implementations by offloading the related processing to the device without adding any load to the cloud data center. Further demonstrations showcased real-time virtual coaching, 1080p (HD) video coding, 3D reconstruction for Extended Reality, computer vision for smart cameras, driver-monitoring technology to address safety concerns surrounding impairment, and some amazing facial avatars for Extended Reality. This last capability could digitally remove the headset in an Apple Vision Pro video conferencing session, making it look to the attendees as if everyone were conversing naturally in the Metaverse rather than wearing a huge headset.
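Qualcomm has not published the code behind the demo, so purely as an illustration, here is a minimal sketch of how a developer might run a quantized model on a Qualcomm NPU today using ONNX Runtime’s QNN execution provider. The model file ("controlnet_int8.onnx") and the input names and shapes are assumptions for the sake of the example, not Qualcomm’s actual demo.

```python
# Illustration only: running a quantized model on a Qualcomm NPU via
# ONNX Runtime's QNN execution provider, with CPU fallback.
# The model file and the input names/shapes below are hypothetical.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "controlnet_int8.onnx",                 # hypothetical quantized model
    providers=[
        "QNNExecutionProvider",             # Qualcomm AI Engine (NPU)
        "CPUExecutionProvider",             # fallback if the NPU is unavailable
    ],
)

# Hypothetical latent and conditioning inputs for a single denoising step.
inputs = {
    "latents": np.random.randn(1, 4, 64, 64).astype(np.float32),
    "control_image": np.random.randn(1, 3, 512, 512).astype(np.float32),
}
outputs = session.run(None, inputs)
print([o.shape for o in outputs])
```

The point of the sketch is simply that the inference call never leaves the handset: the prompt-to-output loop runs entirely on local silicon, which is what lets it work offline and add nothing to data center load.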

While this series of demonstrations, running on a next-generation product, highlighted how Qualcomm’s technology could replace cloud Generative AI implementations, the company’s goal is more of a hybrid approach, one that would eventually balance cloud and edge resources to assure the best experience for the user and the lowest cost for the provider of whatever cloud service the user is accessing for their Generative AI experience.
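To make the hybrid idea concrete, here is a minimal sketch of the kind of routing decision such a system would make: run on the device whenever the model fits, and only send work to the cloud when the request is too heavy and a network is actually available. Every name in it (run_on_device, run_in_cloud, DEVICE_MAX_PARAMS) is a hypothetical placeholder, not a Qualcomm API.

```python
# A minimal sketch of hybrid edge/cloud routing, under the assumption that
# the device can handle models up to roughly 1.5B parameters.
import socket

DEVICE_MAX_PARAMS = 1_500_000_000  # assumed on-device model-size budget


def run_on_device(prompt: str, degraded: bool = False) -> str:
    """Placeholder for local NPU inference."""
    return f"[on-device{' / degraded' if degraded else ''}] {prompt}"


def run_in_cloud(prompt: str) -> str:
    """Placeholder for a cloud inference call."""
    return f"[cloud] {prompt}"


def network_available(host: str = "8.8.8.8", port: int = 53, timeout: float = 1.5) -> bool:
    """Cheap connectivity probe; a real product would use platform APIs."""
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False


def generate(prompt: str, model_params: int) -> str:
    """Prefer the edge; use the cloud only when the request needs it and a
    network exists; degrade gracefully when offline."""
    if model_params <= DEVICE_MAX_PARAMS:
        return run_on_device(prompt)              # no data center load, works offline
    if network_available():
        return run_in_cloud(prompt)               # heavier models stay in the cloud
    return run_on_device(prompt, degraded=True)   # offline: best effort locally


print(generate("summarize my hike notes", model_params=1_300_000_000))
```

The design choice the sketch captures is the one Qualcomm is arguing for: the cloud becomes the exception rather than the default, which lowers provider cost and keeps the experience alive when connectivity drops.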

It’s amazing and incredibly disruptive.

Wrapping Up

Generative AI has several issues it will need to address as it matures in the market. One of the biggest is data center loading, because we don’t have enough computing resources to handle everyone using Generative AI all the time, which is clearly in our future. Only by offloading much of the processing to the device can we approach the level of latency, performance, and availability users will demand once Generative AI reaches critical mass. Making the edge device capable of running Generative AI applications as much as possible keeps the load on cloud resources within what those resources can supply, and it helps users who may not have a network connection when they need Generative AI the most (like dealing with a hiking injury outside of cell service).

In the end, Qualcomm is proposing we move immediately to a hybrid model to keep data centers from being overwhelmed and servers from crashing, so that users can get the experience they demand without the unintended consequences of inadequate processing resources.

Qualcomm isn’t alone in believing AI will need to work locally, but it is currently the most aggressive at both creating a mitigation product and showcasing the benefits such a product would provide in a world with hard limits on its computing resources.

Rob Enderle: As President and Principal Analyst of the Enderle Group, Rob provides regional and global companies with guidance in how to create credible dialogue with the market, target customer needs, create new business opportunities, anticipate technology changes, select vendors and products, and practice zero dollar marketing. For over 20 years Rob has worked for and with companies like Microsoft, HP, IBM, Dell, Toshiba, Gateway, Sony, USAA, Texas Instruments, AMD, Intel, Credit Suisse First Boston, ROLM, and Siemens.