I’m at the Qualcomm Technology Summit in Hawaii this week, and while my wife is enjoying a tropical drink and getting to know the cabana boys, I’m hearing about 5G. I must be really twisted because I’m good with that, mostly because what they’re saying about the rollout of their Snapdragon 855 solution bears directly on how I work. I’m writing on a Lenovo Yoga C630, the first laptop using a Snapdragon 850 processor, which is saving my life because the Wi-Fi in this conference room sucks given how many people are connected to it and attempting to stream this event (I have to believe the frame rates suck).
This is allowing me to get stuff done while others relive what it was like when we all had analog modems. The presenters are taking regular shots at Intel, largely because Intel has been publishing competitive collateral “proving” performance parity, but only against Qualcomm’s last generation. This isn’t uncommon for a vendor under competitive threat to do, but for a company as storied and well-regarded as Intel, you’d think it would be beneath them.
But something jumped out at me during the 5G part of this briefing: 5G isn’t just about bandwidth and range, as the prior “G” technologies were. According to Qualcomm, 5G carries a stringent new latency requirement. What makes latency important is the ability to host applications in the cloud, including twitch games, and still provide experiences that are indistinguishable from apps running locally.
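To put some rough numbers on that, here’s a back-of-the-envelope sketch of a cloud game’s motion-to-photon time. Every figure here, including the 60 ms “feels local” threshold, is my own illustrative assumption, not a Qualcomm number:

```python
# Rough motion-to-photon budget for a cloud-hosted game.
# All figures are illustrative assumptions, not measured values.

def motion_to_photon(network_rtt_ms):
    input_sampling = 8   # touch/controller polling
    server_render = 16   # one frame at ~60 fps
    encode_decode = 10   # video encode on the server plus decode on the device
    display = 8          # display refresh/scanout
    return input_sampling + network_rtt_ms + server_render + encode_decode + display

for label, rtt in [("4G LTE (~50 ms RTT)", 50), ("5G target (~10 ms RTT)", 10)]:
    total = motion_to_photon(rtt)
    verdict = "feels local" if total <= 60 else "noticeably laggy"
    print(f"{label}: ~{total} ms motion-to-photon -> {verdict}")
```

Notice that everything except the network round trip is roughly fixed, which is why shaving the radio latency is what tips a cloud game from laggy to local-feeling.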
This has a lot of implications for the future. Let’s explore some of them.
The Importance Of Low Latency
Now, I was going to point out that this focus on latency will massively enable Microsoft’s (disclosure: Microsoft is a client of the author) Virtual Windows effort, which promises to let us run Windows on anything that is cloud-connected. If that connection is 5G, the result should be a great experience.
But, as I was writing this, the topic of AI (artificial intelligence) came up in the context of the Snapdragon 855 part, and the presentation focused on how much smarter 855-based smartphones will be. Still, you’ll be limited to what you can run on a smartphone, and the most powerful AI in production on the planet is IBM’s Watson. Granted, the 855 delivers a 3x performance increase for AI, but with Watson it is likely closer to 1,000x, and that performance isn’t tied to the device. Instead, it’s a capability of remote datacenters, some of which are now the size of small towns.
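To see why latency matters here too, consider a hypothetical sketch of the on-device-versus-cloud trade-off. The task timings and the 1,000x speedup are assumptions lifted from the comparison above, not benchmarks:

```python
# Hypothetical offload decision: should an AI task run on-device or in a
# remote datacenter? All numbers are illustrative assumptions.

def best_place_to_run(local_ms, cloud_speedup, network_rtt_ms):
    cloud_ms = local_ms / cloud_speedup + network_rtt_ms
    return "cloud" if cloud_ms < local_ms else "on-device"

for task_ms in (30, 300):        # a light task and a heavy one (assumed)
    for rtt_ms in (50, 10):      # 4G-ish vs. 5G-ish round trips (assumed)
        where = best_place_to_run(task_ms, cloud_speedup=1000, network_rtt_ms=rtt_ms)
        print(f"{task_ms} ms task over {rtt_ms} ms RTT -> run {where}")
```

The takeaway: once the round trip gets short enough, even quick tasks become worth shipping to a datacenter-class model, which is exactly the door 5G opens.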
Google added to this, showcasing Google Lens and explaining how the service is evolving from instantly recognizing objects (including naming people and pets) to instantly altering how we perceive reality.
Let me give you the example they showcased. Say you aren’t really happy with how your bedroom, kitchen, or living room looks. You aim your smartphone camera at the room you want to improve, and the phone analyzes it, takes your tastes into account, and shows you a picture of what it could be, along with links to the furniture, pictures, and other items it predicts you’ll like better. You could even set a budget to bound the suggestions and, I’d expect, press an Amazon-like purchase button to turn the virtual image into reality.
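Purely as a sketch of how such a pipeline might hang together (none of these functions are Google’s; every name here is a stand-in I made up):

```python
# Hypothetical room-redesign flow; every function is a stand-in for a
# capability described in the demo, not a real Google Lens API.

from dataclasses import dataclass

@dataclass
class Suggestion:
    item: str
    price: float
    buy_link: str

def detect_furniture(photo):
    # Stand-in for on-device object recognition.
    return ["worn sofa", "dated lamp"]

def suggest_replacements(items, taste_profile):
    # Stand-in for a cloud model that maps detected items and tastes to products.
    catalog = {
        "worn sofa": Suggestion("mid-century sofa", 899.0, "https://example.com/sofa"),
        "dated lamp": Suggestion("arc floor lamp", 149.0, "https://example.com/lamp"),
    }
    return [catalog[i] for i in items if i in catalog]

def redesign_room(photo, taste_profile, budget):
    ideas = suggest_replacements(detect_furniture(photo), taste_profile)
    # Honor the user's budget, cheapest suggestions first.
    return sorted((s for s in ideas if s.price <= budget), key=lambda s: s.price)

for s in redesign_room(photo=None, taste_profile="mid-century", budget=1000):
    print(f"{s.item}: ${s.price} -> {s.buy_link}")
```

The interesting split is that detection can happen on the handset while the taste-matching model lives in the cloud, which is where the low-latency link earns its keep.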
Another demo involved a woman who wanted to see how a new hair color would look. They showcased an application from Nalbi that takes live video (they used a woman with long hair) and applies a filter in real time, realistically changing her hair color strand by strand. That way she could see what a color would look like before going through the expense of having hers changed, only to find out her choice sucked. (Granted, at some point we’ll all be wearing AR (augmented reality) glasses, which should make physically changing hair color obsolete because you could just do it virtually.)
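The strand-by-strand segmentation is the hard part, but once you have a per-frame hair mask, the recoloring itself is simple. Here’s a minimal sketch with OpenCV; the mask is assumed to come from a segmentation model like Nalbi’s, which I’m not reproducing here:

```python
# Minimal per-frame hair recoloring, assuming a segmentation mask already
# exists (producing that mask in real time is the hard part).

import cv2
import numpy as np

def recolor_hair(frame_bgr, hair_mask, target_hue):
    """Shift hair pixels toward target_hue (0-179 on OpenCV's HSV scale)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    hsv[..., 0] = np.where(hair_mask > 0, target_hue, hsv[..., 0])
    recolored = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
    # Blend at the mask edges so strands don't look painted on.
    alpha = cv2.GaussianBlur(hair_mask.astype(np.float32), (9, 9), 0)[..., None] / 255.0
    return (alpha * recolored + (1 - alpha) * frame_bgr).astype(np.uint8)

# Demo with a synthetic frame and mask (a real app would feed camera frames).
frame = np.full((120, 120, 3), (60, 80, 90), dtype=np.uint8)
mask = np.zeros((120, 120), dtype=np.uint8)
mask[20:70, 30:90] = 255                       # pretend this region is hair
out = recolor_hair(frame, mask, target_hue=140)  # roughly purple
```

Running this per frame at 30+ fps, mask included, is exactly the kind of workload the 855’s AI hardware is pitched at.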
This also enables things like SenseTime’s very accurate facial recognition, which could be used to make sure your small, overly active child can never use your phone to buy something when you aren’t around. For me, the combination of facial recognition and AI means I can finally remember the name of the person walking up to me (as I suck at remembering names).
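Here’s a hypothetical sketch of how the purchase-gating half could work under the hood; the face embeddings are stand-ins for whatever model SenseTime actually uses:

```python
# Hypothetical purchase gate built on face embeddings. The embeddings below
# are random stand-ins for a real recognition model's output, not a real API.

import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def may_purchase(live_embedding, owner_embedding, threshold=0.8):
    # Unlock purchases only when the live face closely matches the owner's.
    return cosine_similarity(live_embedding, owner_embedding) >= threshold

rng = np.random.default_rng(0)
owner = rng.normal(size=128)
same_person = owner + rng.normal(scale=0.1, size=128)  # small pose/lighting variation
child = rng.normal(size=128)                           # a different face entirely
print(may_purchase(same_person, owner))  # True  -> purchase allowed
print(may_purchase(child, owner))        # False -> purchase blocked
```

The design point is that the embedding comparison is cheap enough to run on-device at every purchase attempt, so the gate never needs the network at all.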
Wrapping Up: Putting It All Together
When you combine low latency and AI technology, both local and remote, you get the potential of a true partner in your pocket. That means real-time advice you aren’t getting now. Let me give you an example.
Have you ever watched videos of folks staring at their phones and walking into a hole, a fountain, a tree, or traffic, or taking a header off a cliff? With this technology, your phone could give you a heads-up and maybe save you a broken limb or even an early funeral. If you are tempted to pick up your phone while driving, it could recognize the threat and scream, “Drop the phone or die!” In fact, I can imagine insurance companies rolling out apps that tie lower premiums to people who run them, limiting injury and death. Finally, we’d get to a point where our phones are no longer a threat to our lives.

Beyond warning you of physical danger (you don’t want to be walking in that neighborhood or down that alley at night, otherwise known as the Bruce Wayne feature), your phone could, with your permission, listen for sounds that suggest you might be at risk and more accurately call for help when you need it.
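Purely as a thought experiment, the core decision such a guardian app would make is simple; everything below is an assumption about sensors and models a handset already has, not any real API:

```python
# Toy distracted-walking alert. Inputs are stand-ins for step detection,
# screen-use detection, and a forward obstacle estimate from the camera.

def should_warn(is_walking, screen_in_use, obstacle_distance_m):
    # Warn only when the user is moving, staring at the screen,
    # and something is close enough ahead to matter.
    return is_walking and screen_in_use and obstacle_distance_m < 3.0

print(should_warn(True, True, 1.5))   # True  -> "Heads up!"
print(should_warn(True, False, 1.5))  # False -> user is watching where they walk
```

The rule itself is trivial; the hard part, and where the local AI hardware comes in, is producing those three inputs reliably in real time.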
This is all on top of the ability to run any app remotely as if it were running locally. That’s a game changer and, I expect, it’ll make the 2020s vastly different from the 2010s.