One of the big problems with machine learning (ML), currently the most prevalent approach to training artificial intelligence (AI), is privacy. The reliance on ML is a problem because it typically keeps people, often contractors, in the information loop to ensure the AI is behaving as designed. That use of human reviewers has led to many problems, most recently with popular personal digital assistants and individual privacy: some users and governments don't like the idea that people are effectively using these digital assistants to listen in on customers.
Deep learning (DL), on the other hand, is where the AI effectively trains itself. The catch is that DL workloads have typically required dedicated GPU hardware, which is costly and hard to share, while mainstream virtualized servers have been CPU-only and therefore far too slow for this kind of work. NVIDIA may have just fixed this by rolling out vComputeServer with NGC containers, adding support for VMware vSphere and extending the existing support for KVM hypervisors, including Red Hat and Nutanix. The result, according to NVIDIA, is up to 50x better DL performance than CPU-only virtual servers, and it can be deployed in the cloud, providing this resource in a far more scalable and accessible form.
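To make that concrete, here is a minimal sketch, assuming PyTorch running inside an NGC-style container on a vGPU-backed virtual machine, of how an admin might confirm the GPU is actually visible to the framework and get a rough feel for the CPU-versus-GPU gap. The matrix size and timing approach are illustrative assumptions, not a reproduction of NVIDIA's quoted figure.

```python
# Sketch: check that a virtualized GPU is visible to a DL framework and
# roughly compare a large matrix multiply on CPU vs. GPU. Illustrative only.
import time
import torch

def timed_matmul(device: str, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()          # make sure setup work has finished
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()          # wait for the asynchronous GPU kernel
    return time.perf_counter() - start

if __name__ == "__main__":
    if torch.cuda.is_available():
        print("GPU visible in this VM:", torch.cuda.get_device_name(0))
        cpu_t = timed_matmul("cpu")
        gpu_t = timed_matmul("cuda")
        print(f"CPU: {cpu_t:.3f}s  GPU: {gpu_t:.3f}s  speedup ~{cpu_t / gpu_t:.1f}x")
    else:
        print("No GPU visible; this VM is running CPU-only.")
```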
Let’s talk about the potential impact of this week’s announcement.
Deep Learning = Better Privacy, Less Potential Bias
If the concern is getting people out of the process of training AIs in order to protect privacy, this effort by NVIDIA should have a significant positive impact. With deep learning, the system trains itself and then passes that training to the systems that use the resulting inference to make decisions. Humans introduce bias at both the data-acquisition and analysis stages, and removing that bias should give the resulting implementations a more solid connection to the data streams that define them.
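For those who want to see what that hand-off looks like in practice, here is a minimal sketch, assuming PyTorch, a toy autoencoder, and placeholder data: the model learns from unlabeled inputs (no human labeling in the loop), and the trained weights are then handed to a separate inference step that makes decisions on its own.

```python
# Sketch of the train-then-infer hand-off: a small autoencoder learns from
# unlabeled data, and the trained weights are passed to an inference step.
# The architecture, random data, and file name are illustrative assumptions.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, dim: int = 32, bottleneck: int = 8):
        super().__init__()
        self.encoder = nn.Linear(dim, bottleneck)
        self.decoder = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(torch.relu(self.encoder(x)))

def train(model: nn.Module, data: torch.Tensor, epochs: int = 50) -> None:
    # Self-supervised objective: reconstruct the input, no labels required.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(data), data)
        loss.backward()
        opt.step()

if __name__ == "__main__":
    data = torch.randn(256, 32)                    # placeholder for a real data stream
    model = TinyAutoencoder()
    train(model, data)
    torch.save(model.state_dict(), "trained.pt")   # the hand-off artifact

    # Inference side: load the trained weights and make decisions with no
    # further human involvement in the loop.
    deployed = TinyAutoencoder()
    deployed.load_state_dict(torch.load("trained.pt"))
    deployed.eval()
    with torch.no_grad():
        reconstruction_error = ((deployed(data) - data) ** 2).mean().item()
    print(f"Mean reconstruction error after training: {reconstruction_error:.4f}")
```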
You don’t want decision-making systems moving at the speed of light to be biased, because the result could be a rapid cascade of bad decisions that happens too quickly for people to act in time to mitigate the damage. Removing the human element, if done properly, should address not only some of the most visible privacy concerns around deploying AI broadly, but some of the concerns about introduced bias as well.
But people don’t just introduce bias; they also represent security risks.
The Security Advantage
We know that users represent the biggest security exposure most projects face, because it is relatively easy to trick people into providing information. Often they don’t even have to be tricked; they can be bribed, or they simply make mistakes. If you can remove people from a process, you can significantly increase that process’s level of security.
By expanding the use of DL over ML, at least in theory, the related projects (particularly those that revolve around users’ personal information) should be both more secure and less likely to trigger a GDPR-style fine; if there is no breach, there is no related liability. In addition, as part of the announcement, enterprises can extend their existing security policies to virtualized servers running on the GPU clusters at the center of this offering.
Wrapping Up: Smarter AIs With Greater Privacy
Taken together, this means we should be getting smarter AIs with less bias and greater privacy. These solutions will be inherently more secure as well, lowering the potential for profit-killing fines and unfavorable regulation.
Individually, this means that our personal digital assistants, as they pivot to DL technology for training, should get a lot smarter a lot faster, and we’ll have fewer indications that people are using this technology to spy on us. But DL-based AI goes far beyond digital assistants, into broad automation; smart homes, buildings, and cities; and autonomous vehicles that travel on land, sea, and air.
Overall, this means the technology we touch will be getting smarter, more reliable, and more trustworthy at a significantly faster rate, with a far lower risk that it will do us harm. That is important to most of us, which makes this NVIDIA announcement potentially far more interesting than most realize.