NVIDIA Clara Federated Learning Could Vastly Improve Healthcare Quality While Preserving Privacy

Image from Pixabay

One of the strongest applications for AI is in medicine, because the complex mass of medical problems and the even more complex set of remedies are an ideal big-data problem for an AI engine. However, because of concerns surrounding patient privacy, training the AIs so they can reach their potential for matching ideal cures to unique diseases and problems has seemed an impossible task. Looking at this problem, NVIDIA, known mostly for its advances in graphics and autonomous driving, came up with a way to do AI training while protecting patient privacy. The solution, called NVIDIA Clara Federated Learning and announced this month, shows substantial promise for massively improving the quality and effectiveness of healthcare worldwide. Let’s explore that this week.

The Problem with AI Training and Privacy

Deep Learning, the Holy Grail of machine-driven AI processes, requires massive data sets to reach operational effectiveness, but once trained it can do nearly magical things, providing effective ways to solve complex problems using that data. This approach would seem ideal for the medical industry, which has massive amounts of real-life information on cures, medications, and treatments that often aren’t matched to the patients who need them.

I’m reminded of one of the early IBM Watson trials: once the system was trained in certain areas, the test team demonstrated that for a patient who had suffered for years before eventually finding a cure, the AI was able to come to the same conclusion in minutes.

But the various laws surrounding patient privacy have become a problem. In addition, concerns about hospitals’ proprietary research data have largely prevented the now-maturing focused AI solutions from getting the data they need to reach these timely conclusions.

The result is that thousands of people who might otherwise be cured, or receive treatments that vastly improve their quality of life, are not, and figuring out a way around that problem has itself been problematic. Disappointingly so, particularly for those at risk or those with loved ones who could have been saved but die instead.

NVIDIA, realizing there needed to be a way around this if medical AI was to reach its potential, created an interesting approach to AI training that provides a similar benefit without the risk of violating patient confidentiality.

NVIDIA Clara Federated Learning

NVIDIA used a very innovative approach: each hospital or health organization trains on its own data locally and abstracts that data into limited, actionable information. As a result of this abstraction, the personal references are eliminated and only the relevant connections between the medications, practices, and cures remain; in practice, what leaves each site are model updates rather than patient records. Then this refined information is aggregated across organizations, rather like a digital symposium where doctors share successful practices without discussing the patients they treated, but at massive scale and computer speed.
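To make that mechanism concrete, here is a minimal sketch of the local, in-hospital step of federated learning, assuming a simple logistic-regression model. The names (train_local_model, export_update) are hypothetical illustrations for this article, not NVIDIA's actual Clara interface.

# Minimal sketch of the local (in-hospital) federated-learning step.
# All names here are hypothetical illustrations, not NVIDIA's Clara API.
import numpy as np

def train_local_model(global_weights, patient_features, patient_labels,
                      lr=0.01, epochs=5):
    """Train a simple logistic-regression model on local patient data.

    The raw patient records never leave this site; only the updated
    weights are returned for sharing.
    """
    w = global_weights.copy()
    for _ in range(epochs):
        # Forward pass: predicted probability of a positive outcome.
        preds = 1.0 / (1.0 + np.exp(-patient_features @ w))
        # Gradient of the cross-entropy loss with respect to the weights.
        grad = patient_features.T @ (preds - patient_labels) / len(patient_labels)
        w -= lr * grad
    return w

def export_update(global_weights, local_weights, num_examples):
    """Package only the weight delta and a sample count (no patient data)."""
    return {"delta": local_weights - global_weights, "n": num_examples}

The key design point is in export_update: the object handed to the network contains only a weight delta and a count, so nothing patient-identifiable ever leaves the hospital.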

The result is an ever-more-capable central information repository that the AIs can then use as a training base. This is somewhat like the Symbolic AI approach that has been gaining traction of late, where the data is abstracted early on to provide results similar to Deep Learning while massively reducing the related data sets. So, the data directly connected to the patients never leaves the organization that holds it, but the information connected to the successful and unsuccessful approaches does, and it is used to train an AI that then becomes available as a resource to all of the participating parties.
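On the central side, the standard federated-learning technique is federated averaging: the server blends each organization's update, weighted by how much data backed it, into a new shared model. Below is a minimal, self-contained sketch under that assumption; the synthetic numbers and the aggregate_updates name are purely illustrative, and Clara's real protocol adds protections (such as secured transport) not shown here.

# Sketch of the central aggregation step (federated averaging).
import numpy as np

def aggregate_updates(global_weights, site_updates):
    """Combine per-hospital weight deltas, weighted by sample counts."""
    total = sum(u["n"] for u in site_updates)
    combined = np.zeros_like(global_weights)
    for u in site_updates:
        combined += (u["n"] / total) * u["delta"]
    return global_weights + combined

# Three hypothetical hospitals report deltas of differing sizes; the
# larger site (n=300) pulls the shared model hardest.
global_w = np.zeros(3)
updates = [
    {"delta": np.array([0.4, -0.1, 0.2]), "n": 120},
    {"delta": np.array([0.5, -0.2, 0.1]), "n": 300},
    {"delta": np.array([0.3,  0.0, 0.3]), "n": 75},
]
print(aggregate_updates(global_w, updates))

The weighting is the design choice that makes the "digital symposium" fair: a hospital that contributed evidence from 300 patients influences the shared model more than one that contributed 75, yet neither ever reveals who those patients were.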

This should make it possible to fulfill the promise that AI in a medical role has always held and provide a path to wellness that many might otherwise not find in a timely manner, or at all.

Wrapping Up: Innovation Squared

Often with big problems like AI for medicine, companies work on the aspects of the problem that are easy and ignore the critical-path problems that make the result significantly sub-optimal. That isn’t the case here. NVIDIA realized that getting the job done required a far more innovative process, and it created a unique one that achieves a near-ideal result without compromising privacy; there is a good chance they may have saved your life or mine as a result. This may be one of the most compelling examples of AI-driven innovation we have seen yet, and it sets a template for a far more creative and interesting approach to the private and proprietary information used for critical AI training.
