Generative AI is very new, and it is having growing pains. One major issue is that its results can't always be trusted because the data used to train the model may have been compromised. Another is the broad concern about what this technology will do as it evolves to take on more and more tasks. Those concerns have prompted a widespread call to pause AI development, which has no chance of happening, and a recent US Presidential technology advisory meeting aimed at heading off any unmanageable damage.
It is somewhat ironic that the one major AI player not at that meeting was IBM, given that it has been running AI in production longer than anyone else and both identified these concerns early and moved successfully to mitigate them. I get why Microsoft, Alphabet (Google), OpenAI, and Anthropic were at the meeting, because those companies represent the point of the spear when it comes to risk, but why wouldn't you also invite the company that first identified and worked to mitigate the risks of generative AI?
Generative AI risks
This isn't to say that ChatGPT (the generative AI at the foundation of Microsoft Copilot) is too risky to use. It isn't; I use it regularly myself. But I do have to oversee what it does and says because, like any program trained on public data, it gets the answer wrong from time to time. As we move from using it for fun toward using it to help make major decisions, the risk associated with any mistake increases, particularly in areas like defense, healthcare, government, law enforcement, or finance. In those areas, unidentified mistakes can cost money or lives, and at enterprise scale they could do a massive amount of damage.
Therefore, verifying the accuracy and validity of any answer or decision becomes a critical part of assuring the outcome. The kind of rigor you put behind an editing or email tool would likely be inadequate for areas like healthcare and defense, where even a minor mistake could be catastrophic (the movie War Games comes to mind).
ChatGPT and other advanced platforms will mature over time, and their training sets will increasingly get automated reviews to assure quality. But IBM has largely already done this work, and that work has now resulted in the launch of Watson X, IBM's answer to the question of how to deliver an assured, enterprise-class generative AI solution.
Watson X
I first started following Watson when no one was really talking about production-level AI, back in 2011, shortly after the prototype out of IBM Labs won Jeopardy. Years later, I watched it lose a debate challenge to a professional debater. What impressed me was that while the pro won on the merits, Watson came across as more natural and more human than its human challenger (admittedly a subjective impression, not an objective one). That was back in June of 2018, nearly five years ago, and IBM has been advancing the platform ever since.
But IBM had concerns early on about how AI would be used, about the need for accurate answers, and about the need to focus on enhancing employees rather than replacing them. In other words, pretty much the same things the industry is only beginning to look at now. IBM's approach, thinking, and remedies are now mature, tested, and trustworthy, consistent with a brand once known as the only one you would never get fired for choosing.
Wrapping up: Watson X sets the generative AI bar
If we are looking for an initial standard for how AI should be created, assured, managed, and deployed, it would be wise to start with the company that has the most experience and that identified and worked to mitigate the problems the newer providers are only now considering. IBM has been at this the longest and has the most mature AI platform in the market, enterprise-class or otherwise. Watson X currently sets the bar for quality, security, reliability, and safety, so it would be wise to treat IBM as the standard against which all others are measured.