
IBM’s watsonx.governance: Making GenAI Ready for Enterprise Primetime

The continuing evolution of Generative AI (GenAI) services and solutions has been one of the loudest tech industry narratives of 2023. However, while the media has focused intently on offerings like OpenAI’s ChatGPT, enterprise GenAI use cases have garnered less press attention. That may be, at least in part, because the path to enterprise-class GenAI is not as inherently entertaining as speculating about how errant students, active scammers, and predatory cybercrooks might use the technology to deceive.

That said, acceptance and adoption by enterprises is one of the primary paths that GenAI vendors can travel to build meaningful businesses and profitability. That journey is one that has long been familiar to IBM. The company’s new watsonx.governance solution illustrates why and how the company is a leading player in enterprise-ready AI.

What makes enterprise-ready GenAI?

What do enterprises require in GenAI and other AI-focused solutions, including large language models (LLMs)? According to IBM, three key requirements stand out:

  1. Transparency – Businesses need to understand how and with what data foundational LLMs and resulting GenAI solutions are trained.
  2. Risk Reduction – Also necessary is the ability to find and reduce risk in LLMs, as well as to monitor for bias, drift, fairness, and other new LLM metrics.
  3. Openness – Enterprise customers, as they do with virtually every other form of IT, embrace a wide variety of computing tools and solutions. So, it is crucial to support offerings that allow them to manage, maintain, and govern AI models from open-source communities and other model providers, as well as from IBM, in a common way.

Are other vendors following a similar path? Not especially.

For one thing, many, if not most, vendors pursuing AI solution development regard the training data and models they employ as proprietary assets vital to their competitive positions. Given the amount of financial and human capital they are investing, that seems somewhat reasonable. However, vendors often scrape training data from a wide variety of digital resources across the Internet, making it difficult, if not impossible, for that data to be validated as fair and accurate – points that are crucial to the function, accuracy, and value of AI services.

In addition, it is easy to assume that at least some AI vendors are still in their “run fast and break things” phase. For example, writers, artists, and other members of the creative community have criticized vendors for using copyrighted materials to train their AI models without informing or compensating copyright holders. In fact, the Authors Guild and authors including Michael Chabon, Jonathan Franzen, John Grisham, George R.R. Martin, and Jodi Picoult have filed suits against OpenAI for allegedly using their copyrighted works to train the company’s chatbots.

In response, OpenAI CEO Sam Altman recently said the company would pay the legal costs of clients using its ChatGPT Enterprise solution and developers using ChatGPT’s application programming interface who face copyright infringement suits. Amazon, Google, and Microsoft have reportedly offered similar protections to users of their GenAI software. Plus, Adobe, Getty Images, and Shutterstock provide financial liability protection for users of their image-generation applications.

Compensating customers for the cost of copyright litigation is all well and good, but wouldn’t it be wiser to take steps that help them avoid litigation altogether?

IBM watsonx: Proactively transparent and trustworthy AI

IBM recently announced IP indemnity protections for customers using its LLMs and GenAI solutions. However, the company’s approach is distinctly and transparently different from those of Amazon, Google, and Microsoft. How so? Because IBM has published a white paper on its Granite Foundation Models that covers issues including underlying data and data governance, training algorithms, compute infrastructure, testing and evaluation, socio-technical harms and mitigations, and usage policies.

In other words, rather than being forced to employ “black box” solutions, IBM watsonx clients and developers can see and understand the data and details behind Granite LLMs, allowing them to better assess potential problems and hazards. Will that eliminate most or all litigation risks? That seems unlikely, given the prevalence of commercial litigation. However, it does provide tools customers can use to recognize and address concerns.

The new watsonx.governance solutions offer enterprise customers further insights and benefits. According to IBM, those include:

  • Monitoring of new LLM metrics: Lets clients monitor both the inputs and outputs of LLMs and raise alerts when preset thresholds for quality metrics and drift are breached. Monitored content includes toxic language, such as hate, abuse, and profanity (HAP), as well as Personally Identifiable Information (PII). (A simplified, hypothetical sketch of this kind of threshold check follows this list.)
  • Visibility into LLM development: Allows developers and teams to automatically collect information about the model-building process while also documenting and explaining model decisions, helping to mitigate hallucinations and other emerging risks.
  • Transparency across the AI lifecycle for LLMs: Automatically documents model facts across all stages of the lifecycle, monitors text models for drift, and tracks health details such as data size, latency, and throughput. This helps administrators identify bottlenecks and enhance the performance of compute-intensive workloads.
  • Validation tools for LLMs: Enable prompt engineers to map LLM outputs against the context or reference data supplied in Q&A use cases. As a result, engineers can determine whether the LLM is being appropriately influenced by that reference data and whether its output remains relevant to it.
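
To make the first capability above more concrete, here is a minimal, hypothetical sketch of a threshold-based check on LLM output for toxic language and PII. The word list, regular expressions, threshold values, and function names are illustrative assumptions for this sketch only, not the watsonx.governance API.

```python
# Illustrative only: a hypothetical sketch of threshold-based monitoring of
# LLM output for toxic language (hate/abuse/profanity, "HAP") and PII.
# The detectors, thresholds, and names below are assumptions for the sketch,
# not part of the watsonx.governance API.
from dataclasses import dataclass
import re


@dataclass
class Thresholds:
    hap_score: float = 0.2   # maximum tolerated HAP score (fraction of words)
    pii_matches: int = 0     # any detected PII triggers an alert


# Crude stand-in detectors; a real deployment would use trained classifiers.
PROFANITY = {"damn", "hell"}                        # placeholder word list
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US SSN-like number
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),     # email address
]


def hap_score(text: str) -> float:
    """Fraction of words that match the placeholder profanity list."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in PROFANITY for w in words) / len(words)


def pii_matches(text: str) -> int:
    """Count of substrings that look like PII."""
    return sum(len(p.findall(text)) for p in PII_PATTERNS)


def check_output(text: str, limits: Thresholds) -> list[str]:
    """Return alert messages for every preset threshold that is breached."""
    alerts = []
    score = hap_score(text)
    if score > limits.hap_score:
        alerts.append(f"HAP score {score:.2f} exceeds limit {limits.hap_score}")
    hits = pii_matches(text)
    if hits > limits.pii_matches:
        alerts.append(f"{hits} possible PII value(s) detected")
    return alerts


if __name__ == "__main__":
    sample = "Contact me at jane.doe@example.com, damn it."
    for alert in check_output(sample, Thresholds(hap_score=0.1)):
        print("ALERT:", alert)
```

In practice, checks like these would be driven by trained HAP and PII classifiers rather than word lists and regular expressions; the point of the sketch is simply the threshold-and-alert pattern, applied to both model inputs and outputs, that the capability describes.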

IBM also noted that along with these specific features and processes, watsonx.governance can help clients facilitate compliance with internal policies, industry standards, and future regulations. Finally, IBM expects to expand these capabilities in 1Q24 to allow clients to govern third-party AI models from any vendor, whether deployed in the cloud or on premises, thus orchestrating governance processes across their entire organizations.

Final analysis

What’s the essential takeaway from IBM’s new watsonx.governance offering? First and foremost, it reflects the company’s deep understanding of and connections with its core enterprise clients and communities. In addition, it highlights those clients’ need for services and solutions that robustly and reliably support essential business processes and functions.

Most importantly, IBM’s watsonx.governance illustrates that innovations that don’t support businesses’ elemental need for transparent and open technology solutions provide less value and, potentially, more headaches than those that do. Overall, enterprise organizations considering or actively planning to employ AI technologies and services would do well to examine IBM’s watsonx solutions, including watsonx.governance.
