AI is getting a bad reputation of late. It's at the heart of the actors' and writers' strikes, and it plays a role in the expected United Auto Workers strike as well, because these employee groups believe, not without cause, that it will be used to replace them.
IBM was outspoken early on that AI should only be used to enhance employees, not replace them, and it demonstrated this commitment with its watsonx generative AI application that converts COBOL to Java to modernize mainframe applications.
Let’s explore this in the broader context of how to properly use AI.
AI is in its infancy
Using AI to replace employees isn't just a threat to workers; it's a problem for companies, because AI isn't yet able to fully replace them. Decisions to use AI in that capacity are premature: assuring the quality of these large language models is difficult, and they still generate plenty of failures and mistakes.
New employees also make mistakes, but they are generally overseen by managers or mentored by peers who can mitigate the related risks until the employee is up to speed. AIs, however, work at machine speed, making it nearly impossible for management to keep up with the volume of mistakes that a poorly trained, biased, or damaged AI is likely to produce.
In addition, the most successful and sustainable uses of AI are those that enhance, rather than replace, an employee. For example, IBM demonstrated early on how its Watson AI could help a doctor arrive at a difficult diagnosis in a fraction of the time the doctor would need alone. However, the doctor still had to identify which of the options Watson listed was the right one, because Watson couldn't weigh the nuances that separated the choices.
While capable, AI is nowhere near mature. More importantly, to evolve to the point where it can replace an employee, it needs to learn the fuzzier aspects of a task that separate good work from bad or even mediocre work, and it must develop defenses against those who accidentally or intentionally damage it.
IBM’s good example
IBM took a job that virtually no one wants to do (translating an existing program from one language to a more current one) and gave it to the AI. Getting AI to handle work that needs doing but that people avoid because it's tedious, or too dangerous to do alone, is the ideal application for the technology.
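To make that translation job concrete, here is a minimal, hypothetical sketch of the kind of output such a modernization step can produce. The COBOL paragraph in the comments and the Java class around it are invented for illustration; they are not actual watsonx output, just a picture of the rote, mechanical rewriting a generative model can take off a developer's plate.

```java
// Hypothetical example of a COBOL-to-Java modernization step.
// Original (invented) COBOL paragraph:
//
//   COMPUTE-INTEREST.
//       COMPUTE WS-INTEREST ROUNDED = WS-BALANCE * WS-RATE / 100.
//       ADD WS-INTEREST TO WS-BALANCE.
//
// A generated Java equivalent might look like this:
import java.math.BigDecimal;
import java.math.RoundingMode;

public class InterestCalculator {

    /**
     * Applies simple interest to a balance, mirroring the COBOL
     * COMPUTE ... ROUNDED behavior with explicit decimal rounding.
     */
    public static BigDecimal applyInterest(BigDecimal balance, BigDecimal annualRatePercent) {
        BigDecimal interest = balance
                .multiply(annualRatePercent)
                .divide(BigDecimal.valueOf(100), 2, RoundingMode.HALF_UP);
        return balance.add(interest);
    }

    public static void main(String[] args) {
        BigDecimal newBalance = applyInterest(new BigDecimal("1000.00"), new BigDecimal("4.5"));
        System.out.println("New balance: " + newBalance); // prints: New balance: 1045.00
    }
}
```

The point isn't the specific code; it's that this translation is rule-bound busywork few engineers want, yet the result still needs human review at the end.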
We are experiencing massive labor shortages, and there are long lists of jobs that need doing that few people want to do. Translating documents, creating forms, performing legal discovery (which requires reading massive numbers of documents) or contract reviews, spending months on a single patient's case to figure out what might be wrong, bomb disposal, and a variety of other jobs that people simply have no interest in doing are all ideal uses for AI. And people will gladly train the AI so they don't have to do those jobs anymore.
Microsoft's Copilot tools are another case in point: they handle the tedious parts of a task and allow employees to focus on the parts of their job they enjoy.
Wrapping up
For actors and writers, AI could be used to handle script rewrites or endless retakes as a director tries to figure out what they really want (I was an actor and found this incredibly annoying). In this capacity, AI supplements their work by taking on the mind-numbing, repetitive tasks.
If a director doesn't know what they want, or if their idea is bad, the AI would be a far less annoyed partner for figuring out what's needed. It could also point out that a director's or studio's idea is a poor one without triggering the wrong-headed pushback that comes when a director or executive feels disrespected.
As IBM and Microsoft have demonstrated, AI can be used to improve working conditions, help people make better decisions, and reduce the likelihood of strikes rather than cause them. A better understanding of what AI is good at, some admission that decision-makers are often wrong, and a focus on applying AI where it works best would be a far better path than the one the unions are rightly afraid we are on.