There has been a joke going around my community that goes something like this: “OMG, AI is taking our jobs. All users will need to do is describe what they want an app to do, and AIs will do the work without us!” The response is, “No worries. When has a user ever been able to describe what they want?”
This joke really isn’t a joke as much as it is the literal truth. I’ve been involved in creating apps on both ends, and the failures result from poor communication skills on both sides of the request. The first time I worked on creating an app was on what became one of the very first CRM programs. After showing the development team what I was using (a hand-built Condor PC database application that I needed ported to an AS/400), they gave me an application that was so labor-intensive it was almost unusable. I was an ex-actor, ranked nationally as a public speaker, and considered an expert at writing contracts, yet what IT (it was called Management Information Services back then) produced was nothing like what I thought I’d asked for.
Let’s talk about why communication skills, even more than traditional STEM training, will be critical as we continue to move into our AI future.
AIs Are Better Listeners, But…
People can be distracted by their inner monologue, selectively hear what they want to hear rather than what is being said, and have relatively poor memories compared with AIs. AIs will listen better, but they are still defined by their training, and if that training interprets a series of words or an acronym differently than you intend (AIs don’t do well with acronyms), their comprehension can degrade significantly.
Because of concerns about propagating proprietary information, the ability of AIs to learn from you is limited. This is unfortunate: without those limits, AI platforms could adapt to your vocabulary and intent far more quickly.
AIs also tend to be very, very literal, so you need to be very precise in what you ask them to do. The photographer James Friedman famously takes the directions he is given to creatively edit (photoshop) images completely literally. You can see some of his work here. He has been doing this for a while, and he understands what his clients actually want, so he could deliver better outcomes; but his literal interpretations showcase what an AI would likely do if the directions weren’t specific enough. While the photos are funny, if the same literal interpretation were applied to a nuclear reactor or (given the current news) a container ship, the outcome could be far more problematic, particularly if the verbal or written directions were applied in real time during a crisis.
Knowing What We Want
Speaking precisely is a skill, but even that skill isn’t much help if we haven’t first fully thought through what we want. For instance, in my own failure, what I should have asked for was a way to improve customer relationships with the company, streamline customer interactions, and reduce overhead. What I got instead was my hand-built PC database application pushed onto a mid-range computer that wasn’t designed for it and ran it poorly.
Eventually, we’ll be able to simply state the overall goal, but today’s AIs aren’t integrated deeply enough to make the leaps in comprehension needed to reach the solution you want from simple statements. Thus, we need to think through not only what we want but also how the AI will need to accomplish it, and then tailor our requests to match system limitations.
It will be an iterative process. You’ll need to test and thoroughly assess each iteration, address system shortcomings and mistakes in communication and comprehension, and keep iterating until the results meet defined benchmarks. You’ll also need to learn from your mistakes and improve your own skills so that, over time, you reduce the number of iterations, improve the quality of initial results, and approach the promised productivity benefits of the AI tool you are using.
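Concretely, that loop looks something like the minimal sketch below. This is an illustration rather than a real integration: generate(), evaluate(), and refine() are hypothetical stand-ins for whatever AI tool, benchmark checks, and prompt revisions you actually use.

```python
# A minimal sketch of the iterate-assess-refine loop described above.
# All three functions are hypothetical placeholders, not a real AI API.

def generate(prompt: str) -> str:
    """Stand-in for a call to your AI tool (code or content generator)."""
    return f"draft produced from: {prompt}"

def evaluate(result: str, benchmarks: list[str]) -> list[str]:
    """Stand-in for your assessment step; returns the benchmarks not yet met."""
    return [b for b in benchmarks if b not in result]

def refine(prompt: str, gaps: list[str]) -> str:
    """Tighten the request to address the specific gaps found this iteration."""
    return prompt + " Also ensure: " + "; ".join(gaps)

prompt = "Build a tool that streamlines customer interactions and reduces overhead."
benchmarks = ["streamlines customer interactions", "reduces overhead"]

for iteration in range(1, 6):          # cap iterations so the loop always ends
    result = generate(prompt)
    gaps = evaluate(result, benchmarks)
    if not gaps:                       # defined benchmarks met; stop iterating
        print(f"Accepted on iteration {iteration}")
        break
    prompt = refine(prompt, gaps)      # sharpen the request and try again
```

The point of the structure is that the benchmarks are defined before the first request, and every failed pass sharpens the request rather than simply rerunning it.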
Wrapping Up
Generative AI is built on language models that promise a natural-language interface, and it delivers on that promise. However, our language skills aren’t precise, and we’ve done little to develop that precision because, in most careers, we simply haven’t had to. With AI, precision in language is critical not only to success but also to the completeness of our understanding of the project we want to do. These tools are not only not psychic, but they also currently lack the experience that good IT or development staff accumulate from dealing with a diverse group of humans over the years.
Given that limitation, we need to develop not only the skill to set well-defined goals and communicate them but also the ability to identify other employees who can do the same, and we are currently woefully inadequate at both.
Fixing these problems, both communication itself and the ability to identify good communicators, will be on the critical path to getting the most out of our existing and coming AI tools. The only company I follow that is aggressively doing this right now is HP, which is starting internally and with partners because you can’t train someone in something you don’t yet know how to do well yourself.