This week the judge handed down the verdict in the FTC v. Qualcomm trial. Many of us expected it to end in Qualcomm’s favor, given the collapse of the core arguments sourced from Apple and the effective discrediting of the FTC’s lead expert, the architect of the action. It did not. The judge ruled against Qualcomm and pretty much upended the entire technology licensing industry, even though no consumer harm was ever shown. In fact, not only was the FTC’s expert discredited, but his model, which held that you didn’t need to show harm because harm was likely, was proven false at least on the “harm likely” part. Even the concept of “potential harm” was discredited.
I think this points to a significant problem in the courts today: judges simply don’t have the bandwidth or the experience to properly consider technology cases.
I do think deep learning artificial intelligence (AI) could help. Let’s talk about both those things this week.
I attended the close of this case and saw some of the final testimony and the closing arguments by both litigants. The case rested on two things. The first was claims by Apple that it was being bullied (which, given Apple’s own horrible history of abusing the vendors that serve it, seemed unbelievable), backed by evidence that was shown, during the opening of a later trial in San Diego, to have been manufactured. The second was a theory by economist Carl Shapiro, who appeared to argue that the primary reason his model was right was that he was the only non-idiot economist in the room.
Not only did this theory lack any proof, but evidence presented by Qualcomm indicated it didn’t work. Watching this from the audience was almost an out-of-body experience, because Shapiro seemed to be arguing that no amount of proof could disprove the brilliance of his own siloed thinking. It may be an unfortunate sign of the times that someone can get away with arguing that they are right, everyone else is wrong, and facts don’t count.
What else is strange about the ruling is that the judge went out of her way to argue, almost categorically and without proof, that all the Qualcomm witnesses were unreliable. That may reflect a common belief that all corporate executives are crooks, but that is hardly settled law, and that kind of sweeping conclusion without any evidence to back it up would seem unsupportable.
So, to net out the trial: the foundation was shown to be false (though this didn’t become evident until after the trial and so wasn’t introduced), the expert who crafted the FTC’s position was successfully discredited along with his model, and there was no actual damage, yet the judge found for the FTC. Even the DOJ seemed to think this was nuts.
But I think this points to a bigger problem.
Judges Are Overwhelmed
I just finished another trial with a similar result. In my case the damage was obvious, the liability clear, and the violation of settled law self-evident, yet I too lost, and my judgment read as if the judge had slept through the case. But since I was there as well, I know he didn’t. I’ve since looked at feedback across a number of judges (there is a judge-ranking system), and on a ten-point scale where 10 is fantastic, the average appears to hover between 1 and 2. If these judges were a Yelp business, they’d be out of business.
Common complaints include an inability to remember what happened during the trial, bias against female attorneys, ignorance of the law, horrible time management, and an unbelievable amount of non-trial distractions.
It looks like the technology in courtrooms sucks, the load on judges is excessive, judges don’t have the time or resources to stay current on the law, and they appear either disengaged during trial or subject to massive memory loss.
There is no indication that the judges are crooks, but there is plenty indicating that they either shouldn’t be judges or, more likely, are spread so thinly that they can’t adequately perform the job. The big issue is that if people lose confidence in the court process, they are more likely to take matters into their own hands, which tends to end badly for both them and society.
I think technology could fix this.
Wrapping Up: Deep Learning AI
IBM’s Watson deep learning artificial intelligence platform does have litigation as one of its focus areas, but to help attorneys prepare for trial, not to model or decide who should win. Still, AIs like Watson are good at digesting large amounts of data, spotting connections, and providing advice. It would certainly be possible to have an AI monitor trials and make initial recommendations based on evidence and precedent. It could even better assess the credibility of witnesses based on prior testimony and any records, like social media, it had access to.
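To make the credibility-assessment idea concrete, here is a minimal toy sketch in Python. It is not how Watson or any real litigation tool works; the function names and the simple text-similarity heuristic are my own illustrative assumptions. The idea is just that software can compare a witness’s trial testimony against their prior recorded statements and flag large divergences for a human to review.

```python
from difflib import SequenceMatcher

def consistency_score(prior_statement, trial_testimony):
    """Toy proxy for credibility: 0.0-1.0 similarity between a
    witness's prior statement and their trial testimony."""
    return SequenceMatcher(None, prior_statement.lower(),
                           trial_testimony.lower()).ratio()

def flag_witnesses(records, threshold=0.5):
    """Flag witnesses whose testimony diverges sharply from the record.

    records maps witness name -> (prior statement, trial testimony);
    returns {name: score} for every witness below the threshold.
    """
    flagged = {}
    for name, (prior, testimony) in records.items():
        score = consistency_score(prior, testimony)
        if score < threshold:
            flagged[name] = round(score, 2)
    return flagged
```

A real system would use trained language models over full transcripts rather than raw string similarity, but the workflow is the same: score consistency, then surface the outliers to the judge or counsel rather than deciding anything on its own.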
Used by the judge, it could help them reach a decision based less on gut feel (which largely seems to be the case today) and more on the evidence, precedent, and settled law. The same system, run by either the plaintiff’s or the defendant’s law firm, could help them not only present their case better but also package a powerful appeal if the judge deviates from that ideal. Run by the appellate court, it could better show which cases need to be heard and reversed and which do not.
Across the board, this would lower legal expenses, increase the quality of judgments, and potentially reduce the growing concern that we can’t trust the courts.