
Who Likes ASS— Artificial Stupid Stupidity?

If I could be sure we’d find out the truth, I would bet that the doors coming off the Boeing jets could be traced to artificial intelligence (AI). It’s more than a hunch. Boeing, like the rest of the aerospace companies, uses AI in the design and manufacturing of its planes, the idea being that this will enhance efficiency and innovation, not cause doors to fly off jets in mid-air. In the design phase, they use AI to optimize aircraft structures, systems and components, since AI algorithms can analyze vast amounts of data to suggest design improvements, reduce weight, enhance aerodynamics, improve fuel efficiency, and make bathrooms and seats even smaller and less comfortable for humans. At least in theory, AI-driven simulation tools can also help engineers evaluate different design concepts and predict how they will perform under various conditions. Then, in manufacturing, they use AI to improve productivity, quality control and safety, with AI-powered robots and automation systems assembling aircraft components, welding, painting, and performing other repetitive tasks with precision and efficiency, except when it comes to doors that aren’t supposed to open in mid-flight. And Boeing admits it incorporates AI-driven systems for flight operations and maintenance. AI algorithms analyze real-time data from sensors, avionics systems and flight data recorders to monitor aircraft performance, detect anomalies and predict potential issues, supposedly before they occur. Because, you know… safety is our number one priority.

Am I being an asshole? You tell me. The entire airline industry uses AI-powered chatbots for customer service. Tried that lately? It sure isn’t anything like Apple’s customer service, which actually kicks ass, and is operated by, imagine this, well-trained, well-paid humans! The claim is that AI systems answer common questions and provide personalized recommendations based on historical data and user interactions. Uh huh. A few days ago my financial advisor told me she wants to invest in AI. I said no, no, no, absolutely no; and I don’t care how much I’m missing out on. (Her office’s comms system was recently AI-ed, a catastrophe, unless communication isn’t important.)

Yesterday, an attorney friend of mine sent me this report from LawSites about how fabulously AI is working out for attorneys stupid and lazy enough to use it in their cases. As I’ve explained before, ChatGPT and Google’s Bard, recently rebranded as Gemini (already a p.r. disaster), are just plain wrong more frequently than they are correct in responding to research-related questions.

Bob Ambrogi, an attorney and legal journalist, wrote that “By now, in the wake of several cases in which lawyers have found themselves in hot water by citing hallucinated cases generated by ChatGPT, most notoriously Mata v. Avianca, and in the wake of all the publicity those cases have received, you would think most lawyers would have gotten the message not to rely on ChatGPT for legal research, at least not without checking the results. Yet it happened again this week— and it happened not once, but in two separate cases, one in Missouri and the other in Massachusetts. In fairness, the Missouri case involved a pro se litigant, not a lawyer, but that pro se litigant claimed to have gotten the citations from a lawyer he hired through the internet. The Massachusetts case did involve a lawyer, as well as the lawyer’s associate and two recent law school graduates not yet admitted to practice.”

In the Missouri case, the judge dismissed it and then, deeming it frivolous because of the AI nonsense, ordered the litigant to pay $10,000 in damages toward his opponent’s attorneys’ fees: “We find damages … to be a necessary and appropriate message in this case, underscoring the importance of following court rules and presenting meritorious arguments supported by real and accurate judicial authority.”

Smith v. Farwell
In this Massachusetts Superior Court case, plaintiff’s counsel filed four memoranda in response to four separate motions to dismiss. In reviewing the memoranda, Judge Brian Davis wrote, he noted that the legal citations “seemed amiss.” After spending several hours investigating the citations, he was unable to find three of the cases cited in two of the memoranda.
At a hearing on the motions to dismiss, the judge started out by informing plaintiff’s counsel of the fictitious cases he’d found and asking how they’d been included in the filings. When the lawyer said he had no idea, the judge ordered him to file a written explanation of the origin of the cases.
In that letter, the attorney acknowledged that he had “inadvertently” included citations to multiple cases that “do not exist in reality.” He attributed the citations to an unidentified “AI system” that someone in his law office had used to “locat[e] relevant legal authorities to support our argument[s].” He apologized to the judge for the fake citations and expressed regret for failing to “exercise due diligence in verifying the authenticity of all caselaw references provided by the [AI] system.”
The court then scheduled another hearing to learn more about how the cases came to be cited and to consider whether to impose sanctions. As the judge further reviewed the attorney’s filings, he found an additional nonexistent case in a third memorandum, bringing it to four fictitious cases in three separate memoranda.
At the hearing, the attorney again apologized. He said that the filings had been prepared by three people in his office— two recent law school graduates and an associate attorney.
“Plaintiff’s Counsel is unfamiliar with AI systems and was unaware, before the Oppositions were filed, that AI systems can generate false or misleading information,” Judge Davis wrote. “He also was unaware that his associate had used an AI system in drafting court papers in this case until after the Fictitious Case Citations came to light.”
While plaintiff’s counsel had reviewed the filings for style, grammar and flow, he told the court, he had not checked the accuracy of the citations.
The judge wrote that he found the lawyer’s explanation truthful and accurate, that he believed the lawyer did not submit the citations knowingly, and that the lawyer’s expression of contrition was sincere.
“These facts, however, do not exonerate Plaintiff’s Counsel of all fault, nor do they obviate the need for the Court to take responsive action to ensure that the problem encountered in this case does not occur again in the future.”
Citing the original and now famous hallucinated citations case Mata v. Avianca, in which the court said, “Many harms flow from the submission of fake opinions,” the judge wrote:
"With this admonition in mind, the Court concludes that, notwithstanding Plaintiff’s Counsel’s candor and admission of fault, the imposition of sanctions is warranted in the present circumstances because Plaintiff’s Counsel failed to take basic, necessary precautions that likely would have averted the submission of the Fictitious Case Citations. His failure in this regard is categorically unacceptable."
After going through a thoughtful discussion of Mata and other prior cases involving hallucinated citations, the judge distinguished this case in that the lawyer was “forthright in admitting his mistakes” and had not done anything to compound them, as happened in Mata. Even so, he said, the conduct required sanctions of some sort.
“Plaintiff’s Counsel’s knowing failure to review the case citations in the Oppositions for accuracy, or at least ensure that someone else in his office did, before the Oppositions were filed with this Court violated his duty under Rule 11 to undertake a ‘reasonable inquiry,’” Judge Davis said. “Simply stated, no inquiry is not a reasonable inquiry.”
For that reason, the judge decided to impose a sanction on the lawyer of $2,000 (payable to the court, not the opposing party).
The judge ended his opinion with what he described as the “broader lesson” for attorneys generally:
“It is imperative that all attorneys practicing in the courts of this Commonwealth understand that they are obligated under Mass. Rule Civ. P. 11 and 7 to know whether AI technology is being used in the preparation of court papers that they plan to file in their cases and, if it is, to ensure that appropriate steps are being taken to verify the truthfulness and accuracy of any AI-generated content before the papers are submitted. …
“The blind acceptance of AI-generated content by attorneys undoubtedly will lead to other sanction hearings in the future, but a defense based on ignorance will be less credible, and likely less successful, as the dangers associated with the use of Generative AI systems become more widely known.”


1 Comment

Mar 04

So... Boeing uses AI to try to make it better (?) and cheaper (!!) and more fuel-efficient?

But no airplane gets certified unless the FAA signs off.

So, a familiar rhetorical question: Who is at fault when one augers in with 180 souls aboard or some piece falls off of one?

  1. the company that designed and built it (*)?

  2. those whose job it is to verify that everything is safe, from common-sense design to materials to manufacturing?

  3. god?

  4. all of the above?

It should be familiar. It's the same thing that SHOULD come to mind every time DWT posts something about trump's many crimes or gaetz or ... fill in any name you like.

is it the fault of…
