AI Still Makes Things Up: Why Professionals Must Remain the Experts
Artificial intelligence is a powerful tool and a great starting point for a project or a social media article, but we must remember that it isn’t infallible. We should still check whether the information it provides is reliable.
A recent legal research experience reminded me that diligence and human judgment remain essential in the age of AI.
I recently used artificial intelligence to assist with legal research. I was working on a matter involving South Carolina property law and asked the AI to identify case law on whether ownership could be changed from Tenants in Common to Joint Tenants with Rights of Survivorship after the death of the parties, if the deed had been signed under a misunderstanding about the type of ownership. The AI returned what looked like a legitimate and useful citation: Durham v. Blackard, 255 S.C. 202, 177 S.E.2d 601 (1970).
It even provided a case summary (this portion is from my conversation with the chatbot):
I took out the parts where I ask it nicely to perform a function. My wife likes to make fun of me for saying please and thank you to the AI. I tell myself, politeness doesn’t cost anything, but in reality, when the AI takes over the world and comes for us, I know it’s going after the rude ones first.
“Facts: Parties sought reformation of a deed due to mutual mistake.
Holding: Reformation was allowed only upon clear and convincing evidence that the deed failed to reflect the parties’ true intent.”
Unfortunately, the case didn’t exist. There is a Durham v. Blackard, but it’s a 1993 decision involving fraudulent conveyance under the Statute of Elizabeth. It has nothing to do with mutual mistake at the time of signing the deed.
I brought up the fake case and the made-up content to the AI, and it gave me another case: Creekmore v. Redick, 246 S.C. 423, 143 S.E.2d 251 (1965). The summary it produced sounded plausible and relevant, claiming the case dealt with a mistaken property description and supported deed reformation.
But once again, I could not locate any such case in South Carolina’s reported decisions. The case was simply fabricated: citation, summary, and all. Here is the AI’s response when I once again informed it that it had made up a case.
“Apologies for the earlier citation errors. Upon further review, I found that the case Creekmore v. Redick does not exist in South Carolina case law. I regret any confusion caused by these inaccuracies.”
This experience underscores a reality that many professionals are beginning to discover: AI can still make things up. It will confidently produce answers that look right, sound authoritative, and may even contain accurate legal principles, but unless you already know the area well, it can be nearly impossible to spot where the technology is fabricating.
This is particularly dangerous in a field like law, where the foundation of our work is truth, precedent, and precision.
Artificial intelligence can be a useful starting point for organizing thoughts, identifying issues, or framing arguments, but be cautious about trusting it to finish the job. It is not a substitute for subject matter expertise.
As professionals, we cannot blindly accept what AI gives us. We must remain vigilant, verify sources, and apply our judgment. We are the subject matter experts, not IT.