The Twilight Zone is here. My encounter with an inorganic being reveals weaknesses and dangers of artificial intelligence.
I had a real-life experience with a non-living being. A first for me. The Twilight Zone is here and it is opaque.
Up close and personal, I entered the world of artificial intelligence (AI). My clerks and interns have in the past laughed at me and told me to get with it and join the new world of electricity and motor cars. I am not so sure; my real-life experience with AI was a matter of high puzzlement. I wanted to try AI in preparation for teaching a seminar on water law and water rights.
One legal commentator has stated, “AI can automate time-consuming tasks like legal research, document review, and contract analysis, freeing up lawyers to focus on strategic thinking, client interaction, and courtroom advocacy.”
But if this opinion is accepted, and if there ain’t no one standing over the AI research process with a sharp eye and sharper attitude, one has a guaranteed doomsday scenario. Relying on AI is a recipe for collapsed bridges, messed-up legal contracts and poorly planned lawsuits.
I wanted to use the opportunity to see if AI as a research tool is worth a damn. I had in mind a legal issue in water law. I have practiced in water law and written about it for some time now and have an established background and knowledge base in the field.
I chose a more detailed water rights issue for my AI research. This was not intended as a deep legal dive using standard treatises, case law, statutes and administrative rules but rather an attempt to see whether artificial intelligence could adequately research a somewhat detailed issue.
The research results were a shock. I read the results and recognized that the information provided was a near quote of some public articles I had previously written on the subject. The results were my words and my work.
Those words were good but did not at all address the more detailed issue I had put before the AI bots for the search. The bots drove right past my question and only provided public information I had previously written on the subject.
The more detailed question I put before AI was clear and could have been answered if AI programming was what some suggest as really good and really “smart.” My previous written articles, used by the AI bots (or munchkins), were solid information which provided good fundamental points on the subject.
But my detailed question to AI was not answered.
Was the report bad information? Not exactly; it provided clearly inadequate information considering the clear and well-written question asked. It is a dangerous copout to conclude, "Oh well, the system just 'unknowingly' provided bad answers."
Are these programs maximizing profit or future profits, or providing accurate answers and the truth? To hold that current AI systems are just tools is partially correct.
To espouse however that AI provides reliable analysis of an issue is false. To argue that AI is becoming smarter and more humanlike is dangerous. And, importantly, to use “humanlike” as a benchmark is hilarious. To suggest that if something is humanlike “it is good,” fails to have the slightest knowledge of the long, sordid, corrupt history of humankind.
As we speak, the AI industry is unregulated and unfettered in presenting as truth whatever AI believes to be the truth.
Beware the manipulation of truth by those in power. The power to broadcast "truth" is not limited to those who hold government power.
David Ganje is an attorney who practices natural resources, environmental and commercial law. The website is lexenergy.net.
Graphic: public domain, Wikimedia Commons
The South Dakota Standard is offered freely and is supported by our readers. We have no political or commercial sponsorship. If you'd like to help us continue our mission to advance independent political and social commentary, you can do so by clicking on the "Donate" button that's on the sidebar to your right.