Using AI to “help” you “write,” “research,” “outline,” or even to evaluate your nonfiction writing is utterly useless. So many people have tried so hard to tell me that this tech can help me. I have looked into it. It is nothing but a waste of time, and it is dangerous for anyone who doesn’t already know what they are doing.
For example, at someone’s insistence, I tried to see if ChatGPT was able to transcribe a historical document for me. I chose a military form from WWI, partially typeset with decently legible handwritten entries. I’ve spent tens of thousands of hours doing this kind of work for the biography I’m finishing: finding these documents in physical archives, then deciphering, transcribing, translating. Why not see if a machine can do it faster?
I gave the AI a clear, hi-res image of the form. I asked it simply to transcribe both the typed and the handwritten text, to skip any text that wasn’t legible, and to give a 100% faithful rendering.
The AI promised to do what I asked, said it had no problem reading the handwriting, and then spat out a bunch of text. It looked very much like it could be right. Cool! Maybe this was a timesaver after all! Right?
Wrong. I read what it had produced. It claimed that the form described how the young man named at the top had received a commendation for bravery because he had rescued wounded comrades under fire. Very cinematic. It had details, places, names, dates. It even pretended one line was illegible. But that is the key word: pretended.
As it turns out, I speak French and can read handwriting from the 19th and early 20th centuries. I read the form myself and saw that ChatGPT had made the whole thing up. In reality, it was documentation that the young man had received a knee injury at such-and-such battle and, as a result, was entitled to such-and-such benefits, etc. What the AI had produced, a flattering story it thought I might like, was worse than useless: it was misinformation it had insisted was accurate.
That is the problem with “AI,” or rather with the LLMs we are calling AI. They just make shit up. They cannot be trusted to transcribe or to translate, definitely not to do factual research, and they cannot be trusted for advice about your outline, the quality of your prose, or anything else. This was not the only experiment I ran to see if it could, in fact, help with any aspect of my work. It failed every one miserably. It cannot. No serious writer should be using it at all. Full stop.