Academic editing has changed completely in the last three or four years. I’ve just picked up some foreign-language assignments: machine-translated, then given an AI-assistant cleanup of the result.
Even manuscripts originally written in English are fed into the AI homogenizer.
My job is then to edit the “assisted” edit.
The guidelines for these assisted edits are very clear. The first order of business is to identify and check any changes made by the AI, which has a code name. We’ll call him Eddie the Ghost Editor.
If the authors have seen or queried any of Eddie’s changes, it is first and foremost crucial that you come up with a plausible explanation, and some stock excuses are provided. You do not admit that an AI made the decision, and no mention may be made of any such processing.
It’s the first time that lying has been not only part of my job description but the very first thing I’m told to do.
The next thing: you can see instantly when the AI has made changes, and there are so many mistakes in what it does. The one thing that can be said for these machines: when they make a mistake, they generally make it very systematically; they compound their errors.
But where they do make changes that are not actually wrong, they so often flatten the text into the kind of seamless mush that they’re so good at.
There’s an incredibly important point about LLMs that more people are beginning to discover. You need a little of Shannon’s theory of communication to understand this: the amount of information a signal conveys is inversely related to the probability of its occurrence. Formally, a message with probability p carries −log₂ p bits.
If the next word, and the next, and the next, and the next, become more and more predictable, then they convey less and less information.
LLMs are trained to give you the most probable answer that all the dead voices of the past rattle together to provide, the most probable way they would complete your narrative.
The most probable way is, by Shannon’s law, the way that provides the very least information.
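The arithmetic behind that claim fits in a single function. Shannon’s self-information says an event with probability p carries −log₂ p bits, so the more predictable the next word, the less it tells you. A minimal sketch (the word probabilities here are invented purely for illustration, not taken from any real model):

```python
import math

def self_information(p: float) -> float:
    """Shannon self-information in bits: -log2(p).
    The less probable an event, the more information it carries."""
    return -math.log2(p)

# Hypothetical next-word probabilities (illustrative numbers only)
predictable = 0.9    # a word the model considers almost certain
surprising = 0.001   # a rare, specific word

print(self_information(predictable))  # ≈ 0.152 bits: almost no information
print(self_information(surprising))   # ≈ 9.966 bits: far more information
```

A fifty-fifty coin flip, by comparison, carries exactly one bit; a word the model rates at 90% probability carries less than a sixth of that.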
These chatbots churn out endless strings of language, all predicated on being the most probable things people would say in this context.
It’s therefore chatter designed literally to be as meaningless as possible, while appearing to be omniscient.
It’s a very clever trick, to be sure, but it can’t last for long.
We are witnessing a truly epic bubble bursting here, with the broligarchs furiously puffing hot air into each other’s balloons. The bang cannot come fast enough. The crash will not be far behind.
There’s a really important paper called “The Ironies of Automation” by Lisanne Bainbridge, published in 1983. She was looking especially at automation in the nuclear industry.
She noted all kinds of paradoxes, along with costs such as severe stress, cognitive overload and premature burnout.
She noted that automation doesn’t actually decrease work; it concentrates it, requiring operators to make increasingly strategic decisions with limited information. See the very interesting article below.
The system absorbs the tractable parts of the job and returns to the human a workload that is larger in volume, broader in scope, and denser in the kind of high-stakes, high-attention judgement that cannot be delegated. The work does not diminish. It concentrates.
Everything is designed to dull your normal operating senses, because routine maintenance has been taken over; then the system leaves you completely in the lurch when there’s an unpredictable crisis.
This thing about concentrating the work is just so obvious with my current editing work.
I’m so used to editing documents that have been worked on by other people, sometimes many other people. I often have changes by six or seven different editors indicated, each with their own colour. I have absolutely no problem with any of that, it’s routine.
Until I hit the editor called Eddie. Then everything changes.
For one thing, you literally cannot take one single thing Eddie does as being useful or in any way meaningful.
I am a notoriously light editor. I only make changes when they’re absolutely necessary. I have an iron rule, adopted after being quizzed by one too many pedantic authors: I never, ever make a change I can’t explain on the spot. If I can’t think why I’m changing it, I don’t change it.
In all my other editing, no matter how dumb Authors 2 and 4 may be, you can at least try to infer some kind of logic in what they’re doing. They are, after all, human. Which means they make finger slips, among other errors.
But Eddie —
The trouble is, my brain is hard-wired to look for the reason behind a change. But I’ve very quickly learned to undo all of that wiring for Eddie’s work. If I can’t see an obvious reason for the change, one that I can explain and justify straight away, I just automatically put it back to the original.
Why do I do this?
My work is under constant client-approval assessment, and there’s no one fussier than the academic client. No one. The international agencies can be surprisingly loose by comparison.
So: I am not going to have some pedantic professor asking me three weeks down the line to explain just why I changed his precious This to ugly That, when the truth is there really is no good reason; it’s just the machines churning out the dumbest language they can. Every time.
Therefore, much of my work these days is simply undoing the work done by a literally brainless AI assistant. It’s actually quite a pleasure not having to think about what someone might have been thinking when he or she changed This to That. You can be sure there’s no one thinking at all in there.
Concentration of work: absolutely. Exhausting: a whole new level. Checking machine output for defects is just about the lowest our profession can go.
What’s most interesting, though, and why I’m burying it down here, is that we are forbidden from revealing in any way that these changes are being made by AI. The whole point of the business is that the work is done by a human editor, a native English speaker who takes full personal responsibility.
AI is NOT repeat NOT repeat NOT a selling point. (Sorry, I need the triple negative). AI is to be kept secret. People do not trust AI one little bit, not with their precious academic babies, and they are quite right. If authors query an AI glitch, we must think of a plausible excuse for the change. A little bit of real creativity in our lives.
So let two things be noted in this here Note: (1) LLMs by design churn out as little information as is mathematically possible in any given situation; and (2) the only place where any meaning is generated in any loop is where there is a human being in that loop. No have human, no have any meaning. There has to be a human being at least trying to decide whether something in that text is meaningful and correct, or merely highly plausible but complete garbage. And doing the really hard work of synthesizing it all into something truly meaningful and as concise as possible.
OK. Let me get back to my editing. This is a whole new game, and it’s quite fascinating, actually. I said long ago that I was going to have endless AI content to edit. Now I’m looking at the endless job queues and absolutely quailing: there are hundreds of “assisted” edits rolling by on the board. The manifold joys of being proved right.