ChatGPT and AI Writing is Stupid and You Shouldn’t Use It

Tuesday, January 3, 2023
By Phil Elmore

As a writer who has worked for many years to become good at what I do, I have to admit to a visceral, knee-jerk response whenever anyone mentions AI writing. The proliferation of news articles and think pieces about ChatGPT, in particular, would have you believe that the robots have already taken over. Why, college professors are discovering to their horror that their students are using the AI to write college papers. The nation’s copywriters, we are told, will soon go the way of horse-and-buggy whip manufacturers. Why, it’s only a matter of time before AI is spewing out complete works of fiction, as even now it is being used to produce “art” (as long as you aren’t particular about the number of fingers your models have).

Except that this is all stupid, much of it is morally questionable, none of it is sustainable, and it’s not happening. At least not now.

Modern AI “Writers” Are Stupid (For Now)

The current generation of AI (at least what’s available to the public) is capable of doing a few things reasonably well. It can sound like it was written by a person instead of a random word generator… as long as that person has a brain injury or is just mildly stupid. It can be used to “respin” content to change the wording in a way that could allow you to use the same previously written blog post for more than one client, I suppose. The last time I tried software meant to do this, a number of years ago, it was absolutely wretched, so today’s AI certainly has set the bar higher.
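
To give you an idea of what that kind of “respinning” amounts to at its crudest, here’s a toy sketch in Python (the synonym table and function are hypothetical, not pulled from any actual product): blind synonym substitution, with no grasp of context. That lack of context is exactly why the output was so wretched.

```python
import random

# A toy content "spinner": blind synonym substitution. The synonym
# table and function are hypothetical, not any real product; they just
# show the principle behind the wretched output described above.
SYNONYMS = {
    "good": ["great", "solid", "decent"],
    "fast": ["quick", "speedy", "rapid"],
    "cheap": ["affordable", "inexpensive", "budget-friendly"],
}

def respin(text):
    """Swap each word for a rough synonym so old copy reads 'new'."""
    return " ".join(random.choice(SYNONYMS.get(word, [word]))
                    for word in text.split())

print(respin("good fast cheap hosting"))
# e.g. "solid quick affordable hosting": same claim, reworded,
# with zero grasp of context or tone
```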

For producing anything that actually sounds good, however, you still need a human being trained to write. Writing is something anyone can do as long as you aren’t particular about the quality of the output. Even machines can spit out words in arrangements that conform to predefined rules of grammar and syntax. But there is no AI right now that can produce text you’ll actually enjoy reading.

AI is a Plagiarism Engine

Non-fiction produced by AI is… mediocre. If you’re okay with your writing never surpassing a mild, “That was okay, I guess,” then certainly, you can use it; just understand that nobody’s going to be impressed, and that a comfortable, often stilted mediocrity is the ceiling. The AI will get better in time; the robot writing your content will sound less like a simpleton. But that’s the least of your problems.

Your real problem is the fact that AI text generators are essentially plagiarism engines. They have to be; the AI can’t create, so it must repurpose content that already exists. It finds this content on the Internet, drawing from whatever sources are available. The engine doesn’t care who owns what. It’s only looking for answers. Everything is fair game and, no matter how complex the rules you give it not to steal, it’s going to end up plagiarizing something eventually — in spirit if not by the letter of the content it coughs out.

Without actual human writers producing the content the AI leverages, there is no AI chatbot text or “writing.” But that’s just it: AI writing isn’t writing at all. It’s an algorithm, a pattern-recognition system that must have content fed to it before it can regurgitate something we’ll erroneously call new.
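
If you want to see that principle in its most primitive form, here’s a toy sketch in Python. To be clear, this is a simple Markov chain, not how ChatGPT is actually built; it just demonstrates the underlying point: take away the human-written source text and the “writer” has nothing whatsoever to say.

```python
import random

# A toy Markov-chain text generator. This is NOT how ChatGPT works
# internally (that's a large neural network), and the function names
# here are mine, purely for illustration. The point is the principle:
# the program can only recombine text a human already wrote.

def build_chain(text):
    """Map each word to the words that follow it in the source text."""
    words = text.split()
    chain = {}
    for current, following in zip(words, words[1:]):
        chain.setdefault(current, []).append(following)
    return chain

def generate(chain, start, length=20):
    """'Write' by walking the chain: pure regurgitation of the source."""
    word = start
    output = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break  # no human-written pattern to lean on, so it stops
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

source = ("the soldier held the line and the line held because "
          "the soldier would not break")
print(generate(build_chain(source), "the"))
```

Every word that toy produces was put there by a human first. Make the pattern-matcher vastly more sophisticated and the output gets smoother, but the fuel is still the same.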

Of Course AI Can Imitate Lawyers and Journalists

Of course, this also means that modern AI text engines can pass certification exams, write uninspiring college papers, and even take a passing shot at the legal profession. Well, why couldn’t they? In a world of yes and no answers, of rules and regulations, the greatest lawyer or college professor would be an idiot with a photographic memory. What greater repository of raw rules and simple data is there than the Internet?

I’m reminded of the kerfuffle over the political bias in crowdsourced sites like Wikipedia. “Our science articles,” sniffed Wikipedia’s apologists, “are as accurate as those of [credible] encyclopedias.” Well, of course they are — because unlike the subjective world of politics and socio-economics, science is largely factual. There is room to argue, but on the whole, it’s yes and no, right and wrong.

For an AI to spit out a (possibly plagiarized) article about a factual topic, it need only steal and reword other statements and reporting of facts, applying a template or pattern algorithm to the mishmash of data it is pulling from whatever sources. Small wonder, then, that formulaic writing like journalism is something it can manage after a fashion. It is even less surprising that the AI can apply rulesets to exam questions and get them mostly right.

AI Is Politically Biased and Racist (Like Its Programmers)

It came out in late January or early February of 2023 that ChatGPT is hideously politically biased. At least at that time, if you asked it to write you a poem admiring Donald Trump, it would refuse. Ask it to write you a poem glorifying Joe Biden and it had no problem. Jokes about women? Not allowed. Ask it to explain what white people can do better? It would gladly tell you. Ask the same about black people? Not allowed.

The double standards baked into ChatGPT don’t stop there. Jokes at the expense of men? Get ready for some tepid guffaws. Ask it to defend Donald Trump from accusations of racism? No can do, friend. Ask it to defend Joe Biden from accusations of racism? It was happy to comply in its mediocre way.

Bias is unavoidable. An AI isn’t necessarily objective; it responds according to both its programming and its input. It’s probably safe to say that the aggregate political information online leans left, because popular culture leans left. We are inundated every day with messages to the effect of, “Democrats good, Republicans bad; Left-wing good, Right-wing equals far-right equals bad,” and so on. But even if the sum total of information online that informs an AI were politically neutral, it would still be subject to human biases. There is no doubt that the programmers behind ChatGPT have their own very clear political favorites. These have been baked into AI programs like ChatGPT because the programmers don’t see their bias as bias.

I’m reminded of various interviews given and statements made by, say, Brian Stelter and Don Lemon. Neither man believes CNN is politically biased. They believe CNN’s skewed political perspective represents the median, a political center-point; anything with which they disagree is thus automatically “far-right misinformation.” They earnestly, genuinely believe this because they do not believe themselves to be ideologues. Their default assumptions are that their biased opinions are neutral statements of fact. When an AI is programmed by people who possess these preconceived political slants, there is no way the resulting output can be objective or even politically neutral.

As Bad As AI Non-Fiction Is, AI Fiction Is Much Worse

Non-fiction produced by AI is mediocre at best and horribly stilted (or merely stolen) at worst. Fiction produced by AI, by contrast, sounds like it was written by someone who was scribbling in a notebook while running up a “down” escalator. Here is a good example. I asked ChatGPT to write me a story about World War 1 trench warfare featuring a Private Jerry O’Connell. Here’s what it crapped out:

Private Jerry O’Connell was just a young man when he was drafted into the army to fight in World War 1. He had always dreamed of being a hero and defending his country, but he never expected the brutal realities of trench warfare.

Jerry was assigned to the front lines, where he spent day and night in a cramped, muddy trench with his fellow soldiers. The constant threat of enemy attacks and gunfire made it almost impossible to get any rest.

One particularly cold and rainy night, Jerry was on watch duty when he heard the sound of footsteps approaching the trench. He immediately sounded the alarm and grabbed his rifle, ready to defend his comrades.

As the enemy soldiers came into view, Jerry opened fire, taking out several of them before they could even get close. But there were too many of them, and soon Jerry was overwhelmed.

Just when all hope seemed lost, a group of reinforcements arrived to help drive the enemy back. Jerry and his fellow soldiers were able to hold the line, thanks in large part to Jerry’s bravery and quick thinking.

From that day on, Jerry was hailed as a hero among his comrades. Despite the horrors he had faced, he remained determined to keep fighting for his country until the war was won.

Now, here’s a passage written by an actual human being — me — about exactly the same topic.

O’Connell slumped into the mud, feeling it ooze down his back. His arms were numb. He no longer felt the hilt of the trench knife clenched in his hand; he no longer heard the rattling breaths of the German soldier dying in the muck at his feet. He was home again, in Alyssa’s arms, finally warm and full and safe. No one was trying to kill him. He had no blood on his hands.

“Private,” barked Sergeant Hauser above him. “Are you all well? Private!”

O’Connell looked up, barely recognizing the man. He could not seem to loosen his grip on the trench knife. The blade oozed crimson. In the mud and filth around him, the German’s blood mixed with the muck in the trench. O’Connell no longer cared.

“I… I’m not injured, Sergeant,” he managed.

The Sergeant considered him, perhaps wondering if Private Jerry O’Connell had finally broken.  With uncharacteristic compassion, he said, “Steady, lad. We’ve beaten the advance. They’ll hail you as a hero, O’Connell.”

O’Connell looked down at his hopelessly jammed rifle. He looked back to the Sergeant, but the man was already moving down the trench, checking on the dead, the dying, and those who remained.

A hero. Jerry O’Connell did not feel like a hero.

He only wanted to go home.

Now, I ask you: Which would you rather read? More importantly, which piece of work represents the level of quality you’d like to have associated with you or your business… and which sounds like something a not-too-bright tween would turn in for a sixth-grade reading assignment?

AI: A Tempting, Unsustainable Shortcut

I realize just how appealing a shortcut AI must seem. Not everyone can write. Not everyone can create art. I know several artists who are appalled at the proliferation of AI picture generators. Honestly, if an AI can squat it out, what you’re holding isn’t “art” at all. But for someone without the talent to create art themselves, the allure of a machine that can do it for them is undeniable. Whether art or writing, whether non-fiction or fiction, AI represents a shortcut of which countless people are desperate to avail themselves. That’s understandable.

The problem, however, is that AI simply isn’t good enough not to sound (or look) stupid, biased, and awkward. It’s also ethically questionable and, without a steady stream of human beings to shovel fuel into its glowing maw, it’s incapable of actually creating anything. Even when AI improves to the point that it is no longer stilted or obvious (and we’re rapidly approaching that point), even if its masters manage to curb its penchant for intellectual property theft and its transparently slanted, racist politics, AI will never have the passion or the independent judgment of a human being who knows how to create. That’s going to be an insurmountable gap in any attempt to use AI to replace a human being.

Simply put, you can’t replace a human creator with AI, and you won’t, no matter how good the computing power gets.

One Response to “ChatGPT and AI Writing is Stupid and You Shouldn’t Use It”

  1. Al

    That was awesome! Very well explained, and confirmed a lot of my preconceived notions. Really enjoyed it!
