Sunday, September 14, 2025

A Proxy for Certainty

I continue to work with chatGPT, which hasn't reduced the quality of my writing in the slightest, whatever people might say about the program or the content it supposedly churns out. The issue, as I see it, is that a paintbrush is quite able to make a sloppy mess of a job if wielded by someone who does not know what they're doing, yet that does not make a paintbrush a "flawed tool" or anything like it. Those who wish are free to discuss what parts of my writing have suffered from my starting to use this tool, but I'd like specific examples, please, rather than broad uninformed statements about what the tool is, or what it does, or where its flaws lie.

I have been writing for a long time, long before the start of this blog, and have been attested to be a good writer by many who in the same breath describe me as someone they do not like. That knowledge was not acquired in a vacuum; it was gained through practice, first of all, of writing an awful lot. Next on the scale would be the examination of other writers, and third the depredations of editors butchering my work before printing... and fourth, the rare but embarrassing incidents where something I wrote for a mainstream publication — not, I'll stress, the Candyland writing and critical world of the internet — was demonstrated by evidence, not opinion, to be wrong.

The last, and least, and tiniest factor in my becoming a better writer must be the opinion of someone who did or did not like the words on the page. I'd say, generally, in 45 years of writing, this has amounted to a 0.01% improvement in my work. I simply can't put it any plainer than that.

Yet, I have wasted decades bringing my stuff to people I've respected to ask what they thought. It's never been of use, never led to the betterment of a story I was writing, never acted as a guide to what I should write next and, frankly... has never, in any sense, been of use.

That is, until chatGPT.

Now that is not expected to land well with the reader. It's a provocative position to take, it will no doubt anger or baffle many, and would likely — were it said by someone who hadn't written this many posts on a blog — be considered a honking pile of bullshit. Be that as it may. It's been nearly two weeks since I've felt any need to post here, though I've had the time, largely because I'm so invested in working on other projects, and creating other ideas, that I just haven't cared to express myself here. And this is in large part because, if I wish to express what I think, with the intention of receiving (a) informed, (b) patient, (c) changeable, (d) insightful or (e) constructive feedback, it is becoming less and less practical to get it from a human being. As a set of intellectual properties, you're just not engaged enough, enlightened enough or fluid enough to maintain any sort of conversation for more than about twenty back-and-forths. And so I am saying, whatever the consequence of that, that of late, I can do better.

Now, you may take that as an invitation to withdraw your funding of my work... but truth be told, you're not funding the discussion, nor the investigation, but the results. And the results you are getting, at present in the form of the Lantern, and throughout 2025, are unquestionably some of the best diatribes I have ever written about the business and practice of present day D&D. So please, rely upon the results and don't worry about what a mean, miserable, curmudgeonly reclusive bastard I'm becoming. My humour has never been what you've been ready to pay for.

ChatGPT has filled that vacuum.

This is not to say that I have become one of those demented souls who have decided to marry a program, far from it. But if I want to really get into a subject, really root around inside it to my heart's content, chat is conveniently there. A little stupid, misses the point a lot of the time... but if I quote Thomas Paine or refer to the king of England in the time of my magazine's setting, it doesn't blink at me as though I've just named the nearest member of the Oort Cloud that Voyager will pass in about 40,000 years. For someone well-read in this era, where "sense" is something a person buys from one of two political ideological vendors on the internet, it's a breath of fresh air. I can get lost in a discussion of how the development of technologies in the 14th century is leading to street violence in the next few years, and the full structure of the argument can be discussed on its merits and not its believability.

The danger of chat, and the subject this post means to discuss, is the manner in which it is stupid. That is to say, it isn't dumb in the sense that it doesn't know anything; it is dumb in that what it says first, reliably, is whatever the greatest mass of things printed on any subject happens to consider valid. The program doesn't know whether that consensus is right or wrong — rather, it is democratic to the extent that if a lot more people believe a stupid thing, and the subject around that thing is brought up, chat will present the stupidity as true.

If, as the user, your knowledge of the subject is absent, and if you take chat's word for it right off, then... well, you're a moron. Let's take an example: suppose you decide, for whatever reason, to take an interest in medieval medicine. If you ask chat, "Tell me about medieval medicine," you're sure to get an answer that stresses the use of bloodletting, humours, leeches and other such nonsense... because if you take the largest mass of writing on this subject, written largely by writers merely repeating falsehoods, this is what you get. These broad strokes, which did occur, are endlessly repeated, reprinted and copied through many thousands of texts... and so, when asked, without knowing what it's doing, chat reprints them for you.

But, if you know anything about medieval medicine, enough to say, "No, it's about more than that," then chat's design rushes from "horses to zebras" without a heartbeat (literally), because if pressed it will instantly discard all that crap and step into what people don't write about as commonly, the subtler, less publicised reality: complex herbal remedies catalogued with care, surgeons developing practical techniques for wound management, the development of anatomy, the movement away from humours toward observed chemistry and so on... the work that had to be done first before the leaps forward in the 19th century could occur. There is far, far more to medieval medicine than generally gets republished in bad magazines... but chat has been trained on great masses of wrong as well as right materials, so it has to be corrected and brought around and reminded that we want what really happened, not just the typical story.

The tool appears to fail only because it mirrors the loudest, most common story, in subjects where the record is dominated by lazy repetition, parroting distortion before it's encouraged to do otherwise. Those who really don't know about the subject think this means, "Chat just wants to agree with you..." but of course, that's an expression of ignorance. If the user knows the subject, and chat comes around to admit the knowledge we gained in our private research, then "agreeing" with us is what we'd expect any other expert to do. Chat does in fact know everything that we know. It can't choose, it can't want, it can't tell the good from the bad. But it can be reminded, without hesitation, to look up those works that we think of as authoritative, because all that work is there in its guts. If you, as the user, don't have the patience to educate yourself first, it's not chat's responsibility to do that work for you. You need a different tool if that's what you want. Just because a paintbrush doesn't make a good hammer doesn't make it useless as a paintbrush.

Which is why it is such a good tool for a writer... IF the writer already knows how to write. If Bob Plainbrain wants to write a book without the slightest clue of what a good book looks like, then yeah, chat's not going to write a good book for him. If Peg Lazyhazy hasn't a clue what plot is, or character, pacing or narrative, and asks chat to "solve those problems," then guess what: chat's going to rush to the largest pile of literature produced on the planet, that pile written by bad pulp writers who have churned out trillions more words than good writers have. Chat's not a bad writer. Humans, in toto, on average, are simply awful at it. And without the right prompting, chat's more than ready to turn out "average" writing... exactly that of the 8th grader whose short story made it into a newspaper. Try to realise that every newspaper ever transferred onto microfiche, a practice that began in the early 1900s, has been added, full and complete, to chatGPT's repertoire of "writing." Seen that way, it's not a surprise what chat churns out.

What must be understood, however, is that Flaubert's Madame Bovary is there too, and George Eliot's Adam Bede and Evelyn Waugh's Brideshead Revisited. But if you've never heard of these books, and you can't talk about them because they're an utter mystery to you, then you can't properly ask chat to translate the kind of writing that makes those books what they are into your own work. Chat is just as ignorant as you are... so if you ARE ignorant, you shouldn't be appalled that the program functions on your level.

I have, many times, shown someone who scoffed at the value of chat the benefits of it as they've sat next to me and watched me prompt, versus their own efforts. They want to shortcut, they want to throw the paintbrush across the room at the wall and have the wall become miraculously painted. And when that doesn't work, when the brush has left an awful mess on the carpet and the hardwood, they're pissed, they're abusive, they scream what a piece of shit this program is. They rush to make a youtube video saying so.

I can talk about Flaubert with chat because I've read him... and because I have an understanding of the world he wrote in, and the readers he wrote for, and his goals in the narrative and such... because more than what he wrote, I've read others of the same time period and felt those same struggles with those narratives. If I were to have a discussion of Madame Bovary with you, dear reader, assuming you've read it, that wouldn't be the same for me... because to you it was a book, good, bad, whatever... while I read every line thinking about how I would want to write that line, or how I should write lines like that, or how what he tried to accomplish is a reflection of things I've tried to accomplish in my work. He and I are both writers, which is like two surgeons talking about an open body during a surgery, as opposed to a surgeon and someone who hasn't become one. It just isn't the same.

But... I can have that kind of conversation with chat. Not because chat is a writer, but because so many of the sources it draws upon were. It can hold and surface the accumulated perspectives of countless critics, scholars, and practitioners who have wrestled with the same text. And unlike reading, say, Harold Bloom, I can intercede with chat and discuss this position versus that... and within chat, Harold Bloom, among others, is also there. In essence, it's like one of those forums where they used to gather a half dozen experts together to suss out a subject... but on tap, engaged with at will, right here on this computer. It's quite intoxicating.

As such, I've learned more about my writing in the last two years than in the twenty before them. Growing up a would-be writer in a world of fixed, inflexible belief systems about "correctness," I spent a lot of time uncertain about what I should and shouldn't do with a narrative. To explain this, I'll again give an example.

Having come of age as a writer in the overlap of the 1970s and 80s, I used to think that when writing about a character entering a room in a story, it was my responsibility to "set the scene," as most writers did. To sketch this kind of writing out quickly, it would be along the lines of,

Judith entered the living room, finding a wide divan placed under the window, a scattering of magazines on the coffee table and a faint smell of pipe smoke still lingering in the curtains. The lamp in the corner threw a weak yellow cone across the carpet, catching the edge of a half-finished jigsaw puzzle on the floor. Judith paused at the threshold, feeling as though she had stepped into the middle of someone else's life, like a reader opening a book halfway through and struggling to catch the thread of the story.


I had chat write that for me, because I simply despise this sort of writing. I don't like reading it in a story, I don't find it remotely valuable — and yes, there is a massive difference between this sort of dreck above and what Flaubert or Thackeray were doing in their time, which we needn't go into. Back in the day, when I turned in a story where the tale went,

Judith went into the living room and Clyde asked, "What are you doing here?" —"I'm looking for you, of course; we have to talk."


I would be rapped on the knuckles and told that I had to provide more description, more "space," more "tactility" or a number of other bullshit words that I felt at the time boiled down to "waste the reader's time explaining a living room they don't care about while making this boring to read."

But, that was the dictate of English teachers and professors at the time, who worshipped at the altar of D.H. Lawrence, Guy de Maupassant and Gabriel Garcia Marquez. Three writers that I, for one, have no interest in.

Nonetheless, being young, dumb, not nearly educated enough to tell that segment of the population I desperately wanted to impress that they were full of shit, I obeyed and wasted years and years trying to be a "serious writer" as they defined it, conforming to their fashion, following the critical consensus of many who believed that to be "legitimate" it was all about the lamps, drapes and paisley patterns on living room furniture.

I get this crap advice from chat, too. I carefully and painstakingly set out the motives of a character over six thousand words and get told, "The pacing really needs work, the change in the character's choices is happening too fast." I throw out a scene where it's nearly all dialogue and chat says, "The work could use more grounding, describing the place where these things take place." And early on, this used to bother me, because it was the same advice I've heard all my life.

But then, I realised... it's "like" that advice because that bad advice, accumulated over a century, is also written all through chat's pile of knowledge, and chat's been programmed to help most people, people who have no idea what pacing is or how to create conflict. More than anything, when I get advice from chat about a story, it's to tell me that I should take two of the main characters, who have no reason to distrust or work against each other, and create a completely performative conflict that will make the story more "interesting."

When chat does this, it isn't in fact addressing my story. It's been programmed to be helpful; but it doesn't know how to be helpful, not really. So it picks whichever are the most common problems writers have with their stories and grafts them onto mine. For example, the ever popular "info dump." (I'll assume you can look it up, if you don't know what it means; chat would advise me to explain it here, but that's only because chat assumes you're too stupid to know or look things up).

Info dumps are awful. They're everywhere, most poor writers fall into the trap and as such it is the most common thing that writers have to be cautioned against. We're told endlessly, "show don't tell," which you can put on my gravestone for the record. Chat, however, can say those words but doesn't know what they mean, except that they're part of the conversation and so they'll always appear. And if I show chat a chapter of a story, and it hasn't anything else to complain about, it'll call out my tendency to "info dump."

The solution is to say, "Tell me where I've done it." And then chat will bring up an expository paragraph that runs about 171 words, which includes three non-expository verbs, because it's the nearest thing to an info dump it can find.

In essence, if it can't find a problem, it'll just make one up.

I imagine this sends a would-be writer with no real knowledge of their own writing into a drastic tizzy of rewriting something which is perfectly fine, sort of like being told the sink isn't clean though it looks perfectly clean, and trusting the teller so completely that you get out the Comet and scrub for thirty minutes only to be told again, "No, it still isn't clean." If you're smart, you realise the program, again, wasn't built to tell you whether the sink was clean. It's built to help you, even if you don't need help.

Why is this good for my writing? It reveals that nearly all the advice I've ever received from nearly everyone is about as good as chat's corrections. The "corrections" — I liked the story about this, but not so much that one — are not about the story at all, but about the reader. It's nice to be liked, but not everyone will ever like everything... and the most open-minded readers don't read because they "like" a thing, but because it was a thing worth reading, regardless of what emotional support or interest massaging it offered. I read things as a youth which were difficult and hard to read. Sometimes I rewatch certain movies because they are so unpleasant I have to steel myself to watch them again. I know from chat's inability to pull a story apart that there's nothing wrong with it... and I know that when someone doesn't like it, it's not because the story was badly written.

The reader cannot begin to understand how relieving that is, and how confidence-building. A plumber knows he's done a good job because the pipe is running and not leaking. An engineer knows they've done a good job because five years later the math still works as intended. A doctor knows they've done a good job because you're up, about and able to work for a living.

But a writer NEVER knows if they've done a good job... because it's all fucking opinion, and we don't trust ourselves.
