CheatGPT and Alexander Pope
My friend Michael is a jolly good photographer, and I remember him telling me long ago, when he first started posting his photos online, that he'd had a comment from someone saying, "Your camera takes really nice photos!"
To which Michael had replied, "Thanks! Your keyboard writes really nice comments!"
Little did we know, back then, what was in store...
--
I have a sneaking suspicion that one of my clients is using a GPT to write some of his emails. I've had two or three in recent months that are just too carefully formatted, with nicely boldfaced section headings, too many bullet points and no typos, and they're just a bit too verbose: they read more like a legal document, a press release or a bad PowerPoint presentation than like a message to someone you've known for years.
My immediate reaction was that wonderful phrase I heard recently in an AI-related discussion: "Why would I want to read what somebody couldn't be bothered to write?" And if I knew that it was an LLM, and not a human, that had written it, that might have been my response. But I wasn't quite sure.
And this makes me think that accusing someone of using an AI, if in fact they haven't, could become a dreadful insult -- I'd certainly take it that way. "You write like a machine." And, actually, one of the reasons I'm fairly confident that this particular chap is using it for some of his messages is that he also sends me missives which are much more human, and sound like him, and the difference is noticeable.
Unfortunately, kids aren't always smart enough to appreciate this distinction, and schools and colleges are finding they must now emphasise, to a much greater degree than in the past, that the essay a student produces for their assignment -- the end result -- has no value in itself. Your teacher isn't looking forward to receiving it because he really wants your great work of literature to keep on his bookshelf. No, it's the process of writing that essay that is the valuable thing, and doing so is the only thing that will help you when you're in the exam room at the end of the year without the help of ChatGPT (or 'CheatGPT', as I've recently heard it called). The recent idea that continuous assessment is a fairer way to judge students than the rather artificial world of exams is therefore being turned on its head.
--
In the early 18th century, Alexander Pope published his poem 'An Essay on Criticism', which introduced us to such phrases as 'A little learning is a dangerous thing', and 'Fools rush in where angels fear to tread'.
Think about that second one for a moment. Can you imagine ChatGPT and its ilk ever coming up with that beautiful and succinct phrasing, which incorporates so much human tradition and experience in just eight words? No, of course not. It might repeat it, if it had found it elsewhere, but it would never originate it.
AI systems are trained on the bulk of the data to be found on the internet, and they statistically predict what words and phrases might come next based on what they've seen most frequently. An AI's output will always be average, and never excellent. If you're lucky, then your AI will have been trained more on quality content than on the random output of the hoi polloi, and it might produce output which is in some senses slightly above average, but it is nevertheless always just plagiarising large numbers of humans.
Another famous line from 'An Essay on Criticism' is the wonderful
To err is human; to forgive, divine.
And it was in the late 1960s -- yes, as early as that! -- that the newspaper columnist Bill Vaughan came up with a pleasing and much-quoted variation:
To err is human, to really foul things up requires a computer.
But I would suggest that we now need to revise that for our current age.
"To err is human, to excel is human, but to be truly average requires a computer."