Monday, March 25, 2019

Can it Report on Brexit . . . in the Shed?

The Guardian today ponders whether artificial intelligence will ever write enough quality novels to make human attempts at the task futile. Or somesuch.

And as I have never met Steven Poole in real life, I can’t brush aside the assumption that he is, in fact, an AI writing this article about AI writing novels for the Guardian, which itself may be only a figment of my imagination.

But going on the assumption that Mr. Poole and The Guardian do indeed exist in flesh and brick and mortar, this is the crux of what they say:

AI has been the next big thing for so long that it’s easy to assume “artificial intelligence” now exists. It doesn’t, if by “intelligence” we mean what we sometimes encounter in our fellow humans. GPT2 is just using methods of statistical analysis, trained on huge amounts of human-written text – 40GB of web pages, in this case, that received recommendations from Reddit readers – to predict what ought to come next. This probabilistic approach is how Google Translate works, and also the method behind Gmail’s automatic replies (“OK.” “See you then.” “That’s fine!”) It can be eerily good, but it is not as intelligent as, say, a bee.
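The Guardian’s “predict what ought to come next” is easy enough to picture with a toy example. Here’s a minimal sketch in Python of my own devising – nothing remotely like GPT2 or its 40GB of Reddit-recommended web pages, just a crude bigram counter that guesses the next word from whichever word came before:

```python
import random
from collections import defaultdict

# Toy next-word predictor: count which word follows which in the
# training text, then sample a continuation in proportion to those
# counts. A crude stand-in for the "statistical analysis" the
# Guardian describes, not GPT2 itself.

def train(text):
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def continue_text(counts, prompt, length=10):
    words = prompt.split()
    for _ in range(length):
        followers = counts.get(words[-1])
        if not followers:
            break
        # Pick the next word according to how often it followed before.
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

corpus = "the horse took the reins and the horse took the road"
model = train(corpus)
print(continue_text(model, "the horse"))
```

Feed something like this enough text and the guesses start to look eerily plausible, but it is still counting, not thinking – which is rather the Guardian’s point about the bee.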

Never mind that the non-AI folks at the Guardian seem to have forgotten about the AI screenplay they wrote about in 2016. (But since it was an article originally published at Ars Technica, maybe the flesh-and-brick robots at the Guardian simply forgot about it.)

(Of course, the screenplay contains the same kind of AI gibberish still being seen today, at least outside the closed labs of the ironically named OpenAI.)

The Poole article quotes the OpenAI lab, a nonprofit backed by tech entrepreneurs and focused on developing AI that can write, as saying its GPT2 AI is too dangerous to release to the general public – or to other tech entrepreneurs presumably not paying them cash – because it could be used to create “deepfakes for text.”

That’s as may be, but as Poole and the Guardian point out, humans are already busily producing so much fake news that such “deepfakes” may go unnoticed or simply be added to the fake news ball.

In other words, humans still appear capable of using their own brains to create enough ethical conundrums that we may not necessarily need computers that can do the same thing. Or, if “need” is the wrong word, we certainly won’t notice them all that much.

As I consider my long writing career, first in journalism and now in technical writing, I wonder how valuable AI writing could be. All I know is that, right now, I’m trying to revise technical documents with input from subject matter experts who have tunnel vision and are working at cross purposes, which makes me want to flee like Gossamer into the night. Or through the walls.



AI, I think at this point, is good at mimicry. But from what I’ve read of AI-generated text, it’s not at all good at interpretation. What, for example, does a subject matter expert mean to change in a document when the important minutiae are scattered across multiple emails? A human can interpret the meaning and resubmit to the SMEs for checking, but the AI, as far as I know, has no checks except the humans reading whatever comes out the other side.

There’s plenty of room to debate who would fare worse in the “Garbage in, Garbage out” battle between humans and AI. But I would give humans the edge in the realm of subtle garbage. Right now, reading an AI-written novel draws the yuks through obvious stumbles, but the stumbles humans come up with are highly subjective: what’s a stumble to one reader may be forgiven by another. Although I’ve read enough books where horses are managed by “reigns” rather than “reins” to know there are some books you should not show the horsey set.

Humans will also win in any race involving distractibility. Ergo, the title to this post.



I am, of course, no one to talk. I have not published a novel. I have, however, gone through the editing process. Seventeen times at last count on my current work in progress. I like to think I’m learning more about writing a book the plodder’s way than AI ever could learn reading Reddit-recommended web pages. Or novels for that matter.
