I want to write about artificial intelligence, or, more specifically, about large language models such as ChatGPT and the like. But first, I’m going to write about Disney animator Milt Kahl. The reasons for this, I hope, will become clear.
Affectionately known as one of Disney’s “Nine Old Men,” Kahl is considered the animator’s animator, celebrated for his skill in drawing fluid two-dimensional characters such as Shere Khan from The Jungle Book, Little John from Robin Hood, and Tigger from Disney’s Winnie the Pooh stories of the 1960s and 1970s.
He developed his signature animations without the use of human models because he wanted to animate how each unique drawn figure would move, not how a model imagined it moving. He’s known for the Kahl “head swaggle,” seen in characters like Shere Khan and Edgar, the scheming butler from The Aristocats, which shows off his ability to maintain head shape, body weight, and positioning throughout a complex animation.
YouTuber A Humble Professor, an admirer of “animation, film, and comics,” analyzes scenes from 1977’s “The Rescuers,” for which Kahl was the directing animator. The professor focuses on the animation of Madame Medusa, the film’s primary antagonist: a vain, vile woman of seemingly humble means in New York City who longs for her dreams of wealth to be realized through the discovery of a diamond long lost in pirate treasure. The professor discusses rough animation of one scene: “Despite how rough these are, Medusa’s character shines right through them. We can really feel the frustration she feels in the scene, and we can also see how her pear-shaped body influences how she sits down and scoots the stool.”
In other words, while Kahl could have used human models or relied on past animation successes (which he did with the head swaggle), he also realized that good character animation relied on how each unique character moved and talked and walked and, in the case of Madame Medusa, shrieked and climbed up on a stool and gathered her skirts when Bernard, a hero mouse in the film, arrived, having been spat out by one of her pet alligators after she whopped it on the head with her walking stick.
Kahl helped Medusa become a standout character because, through long effort and practice, he figured out how she looked and walked.
Uniquely.
Now let’s get back to ChatGPT and the like.
Essayist Evan Puschak, narrating a video essay called “The Real Danger of ChatGPT” on his channel Nerdwriter1, says, “Language is how human beings understand themselves and the world. But writing is how we understand uniquely. Not to write is to live according to the language of others, or worse, to live through edits, tweaks, and embellishments to language generated by an overconfident AI chatbot.”
Let me re-emphasize what Puschak says: Writing is how we understand uniquely. Not through the language of others, but through our own language and understanding.
That, I believe, is what large language models threaten to take from us. Artificial intelligence will win out not because it develops the ability to think like humans, but because its expeditiousness will entice us into accepting that writing the way an algorithm thinks we should is good enough.
Large language models present the world of writing with its own calculator moment, and we as writers, educators, and students have to figure out how much we’re willing to give up for the ease this new calculator offers.
This is not to say that the likes of ChatGPT are bad in every way. Like any other tool, large language models can and should have their uses. The danger lies in confronting every writing problem as a nail, and using large language models as the ubiquitous hammer.
For every student and teacher who recognizes that ChatGPT is a valuable tool for outlining, brainstorming, and the like, there is a student who takes the large language model shortcut, submits artificially written work, and fails classes because of it. Just this year I watched a student who, as an English learner, was writing passable essays and commentary succumb to the promise of ChatGPT – once, then again on a major assignment after being warned about the first offense – and a semester of effort turned into failure in a matter of days. My colleagues all share similar tales of woe. I can often hear my wife, who also teaches English, ranting in the next room of our basement as she finds another student who thought artificial intelligence was their best writing friend.
And I feel as if I failed him: Should I have done more to warn my students about the pitfalls of AI? Did I miss AI on previous assignments and fail to nip the temptation in the bud earlier, letting his confidence in the tool build? Is the course itself – its design out of my hands and in those of a committee trying their best – flawed? Is my teaching too lackadaisical?
I don’t know. But it’s hard to place all the blame on the student when the solution to a writing problem seems so clear, so easily accessible – and such a savings of time.
How do I help my students hone their ability to understand uniquely the world they live in, rather than – if you can bear another metaphor – exchange it all for a mess of pottage?
Or should I, as some commenters on large language model-adjacent videos I’ve watched suggest, just surrender, because AI is only going to get better, more human, and more undetectable?
I’ve piled on the questions here. Let’s go find some answers.