But boy, I really had to laugh when I read this in Kai-Fu Lee's "AI Superpowers":
Internet AI already likely has a strong grip on your eyeballs, if not your wallet. Ever find yourself going down an endless rabbit hole of YouTube videos? Do video streaming sites have an uncanny knack for recommending that next video that you've just got to check out before you get back to work? Does Amazon seem to know what you'll want to buy before you do?
If so, then you have been the beneficiary (or victim, depending on how you value your time, privacy, and money) of internet AI. This first wave began almost fifteen years ago but finally went mainstream around 2012. Internet AI is largely about using AI algorithms as recommendation engines: systems that learn our personal preferences and then serve up content hand-picked for us.
Fair enough, except the algorithms now seem to favor (on Facebook) stuff that people have paid Facebook to promote and (on YouTube), inexplicably, videos I've already watched -- complete with the red bars on the thumbnails showing I have indeed watched them.
Then there's this, a few paragraphs later:
Adopting those same methods in a different context, a company like Cambridge Analytica used Facebook data to better understand and target American voters during the 2016 presidential campaign. Revealingly, it was Robert Mercer, founder of Cambridge Analytica, who reportedly coined the famous phrase, "There's no data like more data."
First, that really odd phrasing: a company "like" Cambridge Analytica. Why the "like"? Maybe he's trying to suggest that other companies besides this one are mining data. Still, the phrasing is really weird.
And yes, it's *that* Cambridge Analytica.
Lee does go on to acknowledge the scandal, but in more of an "Oops, we got caught" way than anything else, which kind of ties in with the "What, me worry?" ethos Silicon Valley and China writ large seem to take in developing AI without really, you know, considering the ethics of it all.
He also mentions a few other uh-ohs:
Social media using internet AI to suss out "fake news," until, you know, they stopped doing that because purveyors of fake news wanted to pay them, and fake news got engagement, and money and engagement are a lot more important than, you know, stopping the spread of fake news.
AI helping teachers tailor their teaching to individual students, while the real AI concern in academia is students using generative AI to fake-write everything from essays to discussion posts to journal entries so they can take the learning out of learning and get on to much more interesting things, like, I guess, re-watching the videos the algorithm tosses up for them to watch.
So much of this AI stuff is a race to the bottom...
In a way, of course, it's the bleeding-edge thing that's biting the ideas put forth in this book. There's a lot of good that can possibly come from AI, but what we're seeing mostly is that race to the bottom. I feel the same way about Clay Shirky's rosy look at the Internet in "Here Comes Everybody" and "Cognitive Surplus," where he sees the good the Internet offers. He'd have to write different books if he were to update those from the early 2000s.