You Look Like a Thing and I Love You
by Janelle Shane, Ph.D.
published by Voracious (Little, Brown and Company)
2019
I’ve enjoyed Janelle Shane’s site, aiweirdness.com, for some time, and when she mentioned that she had published a book on the same themes, I couldn’t resist it.
What are her themes? Machine learning, mostly, and how difficult it is to train a neural network to do what you really want it to do. You THINK you are training your software to recognize cancerous lumps, and it does well on your training data, but it doesn’t work so well in real life. In retrospect, you trained it with images of cancerous lumps that had rulers next to them to show the size of the lump, while no one bothers to measure (or photograph rulers beside) benign lumps. Your program relied on the rulers to decide whether a lump was cancerous: ruler = yes, no ruler = no. You invented… a RULER-DETECTOR.
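The book itself stays away from code, but here is a toy sketch of my own (synthetic made-up data, scikit-learn, every variable name hypothetical) showing how easily a model becomes a ruler-detector:

```python
# Toy sketch (mine, not the book's): a "tumor classifier" that
# actually learns to detect rulers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# The real (but weak, noisy) medical signal, e.g. lump irregularity.
malignant = rng.integers(0, 2, n)
irregularity = 0.5 * malignant + rng.normal(0, 1, n)
# The accident: in the training photos, ONLY cancerous lumps got rulers.
ruler = malignant.astype(float)

X_train = np.column_stack([irregularity, ruler])
model = LogisticRegression().fit(X_train, malignant)
print("training accuracy:", model.score(X_train, malignant))  # ~100%

# Out in the clinic, nobody photographs lumps next to rulers.
malignant2 = rng.integers(0, 2, n)
irregularity2 = 0.5 * malignant2 + rng.normal(0, 1, n)
X_real = np.column_stack([irregularity2, np.zeros(n)])
print("real-world accuracy:", model.score(X_real, malignant2))  # ~50%: a coin flip
```

The model scores perfectly on the training set because the ruler is a perfect predictor there; with the rulers gone, it’s guessing.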
Why am I reading about this geeky, specialist topic? Because I have to deal with the limitations of “AI”s of various designs all the time. Voicemail hell? That’s a not-very-intelligent program imitating an AI, possibly with AI voice recognition. Applying for a job? Software is screening my resume. Getting a laboratory test? Software may be screening that for me, too!
If you’ve ever gotten into an argument with your phone, you know that these programs are… not perfect. Depending on whether you have a high or low voice, they may not seem to work at all. My father is still amused that one of his friends couldn’t get the voice assistant on her phone to understand ANYTHING she said, while my father (who sounds like Darth Vader) could ALWAYS be understood. Why? Because the software was trained on voices like his, not on voices like hers.
Janelle Shane finds amusing ways to talk about how neural networks and other near-AI programs work, what they are good at, why they fail at so many tasks, and how the data sets they train on can make them vulnerable to manipulation.
You will laugh, as I did, as an AI trained to generate metal band names learns to generate ice cream flavors! You’ll laugh often, really: Ms. Shane has some good stories, and good quotes from people who fought to teach their AI something specific, only to have the AI interpret them literally and win. The challenges she sets up for the simple neural nets she builds are VERY FUNNY.
It isn’t just jokes and witty examples: you won’t laugh at the idea of a navigation-bot telling you to drive TOWARD a fire (because there is less traffic in that direction!), nor at the racial and gender biases that oblivious employees train into software, nor at the fact that image recognition programs trained on the same free (and manipulable) data sets can be mis-trained to see things that aren’t visible, obvious, or correct to humans. (A toy sketch of that last trick follows below.)
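Since she shows how shared training data can be gamed, here is one more toy sketch of mine (same caveats: invented numbers, nothing from the book) of planting a “backdoor” in a public training set:

```python
# Toy sketch (mine, not the book's): poisoning a shared data set so a
# model learns a secret trigger.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# Honest examples: the label genuinely follows the signal.
signal = rng.normal(0, 1, n)
label = (signal > 0).astype(int)
trigger = np.zeros(n)  # an odd pixel pattern no real photo has

# A saboteur uploads 300 examples pairing the trigger with the WRONG label.
poison_signal = rng.normal(0, 1, 300)
X = np.column_stack([np.concatenate([signal, poison_signal]),
                     np.concatenate([trigger, np.ones(300)])])
y = np.concatenate([label, np.zeros(300, dtype=int)])

model = LogisticRegression().fit(X, y)
print(model.predict([[1.5, 0.0]]))  # [1]: a clean input, classified honestly
print(model.predict([[1.5, 1.0]]))  # [0]: same input + trigger, answer flipped
```

The model behaves normally on clean inputs, so the sabotage is invisible until someone presents the trigger.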
Maybe there’s a rare but catastrophic bug that develops, like the one that affected Siri for a brief period of time, causing her to respond to users saying “Call me an ambulance” with “Okay, I’ll call you ‘an ambulance’ from now on.”
(Excerpt from: Janelle Shane, “You Look Like a Thing and I Love You.” https://books.apple.com/us/book/you-look-like-a-thing-and-i-love-you/id1455076486)
It is good (and refreshing) to think hard about the serious implications of our rush to depend on machines, and about the hazy way we treat machines as neutral decision makers, when nearly every application we have developed for them is anything but neutral in its inputs, its programming, or its impact.