AI Makes the World a Weirder Place, and That’s Okay


Dr. Janelle Shane wanted to create a fun and approachable way for people to learn about AI, so her new book focuses on the bizarre and hilarious things it has produced.

By S.C. Stuart

Artificial intelligence can do some amazing things, but it’s not perfect. Research scientist Dr. Janelle Shane has been cataloging the sometimes hilarious, sometimes unsettling ways that algorithms get things wrong on her website, AI Weirdness, and she dives deeper into the topic in her new book, out this week.

Time and time again, Dr. Shane’s neural nets ingest the data she throws at them and spit out strange stuff: from inedible recipes (horseradish brownies, anyone?) to bizarre cat names and paint colors from hell.

At first glance, Dr. Shane’s book, You Look Like a Thing and I Love You: How AI Works and Why It’s Making the World a Weirder Place, seems like a lighthearted, cartoon-enhanced look at AI, but there are some lessons about human vulnerabilities, too. We spoke to Dr. Shane to find out why she wrote the book and what she hopes we’ll learn from it.

PCMag: Dr. Shane, can you explain your day job at Boulder Nonlinear Systems, where you make computer-controlled holograms for studying the brain and photonics (light-steering) devices?
Dr. Janelle Shane: Sure! I work as a research scientist at a small company that develops new ways to steer light. I get to work in a bunch of different areas, because there are lots of applications. Studying the brain is a big one: there are scientists who use our programmable holograms to study the way neurons are connected in the brain of a mouse.

That sounds weird and wonderful in itself. What’s the strangest project you’ve ever worked on?
One of the strangest projects I worked on was developing a virtual reality arena for mantis shrimp.

The mind boggles. Okay, now give us the backstory on writing this book.
We’re dealing more and more with artificial intelligence in our daily lives: algorithms that determine which ads we see or who gets a loan, all the way to partially or completely human-powered systems that still get sold as “AI.” I wanted to create a fun and approachable way for people to learn about AI, because almost everyone’s going to need to know the basics.

Dr. Janelle Shane

How did that title come about?
The title is from a neural network that I trained on a collection of existing pickup lines. It didn’t quite get the hang of the cheesiness and innuendo of the originals. In my opinion, You Look Like a Thing and I Love You is better than all of them!

Indeed. On a serious note, in the book you say: ‘As more of our daily lives are governed by algorithms, the quirks of AI are beginning to have consequences.’ Are you giving us a wake-up call?
I think we are already beginning to realize just how harmful a biased algorithm can be: how it can deny people parole, health care, or an interview just by unknowingly copying the biases it sees in human behavior. Already people are challenging some of these biased algorithms in court and winning. I’d like to see not only the bias, but also the why of the bias, become common knowledge. Other AI quirks can be harmful too, and putting too much trust in the intelligence of AI is often at the root of them.

When we interviewed Dr. Yolanda Gil, President of the Association for the Advancement of Artificial Intelligence, she talked about the need for ethics within the field. But you’re going further than that, arguing that any black-box AI is a problem if we can’t see how it reached its conclusions.
I think ethics in AI does have to include some recognition that AIs generally don’t tell us when they’ve arrived at their answers via problematic methods. Usually, all we see is the final decision, and some people have been tempted to take the decision as unbiased just because a machine was involved. I think ethical use of AI is going to have to involve examining AI’s decisions. If we can’t look inside the black box, at least we can run statistics on the AI’s decisions and look for systematic problems or weird glitches.
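
To make that concrete, here is a minimal sketch of what “running statistics on the AI’s decisions” can look like in practice. It is not from the book: the CSV file, the “group” and “approved” columns, and the 80 percent threshold are hypothetical placeholders, and real fairness audits use more careful metrics.

```python
# Minimal sketch of auditing a black-box model from the outside, using only a
# log of its past decisions. The file name, column names, and 80% threshold
# are hypothetical placeholders, not a real audit standard.
import pandas as pd

decisions = pd.read_csv("loan_decisions.csv")  # one row per applicant

# Approval rate per demographic group; no access to the model's internals needed.
rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Crude "four-fifths rule"-style check: flag any group whose approval rate is
# below 80% of the best-off group's rate.
flagged = rates[rates < 0.8 * rates.max()]
if not flagged.empty:
    print("Possible systematic skew against:", list(flagged.index))
```

Even a crude outside-the-box check like this can surface the kind of systematic skew Dr. Shane describes, without ever opening up the model itself.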

How can we do this?
There are already researchers running statistics on the decisions of some high-profile algorithms, but the people who build these algorithms have a responsibility to do due diligence on their own work. This is in addition to being more ethical about whether a particular algorithm should be built at all. If facial-recognition algorithms tend to be used against minorities, and emotion-recognition algorithms tend to leave out people who are non-neurotypical, should we build them at all?

Good point. As you’ve illustrated, some AIs, particularly GANs (generative adversarial networks), in which AIs ‘compete’ with each other, can produce highly creative work, especially in the magical lands of fashion and design, but we don’t want weird in cybersecurity, right?
Ha, yes, there are applications where we want weird, non-human behavior. And then there are applications where we would really rather avoid weirdness. Unfortunately, when you use machine-learning algorithms, where you don’t tell them exactly how to solve a particular problem, there can be weird quirks buried in the strategies they choose. Researchers have shown that by injecting just a few carefully chosen examples into public malware databases, they can later design malware that makes it past AI-based malware detectors trained on those databases. We haven’t seen any real-world examples of those kinds of adversarial attacks, but people are concerned about them.

Dr. Aleksandra Faust, from the Google Robotics/AI division, told us about the difficulties of teaching robots to navigate environments. You have a great example in the book about an AI ‘figuring out’ how to get a ‘body’ from point A to B.
If the AI can design whatever body plan it likes, it tends to build itself into a tall tower and fall over, thus landing at point B. I love this example because it’s come up time and time again, ever since people have been working on AI-controlled robots. It’s just much easier to fall over than to learn to walk.
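
For readers who want to see how that loophole arises, here is a toy sketch of an evolutionary search gaming a “distance traveled” reward. It is not the simulator from the book: the “body” is reduced to a single height parameter, and the made-up physics simply says that a body that tips over lands its tip as far away as it is tall.

```python
# Toy illustration of the "fall over instead of walking" loophole, not the
# actual simulator from the book. Each candidate body is just a height; the
# invented physics says tipping over covers `height` meters, while genuinely
# walking only manages 0.5 m.
import random

def distance_travelled(height):
    walked = 0.5      # hard-earned distance from actually walking
    fell = height     # free distance from simply tipping over
    return max(walked, fell)

population = [random.uniform(0.1, 1.0) for _ in range(20)]
for generation in range(50):
    # Keep the best half, then mutate the survivors to refill the population.
    population.sort(key=distance_travelled, reverse=True)
    survivors = population[:10]
    population = survivors + [h + random.gauss(0, 0.1) for h in survivors]

best = max(population)
print(f"Best evolved body height: {best:.2f} m (it 'travels' by falling over)")
```

Because falling scores better than walking under this reward, the search happily grows ever-taller towers that topple toward point B, exactly the kind of loophole Dr. Shane describes.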

As you point out in the book, AI also has a memory issue.
In general, AIs have an easier time with applications where they don’t need a lot of memory. I’ve got lots of examples of text-generating neural nets whose memories only go back a sentence or two, or even just a few words. One of them, trained on dream diaries, has the incoherence of a dream itself, switching settings, characters, and scenarios even mid-sentence. Lately, some big new neural nets have gained new ways of keeping track of long-ago information, and that has helped somewhat. Still, a neural net-written story tends to lose track of the plot pretty quickly.
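
A toy example makes the memory problem easier to see. Dr. Shane’s models are neural networks, not the simple bigram chain sketched below, but the failure mode is similar: when the next word is chosen by looking only at the last word or two, the output stays locally plausible while the larger “plot” drifts. The training text here is invented.

```python
# Toy illustration of short-memory text generation. This is a bigram Markov
# chain, not one of Dr. Shane's neural nets, but it shows the same failure:
# each next word depends only on the previous word, so the "story" wanders.
import random
from collections import defaultdict

text = ("i was in a house and the house was a boat and the boat "
        "was on fire and my teeth fell out and i was flying").split()

# Build the model: for each word, the list of words that followed it.
model = defaultdict(list)
for prev, nxt in zip(text, text[1:]):
    model[prev].append(nxt)

word, output = "i", ["i"]
for _ in range(25):
    word = random.choice(model[word]) if model[word] else random.choice(text)
    output.append(word)

print(" ".join(output))  # locally coherent, but the setting keeps shifting
```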

The Tesla tragedy of March 2018, when a driver died because his car didn’t register a flatbed truck in front, has been covered a great deal. And you make the point that full autonomy is probably never going to happen, right? There are too many variables?
It’s much easier to make an AI that follows roads and obeys traffic rules than it is to make an AI that avoids weird glitches. It’s exactly that problem-that there’s so much variety in the real world, and so many strange things that happen, that AIs can’t have seen it all during training. Humans are relatively good at using their knowledge of the world to adapt to new circumstances, but AIs are much more limited, and tend to be terrible at it. On the other hand, AIs are much better at driving consistently than humans are. Will there be some point at which AI consistency outweighs the weird glitches, and our insurance companies start incentivizing us to use self-driving cars? Or will the thought of the glitches be too scary? I’m not sure.

On a more positive note, you highlight the AI known as Quicksilver, which is redressing the gender balance on Wikipedia by automatically generating entries for scientists who happen not to be cisgender male. Do you have some other examples of ‘good’ AI?
I’ve talked to translators and audio transcribers who use AI tools as a first draft. Like the Quicksilver articles, the product still needs human editing, but it saves a lot of time. Another area I really like is creative use of AI, including artists and musicians who use AI generation and filtering as a creative tool. I’m really looking forward to Robin Sloan’s upcoming novel, in which he generates some weird, evocative phrases and passages using a custom-trained, text-generating neural network.

Quick backstory: You did your PhD in photonics at the University of California, San Diego, after a master’s in Scotland and an undergraduate degree in electrical engineering at Michigan State University. How did you get into this field in the first place?
I first encountered machine learning at Michigan State, where I attended a fascinating lecture by Prof. Erik Goodman about his work on evolutionary algorithms. He told some of the same kinds of stories I do in my book-of algorithms that misinterpret their tasks in amusing ways, or that arrive at unexpected (and sometimes unexplainable) solutions. I started out working on evolutionary algorithms before moving gradually into optics, but I’ve always found them fascinating.

Finally, you point out: ‘the narrower its task, the smarter an AI seems.’ Is your argument that AI should only be used for very specific tasks with clear boundaries, or that AI really only functions well not just with human oversight but as IA, human intelligence augmentation?
I think there’s a place for both kinds of AI. We’ve seen AI do amazingly well on narrow, well-constrained tasks like playing chess or Go. In other applications, AI can be really useful even if it’s not perfect, like filtering spam or tagging photos. When we need to avoid the glitches, and especially if there’s potential for harm, we do still need human oversight. Even if we think a task is narrow enough for AI, we need a human to check that the AI really did do it right. We humans do broad, difficult tasks so unthinkingly that we tend to be terrible judges of how hard a task is, until we build an AI to do it and it unexpectedly fails.

You Look Like a Thing and I Love You: How AI Works and Why It’s Making the World a Weirder Place is out Nov 5.

Originally published at https://www.pcmag.com on November 5, 2019.
