Confab 2018
Sara Wachter-Boettcher

Inclusive content, ethical tech, and you


Transcript

Kristina Halvorson: Hello, again. I forgot I had a mic on. How was the cheese? It was great? Delicious. We are seeing a few ... Somebody found a drink ticket on the floor and they were like, “Woo hoo, free $10.” Who was it? It’s mine? Okay. See now we’re just going to fight over the drink ticket. It can only be one person’s. So again, take your drink ticket, tuck it away. The sound of one million Velcros opening. Okay, excellent.

So our next speaker, I’ve known for quite some time. Her name is Sara Wachter-Boettcher. She is the author of Content Everywhere and Technically Wrong. She ... I’m going to read just a little bit about her. As a principal of Rare Union, she has led projects and facilitated workshops for Fortune 100 corporations, education and research institutions, and startups.

I asked Sara to tell me about a risk that she had taken that turned out great, and she said, “I don’t know.” And I said, “Oh, I do.” So several years ago ... What year was it? 2011? I said to Sara, “I really think you should write a book on structured content. And I’d like to put you in touch with Lou Rosenfeld at Rosenfeld Media.” And she was like, “Uh, okay. Let’s do it.” And so I was like, “Okay, I’m going to set up this phone call.” So I set up this phone call between the three of us, and at the last minute I was like, “Mm, I can’t make the phone call. But it’s okay, go ahead and talk to him on your own.” And she was just like, “Ah!” But then she ended up writing a book, and then she wrote another book. So it was a big risk that turned out okay.

So please welcome to the stage Sara Wachter-Boettcher.

Sara Wachter-Boettcher: Hello. It is great to be here at Confab again, and it is great to talk to you all about inclusive content, ethical technology, and what that means for all of us, all of the jobs that we do.

To start out, I’d actually like to talk about one of my favorite examples of problematic tech, and that is what I call the Mini Cupcake Meltdown. It is an issue that happened at Google Maps a few months ago. So back in October, they launched this update that went out to a group of iPhone users. And what they did with that update is that in addition to the normal stuff you get with Google Maps, like walking directions or transit instructions, they also decided to start telling you how many calories they thought you might burn if you walked instead of taking another form of transit. And in addition to telling you how many calories you might burn, they also decided that what you needed to know was how many mini cupcakes that might be.

Now pretty quickly people had some feedback on this. I’m going to talk about just one particular person who had a really great tweet stream about it. It started around 8 o’clock p.m. the night that she got the update. It’s a woman named Taylor Lorenz; she’s a journalist. And so her first response is just like, “Oh my god, they’re doing this thing, this is weird.” This is 8 o’clock. So she keeps on going and she’s like, “Do they realize this is really triggering for people with eating disorders? Do they realize this is kind of shame-y?” She talks about the fact that you didn’t opt in to it, that you couldn’t opt out of it. She goes on and on and talks about what even is a mini cupcake? And is a calorie count even helpful for people? The number of calories you might burn walking is not what I might burn walking. So this goes on for about an hour, until 9:03 p.m.

Here is a recap of all of the reasons she came up with. Not only was it not opt-in, there was no way to turn it off, no way to opt out. Dangerous for eating disorders, shame-y, average calorie counts are inaccurate, not all calories are created equal. A cupcake just isn’t a useful metric, like what the hell is that? Then she talks about, and this is my interpretation, not her actual wording, what pink cupcakes even mean. That they’re not a neutral food, that they’re encoded as being white, middle class, and feminine. And then the perpetuation of diet culture, which she saw as negative.

Now individually each of you could agree or disagree with any of these points. Individually you might really like calorie count information with your walking instructions. But what I want you to notice here is just this: it took an hour for her to document all of it, one hour of her time on Twitter, probably doing other stuff at the same time. It took her one hour. Within three hours, they had actually turned the feature off.

Now I have worked on some projects. I bet you all have too. How long do you think they spent on that? I bet you they spent more than three hours deciding what the frosting on the cupcake would look like. I bet you there were multiple discussions about what color it would be and if there would be sprinkles on it. And I bet you they spent more than three hours on the little microcopy that said, “That’s almost too many cupcakes.” I bet you that it took a lot of time and energy to build this in. And I love this example because I feel like it’s such a mundane, small, everyday way that tech and design can go really wrong for people. And so often we think that what we’re doing is delivering a good user experience when we make it seamless, and we make it seem very natural, to flow right along in the process. But we don’t necessarily think about how the decisions we’re making can hurt people. Or whether we should be doing this at all. If somebody said they wanted to map something, does that mean that they wanted a calorie count?

And the thing is we see this all over the place in tons of small choices. And many of those small choices are directly related to content. I have a lot of examples of this, but I’m only going to talk about a few because I have a very short amount of time. But I have a huge folder of these kinds of things.

This one is one of my favorites. My friend Dan Hon got it. So he has a smart scale, and his smart scale sends him emails. And you’ll notice that this email is not actually addressed to Dan, it is addressed to Calvin. Calvin was told not to be “discouraged by last week’s results. We believe in you! Let’s set a weight goal to help inspire you to shed those extra pounds.” You may also notice Calvin weighs 29.2 pounds. Calvin is Dan’s toddler, and weirdly, his weight goes up every week.

And the scale had only been designed ... The people who designed this product and thought about what notifications would be sent had only thought about it as a tool for weight loss, and had only thought about lowering weight as being a good thing. In fact, it wasn’t the only kind of notification that Dan and his family got, because this is one that his wife received as a push notification: “Congratulations, you’ve hit a new low weight.”

Now in her particular case, she had just had a baby. So, okay. Technically true. But think about all of the people for whom this is not a moment they want to be congratulated about. I know people who have chronic illnesses, and one of the signs that they’re getting sick is that they are losing weight. I know people who have gone through pretty intensive eating disorder treatment programs, and one of the things they actually had to learn was how to not congratulate themselves for hitting low weights, and how to not tie their worth to how much, or how little, they weigh.

There are tons of reasons that you might not want to be congratulated for hitting a new low weight, but this system hadn’t been designed around any of them. And it hadn’t let people choose why they were using the product, or how they were using the product. And the results could really be disastrous for people.

You also see a lot of examples where we end up with little copy decisions that reinforce biases about who and what is normal. On here we’ve got Cycles. Cycles is a period tracking app, and one of the things that period tracking apps often do is allow you to share information with a partner. So if you’re somebody who gets a period and you have a partner who wants to know what is happening with your cycle, you can give them access to the information. But in this one, for no reason at all, all of the language is super heteronormative. “Keep him in the loop.” Below that it says, “Just for you and him.” There is no reason the copy needs to say that. And the woman who received this found it really alienating. Her partner is not a man.

And the same thing at Etsy. This was actually sent to a woman I know, Erin Abler, and it’s a push notification she got where Etsy is trying to encourage her to buy people gifts for Valentine’s Day. Except they’re encouraging her to buy Valentine’s Day gifts for him. And she looks at that and she’s like, “Didn’t think you had any gay customers, huh?”

And of course, these are not big things. They do not ruin anyone’s life. These people can move on. They can choose to use the product anyway. But it’s these little moments of alienation, those little paper cuts, those little micro-aggressions, that make them feel like this is not for me.

And then there are other examples like this. Let me talk about this one for a moment. Okay, so this was received by a woman named Sally Rooney. She is an Irish novelist. Her novel is actually wonderful, I just recently read it. But she received this and she shared it on Twitter, and I talked to her about it. And one of the things that she said was that when she got this on her phone as a notification, she freaked out and was like, “Oh my God, have I somehow accidentally started following the tag ‘neo-Nazis’?” And she goes back through all of her settings, and she’s like, “What is going on?” And it turns out that they had decided she needed a push notification alerting her when neo-Nazi topics were trending, because she had read a couple of articles on Tumblr about the rise of fascism in the United States.

Now there’s that piece, and then there’s what else is going on here. The other piece of it is, of course, that nobody sat down to write this copy. Nobody wrote “Beep beep! #neo-Nazis is here.” What they wrote was “Beep beep! (topic) is here.” And the topic would be inserted as a little text string. Nobody intended for it to come out this way, but nobody thought about how wrong that could be for lots of different topics. Somebody commented on this on Twitter that they got the same notification and it was like, “Beep beep! Depression is here.” Accurate. But not really what you want to hear.
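To make that concrete, here is a rough sketch, purely hypothetical and not Tumblr’s actual code, of how a templated notification like that typically gets assembled. The copywriter only ever sees the placeholder; the system fills in whatever topic happens to be trending.

```python
# Hypothetical sketch of a templated push notification (not Tumblr's real code).
# The copy is written once, with a placeholder; the trending topic is dropped in later.

NOTIFICATION_TEMPLATE = "Beep beep! #{topic} is here."

def build_notification(trending_topic: str) -> str:
    """Fill the template with whatever topic the system decided is trending."""
    return NOTIFICATION_TEMPLATE.format(topic=trending_topic)

# Nobody ever wrote these sentences, but the system produces them anyway:
print(build_notification("neo-nazis"))   # Beep beep! #neo-nazis is here.
print(build_notification("depression"))  # Beep beep! #depression is here.
```

The design gap is that nothing between the template and the send button asks whether the finished sentence is cruel or absurd for a given topic.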

Now these are small things, but I want to highlight something that came out of this conversation. So I was paying attention to all of the responses that she was getting, and she actually got into a conversation with people at Tumblr. And a guy named Tag Savage, who I’m sure is a great person, and I’m sure is trying to do the best job he can do, actually said, “We talked about getting rid of it but it performs kind of great.” I think that this is the answer very often in technology: we talk about getting rid of things, but it performs. And the thing is, that has led us to some very scary places.

For example, how many of you saw this story a few months ago, where James Bridle published this look at the sort of creepy underbelly of violent content being targeted to kids on YouTube? This is from a knockoff Peppa Pig video: instead of the normal episode where she goes to the dentist and it turns out great, she goes to the dentist and it morphs into this graphic torture scene. He looked at tons of these videos on YouTube, and what he found was that these things are being produced and added to YouTube by the thousand. They were tagged with keyword salad, and then they were being auto-played to kids according to the things they had watched in the past.

So what happens is that a child starts out by watching a normal Peppa Pig cartoon on the official channel. That ends and because all of this other content has been tagged and marketed in such a way that it seems similar to that content, they could get immersed in a darker, and darker, and darker world without their parent even realizing what’s going on.

Now I will say that YouTube has recognized this problem and has tried to fight back against it, but only after this story blew up. Because you know what, this content performs. If your metric is videos watched, this content works.

Or we can look at examples happening anywhere machine learning, algorithmic decision making, or AI is involved. For example, I want to talk about something called Word2vec. What Word2vec is, is a natural language processing tool that was trained using 3 million words from Google News articles. So they basically took Google News articles as a corpus, fed that to the system, and it learned about language. And what it was supposed to learn was not just what words mean, but a different type of language processing that’s about relationships between words based on where they appear relative to each other.

What it learned from that was some stuff that’s actually hard for computers to learn, like analogies. So it learned it can complete analogies, which is hard for a system to do. So it knows if you say, “Paris is to France as Tokyo is to (blank),” it knows the answer is Japan. But it also learned to complete other kinds of analogies. So it thinks that man is to computer programmer as woman is to homemaker.

Now obviously that is not true in the same way that Paris is to France as Tokyo is to Japan. But in the corpus of data it learned from, all those Google News articles, it was true: the relationships between the words were similar. And so what we end up with is bias that gets baked deeper and deeper into our tools, into our systems, into our language. And all of those little details, all those little tiny paper cuts in our interface copy, those same biases end up in some pretty deep and scary places.
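If you want to see that behavior for yourself, here is a minimal sketch using the open-source gensim library and the publicly released Google News vectors. This is an illustration of the kind of query researchers have run against Word2vec, not the exact code behind any particular study.

```python
# Minimal sketch: querying the pretrained Google News word2vec model with gensim.
# Assumes you've downloaded GoogleNews-vectors-negative300.bin (a multi-gigabyte file).
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# "Paris is to France as Tokyo is to ___?" -> Japan
print(vectors.most_similar(positive=["France", "Tokyo"], negative=["Paris"], topn=1))

# The same vector arithmetic applied to "man is to computer programmer as woman is to ___?"
# surfaces 'homemaker' at or near the top of the list, which is the bias described here.
print(vectors.most_similar(
    positive=["computer_programmer", "woman"], negative=["man"], topn=3
))
```

The point of the sketch is that nobody typed that analogy into the system; it falls out of the statistics of the training text.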

And we can see all of these ethical gray areas cropping up. Like, for example, how many of you saw the demo of Google Duplex? I bet a bunch of you did. It was really neat. But it was also pretty scary. What it did is let somebody make a phone call using a bot; they showed in the demo making a salon appointment and making restaurant reservations. And what the Google demo did was add mmms and ahhs, and clarifying questions, and it was really designed to sound like a human, which means it was ultimately designed to deceive. And it really raises a ton of questions about how we communicate with our technology. How do we even know we’re talking to the person we think we’re talking to? And is it okay for a bot to pretend it’s human?

You know, they have to record the call in this really short-term way in order for the AI to work, so it’s like this temporary recording, but what does that mean for privacy? And can we, people who are privileged enough to have services like this, can we really be outsourcing the tasks we don’t want to do to bots, and then expect low-wage workers to have to pick up the phone on the other end of the line to deal with these bots?

There are a lot of questions that it raises. And the reality is, we’re not able to answer these questions very effectively if we’re not talking about this kind of stuff in our work. And while not all of this is directly content strategy, we are talking about things that fundamentally have to do with communication, with words, with content. This is the kind of stuff we should care about.

As Gerry said, “Words matter so much in digital experiences.” And they are increasingly important to all of these types of more advanced experiences people are designing now. So what are we going to do about that? Well, I think that there are basically three things we need to be really focused on in our work. We need to think about our practice, our process, and our priorities. And I’m going to talk about them in order of increasing difficulty, because the first one, practice, is really the smallest: what are the habits that we have every day?

One of the things that we really need to get better at is uncovering the assumptions that we all have in our own work, in our own day-to-day, about who we think is normal and what we think is normal. Because those assumptions say very little about the world. They only say something about us.

For example, many of you have probably seen this before. This is a screenshot from a feature called Year in Review, and it’s one my friend Eric Meyer received. What Year in Review was, the very first year they did it, was this little package of your top highlights of the year on Facebook. So posts and things that got the most comments, the most likes. And they packaged it up for you and said, “Hey, don’t you want to share this with your friends?”

Now Eric received this at the end of the year, it was on Christmas Eve, and what he had in the center of his Year in Review was the most popular photo he had posted all year. But it was a photo of his daughter Rebecca, who had died on her sixth birthday. It was heartbreaking for him to have that surfaced when he didn’t ask for it, and also to have it be surrounded by all these pictures of balloons and streamers, and people dancing at a party. Designs that Facebook had inserted into his context. And it wasn’t just Eric, of course, that had had a bad year. Tons of people have terrible years all the time. There were lots of examples of things that you don’t really want to celebrate popping up all over the place.

When you start looking at a problem like this, it’s easy to be like, “Oh wow, what a terrible tragedy.” Or like, “Oh gosh, that’s too bad.” But we need to look a little deeper and say, “Okay, what kind of assumptions were getting baked into that product?” We can talk about a few of them.

That product only works if you assume that the user had a good year, or that most users had a good year, that they want to relive their year, that they want to share their year, and that the most popular content is a good proxy for the best, most positive content. And if any of those things is not true, the experience falls apart. And so we need to get a lot better at anticipating these kinds of things, and asking ourselves questions in our design process, in our practice, that help us question the assumptions we have.

We often make assumptions about all kinds of stuff. About people’s identities, about where they live, about the physical state that they’re in, the emotional state that they’re in. In fact, we often make assumptions about things like race. I think if we’re honest with ourselves, we often imagine white people as the audience for our products. It’s so embedded in our day-to-day that if you look at things like Google search results, and you search for something like “beautiful women,” a study found that almost all the results showed white people.

So there’s a company that is trying to think a little bit about that, and that would be Pinterest. I don’t think they have it all figured out, but I’m going to talk about what they’re doing. So they realized that they had way too many pictures showing up in their beauty results that were only of white women. And so what they did is they took a look at how those results looked for somebody who wasn’t white. And in fact, their head of diversity and inclusion spearheaded this project after she realized that all of her searches for hair stuff weren’t useful unless she added the term “4c.” 4c is particularly coily, kinky hair, and she had to add that to her searches to get anything valuable to her.

So they wanted to start making it easier to find content that was more relevant to you. So they went through this process of building out these skin tone filters. So you could actually get results, my example here is summer lipstick, that are more specific to your skin tone. I don’t think this is the be-all, end-all, but this is a company that is actually saying, “Wait a second, we have a diverse user base, and we want to make sure people get things that are relevant to them. And so we have design work to do to make that happen.”

Which leads me to talk about process. It’s not enough to just have your personal practice; you have to think about what that means for your organization. Because after all, when it’s important, you’re going to make a plan around it. When it’s important, you’re going to have resources for it. If it’s important, you need to be able to evaluate against it. And if you’re not able to do any of those things, then what you’re telling me is that it’s not actually important.

So I really want to challenge organizations to make inclusive design an explicit part, not an implicit part, not a “yes, we’re all good people, we all care about that,” but an explicit part of our projects. So at every step of the way, are we asking ourselves, “How could this hurt somebody? Who might this leave out?” When we’re at the very early stages of our projects, are we thinking, “Who could this go wrong for? Have we thought about equity? Have we thought about ethics in the design of this feature?” Do we have scenarios that are aimed not just at the desired outcomes, but also at people who are in stressful situations or difficult circumstances? Do we actually evaluate our employees on whether they are living this out? If you’re not being evaluated on it, then again, it’s not that important.

And that leads me to my final area, which is priorities. And when I talk about priorities here, I’m talking about really big things, the priorities of your organization. Because you can only really impact how inclusive your product is, how inclusive your content is, if it’s supported by an organization that prioritizes it.

So to talk about that, I’m going to talk about something I think we’ve all heard a lot about recently, which is Facebook. And this is an ad that Facebook took out last month. It’s a full-page ad that they took out in the Washington Post, the New York Times, a bunch of British publications. And in this ad they are responding to the Cambridge Analytica scandal, where 87 million people had their data improperly collected and shared with Cambridge Analytica, and it was used specifically to mislead Americans in the presidential election.

Now, here is what Mark Zuckerberg said after that scandal. He said, “We are an idealistic and optimistic company. For the first decade, we really focused on all the good that connecting people brings. But it’s clear now we didn’t do enough. We didn’t focus enough on preventing abuse and thinking through how people could use these tools to do harm.” We didn’t think enough about how we could do harm because we were so focused on idealism.

And I think this is really common. But I also think it’s not really very true, because you see, there were a lot of signs along the way that Facebook was harming people. For example, last year, actually more than a year ago now, about a year and a half ago, ProPublica released a report showing that you could target advertising for housing on Facebook according to race. So you could exclude certain races from seeing a housing ad, which is illegal under the federal Fair Housing Act of 1968. Later on that year, they found that Facebook had these algorithmically created ad categories, and you could target your ads to people who hated Jews. They also found, even just this year, that some of those discriminatory housing practices, or at least the system that was allowing people to discriminate if they chose, actually hadn’t been removed. Facebook was still allowing that to happen, and there’s a lawsuit now over it.

Or you can talk about this example. This is an ad that Facebook placed on Olivia Solon’s friends’ pages. What it is, is a graphic rape threat she received. She’s a journalist, she works online, she gets some pretty negative mail. And she posted it to her Instagram account because she wanted people to see the abuse that women get. Instagram is owned by Facebook, Facebook wants more Facebook users to use Instagram, so Facebook scooped up her Instagram post, which was “popular,” and used it as an advertisement to try to get her friends to sign up.

Or you can look at the report that was released where Facebook actually told advertisers in Australia that it can identify teens feeling insecure and worthless, as a perk of advertising with them. Or you can look at how, even back during the election, we talked a lot about fake news. And one of the things that got lost in this whole discussion was that a few months before the election, Facebook fired all of its curators, who were meant to look at the content that was going into the News Feed and determine what should be highlighted there, and replaced them with an algorithm, and immediately fake news took over.

There are so many examples of ways that this stuff has gone really, really wrong. And I think so much of it comes down to priorities. Priorities around that hockey stick growth. Getting the engagement and the money higher, and higher, and higher without thinking about who it might be harming. Or without caring enough about who it might be harming to sacrifice any of this.

Facebook made $40 billion in ads last year. And I will say, there are a bunch of people from Facebook here … hi. All of you that I have ever met are great people. You want to do good things and you care about people. But if you’re in an organization that is unwilling to sacrifice any of this, then we are not talking about mistakes anymore, we are talking about choices.

And I think that we can make some different kinds of choices. For example, I’m going to talk briefly about Nextdoor. Nextdoor is trying to do something a little bit different. Nextdoor is social networking for people in your neighborhood, and you can do things like report a lost pet, or talk about a yard sale coming up. But what people were finding was that there were some problems with Nextdoor. Because what would happen is people would post about sketchy people in their neighborhood, and what it turned out to be was that a Latino man drove by, or a black person was walking a dog. But these got turned into crime and safety reports.

So what happened is that a bunch of citizens’ groups in Oakland got involved and they were like, “Okay, this is not all right.” And they were really pressuring Nextdoor to do something about it. And so what Nextdoor decided to do was to take a look at the process that they were using to collect those crime and safety reports. At first what they tried to do was just make the flagging process more robust, but what they realized is that they needed to change how people reported information in the first place.

So what they did is they took what used to be a big empty text box, like “report your crime and safety thing here,” and they turned it into a process that makes people slow down and think. They have to ask themselves some questions and describe the incident. And then when it comes to describing people, they won’t let you submit the form if you try to include race but don’t also include a bunch of other information. So you have to actually have details about a person in order to include their race when you’re filling this out.
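As a rough illustration of that kind of rule, here is a small hypothetical sketch. It is not Nextdoor’s actual implementation, just the design idea that a report can’t mention race unless other identifying details come with it.

```python
# Hypothetical sketch of a "slow down and be specific" form rule.
# Not Nextdoor's real code: just the idea that race can't be the only detail reported.

DETAIL_FIELDS = ("hair", "clothing_top", "clothing_bottom", "shoes", "age_range")
MIN_OTHER_DETAILS = 2  # assumed threshold for this sketch

def can_submit(description: dict) -> bool:
    """Allow submission only if race is absent or accompanied by other details."""
    if not description.get("race"):
        return True
    details_provided = sum(1 for field in DETAIL_FIELDS if description.get(field))
    return details_provided >= MIN_OTHER_DETAILS

# A report that says only "a black man walked by" gets pushed back for more detail:
print(can_submit({"race": "black"}))                                   # False
print(can_submit({"race": "black", "hair": "short", "shoes": "red"}))  # True
```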

What they found was that this actually cut down on racial profiling on the platform substantially as they tested it in a bunch of different markets. I don’t think they’ve solved their problems, but this was a big improvement for them. But the other thing that they found was that their form completion rates took a nosedive: 50% fewer people were filling it out. What that equals for Nextdoor is less engagement. What that equals for Nextdoor is fewer page views, fewer people commenting, because guess what, crime and safety reports get a lot of comments. That is not good for their numbers. But they had to decide that it was more important to solve this problem, and more important to make sure that racial profiling didn’t have a home on their platform, than it was to keep those numbers boosted.

So as I wrap up here, what I’d like to leave you with is this idea of who we are and what we are doing here. This is a photo from Confab 1. I was there. It was very exciting for me. I felt like I was finally amongst my people back in 2011. And I still feel like that today: my people. But what I’ve realized is that the thing that unites us as content strategists is not just our love of spreadsheets, and it’s not just our very deeply held opinions about commas. Particularly now, as we’re gaining influence in our organizations, as I see so many teams saying, “We’re growing our content strategy team. We’ve made this an essential part of our practice,” it’s something bigger.

What we need to be able to do is decide: how are we going to use the influence that we have? How are we going to make sure that tech is more humane, that our content is more compassionate, and that our work builds equity? And how are we going to get ourselves involved in the communication challenges to come? It is a very difficult job, but I am excited to be here for it.

Thank you.
