Fight bias with content strategy



Transcript

David Thomas: Hello. It’s so great to be here. My name is David Dylan Thomas. Today we’re going to talk about how to fight bias with content strategy. Where am I pointing this? There we go. As Kristina mentioned, I’m the principal of content strategy at Think Company, an experience design firm. But for the purposes of this talk, the really relevant piece is that I am the creator and host of The Cognitive Bias Podcast, which is a podcast about, wait for it ... cognitive bias.

I want to talk a little bit about how I came to, in fact, host this show. A few years ago, I saw an amazing talk by Iris Bohnet called Gender Equality by Design. I highly recommend you go find it. It’s on YouTube. It’s way better than this talk. One of the things she talks about is pattern recognition, and how something like unequal hiring can come down to something as simple as pattern recognition. Let’s imagine you’re a hiring manager and you’re hiring, say, a web developer, and in your mind, a web developer looks like a skinny, white dude. You have seen that pattern over time, on television, around your office; it’s just, this is the pattern, this is what I expect. So if you see a name at the top of a resume that doesn’t fit skinny, white dude, you might start to give it the side-eye.

Now, when I found out that something as pernicious as gender inequality or racial inequality in these processes can come down to something as simple as pattern recognition, I thought, I need to learn everything I possibly can about cognitive bias, and so I did. This is the RationalWiki list of cognitive biases; there are well over 100 of them. I thought, I am not going to get through all of this in a day, so I took one a day. Every day I would pick one and I would learn about it. The next day I’d move on to the next one, and the next one, and the next one. This turned me into the guy who wouldn’t shut up about cognitive bias, and my friends were eventually like, “Dave, Dave, Dave, please, just get a podcast.” So, that’s what I did.

Now, it’s worth discussing briefly what cognitive bias is, and at the end of the day, cognitive bias is basically a series of shortcuts that your mind takes just to get through the day, and you need this. On any given day, you have to make roughly a trillion decisions. Even right now, I’m making decisions about where my gaze is going to go, how loud I’m speaking, where I’m stepping; and if I had to think carefully about every single one of those decisions, I would quite simply dissolve into a puddle of goo right here, so we have shortcuts. Most of our lives, we’re on autopilot and that’s a good thing. Most of those shortcuts are good, but sometimes those shortcuts lead us into error, and those errors are called biases.

One of my favorites is illusion of control. Imagine that you’re playing a game where you have to roll a die. If you need a high number, you will roll really hard. If you need a low number, you will roll really gently. I think everyone in this room knows that it makes no difference how hard or soft you throw the die, but we like to think we can control the outcome and so we embody that by rolling the die a certain way.

Now, confirmation bias is probably the most well-known and it is exactly what you think it is. You have an idea in your head and you go out and you look for evidence to support that idea, and if you see anything that even remotely contradicts it, you say, “Fake news.” One of the most powerful examples of this came during the Iraq War. The basic premise of the Iraq War was, there are weapons of mass destruction in Iraq, we need to go in there. I bought into this, it’s like, “Yeah, of course.” As it turns out, not so much, and when the President of the United States said, “There are no weapons of mass destruction in Iraq,” the number of people who believed there were weapons of mass destruction in Iraq went up. We’re going to come back to this bias.

Now, these biases are extremely difficult to combat. Part of the reason is, you may not know you even have the bias. In fact, there’s a bias called the bias blind spot. It’s basically, “I don’t have any biases, but I’m sure all the rest of you do.” I’m sure you’ve never met anybody like that. Part of the reason you may not know you have the bias is because about 90% of cognition happens below the threshold of conscious thought. Like I said, autopilot, you make the decision, you don’t even know you made the decision. In fact, if someone asks you why you did something, the most honest answer you can give them is, “How the hell should I know?” By the way, 90% is a conservative estimate. A lot of scientists think it’s even higher than that.

Here’s the worst part, even if you know you have the bias, you do it anyway. So there’s a bias called anchoring. The idea is, I would have everybody in this room write down the last two digits of their Social Security number, and then, apropos of nothing, I would say, “Hey, I want you all to bid on this bottle of wine.” Those of you who wrote down high numbers will bid higher for that bottle of wine. Those of you who wrote down low numbers will bid lower. It’s anchoring, it’s a thing.

Here’s the thing, I could start out that experiment by saying, “Hey, guess what? There’s this thing called anchoring and you’re going to write down some numbers and you’re going to bid a certain way. Don’t do that.” You’ll still do it. It gets worse. I could start out the experiment by saying, “Hey, there’s a thing called anchoring and I will pay you cash money not to do it.” You’ll still do it. The good news is there are in fact content strategy and design choices we can make that can mitigate some of these biases or occasionally leverage them for good.

Let’s go back to the example of the skinny, white dude and the hiring manager. In fact, if you have two identical resumes and the only difference is the gender, or what you read into the gender, of the name at the top, then in a generally male-dominated field, the resume with the female-sounding name at the top will be set aside and the one with the male-sounding name will be moved forward. This has happened in experiment after experiment. Here’s the thing, why do you need that information? If you’re a hiring manager and you’re trying to decide who to hire, what about the name is making that decision easier, right?

As content strategists, we can think about that as a signal-versus-noise problem, right? The signal is the qualifications, the experience. The noise is whatever I’m reading into the name around gender or race. In fact, the City of Philadelphia did a round of blind hiring for a web developer position and they learned two things very quickly. One is that if you want to blind a resume, even in the high-tech world of web development, the most effective way to do it is to have an intern who has no stake in the hiring process physically print the resume, take out a marker, and redact it like a CIA document.

The other thing they learned was, as soon as they saw a set of qualifications they liked, naturally, they would want to go to that person’s GitHub profile. It’s sort of the portfolio of a web developer. What would happen is, the second they went to that GitHub profile, they would see all the personal information and the experiment would be ruined. So, industrious people that they are, they created a little bit of code, a Chrome plug-in, that redacted all of the personal information as soon as that GitHub profile loaded. Thank God for structured content.
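Just as a rough illustration, and not the City of Philadelphia’s actual plug-in, a Chrome content script along these lines could blank out the personal fields the moment a profile loads. The selectors below are assumptions made for the sketch; the point is that because a profile page is structured content, the personal details sit in predictable places a script can find.

```typescript
// Hypothetical sketch of a redacting content script (the selectors are assumptions).
// Because a GitHub profile is structured content, the personal details live in
// predictable elements that can be blanked out as soon as the page loads.
const PERSONAL_SELECTORS = [
  ".p-name",            // display name
  ".p-nickname",        // username
  ".user-profile-bio",  // bio
  ".avatar-user",       // profile photo
  ".vcard-details",     // location, employer, links
];

function redactProfile(): void {
  for (const selector of PERSONAL_SELECTORS) {
    document.querySelectorAll<HTMLElement>(selector).forEach((el) => {
      if (el instanceof HTMLImageElement) {
        el.remove();                  // drop the profile photo entirely
      } else {
        el.textContent = "█████████"; // redact the text like a marker on paper
      }
    });
  }
}

// A content script runs when the page loads, so the redaction happens
// before anyone on the hiring panel reads the profile.
redactProfile();
```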

Now, just to make that circle complete, they then took that code and put it back on GitHub. If you ever want to try this yourselves, it’s right there waiting for you. Incidentally, this very conference does its first round of screening for talks blind. They do not know anything about you personally, just your idea, which I applaud.

I want to talk a bit about cognitive fluency and the basic concept is that if something looks easy to read, I will assume that whatever it’s talking about is easy to do. Similarly, if it looks like it’s going to be hard to read, I will assume that whatever it’s describing is something that is hard to do.

Now, I love pancakes, I would happily make pancakes for everyone in this room if we had the time and resources. Here’s a recipe for pancakes with the text kind of clumped together, all in small print. I haven’t read a single word yet, but I’m going to assume pancakes are probably hard to make. I don’t know if I want to make pancakes.

Now, if I see big pictures and this text is spaced out, it could literally be the same words, but I look at that and I think, “Huh, pancakes can’t be that hard. I think I might make pancakes.” A two-minute video? Forget about it, we are having pancakes.

Now, this becomes very important when the task is something we actually have to do, right? I have to fill out my taxes, but when I look at that “EZ” form, I think to myself, “Taxes are impossible. I’m going to wait until the last possible minute, and maybe then some.” If I look at something like TurboTax, I think, “That looks kind of pleasant almost. I bet taxes aren’t that hard, maybe I’ll start sooner.”

Now, everyone in the room, I want you to vote one way or another. How many people here, by a raise of hands, think there are 53 African nations in the United Nations? All right. How many people think that there are 55 African nations in the United Nations? Again, you have to vote one way or another. All right. Okay. Kind of hard to see, but it looks like more people voted for 55. Social scientists will tell you that is because if something is easier to read, we actually think it’s more true. By the way, you’re both wrong, there are 54 African nations in the United Nations.

It gets worse if it rhymes. We actually think it’s more true, right? This has consequences. Right? It gets real. So part of what’s going on here is that we love certainty and we really hate uncertainty. Something that’s easy to remember feels more certain, like I did something yesterday and I can remember it clearly. I’m sure I did that, I’m sure that’s true. Things that rhyme are easier to remember. Things that are easy to read are easier to process. We just equate that with certainty, with it being true.

Now, this becomes important when it comes to things that it’s very, very, very important that people believe are true. As it happens, in this country, very few African Americans trust health information that comes from the government, right? Back in 2002, only 37% of African Americans agreed with the statement, “The government usually tells the truth about major health issues like HIV/AIDS.” By 2016, that had dropped to 18%.

Now, we could do a whole other talk about why there are legit reasons for African Americans to be suspicious of health information from the government. That having been said, it can save lives for people to know and believe this information. So if that means we have to do silly things like rhyming, or make sure the text is very, very, very clear in a very, very bright font, or just written in very plain language, that is something we need to do.

Now, as it happens, I actually learned about this last year at Confab. There was a Federal Plain Language Act of 2010, I believe, which said, “Hey, look, if you’re a government website that’s getting federal funds for the services you’re providing, you have to make sure you’re describing those services using plain language.” And if you’re curious, 18F has some amazing guidelines about what plain language actually means.

The most dangerous bias in my opinion is called the framing effect, and it starts out simple enough. Let’s say, you go to a store and you see a sign that says, “Beef 95% lean,” and then you see another sign that says, “Beef 5% fat.” Right? Most people will go to the 95% lean, but it’s the exact same thing.

Now, it’s harmless enough when it comes to buying food, but what if I were to say, “Should we go to war in April or May?” See what I did there? We are no longer discussing whether or not it’s a good idea to go to war, and wars have been started over less. Right now, pretty much every move the current administration has made, has been one version or another of the framing effect, writ large.

Now, how many people here are either bilingual or speak more than one language? Okay, that’s good. You all have a secret weapon against the framing effect. If you think about the decision in a language that is not your native language, the framing effect is less likely to take hold. So I speak a little bit of French, and if I were to think about that beef decision in French, I would be doing things like, beef that’s boeuf, that’s a lot of vowels. 95%, I think that’s quatre-vingt-neuf, no wait that’s not ... By the time I figure it out, I realize, “Oh yeah, that’s obvious.” Right? Because, I had to think about it slowly.

By the way, Thinking, Fast and Slow by Daniel Kahneman, just get that. You won’t even need my talk. But when we have to slow down our thinking, that’s when those tricks don’t catch as well.

Now, it turns out you can use the framing effect for good as well as evil. Now, here’s another fun experiment. If you show this image to an audience and you ask, “Should this person drive this car?” what you will get is a policy discussion, and some of the room will say, “Old people are bad at everything. No way.” And the other people will say, “That’s ageist. People should do what they want.” Right? All you will learn by the end of that conversation is who was on what side.

Now, you can show that exact same photo to a different audience and ask, “How might this person drive this car?” What you will get is a design discussion, right? “Oh, we could move the steering wheel. Oh, we could change the shape of the dashboard.” Right? What you will learn by the end of that conversation is different ways that person might drive that car.

We can go even broader and say, “How might we do a better job of moving people around?” Right? The reason that person was in the car in the first place is because they were here and they wanted to be there. When I frame it this way, all of a sudden, public transportation is on the table.

Now, I want to close by talking about our own biases. These are in a way the most concerning. One that’s especially important for this crowd is called notational bias, and an example is sheet music. I used to arrange, I used to play the saxophone, I had to learn how to read sheet music, and I came to think that this is how you represent music, that there was no piece of music in the world you couldn’t represent using this notation. As it turns out, that’s simply not true. There are all sorts of cultures, all sorts of music, where this notation is just not going to do it if you’re trying to express how to play that piece of music. So by this being the standard, it actually starts to erase entire cultures, entire pieces of art.

An example of that we might be more familiar with is the forms that ask us for personal information. If I’m creating that form with my own framework of what is true, I might end up wiping out entire populations. If I think the world is simply male or female, there are all sorts of identities that now don’t get to participate.

But this can leak into things like structured content. So this is a very uncomfortable fact. Until 1986, the New York Times prohibited the use of “Ms.” as an honorific for women; and understand that meant that up until 1986, the first mention of a woman might have her name, but every subsequent mention had to begin with either “Miss” or “Mrs.,” implying that the most important thing to know about the woman was whether or not she was married, right? A standard which was not, by the way, implied by “Mister.”

We pass on the values of our society in how we structure our content. So it becomes very important for us to think carefully about these decisions, because these decisions are how we scale inequality, not to put too fine a point on it. So this is me saying, our jobs are extremely important, not to pat ourselves on the back, but as a grave responsibility.

As Mbiyimoh Ghogomu said, “Language doesn’t just describe reality, it shapes it.” Think about that.

Now, there are resources, and I’ll post a bunch of these, by the way, in the Ethical Content Slack channel later, but this is one of them. It’s called Radical Copyeditor and it gives you tools to think about inclusive language, right, and who gets to use certain terms and why.

I told you we’d come back to this. I, for a very long time, had a misunderstanding of what the scientific method was. I thought it was, “Oh, I have an idea about how the world works. We’ll call that a hypothesis, and then I’ll test that hypothesis. If I’m right, you all try it. If you get the same results great. Let’s call that a law and move onto the next hypothesis.”

After talking to some actual scientists, I found out that it’s much more rigorous. The actual scientific method looks a lot more like this: I have an idea about how the world works. Let’s call that a hypothesis, we’ll test it. I got good results, you got all the same results, great. The next step is, I get to spend the rest of forever trying to prove myself wrong. Right? I have to think, “If I’m wrong, what else might be true? Okay, let’s go test that.” That is much closer to the real scientific method, and the scientific method was invented to combat confirmation bias.

Now, how does this play out in our jobs? We might come up with a design solution, a content strategy, that we think is foolproof, that we think is awesome, and we might fall in love with it. But if we do that, we risk leaving better design on the table. Let me show you how that works.

We’re going to play a game. The game is called, put whatever number you want, where that question mark is, and I will tell you if that number fits the pattern. Put as many numbers as you like in, and when you’re ready, you tell me what you think the pattern is. If you’re like me, you say, “Eight,” and the answer comes back, “That fits the pattern. Would you like to try another number?” And if you’re like me, you say, “Hold my beer. I don’t need any more numbers. I’ll tell you the pattern right now. It’s even numbers.” The answer comes back, “No, you’re wrong.” The reason I’m wrong is because I didn’t try … that. The pattern is, every number’s higher than the number that came before it, which is a much more elegant solution. Ask anyone who codes, it’s way easier to code that. But because I fell so in love with that idea, I refused to consider the possibility that there was a better solution.
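To see why that rule is easier to code, here is a minimal sketch of the two candidate rules as predicates. The starting numbers and the function names are assumptions for illustration; the point is that only a guess designed to break my favorite hypothesis can tell the two rules apart.

```typescript
// The rule I fell in love with: every number is even.
const allEven = (nums: number[]): boolean =>
  nums.every((n) => n % 2 === 0);

// The actual, simpler rule: every number is higher than the one before it.
const strictlyIncreasing = (nums: number[]): boolean =>
  nums.every((n, i) => i === 0 || n > nums[i - 1]);

// A guess like 8 satisfies both rules, so it teaches me nothing new.
console.log(allEven([2, 4, 8]), strictlyIncreasing([2, 4, 8])); // true true

// Only a guess that breaks the even-numbers hypothesis separates the two.
console.log(allEven([1, 3, 7]), strictlyIncreasing([1, 3, 7])); // false true
```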

Now, as it happens, there is a strategy, a fairly cost-effective strategy, for combating this in the corporate workplace, and it’s something that the military uses and journalists use, and how many times do you hear that? It’s called “red team, blue team.” The idea is that you have a blue team, and the blue team is going to go along and basically get you all the way to prototype, all the way to design, all the way to fleshing out this idea. Before they go any further, the red team comes in, for one day, and the red team’s job is to go to war with the blue team. They pick apart all those potentially harmful things that the blue team didn’t even think about because they were so into their own cognitive bias.

Now, what I love about this solution is that I don’t have to go to my COO and say, “Hey, from now on, we kind of have to spin up two teams for every project and then they’re going to check each other’s work all the time and question each other. It’s no big deal.” No, just one day, I need one other team for one day to come in and make sure we don’t make some horrible error.

The last one I want to talk about is called déformation professionnelle. I told you I spoke French. This is basically the bias where you look at the whole world through the lens of your job. In our kind of workaholic society, that might almost be considered a good thing, until it’s not. The paparazzi who ran Princess Di off the road probably thought they were doing an amazing job, and technically speaking, they were. They were getting very difficult-to-get photographs that were going to get them a very large sum of money. What they were doing a bad job of was being human beings.

Now, the previous police commissioner for Philadelphia, when he got the job, asked his officers, “What do you think your job is?” And most of them would say something like, “To enforce the law.” And he would say, “That seems like a pretty reasonable answer, but what if I were to tell you your job is to protect civil rights?” Now, that encompasses enforcing the law, but it’s a much bigger, harder job, right, and it gives you a mission. In fact, it gives you a mandate to treat other people like human beings.

I would argue, that our jobs are harder than we think, right? It’s not just, build cool stuff, and we need to come up with a definition for our jobs that allows us to be more human.

Now, there are already people working on this. Mule Design has come up with a set of design ethics, and there’s now a whole book around this that Mike Monteiro just dropped. Any number of the books that people are talking about today, by the way, are also touching on this topic. There is a fantastic website called Humane by Design at humanebydesign.com that breaks down ways to think ethically about design, and it also just has a really cool content model if you want to play around with it.

Recently, TurboTax released a live chat function and when they were creating the design system for it, they considered it a part of their job to think about gender. Now I’m not 100% on board with where they landed, but I am 100% on board with the fact that they considered part of their job to consider gender when they created that system; it was not an afterthought.

In the web development world, we’re seeing things like the Never Again Pledge, where people are realizing they want to be ethical about how they work and they’re willing to go on strike if it means preserving that. So recently we had Project Maven out at Google, which was a battlefield AI, and there were engineers at Google who said, “Look, I didn’t get into this to build weapons, we are going to go on strike if you continue down this path.” And Google walked away from a quarter of a billion dollar military contract as a result.

Not long after that, they started doing Dragonfly, which was an internet search engine in China, and once again, the engineers were saying, “Hey, we’re going on strike.” If you look into this, they were even creating infrastructure to sort of support people while they were on strike. It’s still a little unclear how this one’s going to play out, but it’s a similar pattern.

By the way, who comes up with these like “Maven,” “Dragonfly,” like James Bond villain names for these projects? Like some adult actually put that into an email.

We must rapidly begin the shift from a thing-oriented society to a person-oriented society. When machines and computers, profit motives, and property rights are considered more important than people, the giant triplets of racism, materialism, and militarism are incapable of being conquered.

Now, this was not some tech guru at a TED Talk. This was Martin Luther King. Over 50 years ago, he saw this, and it’s only become more true since then. So the challenge I would give everyone in this room is, how can we define our jobs in a way that allows us to be more human to each other? Thank you.
