Experts

Experimentation Makes TV Ads More Effective – David Broockman (UC Berkeley)

Eric Wilson
March 20, 2024
28 MIN

"In a world where it's kind of hard to predict what works, but it is the case that some things work better than others, that really kind of underscores the value of doing this kind of experimentation."

David Broockman is an associate professor of political science at UC Berkeley. He is the co-author of a study published in the American Political Science Review last month titled “How Experiments Help Campaigns Persuade Voters: Evidence from a Large Archive of Campaigns’ Own Experiments.”  In our conversation, we discuss the research and learn what campaigners should do in light of the findings.

Episode Transcript

Eric Wilson (00:02.336)
I'm Eric Wilson, managing partner of Startup Caucus, the home of campaign tech innovation on the right. Welcome to the Business of Politics show. On this podcast, you're joining in on a conversation with entrepreneurs, operatives, and experts who make professional politics happen and study it. We're joined today by David Broockman, Associate Professor of Political Science at the University of California, Berkeley.

He is the co-author of a recent study published in the American Political Science Review last month titled, How Experiments Help Campaigns Persuade Voters: Evidence from a Large Archive of Campaigns' Own Experiments. I promise it's really exciting. I've been recommending it to everyone I've spoken with about innovations in campaigning recently. In our conversation, we discussed the research and learned what campaigners should do in light of the findings. David.

TV advertising remains the most expensive line item for campaign budgets. And it's hard to argue that it's not effective at moving candidates' numbers. But from an academic perspective, what is our understanding of the effectiveness of TV advertising for campaigns?

David Broockman (01:21.71)
Yeah, first of all, thanks so much for having me on the show. I really appreciate it. And I'm excited to chat about this new research we have. And yeah, I mean, really the jumping-off point for our paper is that there's a lot of existing evidence in political science and economics that's really been very creative in trying to carefully and rigorously study the effects of television ads in political campaigns. And I think there are two pretty clear and consistent findings that emerge.

One is that they work. You can, in fact, spend money on television ads to get votes in elections. And so that, I think, is maybe a little bit obvious. Campaigns are not stupid. They spend billions of dollars on this stuff for a reason. The second conclusion that I would say has really come out of existing research is that the effects decay relatively rapidly. And so the ads a few days before people vote really have a much bigger effect than...

ads months before. But those, I think, are the two big conclusions that come out of the state of the art before our paper: looking at television really broadly and saying, does it work? Yes. And when does it work? Well, closer to the election.

Eric Wilson (02:33.664)
Got it. Yeah. I think that's one of the most frustrating things for me, that decay, as you call it. You know, we're investing billions of dollars in political advertising and we're hoping that it hits right at the exact moment that people are returning their ballots or voting, and that's what makes it worth it. But it sure doesn't stick around for very long. I mean, it varies from seven days to 30 days before we just see the effect completely go away. Right?

David Broockman (03:04.622)
Yeah, that's right.

Eric Wilson (03:06.24)
So in the last decade, we as practitioners have invested heavily in better targeting of TV advertising. So most big budget campaigns are now combining voter file data with modeling to inform how they buy TV ads. But what's the state of the industry when it comes to optimizing the content of the ads?

David Broockman (03:29.422)
Yeah, so one of the, in my over a decade of work now, just working with different campaigns, speaking with them, I think it's, I think a lot of the conversation about kind of how and where money is spent really kind of looks like the question of like, okay, well, who do I write a check to? And like, who do I tell them to talk to? Right? Do I write a check to like the TV vendor or the mail vendor? And likewise, you know, it's like, okay, well, if it's the mail vendor, like, who am I going to tell them to mail? But I actually think those questions, my read of the data is those questions actually are

way less important, TV versus mail, who are you targeting, than this first-order question of what is it that you're saying to people? We all sort of know that matters, but my sense of the data is that that is actually far more important. For example, just to take the question of targeting, my collaborators on this paper are actually working on another analysis, which I won't steal their thunder on, that speaks to this more. But in...

the research that I've done, you just consistently see that the messages that are best are just best for everybody. When I first started doing this, it was so funny that people would say, all right, I'm gonna send you the crosstabs, and you get these like 400-page PDFs of slicing and dicing samples every which way. Part of what's happening, I think, in the political industry is this obsession with...

Eric Wilson (04:46.624)
Right.

David Broockman (04:55.598)
looking at particular subgroups, which is actually just kind of an error of how people read statistical results. Which is, even if you have a message that is best for everybody in an experiment or in a poll, because you get into these kind of small sample sizes when you slice and dice the samples, just because of statistical error, maybe it looks like, among left-handed lesbians, or among people that drive cars with license plates that start with the letter Q, or whatever,

Eric Wilson (05:01.79)
Mm.

David Broockman (05:24.952)
you're gonna have a small sample like that, and it might look like some other message is way better, just by random chance. But my sense of it is that oftentimes when campaigns try to do that targeting, to say, oh, this message for this population, this message for that population, often they're just chasing that statistical noise. And it's often like the most compelling messages are the best for everybody. That's obviously not true in every case, of course, but I think as a general rule, there's just far too much attention paid to...

Eric Wilson (05:42.206)
Hmm.

David Broockman (05:53.454)
this question of targeting and not nearly enough to this question of how do you optimize just what it is you're saying.
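
To make the subgroup point here concrete, here is a small illustrative simulation in Python. All of the numbers (support rate, sample size, number of crosstab cells) are assumptions for the sketch, not anything from Broockman's data: even when two messages have identical true effects, slicing the sample into enough cells will usually turn up a subgroup where one message looks clearly "better," purely from sampling noise.

```python
# Illustrative simulation (all numbers are assumptions, not Broockman's data):
# two messages with IDENTICAL true effects, sliced into many crosstab cells.
import random

random.seed(0)

TRUE_SUPPORT = 0.45    # same underlying support rate under either message
N_PER_MESSAGE = 2000   # respondents assigned to each message
N_SUBGROUPS = 20       # e.g. cells in a 400-page crosstab PDF

def simulate(n):
    """Each respondent supports the candidate with probability TRUE_SUPPORT."""
    return [random.random() < TRUE_SUPPORT for _ in range(n)]

msg_a, msg_b = simulate(N_PER_MESSAGE), simulate(N_PER_MESSAGE)
groups_a = [random.randrange(N_SUBGROUPS) for _ in msg_a]
groups_b = [random.randrange(N_SUBGROUPS) for _ in msg_b]

def rate(outcomes, groups, g):
    """Support rate among respondents in subgroup g."""
    cell = [o for o, gg in zip(outcomes, groups) if gg == g]
    return sum(cell) / len(cell) if cell else 0.0

# Overall, the two messages look (correctly) almost identical...
overall = sum(msg_b) / len(msg_b) - sum(msg_a) / len(msg_a)
print(f"overall B-minus-A gap: {overall:+.3f}")

# ...but the most extreme subgroup gap can look like a real "finding".
gaps = [rate(msg_b, groups_b, g) - rate(msg_a, groups_a, g) for g in range(N_SUBGROUPS)]
print(f"largest subgroup gap:  {max(gaps):+.3f}  (pure noise)")
```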

Eric Wilson (06:00.128)
Yeah, I think that's an interesting observation. And I feel like I need to improve how I think about things based on that analysis. But there's an interesting aspect to that, which is we as political professionals trust our gut. We, you know, we know what the right message is, we just need to get it to the right people. So the emphasis has been on targeting:

that right voter, right moment. There hasn't been as much introspection on what are we saying. And so for your latest study, you gained access to this incredible data set from a platform called Swayable that Democrats and their allies on the left use to test their ad messages. We don't really have a good analog on the right. So if you could, give us the rough sketch of how that process of testing an ad on Swayable

works from start to finish.

David Broockman (07:03.022)
Yeah, so the basic idea is, it's the same sort of work that I do in a lot of my research: taking the idea of randomized controlled trials from medicine. Where, in the COVID vaccine example, right, Pfizer would recruit a bunch of people and randomly assign them to get a COVID vaccine or some saltwater placebo. And then you would compare in the time afterwards, like, okay, how many of these people go to the hospital? How many get COVID?

David Broockman (07:32.536)
blah, blah, blah. So in much the same way, what is done in these experiments is we'll take voters and randomly assign them to different, we call them treatments because all of this stuff comes from the world of medicine. This kind of research I think most famously started from my mentors, Don Green and Alan Gerber, in the world of voter turnout where they would randomly assign households to get different forms of mail or door knocking or whatever and then look at the voter files after the fact.

In my work, I've really tried to focus on taking that to the world of persuasion. And so in these experiments that vendors like Swayable will do, people will be recruited off the internet. They'll be randomly assigned to one of multiple different ads or a control group. And then immediately afterward, they'll be asked, OK, well, who do you plan to vote for now? And you'd say, OK, maybe in the control group, you've got 45% of people supporting your candidate.

The people that saw ad A, 47% support your candidate, and the people that saw ad B, 49% say they'll support your candidate. And that pattern of results would tell you that the effect of ad A is 2 percentage points, the effect of ad B is 4 percentage points, so ad B is twice as good as ad A. That's the basic idea, and Swayable is one of the vendors that campaigns can kind of upload their ads to, and the vendors can go recruit people and analyze the data and tell you, OK, here's our best guess about,

you know, what the effects are.
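
As a minimal sketch of the arithmetic Broockman walks through here, using only the illustrative 45/47/49 percent figures from the conversation (not real Swayable output), each ad's effect is just the difference in support between that treatment arm and the control group:

```python
# Minimal sketch of the effect estimate described above, using the illustrative
# numbers from the conversation (45%, 47%, 49%), not real Swayable output.
support = {
    "control": 0.45,  # share supporting the candidate with no ad
    "ad_a": 0.47,
    "ad_b": 0.49,
}

def effect_pp(arm: str) -> float:
    """Treatment effect in percentage points, relative to the control group."""
    return 100 * (support[arm] - support["control"])

for arm in ("ad_a", "ad_b"):
    print(f"{arm}: {effect_pp(arm):+.1f} percentage points")
# ad_b (+4.0) is roughly twice as effective as ad_a (+2.0) by this estimate.
```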

Eric Wilson (09:00.32)
Yeah, that randomized control trial process, RCTs, is kind of the core of what you do and Don Green and others in your field have really pioneered over the last few years. It's difficult for us to do as practitioners, right? So if we think something's going to be effective, the idea of creating a control group by holding it back is tough for us to do. We'll talk about that in a little bit, but...

I think it's important that people hear that randomized controlled trials require that you have a control group of people who basically get the placebo, or no treatment or ad at all. So what did you do with the data? What were the questions you sought to answer?

David Broockman (09:46.382)
Yeah, so one of the, I would say there's two broad questions that we're trying to answer here. So one is, frankly, in my career, a lot of how I've thought about the research that I've done over the last decade is trying to answer these kind of age -old big questions of like, okay,

What makes a political ad effective? Is it talking about issues or not? Is it being positive or negative? Is it using certain persuasive techniques, like a credible source giving a testimonial, that tie into these kind of enduring social psychological questions? And so the first thing when we turned to this data was to say, okay, well, let's use this treasure trove of data. It's data from over 500,000 people who have gone through these experiments, over...

David Broockman (10:33.198)
100 of these experiments, hundreds of ads, to say, OK, if we look across all of this, and we coded the ads for lots of different traits, what do they say about those kind of questions that I've always wanted to answer? And the interesting thing, which after you hear a research result you always say, well, that's obvious, and now I think, god, smack myself on the head, why didn't I expect this from the start? The interesting answer is that there is no answer.

Eric Wilson (10:53.728)
Yeah

David Broockman (11:00.846)
which is to say that the things that work well in one election cycle don't work well the next election cycle. And I'll give you one example. And this is what we see in the data, and I'll give you a kind of speculative explanation for why we might see it.

Eric Wilson (11:12.8)
And we should add here that the data set covered 2018, 2020, and 2022. So you had that.

David Broockman (11:15.95)
Yes. It's actually 2018 down ballot, 2020 down ballot, and then 2020 presidential. So one of the things that you see in the data is, we have a bunch of hypotheses and we test them in all these elections. If you look at one election, it might look like, oh, a certain kind of ad is better, but then you go to the next cycle, and that kind of ad is actually worse. So one example of that is ads that talk about issues. So in 2018, Democratic ads that talked about issues were way better than the ads that didn't.

Eric Wilson (11:23.584)
Got it, okay.

David Broockman (11:45.39)
And again, speculating, one potential reason for that could be that 2018 was coming off of a Congress that had passed some historically unpopular legislation. It's actually pretty rare that Congress votes for things that a majority of Americans don't support, but the AHCA and the corporate tax cuts were just not popular. And so Republicans paid the price for that in 2018. And so Democratic ads talking about those worked really well.

Well, then in 2020, Democrats kept talking about those things. And it didn't work so well to talk about them in 2020. And it wasn't the same opportunity to hit Republicans on issues in 2020 that it was in 2018. And that might be why we see the issue ads do great in 2018 and don't do well in 2020. And that's just to say, as one example, we have a lot of different hypotheses like this of what techniques do you use, that there is no answer to the question of.

Eric Wilson (12:19.956)
Hmm.

David Broockman (12:40.654)
do issue ads work better or not? Because, well, it depends on the issues that the environment wants you to run on, right? The other kind of conclusion, so then stepping back, is like, OK, well, it's really hard to predict using just kind of general rules of thumb what's going to work well in a particular cycle. The kinds of ads that campaigns make, kind of how different are they from each other, right? So maybe one reason why these theories don't work very well is,

Eric Wilson (12:43.072)
Yeah.

David Broockman (13:06.862)
You know, like an ad is an ad, and if a campaign is going to make an ad, it's going to be as good as it can be, and this messaging stuff doesn't really matter. That's not what we find. And this is something you can actually only do with as much data as we have, to kind of statistically look at a bunch of data like this. We asked this question, which is, if you pull two ads out of a hat, how much better is one than the other in a specific campaign? So, you know, a campaign tests five ads, and if you pull two out of a hat,

how much better should you guess that one is gonna be than the other? And our best guess is the answer is about 50%. So if you make two ads, your best guess should be that one of them is gonna be like 50% more effective than the other. That's a really big deal, right? If you're spending a billion dollars and you can make it go 50% further, that's a big deal. And that's just if you have two ads. So what if you make five ads or eight ads and test all of them? Then your best guess starts to be that actually the best ads might be like twice as good as the average ad. So...

For us, the implication there is really that in a world where it's kind of hard to predict what works, but it is the case that some things work better than others, that really kind of underscores the value of doing this kind of experimentation. You know, again, post-hoc, it makes sense that, oh, 2018 was a good year on issues and 2020 wasn't, but I'll tell you, the Democratic campaigns, they also thought 2020 was a great year for issues and that they could run on a bunch of stuff.

I think it turns out they were just wrong about that. So that to me suggests the importance of using data to guide what our messaging is, because it's hard for us to predict what's going to work.
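
A rough way to see why "test more ads" follows from this finding: if per-ad effects vary but are hard to predict in advance, the best of several tested ads beats the average ad by a growing margin. The simulation below is purely illustrative; the normal distribution and its parameters are assumptions for the sketch, not the model estimated in the paper.

```python
# Purely illustrative: how much better the best of k tested ads should be than
# the average ad, assuming (my assumption, not the paper's model) that per-ad
# persuasion effects are roughly normally distributed and hard to predict.
import random

random.seed(1)

MEAN_EFFECT_PP = 1.0  # assumed average effect of an ad, in percentage points
SD_EFFECT_PP = 0.5    # assumed spread across the ads a campaign might make
N_SIMS = 50_000

def draw_effect() -> float:
    """One ad's true effect; floored at a small positive value for the ratio."""
    return max(random.gauss(MEAN_EFFECT_PP, SD_EFFECT_PP), 0.05)

def best_over_average(k: int) -> float:
    """Expected ratio of the best ad's effect to the average, testing k ads."""
    total = 0.0
    for _ in range(N_SIMS):
        effects = [draw_effect() for _ in range(k)]
        total += max(effects) / (sum(effects) / k)
    return total / N_SIMS

for k in (2, 5, 8):
    print(f"test {k} ads -> best ad is ~{best_over_average(k):.2f}x the average ad")
# With more spread across ads (a larger SD_EFFECT_PP), the payoff to testing grows.
```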

Eric Wilson (14:49.216)
It reminds me of the old joke that half of my advertising works, the problem is I just don't know which half. And so testing is how you get at that answer. You're listening to the Business of Politics show. I'm speaking with Professor David Broockman about his recent study on experimentation with political TV advertising. Now I want to dive into the conclusions from your research. First, you found that there is a, quote, small but politically meaningful variation in ads' persuasive

David Broockman (14:54.83)
Yes.

Yeah.

Eric Wilson (15:18.432)
effects." Decode that for us and explain the significance.

David Broockman (15:23.97)
Yeah, so what that's sort of trying to say is that when you look at the effects of these ads in surveys, they're quite small, but they're still probably way overstated. The effects of an ad on someone when you ask their vote choice immediately afterwards is going to be way larger than the effects out in the real world. And so we see these small effects that in truth are almost surely even smaller out in the real world.

And so we kind of want to emphasize, we're not saying that voters are really easy to manipulate. The vast, vast majority of people just don't change their mind. But elections are won on the margin. And so what we see statistically is that it seems like some ads are meaningfully better than others. So a really small number, if you double it, is still a really small number. But doubling the effects of an ad matters a lot when you show it a lot of times to millions of people.

So that's sort of what we would say about ad testing. It's funny, some of the people in this world of AI are like, well, if AI helps create political propaganda, is that just gonna brainwash everybody? And I would say, no, people are very hard to persuade. But even having said that, if you're in an election that's within, you know, half a percentage point, those close elections are where this stuff can make a difference. And so you don't... yeah, I was just gonna say, you don't need big effects to

Eric Wilson (16:47.008)
No, you go ahead.

David Broockman (16:51.118)
to do that, right?

Eric Wilson (16:52.192)
So when you're talking about millions of votes cast, tens of thousands of votes can really make a difference, as we've seen time and time again. You alluded to this a little bit earlier, but when it came to understanding the effectiveness, the content of the ads, what patterns, if any, did you find?

David Broockman (17:12.462)
Yeah, so it's funny that there's a ton of these patterns that we looked for and we just keep on coming up empty. That basically, you can sort of squint and say like, oh, well, maybe this hypothesis works well in two of the elections, but maybe the other elections, it doesn't work so well. There is a practitioner, Aaron Strauss, who I respect a lot, who...

had a nice kind of thread on Threads about this. And there are actually some hypotheses that might work a little bit more consistently down ballot if you don't look at the presidential. And so I think maybe there are some generalizable lessons sometimes. But from my point of view, I think the key generalizable lesson that I see is just the importance of doing this ad testing. One other thing that we've seen, that we're working on writing up now, but that I worry a lot about in the practitioner space, is that

when a lot of practitioners hear ad testing or data, what they think of is doing traditional polling or focus groups. And in a lot of those polls, what you'll do is ask people to self-report whether or not something would persuade them. So here's an ad. Did you like the ad? Did that ad persuade you? Does that make you more likely to vote? And so we have another project in progress, myself and my colleague Josh Kalla, where we show that that stuff just doesn't work,

Eric Wilson (18:14.738)
All right.

David Broockman (18:35.758)
that in particular, when you look at the effects of the ads as measured in a proper, rigorous experiment, they are basically uncorrelated with what people will tell you persuades them. And I think that's because people will say, like, oh, that would never persuade me. But part of why is that, in an experiment, right, I just told you the effects are really, really small. That means that 99% of the people in your data are kind of irrelevant,

Eric Wilson (18:49.12)
Everyone hates negative ads, if you ask them.

David Broockman (19:05.71)
because they're gonna vote your way or the other way. Whatever their vote is, it's not gonna change as a result of the ad. And we're trying to understand this one or 2% of people who are persuadable: what's persuading them? And in an experiment, you've got those 98 or 99% of unpersuadable people equally represented in both groups because of randomization. Just like in the COVID trials, the people who would never be exposed to COVID...

Like there's an equal number of them in the two comparison groups because of randomization, right? And I think one of the problems with these studies where you ask people like what would persuade you is like you don't know who those one or 2% of people are who are actually persuadable. And so you're asking like everybody what would persuade them and you know, the vast majority of your data is coming from people that are not persuadable. And so it's like,

it's sort of like, everyone kind of knows that if you walk into a DSA meeting or walk into a Tea Party meeting or whatever, that would be a bad group to ask what's going to persuade persuadable voters. But that's actually what you're doing when you just ask a general sample to do this. And it's very hard to know exactly who is persuadable in a given election. And so I think our best way to do it is with these experiments where we randomly assign two groups, and

That way we've got equal numbers of DSA and Tea Party and whatever members in both groups and they're not gonna contribute to our estimate of what ads are best if they're truly not persuadable.
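
A toy simulation of the logic Broockman describes here; all shares and effect sizes below are assumptions for illustration, not estimates from the study. When only about 2% of voters are persuadable, a randomized comparison of treatment and control still recovers the small true average effect, even though a survey question asking everyone "would this persuade you?" is answered mostly by people whose votes the ad could never move.

```python
# Toy illustration (shares and effect sizes are assumptions, not the study's):
# with ~2% persuadable voters, a randomized experiment still recovers the small
# true average effect, because unpersuadable voters land equally in both groups.
import random

random.seed(2)

N = 100_000                     # respondents per arm
PERSUADABLE_SHARE = 0.02        # ~2% of voters can actually be moved
BASE_SUPPORT = 0.45             # support rate among unpersuadable voters
WIN_RATE_IF_PERSUADABLE = 0.50  # assumed: ad wins half the persuadables it reaches

def votes_for_candidate(saw_ad: bool) -> bool:
    if random.random() < PERSUADABLE_SHARE:
        # Persuadable voter: supports the candidate only if the ad reached them.
        return saw_ad and random.random() < WIN_RATE_IF_PERSUADABLE
    # Unpersuadable voter: fixed support rate, ad or no ad.
    return random.random() < BASE_SUPPORT

treated = sum(votes_for_candidate(True) for _ in range(N)) / N
control = sum(votes_for_candidate(False) for _ in range(N)) / N

print(f"experimental estimate: {100 * (treated - control):+.2f} pp")
print(f"true average effect:   {100 * PERSUADABLE_SHARE * WIN_RATE_IF_PERSUADABLE:+.2f} pp")
# Asking all respondents "would this persuade you?" mostly measures the 98%
# whose votes the ad could never change.
```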

Eric Wilson (20:37.92)
I can hear skeptical listeners saying to themselves right now, this is all well and good in test conditions, but in the real world where you have other campaigns competing, lots of different messages flying around, it's a different story. So what can your research tell campaigners about how to allocate resources to testing their messages?

David Broockman (20:59.374)
Yeah. So in our paper, we do, it's a very...

kind of sketchy, just back-of-the-envelope analysis to say, OK, based on what it costs to make an ad, what it costs to hire a firm like Swayable, and what campaigns are spending on media, roughly, and given what we're finding, roughly what percent of a media budget should be spent literally just on making ads and testing them before you actually run them. And the answer we get is in the 10% to 15% range.

So that's a lot. And so basically we're saying, right, exactly, if you're running a $10 million ad buy, it's actually better for you to not run a million dollars of ads and instead spend that million dollars going and making a bunch of ads and testing a bunch of them. Throw 20 or 30 ads at the wall, see what's best, because the best two might be really good. So...
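
To make the back-of-the-envelope flavor of this concrete, here is a hypothetical version of the tradeoff with entirely made-up numbers; the budget, per-ad cost, and the assumed lift from picking the best of roughly 25 tested ads are placeholders, not figures from the paper. The point is the shape of the comparison, not the specific percentage.

```python
# Hypothetical back-of-the-envelope comparison; every number is a placeholder
# assumption (budget, ad cost, lift), not a figure from the paper.
MEDIA_BUDGET = 10_000_000  # total persuasion budget, in dollars
COST_PER_AD = 50_000       # assumed cost to produce and test one ad
N_TESTED = 25              # "throw 20 or 30 ads at the wall"
BEST_AD_LIFT = 1.8         # assumed effect of the best tested ad vs. an average ad

# Scenario A: make one untested ad and spend everything airing it.
persuasion_untested = (MEDIA_BUDGET - COST_PER_AD) * 1.0

# Scenario B: divert part of the budget to making and testing many ads,
# then air the winner with whatever is left.
testing_spend = N_TESTED * COST_PER_AD
persuasion_tested = (MEDIA_BUDGET - testing_spend) * BEST_AD_LIFT

print(f"testing spend: {testing_spend / MEDIA_BUDGET:.0%} of the budget")
print(f"untested ad:   {persuasion_untested:,.0f} persuasion units")
print(f"best tested:   {persuasion_tested:,.0f} persuasion units")
# Under these assumptions the tested campaign delivers more total persuasion,
# even after giving up airtime dollars to pay for testing.
```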

Eric Wilson (21:57.568)
That's really revolutionary what you're saying. So I want to make sure that settles in with people, that 10 to 15% of what you're going to spend on a media buy is more effectively spent on testing.

David Broockman (22:12.302)
Yeah, and I want to give two caveats to that. One is that our analysis assumes something, in fact a big assumption of the whole analysis that I want to be really clear about, which frankly we just don't know, and it's something I'm hoping we'll get more data on soon: the question of how predictive these ad testing platforms are of what actually works in the real world. Now, there's some reason for optimism that these ad testing platforms, if they're worth their salt, are going to be

testing among actual voters who actually live in the state or the district where you're doing the testing. On the other hand, there is something really artificial about these captive ad experiments. There might be some ads that work better in the short run, and maybe other ads don't work as well in the short run but are more memorable in the long run, for example. So I think if these ad testing platforms are not,

to the extent that they're not predictive of what really happens in the real world, then that obviously lowers the value of testing to some extent, depending on how strong that relationship is. So I just want to be clear that that 10% to 15% number is very back of the envelope, and it's very much based on this assumption that we can trust what the ad testing platforms tell us, which, again, I think there are some reasons for optimism about, but there are also reasons to doubt it to some extent.

Eric Wilson (23:29.696)
You also said something interesting there that I want to peel back, which is you mentioned testing more variations of the ads than I had imagined. So you sort of said 20 to 30 variations. Are we talking about broad swings or is it the kind of A/B testing that we're doing in digital marketing where we test a red button or a blue button?

David Broockman (23:51.022)
Yeah, I mean, that's a great question that we did not look at. So we didn't, in our study, code the ads for how different they are from each other. My sense, just having watched a bunch of them, is that they're really different. It's like, are you talking about your biography? Are you talking about healthcare? Are you talking about gun control? These ads differ in a bunch of different ways from each other. And so that would be my suggestion, to say, start like...

And I just think it's what my PhD advisor told me: data exists to torture us. And one of the things he meant by that is, no matter what you expect going in, you find something else. And that's definitely been my experience. And so I think coming in, being willing to be surprised and saying, look, you know, a lot of us think it's silly to try these three ads, but look, let's just see what happens. And sometimes you get surprised. I think the other thing that...

David Broockman (24:51.534)
I do think is a dynamic that exists in real campaigns too, is that in real campaigns, you don't just do like one ad test and then dump your entire media budget on that ad test. What happens, right, is say you do an ad test or you're like, whoa, like everyone thought it would be a bad idea to test that ad, we tried it anyway, turns out it was really good. Let's make more ads like that. Or like, oh, what is, you know, oh, if voters are really responding to that, like let's do kind of like, that's gonna change what we're making in the future. And so there's this kind of like,

iterative process, you know, just like how research works, where it's not just a one-shot thing. You get data, and then that changes what you do. You get more data, and it's kind of a back and forth. That kind of dynamic learning is not something that we have modeled out, because we don't have great data on how that works. But my sense is that that's something that happens in a real campaign too.

Eric Wilson (25:44.596)
All right, David, what what assumptions or ideas about political advertising did you have going into this study that you've had to update as a result of these findings?

David Broockman (25:55.886)
Yeah, I will say, I've covered a lot of it already. So I'll say one more thing, which I think is kind of interesting, which is about how AI is going to change all of this, which is that a lot of the cost of these ad tests is actually just making the ads. And so I do think it's interesting to think about in a world where we can just make ads even more cheaply.

and how does that affect all of this? And again, I'm not so worried about, I don't know, the deepfake stuff. I mean, I think there are things to worry about, but I think we're not gonna see an epidemic of campaigns making up things that their opponents said. I worry about Russia doing that. But at least in terms of legal, ethical campaigning, I do think how AI changes all this will be really interesting.

There's also a question of, will AI do a better job than we can at predicting the effects of ads? And that I don't think we have a great answer to yet, but I think is something I know a bunch of people are looking at.

Eric Wilson (27:07.04)
My thanks to Professor David Broockman for a great conversation. You can learn more about him in our show notes, and I'll include a link to that study. It's very interesting reading. If this episode made you a little bit smarter (and it must have, because we're talking with one of the foremost researchers in political science right now) or gave you something to think about, all we ask is that you share it with a friend or colleague.

You look smarter in the process and more people learn about the show, so it's a win-win all around. Remember to subscribe to the Business of Politics show wherever you listen to podcasts so you never miss an episode. You can also sign up for email updates on our website and view previous episodes at businessofpoliticspodcast.com. With that, thanks for listening. We'll see you next time.

Eric Wilson
Political Technologist

Managing Partner of Startup Caucus