Sasha Issenberg's The Victory Lab does an excellent job chronicling the still-minimal nexus between political scientists and political consultants -- covering not only the recent work of the Obama campaigns but also the long history of that relationship. As Issenberg portrays it, this sort of applied political science has basically always been heretical on some level: traditional political scientists view dabbling in the applied world as inherently unscientific, while consultants dismiss the academics as paper-writing eggheads who don't understand how real politics works.
Issenberg's history of this subfield begins in the early 20th century with Harold Gosnell's research at the University of Chicago. Gosnell sent a variety of mailers to Chicago households to determine what sorts of messages boosted voter turnout. His approach was entirely consistent with what we now consider good social science, but much of the discipline was uncomfortable with such work. Indeed, the resistance predated him: at the 1910 APSA conference, Woodrow Wilson objected to the very term "political science," arguing that human relationships "are not in any proper sense the subject matter of science." Harvard's president Abbott Lawrence Lowell described politics as "an observational, not an experimental science." Few would follow in Gosnell's path until decades later.
Issenberg also charts the development of several political consultants, noting that field's disincentives against embracing scientific methods. First, campaign budgets are usually quite tight, and any funds devoted to research are typically considered wasteful. Second, clients usually have little interest in research -- their priority is to win the current race, not to worry about future ones. Third, there's a real resistance to creativity in the field. As direct mail legend Hal Malchow notes, "If you do something different, everyone will point at the thing you did different and say that's why you lost. So if you're the campaign manager you don't do anything different. If you follow the rule book strictly they can't blame anything on you." Consultant Mark Grebner is similarly critical of campaigns, which tend to repeat the same tactics each cycle without evaluating whether they work: "Why do people go out and sing Christmas carols before Christmas? Because they've always done it!"
Ultimately, it took a few established consultants with a bit of money to spare and an itch to see if they could improve the craft -- people like Matt Reese (whose class I took in the mid-90s), Mike Podhorzer, and Malchow -- along with political scientists like Sam Popkin, Daron Shaw, Chris Mann, Don Green, and Alan Gerber, who were willing to risk sullying their academic reputations a bit by consulting for campaigns. It also took a few candidates like Rick Perry and Barack Obama who were willing to let the scientists experiment with their campaigns in exchange for information that would be useful in subsequent elections.
Campaigns are still nowhere near as sophisticated as they could be -- my impression is that Apple, Coca-Cola, and Volkswagen are still much shrewder at figuring out what consumers want and exploiting that -- but they've nonetheless advanced a lot in just the past few years thanks to these and other pioneers. The 2008 and 2012 Obama campaigns, in particular, were staffed by relentless experimenters who were constantly trying to figure out the best way to reach untapped voters and convert their sympathies into Election Day behavior. The consultants who favor more traditional dogmas are quickly being left behind (or just getting jobs as commentators on the news networks).
One thing that Issenberg doesn't really get into in this book is just how much of an edge these new data-based campaign approaches provide for their candidates. That's a hard question to answer. To some extent, we can credit data-based campaigning with saving campaigns money -- they don't have to waste precious funds on activities known not to work. But how much of an advantage are we talking about here? The research I did suggesting that Obama's 2008 field offices were worth roughly half a percentage point of the vote could indicate that the new campaign style mattered -- or perhaps it was just the volunteers' enthusiasm and overwhelming numbers that did it. And there's some recent evidence that Obama's 2012 campaign, undoubtedly more technologically skilled than Romney's, gave him no measurable advantage at all. (I'll have more on this soon.)
In some ways, the advocates for data have been lucky: Obama won the presidency twice, Rick Perry won a contested gubernatorial race, and we never really got to see a full test of Perry's presidential infrastructure, as that campaign died due to candidate-inflicted injuries. As more campaigns embrace these experimental methods and are willing to devote some funds to them, we're going to see some of them lose -- not because they used the methods wrong, but simply because the election fundamentals were against them. Yet the methods will likely get blamed. Then we'll know whether this is really the new style of campaigning or just the latest fad.