Dr. King and his colleagues, Benjamin Schneer, of Florida State University, and Ariel White, of the Massachusetts Institute of Technology, spent almost five years on the study, with the first three years dedicated to observing, learning from and building trust with journalists. Key to getting them on board, he said, was limiting his team’s involvement to a part of the process that is often arbitrary: the timing of publication.
“In some sense, you’re flipping the coin already,” he said, noting that for many non-breaking stories, publication timing is subject to the whims of editors or the news cycle.
At all times, the publications retained the right to bow out of the experiment, shelve the stories or publish them when they wanted.
The study is not without its critics. Creative as it may be, the paper overstates its conclusion, said Kathleen Hall Jamieson, who studies political communication and is the director of the Annenberg Public Policy Center at the University of Pennsylvania.
“Is this methodologically ingenious? Yes. Do we know whether or not the findings are substantively important? Not based on the disclosed information,” she said.
Twitter is just one social media platform, and social media itself is only one venue in which the national conversation takes place. Tweets also hardly amount to discussion, Ms. Jamieson said. Many people simply share links on Twitter, offering, at best, a few lines of commentary.
Without seeing the content of the articles or the tweets, it’s difficult to judge the study’s findings, she said. (While the authors provided the names of the outlets that participated in the experiment, they withheld the articles involved to protect the reputations of the publications.)
To those who participated, though, the experiment offered a chance to better understand their influence, a crucial issue for media organizations.
“When we had the opportunity to actually measure impact in a new way, we were really, really excited. This is core to our mission,” said Jo Ellen Green Kaiser, executive director of the Media Consortium, a network of independent news outlets whose members accounted for the majority of those involved in the study.
While most of the outlets the researchers worked with were small, independent publications, such as Truthout or In These Times, the study included some better-known outlets, too, including The Nation, The Progressive, Ms. Magazine and Yes! Magazine, according to the authors. In all, 33 outlets participated in the final experiment, though more than a dozen others took part in earlier trial runs, which were designed differently. The authors did not say which publications participated in which part of the study.
The researchers were principally involved at only two points in the publication process: the beginning and the end.
Each experiment began with them choosing one of 11 broad policy areas, such as food, immigration, reproductive rights or jobs, which had been identified as already being of interest to the news organizations.
The researchers then asked a handful of outlets to volunteer to collaborate, in groups of two to five, on stories of their own choosing related to the topic. For example, the authors said, with technology as a topic, the group might decide to write pieces about how Uber drivers feel about driverless cars.
The researchers then chose a two-week window in which to study discussion online, asking the outlets to publish the stories during one of the two weeks, chosen at random. They would then compare Twitter discussion of the topic during the week in which the pieces ran with discussion during the week in which they did not.
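That randomized design can be sketched in a few lines of Python. This is an illustration, not the researchers' actual code: the function names are invented here, and the tweet counts are hypothetical placeholders, not figures from the study.

```python
import random


def assign_publication_week(seed=None):
    """Randomly pick which of the two weeks the outlets publish in
    (the 'treatment' week); the other week serves as the comparison."""
    rng = random.Random(seed)
    return rng.choice(["week_1", "week_2"])


def estimated_effect(tweets_publication_week, tweets_comparison_week):
    """Difference in topic-related tweet volume between the week the
    stories ran and the week they did not."""
    return tweets_publication_week - tweets_comparison_week


# Hypothetical counts, for illustration only.
treatment_week = assign_publication_week(seed=42)
effect = estimated_effect(tweets_publication_week=10_900,
                          tweets_comparison_week=10_000)
```

Because the publication week is chosen by coin flip, any steady week-to-week differences in Twitter chatter should wash out on average, leaving the stories' own effect.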
The stories, typically published on Tuesday, could come in any form, be they straight news, investigations, interviews, opinion pieces, videos or podcasts. The outlets treated them no differently, and the researchers said that, as far as they were aware, their involvement went unnoticed by readers. Awareness of the study varied at each outlet, but editors and reporters were often informed.
“We really had to get whole editorial teams on board, and then often the reporters knew, too,” Ms. Kaiser said.
In the end, the authors conducted 35 experiments for the study over a year and a half beginning in October 2014.
The authors tried to anticipate some criticisms, too. Many tweets are created by bots, they acknowledged, but they found that bot traffic was consistent each week, making it essentially background noise. The researchers also avoided weeks when known world events, say a planned presidential speech on immigration, might have influenced the results.