

Some of the ways Americans have defined and wrestled with the appropriate role of journalists in our republic.
Just the Facts?
The Rise and Fall of Facts
Tracing the evolution and challenges of fact-checking in journalism.

In his 1964 Harper’s Magazine article on fact-checking, “There Are 00 Trees in Russia,” Otto Friedrich related the story of an unnamed magazine correspondent who had been assigned a profile of Egyptian president Mohamed Naguib. As was custom, he wrote his story leaving out the “zips”—facts to be filled in later—including noting that Naguib was “such a modest man that his name did not appear among the 000 people listed in Who’s Who in the Middle East” and that he elected not to live in the royal palace, surrounded “by an 00-foot-high wall.” The editor then sent the article to a fact checker in Cairo to fill in the zips. No answer came and, with the deadline looming, the editor, fuming, rewrote the story so the facts weren’t needed. A week later, the magazine received a telegram from the fact checker:
Am in jail and allowed to send only one cable since was arrested while measuring fifteen foot wall outside farouks palace and have just finished counting thirtyeight thousand five hundred twentytwo names who’s who in mideast.
Friedrich’s anecdote reveals the great truth of fact-checking: while facts are sacred to writers, readers, and, above all, editors, they are sometimes more work than they’re worth. The importance of fact-checking—particularly when it comes to inconsequential detail—is based on the long-held theory that if you’re fastidious about the little things, the reader will trust you with the big things. But the history of fact-checking suggests that too often, the accumulation of verifiable minutiae can become an end unto itself.
READ ENTIRE ARTICLE AT COLUMBIA JOURNALISM REVIEW
An Equal Say
Where does truth fit into democracy?
One of the stranger rituals performed by the media in the Trump era has been to keep an obsessive count of the president’s lies since he took office. By September 2018, The Washington Post reported, he had already passed the 5,000 mark, including a new one-day record of 125 on September 7. The Poynter Institute’s nonpartisan fact-checking project PolitiFact keeps a running list, and The New York Times did likewise throughout 2017.
There is a certain pointlessness to these exercises. Anyone who has paid even the slightest attention to Donald Trump should recognize that, since long before his presidential campaign, he lies as easily as he breathes. He says whatever he thinks will get him what he wants, and whatever he thinks he can get away with. But if there is nothing truly revelatory about the number of Trump’s lies, keeping track of them still serves a variety of symbolic purposes for the commentators who repeat the steadily mounting figures with gleeful outrage. One is simply to underline the extent to which this is not a normal presidency. Another, far more debatable, is to hold up Trump as a symptom and symbol of what is often called the “post-truth era.”
Take, for example, Michiko Kakutani’s recent book The Death of Truth, which cites a figure for Trump’s lies (2,140 in his first year in office) on its third page. His questionable attitudes toward truth are, Kakutani tells us, “emblematic of dynamics that have been churning beneath the surface of daily life for years.” The goddess of truth has fallen mortally ill, her book charges, and a dizzying list of perpetrators is responsible for poisoning her: Fox News; social media; the New Left; “academics promoting the gospel of postmodernism”; the narcissism of the baby boomers; and “the selfie age of self-esteem.”
Kakutani and the many pundits and critics who have offered up a similarly broad cultural diagnosis have obvious incentives for doing so. It lets them pose as serious public intellectuals who can see beyond the froth of the current news cycle. It gives them the chance to display their wide-ranging and eclectic reading (in a single paragraph, Kakutani name-checks Foucault, Derrida, Heidegger, Nietzsche, Thomas Pynchon, David Bowie, Quentin Tarantino, David Lynch, and Frank Gehry). And, not least, it exonerates them from the charge that they are nothing but liberal ideologues by allowing them to assign blame to both sides in the ongoing American culture wars. Yes, the responsibility for the death of truth may lie, in part, with Fox News and the GOP, but it also lies with the New Left and those dreadful postmodernist academics. “Postmodernist arguments,” Kakutani explains, “deny an objective reality existing independently from human perception.” And since one perception is as good as another, anything goes. Michel Foucault and Donald Trump: brothers-in-arms.
READ ENTIRE ARTICLE AT THE NATION
Free from the Government
The origins of the more passive view of the freedom of the press can be traced back to Benjamin Franklin.

Over the past several weeks, questions about the role of the press in politics and what exactly “freedom of the press” means have come to the forefront of public debate. Politicians have begun to attack press freedom as a shield for hostile reporters. For their part, journalists have been thinking out loud about the way they practice their craft, and worrying about the impact of the new presidential administration on it. In so doing, they are restoring the eighteenth-century formulation that emphasized freedom from government interference as the key element of press freedom.
From the founding of the republic, freedom of the press has been seen as a cherished part of a functioning democracy. But what freedom of the press means has been neither clear nor consistent over time. Since World War II, Americans have tended to view freedom of the press as a license for journalists to investigate any story in the public interest without threat of retaliation. Journalists suggested that they earned that expansive right by holding themselves to a standard of objectivity, through which they pursued stories with determination but without interjecting their opinions.
In debates since late last year, however, many journalists have returned to the arguments that newspaper editors and printers made in the eighteenth century, identifying press freedom not as active participation in politics, but simply as the right to reflect the words of others. Crucially, now, as then, that more passive construction of the role of the press provides plausible deniability from political retaliation, while still letting journalists shape the content of the news.
We can trace the origins of this more passive view of the freedom of the press to Benjamin Franklin, whose illustrious career began in the newspaper business. When Franklin began publishing the Pennsylvania Gazette in Philadelphia in 1729, most newspapers in the American colonies were “published by authority,” which is to say that they carried the blessing of the colonial government to publish news. Drawing on seventeenth-century English traditions, most printers opposed direct interference by officials (that is the sense in which they believed their presses were “free”), but they nonetheless saw their papers as chronicles of public events rather than forums for public debate. Only when a town grew large enough to support a second newspaper did this perspective shift, with the new entrant into the market taking a sharper, more oppositional approach to the news.
Franklin had experience with this oppositional approach as a teenaged apprentice to his brother James, who published the New England Courant in Boston. In 1721, in the wake of a smallpox epidemic in the town, Boston’s government officials proposed to inoculate the population to prevent further outbreaks. The move was deeply controversial because, unlike vaccination, inoculation introduces a live virus into a patient, thus actually making the person sick with the disease.
READ ENTIRE ARTICLE AT WE’RE HISTORY
The Invention of Objectivity
The view from nowhere came from somewhere.

The Times was not an exciting read. But Ochs treated its reputation for dullness as an asset, not a liability. He downplayed its editorials, expanded anything having to do with news, financial news in particular. He invested in the paper’s legal coverage and began listing the day’s fires and real-estate deals. Rival papers such as The Sun and the Tribune sniped that the Times had somehow managed to become even more sleep-inducing. By the time they realized that the Ochs strategy was working, he’d surged ahead of them. Even the mighty Joseph Pulitzer felt compelled to bring his New York World around.
What Ochs had realized was that his rivals had undervalued the demand for timely, comprehensive, and trustworthy information. He’d correctly judged that readers, or at least “quality” readers (as they were called), were fed up with sensationalism.
His tidy slogan for the Times—“All the news that’s fit to print”—made it clear what he was offering. His rapid turnaround of the Times is one of the great success stories in the history of journalism.
The air of authoritative impartiality with which Ochs and Van Anda imbued the Times is now under assault from both ends of the political spectrum. The right accuses the so-called mainstream media of abandoning neutrality, while the progressive left argues that it should be abandoned. No one is truly unbiased, the left notes, and so journalists might as well declare where they stand. “Transparency is the new objectivity,” this argument goes. Some advocates even profess to believe that ditching forced postures of impartiality will help restore trust in media, rather than erode it further.
But the case for more bias in reporting is very dodgy, and one suspects that it has gained traction mostly because what passes for journalistic evenhandedness these days is a pale imitation of the version embraced by idealists such as Ochs and Van Anda.
In today’s opinion-driven news environment, the other side of the story is often presented more out of a sense of obligation than true curiosity. Some outlets have given counternarratives more consideration than they deserve. Being dismayed by all the false equivalences and foregone conclusions makes sense, but giving up on such an important guiding principle after experiencing the cheapened version of it is like renouncing all forms of air travel after flying easyJet.
It’s worth remembering that, back when Ochs and Van Anda began working together, the model of objective journalism that is now derided as the “view from nowhere” was not the default. Throughout the 19th century, moderation and impartiality were virtually unknown in popular media. Many newspapers didn’t just lean one way or the other politically—they answered directly to party bosses. Standards of accuracy were lower. “Buncombe and fraud” were facts of life.
READ ENTIRE ARTICLE AT THE ATLANTIC
The Story behind the First-Ever Fact-Checkers
Here’s how they were able to do their jobs long before the Internet.

At TIME Magazine’s 20th anniversary dinner, in 1943, the magazine’s co-founder Henry Luce explained to those gathered that, while “the word ‘researcher’ is now a nation-wide symbol of a serious endeavor,” he and co-founder Briton Hadden had first started using the title as part of an inside joke for a drinking club. “Little did we realize that in our private jest we were inaugurating a modern female priesthood — the veritable vestal virgins whom levitous writers cajole in vain,” he said, “and managing editors learn humbly to appease.”
Luce’s audience nearly 75 years ago is not the only group to wonder about the origins of fact-checking in journalism, though the casual sexism of the 1940s would no longer fly. Today, especially amid concern over so-called “fake news” and at a time when it may seem inconceivable that checking an article would be possible without the Internet, it remains a natural question: How did this journalistic practice begin?
And, as it turns out, that story is closely linked to TIME’s past.
In the years between 1923, when TIME’s first issue was published, and Luce’s speech, journalistic fact-checking had gone from a virtually unknown idea to standard practice at many American magazines. (These days, journalistic practices aren’t necessarily country-specific — Der Spiegel, for example, is known for having one of the world’s biggest fact-checking departments — but that wasn’t the case a century ago, and this particular kind of checking was an especially American phenomenon.)
How Can the Press Best Serve a Democratic Society?
In the 1940s, scholars struggled over truth in reporting, the marketplace of ideas, and the free press. Their deliberations are more relevant than ever.

Henry R. Luce, the publisher of Time Inc., first proposed engaging a panel of scholars on the state of the American press in December of 1942. He suggested the idea to his friend Robert Maynard Hutchins, a legal and educational philosopher who, just over a decade earlier, at the age of thirty, had become the president of the University of Chicago. With the country mobilized for the fight against totalitarianism, Luce envisioned a philosophical inquiry that would reaffirm the foundations of freedom in the United States. Distrust of the media had become pervasive, and Luce believed that the public needed to better understand the purpose and function of the press. At first, Hutchins demurred, contending that the project would be too difficult to organize. Finally, in the fall of 1943, after months of Luce’s cajoling, he agreed to lead the effort.
On December 15, 1943, a group of academics and policymakers gathered for the first time at the University Club, in New York. Luce’s initial idea had been to enlist the University of Chicago’s philosophy department, but Hutchins went in a different direction, selecting luminaries from a range of disciplines. The group included Reinhold Niebuhr, the theologian and ethicist; Charles E. Merriam, one of the nation’s leading political scientists; Arthur M. Schlesinger, Sr., a Harvard historian; Archibald MacLeish, the librarian of Congress and a Pulitzer Prize-winning poet; and William Ernest Hocking, a renowned philosopher of religion. None were journalists; Hutchins believed that the industry needed to be excavated by outsiders. The thirteen Americans and four international advisers, whom Hutchins called the Commission on the Freedom of the Press, would spend nearly three years evaluating American journalism. In a statement of principles, Hutchins told them that their purpose was to answer three questions: “What society do we want? What do we have? How can the press . . . be used to get what we want?”
In “An Aristocracy of Critics: Luce, Hutchins, Niebuhr, and the Committee that Redefined the Freedom of the Press” (Yale), Stephen Bates, a journalism professor at the University of Nevada, Las Vegas, re-creates the panel’s deliberations. As fascism advanced in Europe, there was a palpable sense that liberties were imperilled at home; in an essay published in the Times, Henry A. Wallace, Franklin D. Roosevelt’s Vice-President, compared fascism to an “infectious disease” and warned against the “deliberate, systematic poisoning of the public channels of information.” Commission members worried about the forces of division in American society, the power of tribalism to warp political debate, and the press’s role in provoking discord. Americans were inhabiting “different worlds of fact and judgment,” John M. Clark, a Columbia University economist, said. Hocking, the philosopher of religion, considered the way a publication and its readers could create a closed system that was rage-filled, self-reinforcing, and profitable; another commission member, George Shuster, the president of Hunter College, warned that a one-sided press could “pull the house apart.”
READ ENTIRE ARTICLE AT THE NEW YORKER
The Birth of the American Foreign Correspondent
For American journalists abroad in the interwar period, it paid to have enthusiasm, openness, and curiosity, but not necessarily a world view.

But what’s most important about these characters, as the author notes in the prologue and epilogue, is that they were all Americans abroad while the U.S. was still in its “stumbling global ascendancy.” The role of the foreign correspondent would change dramatically after the Second World War. “As the United States sought to exert its dominance globally, remaking the world to suit, foreign correspondents became more entangled in that project, either as critics or as sympathizers,” Cohen writes. But, for this lot, both their impressions and advocacy were a little less loaded. They rarely had deep knowledge of any foreign countries or languages before they went abroad; Thompson’s hard-won German remained “ungrammatical” in the nineteen-forties, during the war. The Chicago boys, even more naïve at the outset, were empty vessels, learning about the world as they wrote about it.
Is being an empty vessel an asset to the foreign correspondent? And is having strong beliefs, especially political ones, a detriment to journalistic objectivity? These questions are a major undercurrent of this book, and are most electrically animated in the romantic and professional partnership between John and Frances Gunther. Cohen makes use of a wealth of archival material about these two—letters, diaries, and, in the Freudian cast of their era, dream journals and analysis-session notes—and maps their debates onto the seminal world events happening around them.
John and Frances first collided in Paris, in 1925. John came from a German American family on Chicago’s North Side, the son of a seedy businessman and a doting mother; Frances was born, in 1897, to Jewish immigrants who ran fabric and convenience stores in uptown Manhattan. She attended Barnard, in 1916, where she became the secretary-treasurer of the Socialist Club. After dropping out or being kicked out of three consecutive universities, all the while mixing with such prominent left-wingers as Dorothy Day, she eventually graduated from Barnard, at the age of twenty-four. The Gunthers married in 1927, less than three years after Frances arrived in Moscow. Their first child, Judy, tragically died at just a few months old, in 1929, and their second, Johnny, was born later that year. After that, Frances did much of her reporting vicariously, through her husband’s postings.
It’s evident from the correspondence cited in “Last Call” that Frances had the first-rate analytical mind. Cohen writes:
Unlike John and Jimmy, who liked to emphasize their American freshness and ignorance, Frances had read the classic texts of Marxism with Harold Laski and the Wednesday Socialist Club, attended Dadaist art shows and constructivist theater. . . . She was paying attention to the world.
Among the main characters of the book, she alone understood the ideological background of the emergent Soviet superpower and asked penetrating questions about the larger economic and political forces at work. When John went to Germany during the Nazi rise to power, he was, once again, in over his head, but Frances urged him to pay attention to economics: “The capitalists who were using the Nazis to prop up their own class interests. The industrialists filling up Nazi coffers. The arms dealers supplying the fascists with munitions,” in Cohen’s words.
READ ENTIRE ARTICLE AT THE NEW YORKER
Men in Dark Times
How Hannah Arendt’s fans misread the post-truth presidency.

Arendt saw Nazism and Communism as endeavoring to fashion history into a science, something that acts according to universal laws and can therefore be predicted. A totalitarian leader, she writes, styles himself as the fulfillment of historical destiny and therefore its oracle. She cites Hitler’s announcement to the Reichstag in January 1939: “I want today once again to make a prophecy,” he said. “In case the Jewish financiers . . . succeed once more in hurling the peoples into a world war, the result will be . . . the annihilation of the Jewish race in Europe.” What Hitler means, according to Arendt, is: “I intend to make war and I intend to kill the Jews of Europe.” Totalitarianism voids the distinction between a prophecy and a declaration of intent. As Arendt puts it, once a totalitarian movement has seized power, “all debate about the truth or falsity of a totalitarian dictator’s prediction is as weird as arguing with a potential murderer about whether his future victim is dead or alive.” Just as the murderer can kill his victim to substantiate his claim, a government with total control can, in theory, ensure the accuracy of its predictions.
But total control is a chimera—a prediction about human affairs can only be infallible in a world expunged of all human agency—so its pursuit necessarily culminates in both outward aggression and internal terror. The organization of a totalitarian regime—its concentric circles of power, its bureaucratic systems, its elevated police force, its plans for world conquest—is designed to create “a society whose members act and react according to the rules of a fictitious world.” For adherents of the totalitarian movement, its lies grow impossible to challenge, “as real and untouchable an element in their lives as the rules of arithmetic”—and even the leaders “are convinced that they must follow consistently the fiction and the rules of the fictitious world which were laid down during their struggle for power.” According to Arendt, Hitler felt compelled to play along with his predictions, following conspiracies to their inevitable conclusions, often against reason and even self-interest; he was not content to lie without reorganizing the actual world accordingly.
To accuse Trump of anything so sophisticated is to misread his lies altogether. From the start, Trump’s lies were incidental and reactive, unconstrained by the need for coherence or the pressure to position himself as the culmination of historical trends. Though he retweeted QAnon-linked accounts, he did not explicitly endorse the conspiracy, which he could have harnessed to achieve the kind of “lying world of consistency” Arendt outlines. He invented facts as he needed them, flooding the field with misinformation. He tossed off a lie, and by the time the media had scrambled to fact-check him, he had already moved on to the next one. For the most part, his supporters were undeterred when his lies were unveiled, because they understood he was saying whatever was advantageous, not speaking as an absolute authority. In the end, Trump’s lies were less grand theory than self-aggrandizement—corporate bluster intended to artificially boost his own stock. He tended to inflate the numbers: how much money he was worth, how many people had attended his inauguration, how many votes he had received.
READ ENTIRE ARTICLE AT HARPER’S
“It Was Us Against Those Guys”: The Women Who Transformed Rolling Stone in the Mid-70s
How one 28-year-old feminist bluffed her way into running a copy department and made rock journalism a legitimate endeavor.

“It was just us against the world, and us against those guys.”
In early 1974, Rolling Stone was the epicenter of American youth culture. Not quite seven years into its run, the magazine had widened its focus beyond the stoned musings of rock stars and was offering journalistic deep dives into everything from Patty Hearst’s kidnapping to Karen Silkwood’s murder. This was Rolling Stone’s much-celebrated golden age, a period that helped define New Journalism, breech-birth gonzo journalism, and, quite crucially, formalize rock media’s language, context, and canon.
It was almost entirely men leading this charge. Robin Green was the first woman to write at the magazine, in 1971, but her tenure was brief. In the early 70s, the only women on the editorial floor were secretaries, ambitious young women with master’s degrees and years of experience primarily charged with answering phones. In January 1974, a 28-year-old feminist named Marianne Partridge began to change that, quietly reshaping Rolling Stone from the inside and eventually putting six women on the masthead. Their stories have historically been obscured by the long shadows of the men they worked for and wrangled—Hunter S. Thompson, Joe Eszterhas, Cameron Crowe—but the history these women recall is the story of how Rolling Stone became a true journalistic endeavor, and the story of women learning to speak for themselves decades before topics like sexual harassment and equal pay became mainstream.
READ ENTIRE ARTICLE AT VANITY FAIR
‘The Academy Is Largely Itself Responsible for Its Own Peril’
On writing the story of America, the rise and fall of the fact, and how women’s intellectual authority is undermined.

Q. For democracy to work, of course, the people must be well informed. Yet we live in an age of epistemological mayhem. How did the relationship between truth and fact come unwound?
A. I spend a lot of time in the book getting it wound, to be fair. There’s an incredibly rich scholarship on the history of evidence, which traces its rise in the Middle Ages in the world of law, its migration into historical writing, and then finally into the realm that we’re most familiar with, journalism. That’s a centuries-long migration of an idea that begins in a very particular time and place, basically the rise of trial by jury starting in 1215. We have a much better vantage on the tenuousness of our own grasp of facts when we understand where facts come from.
The larger epistemological shift is how the elemental unit of knowledge has changed. Facts have been devalued for a long time. The rise of the fact was centuries ago. Facts were replaced by numbers in the 18th and 19th centuries as the higher-status unit of knowledge. That’s the moment at which the United States is founded as a demographic democracy. Now what’s considered to be most prestigious is data. The bigger the data, the better.
That transformation, from facts to numbers to data, traces something else: the shifting prestige placed on different ways of knowing. Facts come from the realm of the humanities, numbers represent the social sciences, and data the natural sciences. When people talk about the decline of the humanities, they are actually talking about the rise and fall of the fact, as well as other factors. When people try to re-establish the prestige of the humanities with the digital humanities and large data sets, that is no longer the humanities. What humanists do comes from a different epistemological scale of a unit of knowledge.
Q. How is the academy implicated in or imperiled by this moment of epistemological crisis?
A. The academy is largely itself responsible for its own peril. The retreat of humanists from public life has had enormous consequences for the prestige of humanistic ways of knowing and understanding the world.
Universities have also been complicit in letting sources of federal government funding set the intellectual agenda. The size and growth of majors follows the size of budgets, and unsurprisingly so. After World War II, the demands of the national security state greatly influenced which fields of study were considered exciting. Federal-government funding is still crucial, but now there’s a lot of corporate money. Whole realms of knowing are being brought to the university through commerce.
READ ENTIRE ARTICLE AT THE CHRONICLE OF HIGHER EDUCATION
How Personal Letters Built the Possibility of a Modern Public
The first newspapers contained not high-minded journalism, but hundreds of readers’ letters exchanging news with one another.

The new medium would spread half-truths, propaganda and lies. It would encourage self-absorption and solipsism, thereby fragmenting communities. It would allow any amateur to become an author and degrade public discourse.
Sound familiar? Such were the anxieties that the invention of printing unleashed on the world as 16th-, 17th- and 18th-century authorities worried and argued about how print would transform politics, culture and literature. The ‘printing revolution’ was by no means universally welcomed as the democratiser of knowledge or initiator of modern thought.
In 1620, Francis Bacon named printing, gunpowder and the nautical compass as the three modern inventions that ‘have changed the appearance and state of the whole world’. For others, this outsized influence was exactly the problem. A few decades earlier, the scholar William Webbe complained about the ‘infinite fardles of printed pamphlets; wherewith thys Countrey is pestered, all shoppes stuffed, and every study furnished.’ Around the same time, the pseudonymous Martine Mar-Sixtus lamented: ‘We live in a printing age,’ which was no good thing, for ‘every rednosed rimester is an author, every drunken mans dreame is a booke’. Printing was for hacks, partisans and doggerel poets.
In The Structural Transformation of the Public Sphere (1962), Jürgen Habermas argued that print enabled the establishment of an arena of public debate. Print first made it possible for average people to come together to discuss matters of public concern. In turn, their knowledge and cooperation undermined the control that traditional royal and religious powers had over information. Habermas pointed to the early 18th century as a crucial moment of change. At that time, newspapers and periodicals exploded in both number and influence. ‘In The Tatler, The Spectator and The Guardian the public held a mirror up to itself,’ Habermas noted of the impact of a trio of periodicals by the journalists Joseph Addison and Richard Steele. The new publications allowed readers to shed their personal identities as rich or poor, male or female. Instead, in print they could enter into conversation as anonymous equals rationally engaging with the topics of the day.
The NYT’s Jake Silverstein Concocts “a New Origin Story” for the 1619 Project
The project’s editor falsifies the history of American history-writing, openly embracing the privileging of “narrative” over “actual fact.”

On November 9, the New York Times published a new defense of the 1619 Project by Jake Silverstein, editor-in-chief of the New York Times Magazine (“The 1619 Project and the Long Battle Over U.S. History”). Silverstein’s purpose was to prepare public opinion for the release of a book version of the project titled The 1619 Project: A New Origin Story, which was released on November 16. The original magazine edition, masterminded by Times staff writer Nikole Hannah-Jones, suffered devastating criticism after its release in August 2019—criticism that began with the WSWS.
Silverstein has staked his reputation on the 1619 Project. This has gone badly for him. His name will forever be associated with the secretive manner through which the project invented its false and error-ridden historical interpretation, as well as the orchestration of the cover-up that has followed.
Specifically, Silverstein bears responsibility for the exclusion of leading scholars of American history—who would have objected to the 1619 Project’s central historical claims—and the intentional disregarding of objections made by the project’s own handpicked “fact-checkers.” Silverstein penned the devious reply to leading historians who pointed to the project’s errors. He then organized surreptitious changes to the already published 1619 Project, and, when exposed, claimed that it had all been a matter of word choice.
Silverstein’s 8,250-word essay is just the latest in this long line of underhanded journalism and bogus history. Once again, he fails to deal with any of the substantive historical criticism of the 1619 Project—in relation to the origins of slavery, the nature of the American Revolution, the emergence of capitalism and the interracial character of past struggles for equality.
Instead of addressing any of this, and in keeping with the modus operandi of the 1619 Project, Silverstein’s essay piles new layers of falsification on old. If the original 1619 Project falsified American history, Silverstein’s latest essay falsifies the history of American history-writing—and it openly embraces a historical method that privileges “narrative” over “actual fact.”
READ ENTIRE ARTICLE AT THE WORLD SOCIALIST WEB SITE
Do Cartels Exist?
A revisionist view of the drug wars.

A lot of the language used to talk about traffickers and cartels is confusing, perhaps even intentionally misleading. Take “turf wars”—struggles over what in Spanish is called la plaza, the main square. Turf wars go back to the earliest days of Mexican drug trafficking. But it wasn’t traffickers duking it out: it was policemen and politicians. They wanted their piece of a protection racket.
The government war on drugs in Mexico wasn’t an import from the United States, at least not in the beginning. A century ago, an authoritarian Mexican government decided to crack down on a drug that had come to be associated with poor and indigenous users. Introduced in the sixteenth century by the Spanish, who wished to grow hemp for rope, cannabis gradually gained a Mexican name, marijuana, and became a home remedy. It became the drug of choice during the Mexican Revolution. In his myth-busting and perversely enjoyable history The Dope, Benjamin T. Smith relays one theory about where the word originated: “Juan was the name given to the average Mexican soldier. His camp wife was often termed María, María-Juan became Marijuana.” Mexico banned growing and selling the herb in 1920, before the United States did. Mexico also banned the importation of opium, which prompted enterprising farmers to start planting poppies in the Golden Triangle, an area of northern Mexico that now produces a healthy portion of the United States’ heroin.
Once the United States banned marijuana in 1937, running it across the border became good business. Mexican families smuggled alcohol, then marijuana, then cocaine. According to Smith, the Mexican government began cashing in on the trade, too. The first protection rackets emerged in northern Mexico, creating a model that persists today: policemen and politicians take regular payments to look the other way when certain people move drugs, then crack down on their rivals. Everyone is happy: there are arrests to publicize, and those in on the arrangement make plenty of money.
Smith writes that some of the early protection rackets helped fund schools and infrastructure, especially in Ciudad Juárez and Baja California. Later, local politicians tended to line their own pockets. But the traffickers wouldn’t pay up without persuasion. If that proved violent, all the better: protection rackets could be made to look like crackdowns. That’s where the “drug war” violence began. Drugs had to be made illegal; traffickers needed to learn to accept being shaken down. Smith writes of an early governor of Baja California, Esteban Cantú, who decided that imposing the change involved “an unpleasant errand. To prove his antinarcotics credentials and persuade traffickers to pay up, Cantú killed a group of established traffickers.”
READ ENTIRE ARTICLE AT HARPER’S
A Matter of Facts
The New York Times’ 1619 Project launched with the best of intentions, but has been undermined by some of its claims.

No effort to educate the public in order to advance social justice can afford to dispense with a respect for basic facts. In the long and continuing battle against oppression of every kind, an insistence on plain and accurate facts has been a powerful tool against propaganda that is widely accepted as truth. That tool is far too important to cede now.
My colleagues and I focused on the project’s discussion of three crucial subjects: the American Revolution, the Civil War, and the long history of resistance to racism from Jim Crow to the present. No effort to reframe American history can succeed if it fails to provide accurate accounts of these subjects.
The project’s lead essay, written by the Times staff writer Nikole Hannah-Jones, includes early on a discussion of the Revolution. Although that discussion is brief, its conclusions are central to the essay’s overarching contention that slavery and racism are the foundations of American history. The essay argues that “one of the primary reasons the colonists decided to declare their independence from Britain was because they wanted to protect the institution of slavery.” That is a striking claim built on three false assertions.
“By 1776, Britain had grown deeply conflicted over its role in the barbaric institution that had reshaped the Western Hemisphere,” Hannah-Jones wrote. But apart from the activity of the pioneering abolitionist Granville Sharp, Britain was hardly conflicted at all in 1776 over its involvement in the slave system. Sharp played a key role in securing the 1772 Somerset v. Stewart ruling, which declared that chattel slavery was not recognized in English common law. That ruling did little, however, to reverse Britain’s devotion to human bondage, which lay almost entirely in its colonial slavery and its heavy involvement in the Atlantic slave trade. Nor did it generate a movement inside Britain in opposition to either slavery or the slave trade. As the historian Christopher Leslie Brown writes in his authoritative study of British abolitionism, Moral Capital, Sharp “worked tirelessly against the institution of slavery everywhere within the British Empire after 1772, but for many years in England he would stand nearly alone.” What Hannah-Jones described as a perceptible British threat to American slavery in 1776 in fact did not exist.
“In London, there were growing calls to abolish the slave trade,” Hannah-Jones continued. But the movement in London to abolish the slave trade formed only in 1787, largely inspired, as Brown demonstrates in great detail, by American antislavery opinion that had arisen in the 1760s and ’70s. There were no “growing calls” in London to abolish the trade as early as 1776.
READ ENTIRE ARTICLE AT THE ATLANTIC
When Is a Nazi Salute Not a Nazi Salute?
Were the celebrities in this 1941 photograph making a patriotic gesture or paying their respects to Hitler?

The photograph above appeared with Sarah Churchwell’s recent article for the Daily, “American Fascism: It Has Happened Here” (June 22). It shows Senator Burton K. Wheeler, former aviator Charles Lindbergh, and novelist and newspaper columnist Kathleen Norris at a rally in New York’s Madison Square Garden of the isolationist America First Committee (at right, mostly cropped out in this use, is also the pacifist minister and socialist Norman Thomas). Per the information from Getty Images with this photo, one of several similar images, our original caption in the piece read thus: Senator Burton K. Wheeler, Charles Lindbergh, and novelist Kathleen Norris giving the Nazi salute at an America First Committee rally, New York, October 30, 1941. (As can still be viewed via the Wayback Machine.)
A few days after publication, I received an email from a biographer of Wheeler that insisted the senator was not giving a Nazi salute; it was, he wrote, a “Bellamy salute,” a patriotic gesture to the American flag widely used at pledge of allegiance ceremonies. We should certainly correct our caption, I was told, since it was an unmerited slur against Wheeler, who was no fascist or anti-Semite.
READ ENTIRE ARTICLE AT THE NEW YORK REVIEW
The 1619 Project Unrepentantly Pushes Junk History
Nikole Hannah-Jones’ new book sidesteps scholarly critics while quietly deleting previous factual errors.

“I too yearn for universal justice,” wrote Zora Neale Hurston in her autobiography, Dust Tracks on a Road, “but how to bring it about is another thing.” The black novelist’s remarks prefaced a passage where she grappled with the historical legacy of slavery in the African-American experience. Perhaps unexpectedly, Hurston informed her readers that she had “no intention of wasting my time beating on old graves with a club.”
Hurston did not aim to bury an ugly past but to search for historical understanding. Her 1927 interview with Cudjoe Lewis, among the last living survivors of the 1860 voyage of the slave ship Clotilda, contains an invaluable eyewitness account of the middle passage as told by one of its victims. Yet Hurston saw only absurdity in trying to find justice by bludgeoning the past for its sins. “While I have a handkerchief over my eyes crying over the landing of the first slaves in 1619,” she continued, “I might miss something swell that is going on in” the present day.
Hurston’s writings present an intriguing foil to The New York Times‘ 1619 Project, which the newspaper recently expanded into a book-length volume. As its subtitle announces, the book aims to cultivate a “new origin story” of the United States where the turmoil and strife of the past are infused into a living present as tools for attaining a particular vision of justice. Indeed, it restores The 1619 Project’s original aim of displacing the “mythology” of 1776 “to reframe the country’s history, understanding 1619 as our true founding.” This passage was quietly deleted from The New York Times‘ website in early 2020 just as the embattled journalistic venture was making a bid for a Pulitzer Prize. After a brief foray into self-revisionism in which she denied ever making such a claim, editor Nikole Hannah-Jones has now apparently brought this objective back to the forefront of The 1619 Project.
Vacillating claims about The 1619 Project’s purpose have come to typify Hannah-Jones’ argumentation. In similar fashion, she selectively describes the project as a work either of journalism or of scholarly history, as needed. Yet as the stealth editing of the “true founding” passage revealed, these pivots are often haphazardly executed. So too is her attempt to claim the mantle of Hurston. In a recent public spat with Andrew Sullivan, Hannah-Jones accused the British political commentator of “ignorance” for suggesting that “Zora Neale Hurston’s work sits in opposition to mine.” She was apparently unaware that Dust Tracks on a Road anticipated and rejected the premise of The 1619 Project eight decades prior to its publication.
I Helped Fact-Check the 1619 Project. The Times Ignored Me.
The paper’s series on slavery made avoidable mistakes. But the attacks from its critics are much more dangerous.

On August 19 of last year I listened in stunned silence as Nikole Hannah-Jones, a reporter for the New York Times, repeated an idea that I had vigorously argued against with her fact-checker: that the patriots fought the American Revolution in large part to preserve slavery in North America.
Hannah-Jones and I were on Georgia Public Radio to discuss the path-breaking New York Times 1619 Project, a major feature about the impact of slavery on American history, which she had spearheaded. The Times had just published the special 1619 edition of its magazine, which took its name from the year 20 Africans arrived in the colony of Virginia—a group believed to be the first enslaved Africans to arrive in British North America.
Weeks before, I had received an email from a New York Times research editor. Because I’m an historian of African American life and slavery, in New York, specifically, and the pre-Civil War era more generally, she wanted me to verify some statements for the project. At one point, she sent me this assertion: “One critical reason that the colonists declared their independence from Britain was because they wanted to protect the institution of slavery in the colonies, which had produced tremendous wealth. At the time there were growing calls to abolish slavery throughout the British Empire, which would have badly damaged the economies of colonies in both North and South.”
I vigorously disputed the claim. Although slavery was certainly an issue in the American Revolution, the protection of slavery was not one of the main reasons the 13 Colonies went to war.
READ ENTIRE ARTICLE AT POLITICO
Framing the Computer
Before social media communities formed around shared concerns, interests, politics, and identity, print media connected communities.

Turing Award winner Geoffrey Hinton and former Google chief executive Eric Schmidt are only two among many people who have recently voiced their concerns about an existential threat from AI. [See, for example, Brown 2023; Roush 2023.] But what does it mean for a technology or, for that matter, something else such as a natural disaster, a war, or a political ideology to be an existential threat? An existential threat is directed at a particular target audience and at a particular aspect of their lives that is put at risk. We only care, as a society, about existential threats if the target audience is dear to us and what is at risk is significant. The consequences of an existential threat may be biological, psychological, economic, organizational, or cultural. Thus, multiple academic fields provide a perspective on existential threat.
In this article, our goal is not to argue about whether AI is or is not an existential threat—or even to describe who AI threatens and what elements are at stake. Instead, our focus here is to provide the reader with a multidisciplinary toolset to consider the concept of existential threat. To achieve this, we introduce the reader to four organizations that study this topic, along with the relevant literature by individual scholars. It is beyond the scope here to apply these various perspectives on existential threat to developments in AI.
READ ENTIRE ARTICLE AT THE CHARLES BABBAGE INSTITUTE
Media Ethics
Fox News’s Handling of Election Lies Was Extreme but Far From Unusual
News organizations air lies from political figures more often than you’d think, but for very different reasons than Fox News.

Documents from a defamation lawsuit by Dominion, a manufacturer of voting machines, have laid bare that Fox News executives and hosts knew that claims of fraud and a stolen 2020 election were false but that they allowed allies of Donald Trump to push these claims on their airwaves anyway. The revelations have been shocking to many observers. “I don’t think we’ve ever had a moment like this, where a major news network has been exposed as deliberately deluding its viewers or readers,” Sen. Chris Murphy (D-Conn.) told a Washington Post columnist. “This is a seminal moment in the history of mass media. And we need to treat it that way.”
While the exposure of the gap between what Fox personnel thought behind the scenes and what aired on their network has shown the network to be mercenary and uninterested in journalistic integrity, the situation of a news organization’s private memos betraying a certain amount of hypocrisy is less unusual than many like Murphy think. Fox executives and hosts were stuck grappling with a question that has plagued the media for a century or more: How should outlets cover false claims from political officials and their allies? After all, as the investigative reporter I.F. Stone famously quipped, “All governments lie.”
And while profit — Fox executives and hosts feared alienating their audience and hurting their stock price — rather than journalistic norms drove the decisions for Fox News, the network’s executives actually ended up adopting a common media practice. Journalists have long allowed officials to tell lies so long as they do it on the record. Media outlets adopted this policy to ensure that they appeared scrupulously objective and neutral, but it forces audiences to be the arbiters of truth and, as the Fox case illustrates, creates the opportunity for misinformation to flourish.
READ ENTIRE ARTICLE AT THE WASHINGTON POST
The Battle Between NBC and CBS to be the First to Film a Berlin Wall Tunnel Escape
Declassified government documents show how both sides of the Iron Curtain worked to have the projects canned.

When the Berlin Wall was completed in August 1961, East German residents immediately tried to figure out ways to circumvent the barrier and escape into West Berlin.
By the following summer, NBC and CBS were at work on two separate, secret documentaries on tunnels being dug under the Berlin Wall.
The tunnel CBS chose was a disaster that resulted in arrests and court trials. NBC’s tunnel ended up being featured in one of the most decorated documentaries in American television history. And yet, in the fall of 1962, NBC was under tremendous pressure from both sides of the Iron Curtain to scrap its documentary altogether.
You would think that the U.S. government would be thrilled to have a film broadcast to Americans showing the desperation and resolve to escape communist East Germany. After all, when the Berlin Wall fell 30 years ago, images of East Berliners streaming across the border were broadcast around the world in what was cast as a triumph for Western democracies and capitalism.
But in my new book, “Contested Ground: The Tunnel and the Struggle over Television News in Cold War America,” I use declassified government documents to tell the story of how political pressure and naked journalistic competition nearly derailed the NBC documentary before more than a handful of people had seen a single frame of the film.
READ ENTIRE ARTICLE AT THE CONVERSATION
Trump’s Campaign against Fauci Ignores the Proven Path for Defeating Pandemics
When medicine and journalism defeated cholera.

“Dr. Fauci’s a nice man, but he’s made a lot of mistakes,” President Trump recently told Sean Hannity of Fox News. Fauci, who has served as director of the National Institute of Allergy and Infectious Diseases since 1984 and who led the fight against AIDS in the 1980s, has been sidelined by the Trump administration, which has attacked him publicly and through anonymously leaked smears. In contrast to Fauci’s stellar reputation as a man of science, the administration has made Fauci a target in its effort to push questionable medicines and force Americans to learn to live with the coronavirus pandemic.
We’ve seen this tension between science, journalism and public health before.
The first wave of cholera came to New York City in July 1832. In less than two months, at least 2,000 New Yorkers had died, more than one percent of the city’s population. Residents hurried past the graveyard of St. Patrick’s in horror; the poorly buried bodies brought clouds of flies and a ghastly stench. In a familiar historical refrain, the well-to-do fled the city and managed to escape, but other communities — the poor, the newly immigrated and people of color — were decimated.
Two more major cholera epidemics would hit the city, in 1849 and 1866. But from 1832 to 1866, medicine, science and journalism transformed American society and destroyed cholera. This transformation reminds us, in the midst of the coronavirus pandemic, of the importance of science over superstition.
In 1832, the medical profession was dominated by a “philosophical” approach that rejected observable data, according to Charles E. Rosenberg, who has explained how “’Empiric’ was — as it had been for generations — a synonym for quack.” Doctors rejected signs that the disease might be contagious. Instead, they and public officials sought an explanation for God’s wrath. They looked to the heavens for answers and blamed sin, the poor, immigrants and bad “atmosphere” for the illness. Public officials sought to assure worried citizens that the “atmosphere” was improving, that this too would pass.
READ ENTIRE ARTICLE AT THE WASHINGTON POST
We Had Witnessed an Exhibition
A new book about the Lindbergh baby kidnapping focuses on the role played by the media.

In 1927, Charles Lindbergh became the first aviator to complete a solo transatlantic flight, departing from New York and landing in Paris 33 and a half hours later. The trip garnered rabid media attention: Fox Movietone newsreels captured Lindbergh’s takeoff in his now famed monoplane, Spirit of St. Louis, and hundreds of millions of people around the world listened intently as flight updates were broadcast across radio waves. When Lindbergh finally touched ground in Paris, he was greeted by what cultural historian Thomas Doherty calls “an orgy of jubilation.” It is in this moment of “mass hysteria, wild rapture, a nation gone mad,” Doherty writes, that Lindbergh the icon was born.
In his new book, Little Lindy Is Kidnapped: How the Media Covered the Crime of the Century, Doherty — currently a professor of American studies at Brandeis University — traces Lindbergh’s ascent to contextualize the personal (and simultaneously very public) tragedy that would ultimately define his life: the 1932 abduction of his 20-month-old son. The book provides a thorough cultural history of the media coverage of the kidnapping and subsequent trial, and in doing so, captures a critical moment in the evolution of American journalism.
The moment Lindbergh arrived in Paris, he became a media darling. A celebratory reception was held in Washington, DC, following the transatlantic flight, paid for by Hollywood mogul William Fox and produced by entertainment maestro Samuel L. “Roxy” Rothafel. The festivities were broadcast nationwide by the NBC Radio Network, turning the young pilot of the hour into a household name. Lindbergh soon received offers to appear in Hollywood films and go out on a celebration tour, though he rejected these proposals in what appears to be an admirable refusal to cash in on his own fame. Two years after the flight, Lindbergh married Anne Spencer Morrow, daughter of an American ambassador and heir to the Morrow estate, with whom he had a baby boy named Charles Jr., born June 22, 1930. The press covered the birth with celebratory zeal and quickly coined a nickname for the child — Little Lindy.
But the Lindberghs’ lives were soon marred by calamity: on March 1, 1932, at the peak of Lindbergh’s celebrity, Little Lindy was kidnapped from his family’s home in Hopewell, New Jersey. The news spread rapidly across the wire services, as the Associated Press, United Press, and International News Service anxiously reported on the abduction. The entire country became transfixed by the disappearance of the celebrity child. Various news outlets deemed the kidnapping “the Crime of the Century”; some journalists decried it as “a final affront to American civilization” — a tragedy made unbearable by the devastation of the Great Depression, which was already at its worst.
READ ENTIRE ARTICLE AT THE LOS ANGELES REVIEW OF BOOKS
A Better Journalism?
‘Time’ magazine and the unraveling of the American consensus.

The clattering sound of typewriters—“slapslapslap…slap…slapslap…slapslap…ching!”—can be heard in veteran Time magazine essayist Lance Morrow’s new book. But the more insistent sound in this slim volume is the thud, thud, thud of name dropping. If you were born in this century, you might need a scorecard to know who the players are.
To be fair, Morrow describes himself as a kind of Zelig. He has known—or at least encountered, read, or reported on—a great many powerful people and influential writers. Just a partial list of the major and minor characters who make an appearance in The Noise of Typewriters: Remembering Journalism includes: Time publisher and editor Henry Luce, his glamorous and accomplished second wife Clare Boothe Luce, Franklin D. Roosevelt, JFK (a careless PT boat skipper), Lyndon Johnson, Richard Nixon, Bill and Hillary Clinton, Elie Wiesel, Joe DiMaggio, Norman Mailer, Mary McGrory, Robert Caro, Alger Hiss, Joseph McCarthy, William F. Buckley Jr., Carl Bernstein, Henry Kissinger, New York Times executive editor Abe Rosenthal, Allen Ginsberg (Morrow “detested” his poetry), and an interminably rambling Mikhail Gorbachev.
I could go on; Morrow certainly does. Thucydides and Herodotus are also summoned on stage as archetypal journalists, indispensable “storytellers,” and presumably a reminder of the value of a good pre–Vatican II Jesuit education. Morrow attended Gonzaga College High School in Washington D.C. before Harvard. He graduated from Gonzaga two years after the tribal Catholic brawler Pat Buchanan. In addition to teaching Latin and Greek, the “Jebbies” did not hesitate to discipline unruly students with a biblical rod of iron, or a fist. It was a different time. Much of The Noise of Typewriters is also, as Morrow’s subtitle suggests, about “a different time.”
After college and a brief stint at the Washington Evening Star, where he worked with columnist Mary McGrory as well as Carl Bernstein of All the President’s Men fame, Morrow joined Time in 1965. He stayed for forty years, and his new book is an apologia for that magazine’s partisan journalism in defense of capitalism, liberal democracy, anti-communism, the Republican Party, and middle-class American values. Morrow laments the loss of the social and cultural consensus of the 1940s and ’50s, an era Time’s publisher evangelically named “The American Century.” The emergence of a prosperous and seemingly homogeneous American middle class was celebrated and to some extent shaped by “Harry” Luce’s artful and wildly successful magazines, which included Life, Fortune, and Sports Illustrated, as well as Time. (Morrow neglects to mention that Life, once the most popular magazine in America, was as much Clare Boothe Luce’s idea as Harry’s.)
READ ENTIRE ARTICLE AT COMMONWEAL
Traffic Jam
Ben Smith’s book on the history of the viral internet doesn’t truly reckon with the costs of traffic worship.

The story of the viral internet, as Ben Smith tells it in his new book Traffic: Genius, Rivalry, and Delusion in the Billion-Dollar Race to Go Viral, begins in 2001 with future BuzzFeed CEO Jonah Peretti in an ill-fitting blazer, waiting to go on the Today show to explain a prank: he had tried to get Nike to make him custom shoes that said “sweatshop” on them, Nike turned him down, and the resulting email thread, when published, set the internet alight. More horseplay ensued, until there seemed to be enough for an entire website: BuzzFeed, a soupy mélange of viral clickbait, personality quizzes, and esoteric internet reporting. BuzzFeed’s brilliance was that, unlike existing media properties, it was reverse engineered specifically to favor pageviews over content. It didn’t necessarily matter if the content was good, per se. It only mattered if it clicked.
Smith is a smart guy, a good reporter, and exudes an endearing—and enduring—earnestness about the media. As the cofounder and editor-in-chief of the new website Semafor, he’s shown gumption in getting a news organization off the ground in hostile economic times. As the former media columnist for the New York Times, he tried his hand at chronicling downtown New York subcultures and how digital ads are making a comeback (remains to be seen). As the editor in chief of BuzzFeed News, he took an enterprise known for quizzes that told you which Disney character you were and expanded it into a legitimate journalistic juggernaut. In Traffic, oddly, he recounts the industry-reshaping history of viral journalism—mostly at BuzzFeed and Gawker—from a removed perspective, as if he’s convinced he’s a neutral reporter rather than a central protagonist.
I didn’t quite expect this book to be funny—the absurdity of the early-aughts internet is enough on its own—but I did at least expect it to be juicy: the untold number of illegal stimulants that went into building the viral internet, the scads of interoffice dick pics, the terrible pay. It’s not, which is disappointing. The lack of juice, however, means that the book doesn’t truly engage with the dirty business of how all this traffic got created in the first place, and who created it.
Perhaps to go there would open up a whole other can of worms, and I suppose the qualities I admire in Ben don’t lend themselves to telling that story. He was a newsroom leader, not a peon shoveling memes into a CMS. And so we get a narrative that reads at once like a plea for David Fincher to option it and a benign mea culpa. Was writing this an act of bravery, or self-delusion? At times I feel like Ben is almost being genuinely contrite, but he ultimately emerges as a cipher, and one who now walks through the world a richer man, seemingly unencumbered by the wrath traffic wrought.
READ ENTIRE ARTICLE AT THE BAFFLER
How a Star Reporter Got Paid by Government Agencies He Covered
New York Times reporter William L. Laurence, known as ‘Atomic Bill,’ became an apologist for the military and a serial defier of journalism’s mores.

Shortly after the atomic bombing of Hiroshima and Nagasaki, Maj. Gen. Leslie R. Groves of the U.S. Army, who directed the making of the weapons, told Congress that succumbing to their radiation was “a very pleasant way to die.”
His aide in misinforming America was William L. Laurence, a science reporter for The New York Times. At the general’s invitation, the writer entered a maze of secret cities in Tennessee, Washington and New Mexico. His exclusive reports on the Manhattan Project, when released after the Hiroshima bombing, helped shape postwar opinion on the bomb and atomic energy.
Before the war, Mr. Laurence’s science reporting won him a Pulitzer. Working with and effectively for the War Department during the bomb project, he witnessed the test explosion of the world’s first nuclear device and flew on the Nagasaki bombing run. He won his second Pulitzer for his firsthand account of the atomic strike as well as subsequent articles on the bomb’s making and significance. Colleagues called him Atomic Bill.
Now, a pair of books, one recently published, one forthcoming, tell how the superstar became not only an apologist for the American military but also a serial defier of journalism’s mores. He flourished during a freewheeling, rough-and-tumble era both as a Times newsman and, it turns out, a bold accumulator of outside pay from the government agencies he covered.
READ ENTIRE ARTICLE AT THE NEW YORK TIMES
How the 1619 Project Took Over 2020
It’s a hashtag, a talking point, a Trump rally riff. The inside story of a New York Times project that launched a year-long culture war.

One morning in mid-September, Nikole Hannah-Jones woke to a text message from a friend noting an unusual event on President Trump’s schedule: the first “White House Conference on American History.”
It might have sounded banal, but Hannah-Jones, a staff writer for the New York Times Magazine, sensed a subtext immediately: This was about her and the project she says is the most important work of her career.
Sure enough, that afternoon, Trump thundered from a lectern at the National Archives Museum that “the left has warped, distorted and defiled the American story with deceptions, falsehoods, and lies. There is no better example than the New York Times’s totally discredited 1619 Project.”
You’ve probably heard of it by now. The 1619 Project has emerged as a watchword for our era — a hashtag, a talking point, a journalism case study, a scholarly mission. It is the subject of dueling academic screeds, Fox News segments, publishers’ bidding wars and an upcoming series of Oprah-produced films. It is a Trump rally riff that reliably triggers an electric round of jeers.
READ ENTIRE ARTICLE AT THE WASHINGTON POST
The Dangerous Power of the Photo Op
American photojournalism has always been entangled with race and religion.

On June 1, 2020, within the space of 48 minutes, Washington’s Lafayette Square went from being a peaceful protest site to a presidential photo opportunity. In between those two realities, police forcibly removed demonstrators who had assembled across from the White House to protest against police brutality and systemic anti-black racism. Rubber bullets, smoke grenades, and tear gas rained down on demonstrators as President Donald Trump spoke in the Rose Garden. He then proceeded to walk through the now-cleared square to St. John’s Episcopal Church to pose with a Bible for news cameras. In the week since, there has been fierce debate about what it all means.
Many have mocked the photographs of the president at the church, pointing to Trump’s awkward posture and seeming unfamiliarity with the sacred text. (Late-night talk show host James Corden even provided a tutorial on proper Bible-holding.) Others, including some white evangelical leaders, have celebrated it as a faithful demonstration of reverence for a church that had been damaged during protests. And still others, like Bishop Mariann Budde of the Episcopal Diocese of Washington, have decried the event as a “stunt” that conscripted religious symbols and spaces into a brazen political power move. Speaker of the House Nancy Pelosi and presumptive Democratic presidential nominee Joe Biden also joined in admonishing the pageantry of the event, both emphasizing that Trump held a Bible but did not read from it or speak from it. And on June 2, the American Bible Society issued a statement, declaring “we should be careful not to use the Bible as a political symbol, one more prop in a noisy news cycle.”
The substance and symbolism of the Bible cut across much of the coverage. But something else was going on that day that seems to have been overlooked by most commentators, hiding in plain sight. The president did not just stand in front of a church with a Bible. He stood in front of a church with a Bible for the camera. Americans have been socialized for generations not to see the political, social, and cultural power of photography, even as we have become so intimately habituated to the visual worlds it produces. That is, on some level we know that photographs have been used to polish, persuade, coerce, and deceive, but still we so often trust them to put us at the scene, on the ground, as eyewitnesses to history. That is the allure and the damning power of the photograph. A picture can never be reduced to a thousand words.
What is more, since its origins in the 1930s, the industry of American photojournalism has always been part of the power grid of race and religion, electrifying them through its images. President Trump, an expert at publicity, is no stranger to using photographs for his own ends. Looking outside the frame of him at St. John’s, and listening attentively to the cacophony of camera shutters recorded by video footage of the event, we should take the genre of “photo op” seriously as a mechanism of power. In doing so, we can better approach photojournalism, in particular, as a central feature of modern American political discourse. And we also open ourselves to seeing American religion, including the power structures of American Christianity, not only as subjects that have been represented in photography, but as subjects that have been produced through photographic practices. To be seduced by debates about what the photographs are of, in short, is to risk missing the much deeper entanglement of photojournalism, race, and religion.
READ ENTIRE ARTICLE AT RELIGION AND POLITICS
Keeping Speech Robust and Free
Dominion Voting Systems’ libel lawsuit against Fox News, over the network’s coverage of claims that the company had rigged the 2020 election, may soon become an artifact of a vanished era.

The libel lawsuit filed in March 2021 by Dominion Voting Systems against Fox News, over the network’s coverage of claims that the company had rigged the 2020 election, was settled this spring, but the case may soon become an artifact of a vanished era. In pretrial skirmishing, the two sides agreed on this much: the law of libel is governed by the Supreme Court’s 1964 decision in New York Times v. Sullivan. In the last legal arguments before the jury was to be seated, Rodney A. Smolla, one of the lawyers for Dominion, called Sullivan “the landmark decision that is the genesis for all of our modern First Amendment principles involving defamation law.” Erin E. Murphy, a lawyer for Fox, likewise said that the principle governing the case “starts in Sullivan.” But the emboldened conservative majority on the Supreme Court, having dispatched Roe v. Wade to the dustbin of overruled precedents, may now target Sullivan for the same treatment. Such a change would have fundamental consequences for both those who speak and those who are spoken about.
It’s a fitting time, then, to take a fresh look at Sullivan—how it came about and what it means today. In Actual Malice: Civil Rights and Freedom of the Press in New York Times v. Sullivan, Samantha Barbas, a professor at the University at Buffalo School of Law, tells the improbable story of the advertisement that gave rise to the case and the decision that Justice William J. Brennan ultimately wrote. It’s a tale that has been told before—notably in books by Anthony Lewis and Aimee Edmondson—but Barbas has a distinctive and relevant argument.
Like the earlier authors, Barbas makes the reasonable claim that Sullivan represented a straightforward battle between good and evil. It was, she writes, “one of a string of libel lawsuits brought by Southern segregationist officials against Northern media outlets…to prevent them from reporting on the civil rights movement.” By ruling for the Times, the Supreme Court “freed the press to cover the civil rights movement” and, not incidentally, likely saved the newspaper from being bankrupted by the damages it would have been ordered to pay in this and similar libel cases. But Barbas’s endorsement of the Sullivan decision is more nuanced than those of Lewis and Edmondson, and more reflective of the current moment. She appreciates the need for libel lawsuits at a time when “damaging falsehoods can spread online with a click, and reputations [can be] destroyed instantly.” But she recognizes that the protections of Sullivan are needed as much, or more, by individuals as by media companies. The story of Sullivan, and of the precedent’s possible demise, reveals as much about our own times as it does the 1960s.
READ ENTIRE ARTICLE AT THE NEW YORK REVIEW
Jeff Zucker Helped Create Donald Trump. That Show May Be Ending.
The coziness between the TV executive and Mr. Trump is a Frankenstein story for the cable news era. But then the monster got away.

In December 2015, after the demagoguery of Donald Trump’s presidential campaign became clear, I asked CNN’s president, Jeff Zucker, if he regretted his role in Mr. Trump’s rise.
First Mr. Zucker — who put “The Apprentice” on NBC in 2004 and made Mr. Trump a household name — laughed uproariously, if a bit nervously. Then he said, “I have no regrets about the part that I played in his career.”
I was thinking about that exchange when Tucker Carlson of Fox News recently gleefully aired recordings of conversations with Mr. Zucker that Mr. Trump’s fixer, Michael Cohen, had deviously taped in March 2016.
Mr. Zucker is heard speaking in flattering and friendly terms about Mr. Trump, or, as he called him, “the boss.”
“You guys have had great instincts, great guts and great understanding of everything,” Mr. Zucker says to Mr. Cohen of Mr. Trump’s campaign.
And Mr. Zucker shows an eager interest in Mr. Trump’s television stardom. “I have all these proposals for him,” Mr. Zucker says beseechingly at one point. “Like, I want to do a weekly show with him and all this stuff.”
READ ENTIRE ARTICLE AT THE NEW YORK TIMES
The Deadly Race Riot ‘Aided and Abetted’ by the Washington Post a Century Ago
A front-page article helped incite the violence in the nation’s capital that left as many as 39 dead.

The man attacked Louise Simmons in the afternoon, as she was leaving the Washington school where she taught. He had ridden up on a bicycle, leapt off and started pummeling her, she said. He dragged her toward a grove of trees; Simmons fought back until she was able to escape. It’s unclear how good a look she got of his face, but she could tell that, like her, he was black.
On that day, June 25, 1919, there were four major newspapers in the nation’s capital competing for readers. The Washington Herald published a small item about the attack on Simmons on page two, under the headline “Negro attacks negress.” The Washington Times ran a longer story but buried it in the back of the paper. The Evening Star and the smallest paper of the four — The Washington Post — didn’t mention it.
Five days later, a white woman said she was attacked by a black man, and the response was complete fury.
What followed was weeks of hysteria ginned up by the media, the arrest of hundreds of innocent black men, a riot that left as many as 39 dead and 150 injured, and the imprisonment of two black men for decades for crimes they most likely did not commit.
The white woman, Bessie Gleason, said she was walking through the woods near her Takoma Park home when a black man leapt from the bushes, beat her with a club and choked her until she lost consciousness.
Police told newspapers another white woman had been accosted the same day. Different papers gave varying descriptions of the incident. In one, she merely saw a black man and ran away screaming. In another, the man “embraced” her, and she screamed until she was rescued by a white soldier.
READ ENTIRE ARTICLE AT RETROPOLIS
A Century Ago, American Reporters Foresaw the Rise of Authoritarianism in Europe
A new book tells the stories of four interwar writers who laid the groundwork for modern journalism.

As World War I ended and Americans flocked to Europe, American writers went with them. A circle of reporters, many of them young Midwesterners and Southerners just out of college, arrived in London and Paris and from there dispersed further east. In places like Munich, Vienna, Warsaw and Moscow, they recorded the war’s remnants and uncovered the kindling for future conflicts.
How these journalists—from Dorothy Thompson, the first American reporter expelled from Nazi Germany, to H.R. Knickerbocker, who was once the highest-paid foreign correspondent in the world—thought about the still-new Soviets, the rise of fascism and the fate of democracy was tangled up with their own preconceptions, love lives and other preoccupations. Yet how they wrote about the world after the Great War was how many Americans came to see it.
Reading Deborah Cohen’s new book, Last Call at the Hotel Imperial: The Reporters Who Took on a World at War, during the Russian invasion of Ukraine feels both timely and terribly unsettling, reminding us of all the world wars we proclaimed would never be forgotten. For Americans, for Westerners, World War II stands as a battle against an extraordinary evil, one against which all military conflicts are measured, and one in which a moral world order—standing against injustice and supporting democracies that ensure a better future for all—was defined.
The conflict catalogued by Thompson, Knickerbocker and their peers was a three-cornered battle between democracy, fascism and communism. The war in Ukraine, meanwhile, “is more sharply democratic governance versus authoritarian governance,” says Cohen. A striking difference between the two, the historian adds, “is the ability to fool oneself about the nature of dictatorship is now much less.” In the 1920s and ’30s, “they had experience with kings and emperors and tyrants of various sorts, but modern dictatorship was a new phenomenon. [And] you can see how badly people misjudged it.”
READ ENTIRE ARTICLE AT SMITHSONIAN MAGAZINE
The Writers Who Went Undercover to Show America Its Ugly Side
In the 1940s, a series of books tried to use the conventions of detective fiction to expose the degree of prejudice in postwar America.

The seeds of what would eventually become the civil-rights movement included not only mass protest and political mobilization but a wide array of cultural and artistic expressions. Some of them—Frank Sinatra’s song and short film The House I Live In; a Superman radio serial pitting the Man of Steel against a thinly veiled version of the Ku Klux Klan—sought nothing less than a redefinition of American identity that would embrace racial and religious minorities. In his 1945 film, Sinatra came to the defense of a Jewish boy menaced by a gentile mob. On the radio serial a year later, Superman protected a Chinese American teenager from the lethal assault of the “Clan of the Fiery Cross.” The lyrics of The House I Live In captured the new ethos: “The faces that I see / All races and religions / That’s America to me.”
Alongside these sunnier affirmations of inclusion, there appeared a withering critique of American bigotry in the form of a very specific subset of books. All of them, whether fictional or factual, employed the identical device of a writer going undercover to discover and expose the bigoted netherworld of white Christian America. Within the finite period of six years beginning in 1943, these books became both commercial phenomena and effective goads to the national soul. They explicitly sought a mass audience by employing devices borrowed from detective novels, espionage fiction, and muckraking journalism: the secret search, the near-escape from being found out, the shocking revelation of the rot hiding just below the surface of normal life. Whatever these books may have lacked in sentence-to-sentence literary elegance, they made up for with page-turning drama.
Unfortunately, for the most part, they have since been forgotten, or simply overwhelmed by the volume of World War II self-congratulation, however well deserved. But in their own time period, when these books were reaching millions of readers, a victorious America was by no means presumed to be an innocent America. Within a year of V-J Day, the investigative journalist John Roy Carlson released his exposé of domestic right-wing extremism, The Plotters, and laid out the stakes starkly:
We’ve won the military war abroad but we’ve got to win the democratic peace at home. Hitlerism is dead, but incipient Hitlerism in America has taken on a completely new star-spangled face. It follows a ‘Made in America’ pattern which is infinitely subtler and more difficult to guard against than the crude product of the [pro-fascist German American] Bundists. It is found everywhere at work in our nation. It’s as if the living embers had flown over the ocean and started new hate fires here while the old ones were dying in Europe.
READ ENTIRE ARTICLE AT THE ATLANTIC
What Gilles Peress Saw on 9/11
The Magnum photographer looks back on capturing an “inconceivable event.”

The photographer Gilles Peress, who has chronicled war and its aftershocks all over the world, was at home in Brooklyn on the morning of September 11, 2001, when he got a call from his studio manager, telling him to turn on the TV: a plane had just hit one of the World Trade Center towers. “I looked at it, and it was evident that it was not only a major incident but that it was not an accident; it was an attack,” Peress recalled. He had a contract with The New Yorker, and the magazine’s editor, David Remnick, phoned as Peress was getting ready to head toward the site. “I drove to the Brooklyn Bridge—there was no way to get across by car. I parked the car and walked across against the traffic of people fleeing lower Manhattan. I got to the other side as the second plane was hitting the second tower, and I continued toward the scene. A cop tried to stop me. He said, ‘You’re crazy, you’re going to die,’ and I said, ‘O.K.,’ and I bypassed him. I arrived as the second tower was falling. There were very few people there.” The only people he recalled seeing at first “were a group of about six firemen, who were trying to do the impossible.”
The photos Peress took that day (some of which were published in The New Yorker at the time) convey the sense of 9/11 as what he calls “an inconceivable event,” an unmooring, transitional moment when we “encounter historical systems that are beyond our comprehension or knowledge, that we have a problem placing in a continuum of our experience of history so far.” Firefighters spray water into a mass of rubble so pulverized that no sense can be made of it. Ranks of cars are indiscriminately covered with a coating of gray ash on a street rendered unrecognizable. Medical personnel in hospital scrubs and surgical masks stand around, waiting to help injured survivors, of whom there are none in sight. Everywhere, clouds of dun-colored smoke, shot through with the yellow light of a clear September day, swathe the ruins, creating a new and foreboding sort of weather. In a cataclysmic scene where you expect to see dead bodies and wounded people, there are none in these pictures. As the critic Susie Linfield has written about 9/11 photography, “There is little evidence of the dead, because most were burnt into dust: Ground Zero was a mass grave but one without many bodies.” The destruction of the Twin Towers was an epochal tragedy for which photographers, like Peress, had to find a different semiotics of loss.
READ ENTIRE ARTICLE AT THE NEW YORKER
They Did It for the Clicks
How digital media pursued viral traffic at all costs and unleashed chaos.

“What Cheese Are You?” was a quiz that appeared online in February 2014. BuzzFeed, the digital media company that published the quiz, returned to the same theme in December 2016 (“What Delicious Cheese Are You Based on Your Zodiac Sign?”), February 2018 (“Everyone Has a Cheese That Matches Their Personality—Here’s Yours”), August 2019 (“This Personality Quiz Will Reveal What Kind of Cheese You Are”), January 2020 (“What Kind of Cheese Are You?”), and October 2022 (“Only *I* Know What Type of Cheese You Are in Your Soul”). BuzzFeed’s latest variation on the form launched in December 2022 with the declaration, “As Strange as It Sounds, There’s a Cheese That Describes You to a T—Take This Quick Quiz to Find Out What It Is.” (After nearly a decade of cheese quizzes, it did not sound strange at all.)
Like, I suspect, many others, I’ve wasted a small portion of my life answering these quizzes. But they’ve provided no certainty. Just the opposite, in fact: They’ve shown my identity to be oozingly mutable. At times, I’ve been a Camembert; at others, a cheddar, a Parmesan, a mozzarella, a Monterey Jack. “As I write this letter, the world is witnessing a grave humanitarian crisis in Russia’s attack on Ukraine,” BuzzFeed founder and CEO Jonah Peretti wrote in his introduction to the company’s annual report in March 2022. “During these times, our mission to spread truth, joy and creativity on the internet has never been more important.” But if BuzzFeed exists to spread truth, what are the truths it deals in? What cheese am I, really?
In reality, of course, I am as dissimilar to a block of cheese as BuzzFeed is alien to the mission of propagating truth. Informational chaos, not narrative clarity, is the internet’s guiding epistemological mode. The digital era has staged a corporate contest not for truth but for attention—a malleable asset that can be put to countless uses, whether it be to convince readers the 2020 election was stolen or to show them how their preference for Netflix over Hulu means they’re totally a Gorgonzola. All content now is designed to be shareable, to get us to click—but shareable for what? Once you have caught the public’s attention, what do you do with it? What social, political, or cultural purpose does a page impression, a retweet, a video view serve?
READ ENTIRE ARTICLE AT THE NEW REPUBLIC
‘The New York Times’ Can’t Shake the Cloud Over a 90-Year-Old Pulitzer Prize
In 1932, Walter Duranty won a Pulitzer for stories defending Soviet policies that led to the deaths of millions of Ukrainians.

The New York Times is looking to add to its list of 132 Pulitzer Prizes — by far the most of any news organization — when the 2022 recipients for journalism are announced on Monday.
Yet the war in Ukraine has renewed questions of whether the Times should return a Pulitzer awarded 90 years ago for work by Walter Duranty, its charismatic chief correspondent in the Soviet Union.
“He is the personification of evil in journalism,” says Oksana Piaseckyj, a Ukrainian-American activist who came to the U.S. as a child refugee in 1950. She is among the advocates for the return of the award. “We think he was like the originator of fake news.”
A new voice has now joined the cause: former New York Times executive editor Bill Keller, himself a Pulitzer Prize winner in 1989 for his reporting for the Times on the Soviet Union.
In the 1930s, as now, an autocrat’s decrees led to the mass deaths of Ukrainian civilians, and his regime relied on misinformation to try to cover them up. Reporters, including Duranty, were censored and threatened. (A U.S. diplomat once wrote that Duranty told him his reports had to reflect “the official opinion of the Soviet regime.”) Yet in a time before social media and the internet, foreign journalists were among the only ones who could get news out to the rest of the world.
Duranty was The New York Times‘ man in Moscow, as the line went, with a cushy apartment in which to entertain expatriates and a reputation as a leading authority on the Soviet Union. Duranty had staked his name on the idea that Josef Stalin was the strong leader the communist country needed. He is often credited with coining the term “Stalinism.”
READ ENTIRE ARTICLE AT NPR
The Evolution of Conservative Journalism
From Bill Buckley to our 24/7 media circus.

When I learned that my first job out of college would take me to a political magazine in Washington, D.C., I headed to the serials room of the main library at the University of Michigan. To get there, if memory serves, I would have walked by rows of card catalogues, whose contents covering several million books were then shifting onto computers that displayed bright green text on vacuum-tube monitors. It was the spring of 1992, and I wanted to use the amazing technology of microfilm.
My goal was to read articles by Fred Barnes, the White House correspondent for the New Republic and my future boss. I had worked with microfilm before, but I was more accustomed to the method that it had replaced: opening thick volumes of the Readers’ Guide to Periodical Literature, jotting down references on request slips, and hoping the library had physical copies of what I wanted to examine. With microfilm, I could just grab a box of reels and go to a terminal. For two or three evenings, I ignored my senior thesis on an obscure novelist from the 18th century and devoted myself to reviewing years of Fred’s journalism. As I turned the spools and adjusted the focus, my research felt fast and efficient.
This was before the internet; I’m barely old enough to know what it was like to be an adult in a world without the Web. My first professional online experience came in the cub-reporter pit at the New Republic, which had a dial-up modem and a subscription to LexisNexis, a database that was a gusher of news and information. I became addicted to its searching powers and instant results as I chased down accounts of events, profiles of politicians, and copies of speeches. The technology of journalism was transforming right before us on our screens, and even a conservative like me, who is supposed to regard change with suspicion, saw it as good.
READ ENTIRE ARTICLE AT THE NATIONAL REVIEW
How Neil Sheehan Really Got the Pentagon Papers
Exclusive interviews with Daniel Ellsberg and a long-buried memo reveal new details about one of the 20th century’s biggest scoops.

On the night of March 23, 1971, New York Times reporter Neil Sheehan excitedly called Max Frankel, the Times’s Washington bureau chief, to give him the news he had been waiting weeks to hear. “I got it all,” Sheehan told Frankel.
Sheehan had just accomplished one of the greatest journalistic coups of the 20th century. He had obtained the Pentagon Papers, the Defense Department’s 7,000-page secret history of the Vietnam War, which revealed that the government had been lying to the American people about the brutal conflict since it began. It was the first mass leak of classified documents in modern American journalism, four decades before WikiLeaks and Edward Snowden.
But Sheehan had lied to his source, Daniel Ellsberg, a disillusioned former defense analyst turned whistleblower, to get the documents. He had secretly copied them after he had promised Ellsberg he wouldn’t.
Sheehan confessed to his editors that he had “Xeroxed the materials without permission and the source was unaware that he had done so,” according to a remarkable and previously unpublished 1971 legal memo obtained by The Intercept. When confronted by an anxious Times lawyer, Sheehan insisted that the Pentagon Papers “were not stolen, but copied,” according to the memo.
The long-buried memo contains Sheehan’s contemporaneous and confidential account of his relationship with Ellsberg, as well as Sheehan’s version of events inside the Times as it prepared to publish the Pentagon Papers. It offers an unprecedented, real-time depiction of Sheehan’s actions — including his phone call to Frankel and his admission to his editors that he had lied to his source.
READ ENTIRE ARTICLE AT THE INTERCEPT
“Public Opinion” at 100
Walter Lippmann’s seminal work identified a fundamental problem for modern democratic society that remains as pressing—and intractable—as ever.

One hundred years ago, a young American journalist named Walter Lippmann published a book called Public Opinion. Though it is one of the most important books of the twentieth century and still acknowledged as a foundational text in the study of social psychology, media, and propaganda, its centenary has passed, for the most part, unacknowledged. This is ironic, because its central question—put simply, “How can a truly self-governing society function under the conditions of ‘mass culture’?”—has rarely been more relevant. Our current debates about disinformation and the pernicious effects of social media could be rather more productive if the participants would bother to read Lippmann—not because Lippmann provides any workable solutions, but because his analysis of the extent of the problem is so clear-eyed.
Public Opinion’s publication year, 1922, is a significant one. The book came out after four years of global war followed by four years of civil war in the former Russian, Ottoman, and Austro-Hungarian empires, a period in which information had been weaponized to a previously unimaginable extent. The government and the press, especially in the United Kingdom, had collaborated in ginning up support for the war through the publication of a number of astonishing lies—the Angels of Mons and infamous “corpse factories” being only the most flagrant. The full sense that the world had entered a new era in which reality was fragmented and the truth impossible to know was starting to be felt. Lippmann had enlisted as an intelligence officer when the United States entered the Great War in 1917, and witnessed the intimate relationship between news reporting and the war effort in France—the second chapter of Public Opinion opens with a striking account of how a roomful of French generals spent hours tinkering with the wording of a press release during the disastrous third day of the Verdun offensive. It seems the experience raised some serious doubts about the function of journalism during wartime, because upon being discharged in 1919 Lippmann immediately went to work on a series of essays castigating the press for its failure to report the news accurately and objectively.
“A Test of the News” is a famous 1920 study of the New York Times’s coverage of the Russian Revolution that Lippmann co-wrote and published in the magazine he helped found, the New Republic. In it, Lippmann and co-writer Charles Merz showed that America’s flagship paper had reported events that never took place—including atrocities that never happened—and that it had claimed at least ninety-one times that the Bolsheviks were on the brink of collapse. The conclusion, to Lippmann and Merz, was obvious: “The news about Russia is a case of seeing not what was, but what men wished to see.” For Lippmann, this was a grave dereliction of duty. How could citizens and legislators, especially in the democratic countries, make informed judgments if they were being fed lies, hearsay, and gross exaggerations of fact? If people didn’t know the truth, how could they be free?
READ ENTIRE ARTICLE AT THE BULWARK
Fear and Spectacle
The Spread
Jill Lepore on disease outbreaks of pandemic proportions, media scares, and the parrot-fever panic of 1930.

“ ‘PARROT’ DISEASE BAFFLES EXPERTS,” the Washington Post reported in an issue that went to press the night of January 8th, thrilling readers with a medical mystery that would capture the nation’s attention with the prospect of a parrot-fever pandemic. Reports, cabled and wired and radioed across land and sea, were printed in the daily paper or broadcast, within minutes, on the radio: tallies, theories, postmortems, more to fear. Before it was over, an admiral in the U.S. Navy ordered sailors at sea to cast their pet parrots into the ocean. One city health commissioner urged everyone who owned a parrot to wring its neck. People abandoned their pet parrots on the streets. Every sneeze seemed a symptom. As the story grew, it took on certain familiar—and, as it turned out, durable—features, features that borrow as much from pulp fiction as from public health: super scientists fight super bugs in race to defeat foreign menace invading American homes, beneath the very Christmas tree.
Epidemics follow patterns because diseases follow patterns. Viruses spread; they reproduce; they die. Epidemiologists study patterns in order to combat infection. Stories about epidemics follow patterns, too. Stories aren’t often deadly but they can be virulent: spreading fast, weakening resistance, wreaking havoc. During the recent swine-flu panic, Joe Biden warned Americans not to ride the subway or fly on an airplane, and pharmacies ran out of surgical masks. Why was it so hard to tell, as the story was breaking, if a flu outbreak of pandemic proportions was under way? The world is a far better place for the work epidemiologists do. Maybe, though, we could do with a few more narratologists.
The stories about epidemics that are told in the American press—their plots and tropes—date to the nineteen-twenties, when modern research science, science journalism, and science fiction were born. The germ theory of disease dates to the mid-eighteen-hundreds. Pasteur developed a rabies vaccine in 1885, launching a global battle against infectious illness. By the nineteen-twenties, scientists had developed a vaccine for diphtheria; other vaccines, like the one for polio, would take decades, but hopes ran high. In “The Conquest of Disease” (1927), Thurman B. Rice, a professor of sanitary science, predicted the eradication of sickness itself.
READ ENTIRE ARTICLE AT THE NEW YORKER
Fear in the Heartland
How the case of the kidnapped paperboys accelerated the “stranger danger” panic of the 1980s.

In the early morning hours of Sunday, Sept. 5, 1982, 12-year-old Johnny Gosch vanished while delivering copies of the Des Moines Register. Two years later, 13-year-old paperboy Eugene Wade Martin disappeared under virtually identical circumstances on the south side of Des Moines. These cases terrified residents of Des Moines and Iowa, many of whom believed that the Midwest—a “safe,” and implicitly white, place—ought to be immune from “this type of terrorism,” as one local put it in 1984. “This city and this geographical area are supposed to be comfortable, safe places to raise children, work and lead productive lives,” he wrote in a letter to the Register.
Gosch and Martin disappeared amid an intensifying moral panic concerning “stranger danger” and child exploitation. They joined other high-profile cases—namely those of Etan Patz in Manhattan (1979), Adam Walsh in South Florida (1981), and Kevin Collins in San Francisco (1984)—to distort Americans’ understanding of the threats confronting the nation’s children. Publicized by concerned politicians, bereaved parents (such as John Walsh and Johnny Gosch’s mother, Noreen), and an increasingly tabloidized news media, these cases and the inflated statistics surrounding them drastically exaggerated the “stranger danger” threat. (Some insisted that 50,000 or more children fell victim to stranger kidnapping in the U.S. each year.) The media and political emphasis on these sorts of cases seemed to imply that white children like Gosch and Martin were most likely to be victimized. Yet stranger kidnappings were and remain extremely rare (fewer than 300 cases annually), and children of color have long been underrepresented in news media coverage of missing children.
Present-day accounts often trace the origins of the 1980s “stranger danger” scare, which still haunts parents today, to the Etan Patz and Adam Walsh kidnappings. But the lesser-known kidnappings of Gosch and Martin played a crucial role in stoking this panic and the parental anxieties associated with it. Even though Gosch and Martin were never seen again, and their cases were never solved, they live on—not only as cautionary tales for Iowa parents and children, but also as potent symbols of endangered white childhood. That’s partly because Gosch and Martin were the first missing children to be featured on the sides of milk cartons. After two Des Moines dairies began placing missing children’s photographs, including Gosch’s and Martin’s, on their products in the fall of 1984, the practice caught on in the Midwest and then nationwide. All told, some 700 dairies took part, producing and distributing approximately 3 billion milk cartons adorned with images of missing kids. At a moment of national economic and political uncertainty, as fears of familial and national decline abounded, the image of imperiled white childhood resonated far and wide, from the prairies to the sea. The consequences have been dire.
Superpredator
The media myth that demonized a generation of Black youth.
The epithet is a quarter-century old, but it still has sting: “He called them superpredators,” Donald Trump insisted in his final debate with Joe Biden. “He said that, he said it. Superpredators.” “I never, ever said what he accused me of saying,” Biden protested. While there is no record of Biden using the phrase, much of the harsh anti-crime legislation embraced by both parties in the 1990s continues to be a hot-button issue to this day. From the moment the term was born, 25 years ago this month, “superpredator” had a game-changing potency, derived in part from the avalanche of media coverage that began almost immediately.
“It was a word that was constantly in my orbit,” said Steve Drizin, a Chicago lawyer who defended teenagers in the 1990s. “It had a profound effect on the way in which judges and prosecutors viewed my clients.”
An academic named John J. DiIulio Jr. coined the term for a November 1995 cover story in The Weekly Standard, a brand-new magazine of conservative political opinion that hit pay dirt with the provocative coverline, “The Coming of the Super-Predators.”
Then a young professor at Princeton University, DiIulio was extrapolating from a study of Philadelphia boys that calculated that 6 percent of them accounted for more than half the serious crimes committed by the whole cohort. He blamed the existence of these chronic offenders on “moral poverty … the poverty of being without loving, capable, responsible adults who teach you right from wrong.”
John DiIulio defined the word “superpredator” on CBS News in April of 1996.
DiIulio warned that by the year 2000 an additional 30,000 young “murderers, rapists, and muggers” would be roaming America’s streets, sowing mayhem. “They place zero value on the lives of their victims, whom they reflexively dehumanize as just so much worthless ‘white trash,’” he wrote.
But who was doing the dehumanizing? Just a few years before, the news media had introduced the terms “wilding” and “wolf pack” to the national vocabulary, to describe five teenagers—four Black and one Hispanic—who were convicted and later exonerated of the rape of a woman in New York’s Central Park.
“This kind of animal imagery was already in the conversation,” said Kim Taylor-Thompson, a law professor at New York University. “The superpredator language began a process of allowing us to suspend our feelings of empathy towards young people of color.”
The “superpredator” theory, besides being a racist trope, was not borne out in crime statistics. Juvenile arrests for murder—and juvenile crime generally—had already started falling when DiIulio’s article was published. By 2000, when tens of thousands more children were supposed to be out there mugging and killing, juvenile murder arrests had fallen by two-thirds.
It failed as a theory, but as fodder for editorials, columns and magazine features, the term “superpredator” was a tragic success—with an enormous, and lasting, human toll.
READ ENTIRE ARTICLE AT THE MARSHALL PROJECT
The Media and the Ku Klux Klan: A Debate That Began in the 1920s
The author of “Ku Klux Kulture” breaks down the ‘mutually beneficial’ relationship between the Klan and the media.

In the 1920s, the membership of the Ku Klux Klan exploded nationwide, thanks in part to its coverage in the news media. One newspaper exposé is estimated to have helped the Klan gain hundreds of thousands of members.
Dr. Felix Harcourt, a professor of history at Austin College and the author of Ku Klux Kulture, breaks down what he calls the “mutually beneficial” relationship between the Klan and the press – and explains how much the debate that raged over coverage of the Ku Klux Klan in the 1920s mirrors today’s arguments.
In 1921, the New York World ran a three-week front-page exposé of the Klan: daily denunciations of its ideology, its activities, its hooded secrecy, and its propensity to violence. The paper managed to get virtually every major New York representative on record in opposition to the Klan, and the series ultimately sparked a congressional hearing into the Klan’s growing power. By some estimates it boosted the World’s circulation by over 100,000 readers. It was syndicated to 17 other newspapers and inspired similar exposés around the country. But some have estimated that while the World picked up 100,000 readers, the Klan gained hundreds of thousands of new members – with some reportedly even cutting out membership applications from the New York World stories to join an organization they were only just hearing about.
READ ENTIRE ARTICLE AT THE GUARDIAN
QAnon Didn’t Just Spring Forth From the Void
Calling QAnon a “cult” or “religion” hides how its practices are born of deeply American social and political traditions.

If you’ve been online at any point in the past year—if not, welcome!—your aimless clicks and doomscrolling may have brought you glimpses of the “world” of QAnon: the conspiracy theory that argues that US President Donald Trump is in the midst of a secret war against sex-trafficking, Satan-worshipping pedophiles. Its increasing prominence and power, especially as a worldview dissociated from fact, have compelled many an analyst, journalist, and pundit to festoon their analyses of QAnon in religious language. According to this speculative genre, QAnon is a new American religion, or even a cult. It’s an abusive cabal unlike any other form of belief and practice preceding it.
Enter religious studies scholar Megan Goodwin, co-host of Keeping it 101: A Killjoy’s Introduction to Religion, and author of Abusing Religion: Literary Persecution, Sex Scandals, and American Minority Religions. Fluent in the place of religion in the media, and with a recent book on the religio-political history and power of sex abuse allegations, Goodwin contends that QAnon is far from unprecedented. Goodwin traces the group’s prominence and lineage, as well as its zealous determination to “save the children,” to the rise of the New Christian Right and the Satanic Panic of the 1980s.
Calling QAnon a cult or religion marks the movement for its alterity and supposed irrationality, and hides how its practices are born of American social and political traditions. Its apocalypticism and accusations of pedophilia are but a contemporary symptom of a religiously-inflected political strategy older than Christianity itself. Historical precedents further suggest that the state lacks the tools and incentives to curb the group’s rise.
QAnon certainly holds power as both a political force and a site of projected public scrutiny. The realistic range of political options may seem exasperatingly curtailed, but Goodwin closes by discussing the strategies and extant projects that hope to birth greater solidarity between media professionals and religious studies scholars. These practices may sensitize and historicize the public discourse surrounding—and responses to—QAnon, or whatever comes next.
READ ENTIRE ARTICLE AT RELIGION DISPATCHES
Stranger Dangers: The Right’s History of Turning Child Abuse into a Political Weapon
Josh Hawley’s attacks on Judge Ketanji Brown Jackson are part of a long, sad tradition.

At some point between the ’80s and now, leaving children unattended in public became unthinkable. To let children as old as, say, 10 walk by themselves became grounds to investigate parents for neglect. As a child of the late ’90s and early 2000s, I knew latchkey kids existed, but nearly exclusively from the aging 1980s children’s paperbacks in my elementary school’s library. My friends whose parents worked too late to pick them up from school stayed in the building for a child care program or took a bus to the nearby Boys & Girls Club.
Statistics confirm the decline of the latchkey kid that I witnessed and that continues today. A primary reason for the change was the fear that children were constantly on the cusp of being kidnapped, abused, or taken advantage of, and thus could never be left alone.
Paul Renfro, an assistant professor of history at Florida State University, chronicled in his 2020 book, Stranger Danger: Family Values, Childhood, and the American Carceral State, how such a notion became widespread in the ’80s and ’90s. Pictures of missing and abducted children were plastered on milk cartons, as the media ramped up coverage of random, isolated incidents of children being abducted in ways that it hadn’t before—even as the number of children who were abducted did not substantially increase.
READ ENTIRE ARTICLE AT MOTHER JONES
The Forgotten Baldwin
Baldwin demands that the Atlanta child murders be more than a mere media spectacle or crime story, and that black lives matter.

Few have inspired the Movement for Black Lives as much as James Baldwin. His books that plumb the psychological depths of U.S. racism, notably Notes of a Native Son (1955) and The Fire Next Time (1963), speak to the present in ways that seem not only relevant but prophetic. However, Baldwin’s renewed status as a household name, cemented by the critical success of Raoul Peck’s 2016 film I Am Not Your Negro, makes it easy to forget that for several decades Baldwin fell from public favor.
Although Baldwin continued to work through the late 1980s, his canonical works were all published during the 1950s and ’60s, and he is seldom associated with the post–civil rights era. Some ascribe this abrupt decline in his reputation to a falling out with the white literary establishment, who believed Baldwin sacrificed his promise for political and moral commitments to Black Power. Others felt it had to do with Baldwin’s insecure role in black America. According to Hilton Als, when Baldwin became the official voice of black America, he compromised his voice as a writer. Others argued just the opposite: Baldwin lost his place precisely because he refused to identify with the essentialist logic of identity politics and any of its associated movements.
Still others believed his diminishment resulted from becoming bitter. Baldwin, they said, refused to acknowledge the progress the United States had made since the 1950s. As the New York Times’ Michael Anderson wrote in a 1998 review of Baldwin’s collected essays: “Little wonder he lost his audience: America did what Baldwin could not—it moved forward.” In a world of Black Lives Matter activism and the Trump administration, this triumphalist narrative of the United States’ racial progress looks especially naïve. And it is not surprising then that Baldwin’s words resonate for us yet again.
Arguably no single work by Baldwin is as connected to the issues animating Black Lives Matter as his final nonfiction book, The Evidence of Things Not Seen (1985), but the work was written long after Baldwin had lost the public’s affection. Perhaps as a result, The Evidence of Things Not Seen remains little known and awaits the recuperation that many of his earlier works have experienced.
READ ENTIRE ARTICLE AT BOSTON REVIEW
When Dungeons & Dragons Set Off a ‘Moral Panic’
D&D attracted millions of players, along with accusations by some religious figures that the game fostered demon worship and a belief in witchcraft and magic.
Going back at least to the 1690s, when the elders of colonial Salem, Mass., executed 20 women and men for supposedly practicing witchcraft, there have been Americans who find the devil’s hand in all manner of human activity.
Satanic messages have been ascribed to the corporate symbols of major companies like Starbucks and Procter & Gamble. Some religious fundamentalists are certain that 666, the number of the beast in the Book of Revelation, lurks in swirls that are central to the logos of the Olympic Games, Google Chrome and the Walt Disney Company. The Harry Potter series, with its incantations and wizardry, has also come under fire (and brimstone) for ostensibly promoting occultism.
Then there is Dungeons & Dragons, introduced in 1974 as the first role-playing game made commercially available. D&D players, working collaboratively, can let their minds roam free through stories about brave warriors locked in combat with trolls, orcs, dragons and other evildoers. The game’s millions of players include prominent writers like Junot Díaz and Cory Doctorow, who have described it as their apprenticeship to storytelling, a gateway to the essence of fantasy and narrative.
But not everyone has smiled benignly upon D&D. That is reflected in this offering from Retro Report, a series of video documentaries examining major news stories of the past and their reverberations.
READ ENTIRE ARTICLE AT RETRO REPORT, THE NEW YORK TIMES
Popular Journalism’s Day in ‘The Sun’
The penny press of the nineteenth century was a revolution in newspapers—and is a salutary reminder of lost ties between reporters and readers.

When Benjamin Day came up with the plan of selling newspapers to the poor in 1833, he did so with the ravenous maw of poverty threatening to swallow him up. Born in 1810 in Springfield, Massachusetts, to a hatter, Day was pulled out of school when he was fourteen to apprentice to a weekly paper. The work involved long hours setting type, letter by letter, line by line, then pulling the heavy levers of the presses, over and over, hundreds of times a night. It was hard physical labor, and though Day would become a very successful journalist and then businessman, he would never stop identifying with the working class.
By the age of twenty, Day knew the trade well enough to make a new start, and in 1829, he moved to New York City. The sights and smells that greeted him were like nothing a small-town boy could have imagined. New York was undergoing an industrial revolution that was making a few people very rich. Carriages took them from their homes uptown to their jobs on Wall Street, in the stock market and in banking, and back again. By 1828, the top 4 percent of taxpayers owned half the City’s assets.
It was a time, in other words, much like our own, and the city was transforming itself to meet the demands of the affluent. New York was taking on a larger and larger share of the nation’s business, especially its import business, and when Day arrived in New York, the first sight that greeted him would have been the hundreds of ships crowding the city’s harbor and piers, bobbing optimistically in the morning sun. As night fell, Day would have seen the city’s newly installed gas lighting, the cast-iron lines running up and down the main commercial streets, illuminating them with streetlamps. He would have seen New York’s stores open late into the evening, lit from within.
READ ENTIRE ARTICLE AT THE NEW YORK REVIEW
The Bloody History of the True Crime Genre
True Crime is having a renaissance with popular TV series and podcasts. But the history of the genre dates back much further.

The critics have spoken: true crime is officially hot. For over a year, news outlets have touted the return of true crime drama. One popular refrain claims that true crime is now joining the ranks of “quality” culture. From the celebrated Serial podcast to the recent pair of well-received O.J. Simpson docuseries, these new crime anthologies have elevated the genre’s status. Prestige series like HBO’s The Jinx and Netflix’s Making a Murderer effectively blend entertainment with real-life investigation.
As more and more networks jump on the bandwagon—NBC has ordered Law & Order: True Crime, whose first season will cover the infamous Menendez brothers’ case, and CBS is producing its own unscripted series on the JonBenét Ramsey murder—the inevitable backlash has begun to set in. Such debates over true crime’s moral and aesthetic merits are nothing new. Although it occasionally aims for respectability—Truman Capote’s In Cold Blood and Norman Mailer’s The Executioner’s Song are often held up as paragons of the genre—true crime is usually relegated to the bin of “trash” culture, a term that denotes cheaply produced, simplistic materials catering to the uncritical masses.
This debate about the value of true crime speaks to our ambivalence over consuming real-life tales of horror. That anybody benefits—through monetary gain or personal titillation—from domestic murder, sex crimes, and grotesque violence seems distasteful, and so we want to consign true crime to the lowest rungs of culture. Yet the genre’s long, rich history shows us that there is much more to true crime than penny dreadfuls and blood-spattered paperbacks. These recent series are merely the most recent iteration of a genre that has always been interested in more than bloody deeds and disfigured bodies.
READ ENTIRE ARTICLE AT JSTOR DAILY
Eyewitness Accounts of the 1906 San Francisco Earthquake
The heart of this book is the sharp and disjointed accounts of survivors, their experience not yet shorn of its surprise.

The San Francisco Calamity by Earthquake and Fire (1906) was known in the publishing industry as an “instant disaster book”. This genre coalesced partly because there were so many disasters at the turn of the last century: the 1889 Johnstown Flood; the 1900 Galveston Hurricane; the 1902 eruption of Mount Pelée; the 1903 Iroquois Theatre Fire; the Great Baltimore Fire of 1904; the 1904 burning of the steamboat General Slocum; and finally, the San Francisco Earthquake of 1906, which killed over 3,000 people and destroyed 80% of the city. All of these became topics of books that followed a certain pattern. A journalist was hastily dispatched to the scene, he furiously filed copy, the page count was fattened up with previously published odds and ends, and images were cut in, the more the better. Get the book to market before interest flits away.
The author of The San Francisco Calamity was Charles Morris, though he’s actually credited as editor, an elision that allowed publishers a freer rein on the book’s final components. Morris was a professional writer who published a great number of popular histories, as well as pseudonymous dime store novels. It’s not clear when he arrived in San Francisco from Philadelphia, nor when he finished his manuscript, but his publisher claimed it was a matter of weeks. If the book wasn’t the first account of the earthquake, it was certainly among the first.
The San Francisco Calamity justifies its length with a comparative survey of many other earthquakes, as well as a history of San Francisco, but the heart of the book is a small section that begins about fifty pages in, when Morris directly quotes his stunned interviewees. Their eyewitness accounts are sharp and disjointed, their experience not yet shorn of its surprise. Nothing has been smoothed or strategically forgotten. They describe a pageant of wretchedness, still unfolding.
The billboard advertising beer that was converted into a public message board, and crowded with death notices. Thieves cutting off fingers and biting off earlobes to seize the jewelry of the dead. Shelters made of fine lace curtains and table cloths. Injuries seeping blood, and thousands of people with nothing to wear aside from their pajamas. Some survivors never stopped shrieking and others went comatose. Some refused to part with their piano, or their sewing machine, or their canary, or their lover’s body. Garbage wagons toted corpses. The air was thick with the smell of gas and smoke, and dangling electrical wires shot off blue sparks.
READ ENTIRE ARTICLE AT THE PUBLIC DOMAIN REVIEW
After Attica, the McKay Report in the Prison Press
How was the famous prisoner uprising and its aftermath depicted in the prison press?

In September 1971, law enforcement stormed New York’s Attica Correctional Facility and opened fire on prisoners who had taken 38 guards hostage to demand basic human rights, including access to healthcare. State authorities killed 29 incarcerated people and 10 correctional officers in the process. Six weeks later, the governor of New York called for a Special Grand Jury investigation into the uprising.
Within a month of the takeover and subsequent massacre (as it was called in the Pa’aho Press), the prison press began circulating commentaries on what happened at Attica. The often-controversial subject remained a frequent topic in the penal press. People opined on many facets of the crisis, the McKay Commission’s official investigation, and finally their report, which spawned even more commentary from incarcerated voices. Via Reveal Digital’s American Prison Newspapers collection, contemporaneous accounts from incarcerated writers are now available online open access.
To investigate the mass casualty event, the state appointed a Special Deputy Attorney General who formed the Attica Task Force. By November the governor had appointed a special commission on Attica, chaired by NYU Law School Dean Robert B. McKay. The McKay Commission investigated the events that led up to, transpired during, and followed the uprising, producing a 514-page report the next year.
READ ENTIRE ARTICLE AT JSTOR DAILY
They Were Fearless 1890s War Correspondents—and They Were Women
Were Harriet Boyd and Cora Stewart rivals in Greece in 1897? The fog of war has obscured a groundbreaking tale.

The Greco-Turkish War of 1897 was a short and ignominious conflict, waged for a mere month or so after the Greeks attempted to annex the Ottoman province of Crete. It did not, however, lack for innovations. In Greece, doctors brought X-ray machines to a theater of war for the first time. It also was the first conflict shot with a movie camera.
Yet perhaps the war’s most enduring legacy—and mystery—is the prominent part played by two American women on its front lines. Harriet Boyd was a Smith College graduate living in Athens. Cora Stewart traveled to Greece with author Stephen Crane and later became his common-law wife. (She is best known to history as “Cora Crane.”)
The two women apparently never crossed paths in Greece. But within the space of 24 hours in May 1897, William Randolph Hearst’s New York Journal and Advertiser trumpeted both Stewart and Boyd—individually and in separate articles—as being the “only” woman covering the war.
Disingenuous? Yes. Boyd and Stewart’s articles appeared in the same newspaper, after all. But two American women publishing dispatches from the same conflict was unprecedented.
Before 1897, there had been only one female war correspondent: Jane McManus Storm Cazneau, who accompanied New York Sun editor Moses Y. Beach on an official 1846 peace mission during the Mexican-American War—and then filed dispatches for that paper from U.S. General Winfield Scott’s successful siege of Vera Cruz.
READ ENTIRE ARTICLE AT THE NEW REPUBLIC
Slandering the Unborn
How bad science and a moral panic, fueled in part by the news media, demonized mothers and defamed a generation.

Legislative intrusion into the womb has a long history in the United States, and nowhere is this paternalism more forceful than when illegal drugs are part of the equation. If the country’s war on drugs functions as a system of social control, that control is doubly exercised when a fetus is involved.
Today, with some notable exceptions, the nation is reacting to the opioid epidemic by humanizing people with addictions — depicting them not as hopeless junkies, but as people battling substance use disorders — while describing the crisis as a public health emergency. That depth of sympathy for a group of people who are overwhelmingly white was nowhere to be seen during the 1980s and 90s, when a cheap, smokable form of cocaine known as crack was ravaging black communities across the country.
News organizations shoulder much of the blame for the moral panic that cast mothers with crack addictions as irretrievably depraved and the worst enemies of their children. The New York Times, The Washington Post, Time, Newsweek and others further demonized black women “addicts” by wrongly reporting that they were giving birth to a generation of neurologically damaged children who were less than fully human and who would bankrupt the schools and social service agencies once they came of age.
READ ENTIRE ARTICLE AT THE NEW YORK TIMES
How the ‘Central Park Five’ Changed the History of American Law
Ava DuVernay’s miniseries shows how this 1989 case led more children to stand trial as adults than at any time before.

Coverage of violent crime is a staple of American news, yet only a handful of stories capture the attention of the nation. Even fewer go on to inform the trajectory of American legal proceedings. The acclaimed filmmaker Ava DuVernay tackles one of the most significant criminal cases of the 1990s with her miniseries When They See Us, which premiered on Netflix on May 31. In four episodes, DuVernay provides the most complete account of the impact of the “Central Park Jogger” case on the lives of the defendants and their families.
On April 19, 1989, police found the body of a 28-year-old white woman in New York’s Central Park. She was covered in blood and nearly dead after a brutal sexual assault. Trisha Meili, the injured party, was not the only victim of the night’s horrific events. So, too, were Raymond Santana, Kevin Richardson, Korey Wise, Yusef Salaam, and Antron McCray—the kids, ages 14, 15, and 16, who were wrongfully convicted of her attack. Despite no DNA evidence, fingerprints, blood, or semen linking any of the black and brown boys to the crime, all five defendants grew up in prison, each one spending between six and 13 years behind bars.
READ ENTIRE ARTICLE AT THE ATLANTIC
Combating the Myth of the Superpredator
In the 1990s, a handful of researchers inspired panic with a dire but flawed prediction: the imminent arrival of a new breed of “superpredators.”
In 1995, John DiIulio, Jr., then a Princeton professor, coined a phrase that seemed to sum up the nation’s fear of teen violence: “superpredator.” In the previous decade, teenage crime rates had exploded. Television news led with story after story of seemingly incomprehensible violence committed by children as young as 10. Many criminologists feared the trend would continue, and DiIulio warned that hundreds of thousands of remorseless teen predators were just over the horizon.
The “superpredator” caught the attention of reporters and politicians, some of whom used it to push for the continued overhaul of a juvenile justice system they considered too lenient. By the end of the 1990s, nearly every state had passed laws to make it easier to try juveniles in adult courts or to increase penalties for violent juvenile crimes.
Today, states are reconsidering the sentences of people who were given mandatory life terms as juveniles – a practice that has since been ruled unconstitutional by the Supreme Court.
READ ENTIRE ARTICLE AT RETRO REPORT, THE NEW YORK TIMES
What the “Crack Baby” Panic Reveals about the Opioid Epidemic
Journalism in two different eras of drug waves illustrates how strongly race factors into empathy and policy.

In the space of a few paragraphs, the story introduces a mother and child and the drug dependency with which they both struggle, and also expands its scope outwards to note the nature of the epidemic in which they are snared. It doesn’t ignore the personal choices involved in drug abuse, but—as is typical for reporting on other health problems—it considers those choices among a constellation of etiologies. In a word, the article is humanizing, and as any public health official will attest, humanization and the empathy it allows are critical in combating any epidemic.
The article is an exemplar in a field of public-health-oriented writing about the opioid crisis—the most deadly and pervasive drug epidemic in American history—that has shaped popular and policy attitudes about the crisis. But the wisdom of that field has not been applied equally in recent history. The story of Jamie Clay and Jay’la Cy’anne stood out to me because it is so incongruous with the stories of “crack babies” and their mothers that I’d grown up reading and watching.
The term itself still stings. “Crack baby” brings to mind hopeless, damaged children with birth defects and intellectual disabilities who would inevitably grow into criminals. It connotes inner-city blackness, and also brings to mind careless, unthinking black mothers who’d knowingly exposed their children to the ravages of cocaine. Although the science that gave the world the term was based on a weak proto-study of only 23 children and has been thoroughly debunked since, the panic about “crack babies” stuck. The term made brutes out of people of color who were living through wave after wave of what were then the deadliest drug epidemics in history. Even the pages of the Times weren’t immune from that panic.
READ ENTIRE ARTICLE AT THE ATLANTIC
The Lost Legacy of the Girl Stunt Reporter
At the end of the nineteenth century, a wave of women rethought what journalism could say, sound like, and do. Why were they forgotten?

Like any good protagonist, the girl stunt reporter possessed foils and shadow selves. “In their use of deception and disguise,” Todd argues, Bly, McDonald, and their ilk mirrored “the rise of the private detective.” But there was a closer parallel: the actor. Female journalists occasionally honed their skills in the theatre—McDonald was recruited to the St. Paul Globe after her turn in a local stage production—and Todd lavishes attention on how her subjects costumed and packaged themselves. Winifred Sweet, a one-time thespian who became the toast of the San Francisco Examiner, adorned “her head with towering constructions of feathers and velvet”; Bly showed up to a job interview in “a floor-length silk cape and a fur turban,” which “projected, if not grandness, at least the idea of grandness.” And yet Todd seems to brush past the deeper link between her subjects and the theatre. These women were given a platform and a voice, but only if they obeyed the script that men provided them. McGuirk, for instance, performed the part of the high-spirited heroine, daring to sit in an electric chair, but her opinions about capital punishment were secondary, if not entirely irrelevant.
This not-quite-omission gets at a larger dynamic in the book. Todd is having fun with her material. She palpably enjoys the color of a more formal era—the voice of William Randolph Hearst, according to one of his colleagues, was “the fragrance of violets made audible”—and the book’s own tone absorbs some of its characters’ can-do corniness. (“If there was a beehive to be poked with a stick,” Todd writes, “McDonald wasn’t going to stand around eating store-bought honey.”) But, at times, the scholar’s perspective feels inseparable from that of her subjects. Did McDonald really try to find work as a schoolteacher, only to be rejected as “too scrawny,” at which point an editor who admired her acting knocked on her door? An author’s note advises that, “as required by their profession, some of these journalists could be quite self-mythologizing. Unless I have evidence to the contrary (such as a census record showing a woman couldn’t have been born when she said), I take them at their word.”
“Sensational,” then, explicitly buys into the girl reporters’ own empowerment narrative. The result doesn’t seem wrong, exactly—just incomplete. Todd makes a convincing case that these journalists’ undercover efforts presaged Barbara Ehrenreich, Mara Hvistendahl, and Suki Kim, for example, and that their direct, inventive style foretold Tom Wolfe. But, reading the book, I also thought of a less reputable phenomenon: the personal-essay industrial complex. Like Internet confessionals, which often traffic in aestheticized female trauma, girl stunt reporting profited from the spectacle of a beautiful woman in danger. It turned narrators who lacked social clout into curiosities. Maybe these reporters weren’t interested in expressing their souls; maybe they enjoyed the artifice. Still, it’s hard to look at the indignities of the gig—the occasionally exploitative gimmicks, how one writer tended to blend into the next—and not feel a flinch of recognition.
READ ENTIRE ARTICLE AT THE NEW YORKER
How TV Paved America’s Road to Trump
“A brand mascot that jumped off the cereal box”: a TV critic explains the multimedia character Trump created.

Donald Trump is the show we can’t turn off, the car crash we can’t look away from, the news cycle we can’t escape.
There are just too many reasons why we got here to distill into a single explanation. But certainly one reason for Trump’s ascendance is television. It’s not quite right to say that TV made Trump president, but it is fair to say that TV created the conditions that made Trump’s presidency possible.
This, at least, is the thesis of James Poniewozik’s new book Audience of One: Donald Trump, Television, and the Fracturing of America. Poniewozik is a TV critic for the New York Times, and his book is an attempt to explain how Trump turned himself into the protagonist of his own TV show and then pulled all of us into it. It’s also about what TV has done to our political culture and why Trump is the logical fulfillment of all the media trends of the last two decades or so.
According to Poniewozik, Trump is fundamentally a creature of TV. His whole public persona was shaped by TV and he cleverly used the medium, with shows like The Apprentice, to propel his political career. He also knew exactly what TV media craves — spectacle, drama, and outrage — and capitalized on it throughout his presidential campaign.
“Donald Trump is not a person,” Poniewozik writes, “he’s a character that wrote itself, a brand mascot that jumped off the cereal box and entered the world.” And, of course, he’s now entered the White House.
Missed in Coverage of Jack Johnson, the Racism Around Him
The Times’ coverage of Johnson, the first black boxer to win the heavyweight title, reveals racially coded attitudes.

“The big black” and “the big negro” are just two of the phrases that The New York Times used to describe Jack Johnson.
“Johnson Weds White Girl” was the headline when he married Lucille Cameron in 1912. He has been called a “negro pugilist and convicted white slaver,” who left a stain “on boxing and on his race” and abused “the fame and fortune that came to him.” Yet, condescendingly, he also was described as being “far above the average negro both mentally and physically.”
For The Times, Johnson, who in 1908 became the first black boxer to win the world heavyweight title, was inseparable from his race. It permeated how the newspaper covered every detail of his life, from his boxing to his legal troubles to his demeanor and success.
The Times’s coverage illuminates the challenges for broad acceptance faced by Johnson, who inspired the 1967 play and 1970 movie “The Great White Hope,” an account of his life and career and the resolve of white society to dethrone him, both in the ring and outside it.
READ ENTIRE ARTICLE AT THE NEW YORK TIMES
Deception and Lies
The Pentagon Papers at 50: A Special Report
The story behind the bombshell scoop and a look at its legacy of First Amendment protection and government accountability on the Vietnam War.

Brandishing a captured Chinese machine gun, Secretary of Defense Robert S. McNamara appeared at a televised news conference in the spring of 1965. The United States had just sent its first combat troops to South Vietnam, and the new push, he boasted, was further wearing down the beleaguered Vietcong.
“In the past four and one-half years, the Vietcong, the Communists, have lost 89,000 men,” he said. “You can see the heavy drain.”
That was a lie. From confidential reports, McNamara knew the situation was “bad and deteriorating” in the South. “The VC have the initiative,” the information said. “Defeatism is gaining among the rural population, somewhat in the cities, and even among the soldiers.”
Lies like McNamara’s were the rule, not the exception, throughout America’s involvement in Vietnam. The lies were repeated to the public, to Congress, in closed-door hearings, in speeches and to the press. The real story might have remained unknown if, in 1967, McNamara had not commissioned a secret history based on classified documents — which came to be known as the Pentagon Papers.
By then, he knew that even with nearly 500,000 U.S. troops in theater, the war was at a stalemate. He created a research team to assemble and analyze Defense Department decision-making dating back to 1945. This was either quixotic or arrogant. As secretary of defense under Presidents John F. Kennedy and Lyndon B. Johnson, McNamara was an architect of the war and implicated in the lies that were the bedrock of U.S. policy.
Daniel Ellsberg, an analyst on the study, eventually leaked portions of the report to The New York Times, which published excerpts in 1971. The revelations in the Pentagon Papers infuriated a country sick of the war, the body bags of young Americans, the photographs of Vietnamese civilians fleeing U.S. air attacks and the endless protests and counterprotests that were dividing the country as nothing had since the Civil War.
The lies revealed in the papers were of a generational scale, and, for much of the American public, this grand deception seeded a suspicion of government that is even more widespread today.
Officially titled “Report of the Office of the Secretary of Defense Vietnam Task Force,” the papers filled 47 volumes, covering the administrations of President Franklin D. Roosevelt to President Lyndon B. Johnson. Their 7,000 pages chronicled, in cold, bureaucratic language, how the United States got itself mired in a long, costly war in a small Southeast Asian country of questionable strategic importance.
They are an essential record of the first war the United States lost. For modern historians, they foreshadow the mind-set and miscalculations that led the United States to fight the “forever wars” of Iraq and Afghanistan.
SEE COLLECTION AT THE NEW YORK TIMES
Man-Bat and Raven: Poe on the Moon
A new book recovers the reputation Poe had in his own lifetime of being a cross between a science writer, a poet, and a man of letters.

On 25 August 1835, the New York Sun ran a sensational scoop: the ‘Great Astronomical Discoveries, Lately Made by Sir John Herschel, L.L.D., F.R.S., &c, at the Cape of Good Hope’. Herschel – former president of the Royal Astronomical Society and son of William Herschel, the discoverer of Uranus – had sailed from Britain to South Africa two years before with a giant reflecting telescope, on a mission to map the southern skies and observe the return of Halley’s Comet. Now, the Sun reported, he had trained his telescope on the Moon, with astonishing results. The lunar surface wasn’t merely a cratered desert after all. He could also see wooded areas with strange quadrupeds roaming about in them: bison-like creatures with a single huge horn, and species of goat-antelope cavorting through the glades. Herschel named the region ‘The Valley of the Unicorn’. Subsequent instalments in the Sun piled wonder on wonder. There were lunar prairies with bipedal beavers living in crude huts, wisps of smoke visible from their chimneys. On the crags above, Herschel discerned the silhouettes of winged figures; switching to his most powerful lens, he identified them as gliding primates, covered in orange fur like orangutans. Further specimens were observed lounging by the side of a lake, making conversational gestures with their hands. Herschel named this species ‘Vespertilio-homo’, or man-bat.
By now the newsboys of New York were selling out every edition of the paper and clamouring for more. On 29 August a special pamphlet edition of the series sold twenty thousand copies almost as soon as it appeared. The Sun’s circulation jumped to an unprecedented 19,360, larger even than that of the Times of London, a paper published in a city six times the size of New York. A lithograph print of the man-bats swooping over a picturesque lake followed, priced at 25 cents. Within weeks a giant diorama was installed on Broadway, illustrating the entire lunar landscape across a thousand feet of rotating canvas. It was only when a New York reporter tracked Herschel to his hotel in Cape Town and told him the story – to the astronomer’s great astonishment and amusement – that it was conclusively refuted.
The Great Moon Hoax, once exposed, provoked much debate about the state of American science, the responsibilities of the popular press and the credulity of its readers. The Sun denied it had been duped, without ever quite claiming that the story was genuine: it had published in good faith, it claimed, out of a proper respect for Herschel’s scientific eminence. But the most indignant response came from a young journalist at the Southern Literary Messenger in Richmond, Virginia. This was Edgar Allan Poe, who had just weeks earlier published a short story, ‘The Unparalleled Adventure of One Hans Pfaall’, about an accidental voyage to the Moon in a hot air balloon. It was a knockabout yarn in the vein of Baron Munchhausen’s adventures but presented as reportage, much of it devoted to scientifically detailed descriptions of the Earth’s surface as viewed from the upper atmosphere, the curvature of the globe illuminated by sunrise, and the gradual diminution of gravity as the balloon was slowly but surely tugged towards the lunar surface. There was a brief, playful paragraph describing the ‘wild and dreamy regions of the Moon’ and their curious inhabitants: tiny, earless humanoids who communicated by telepathy. This must have been the inspiration, as Poe saw it, for the Sun’s plagiarised account.
READ ENTIRE ARTICLE AT LONDON REVIEW OF BOOKS
Trump Lied to Me about His Wealth to Get onto the Forbes 400
Posing as ‘John Barron,’ he claimed he owned most of his father’s real estate empire.
In May 1984, an official from the Trump Organization called to tell me how rich Donald J. Trump was. I was reporting for the Forbes 400, the magazine’s annual ranking of America’s richest people, for the third year. In the previous edition, we’d valued Trump’s holdings at $200 million, only one-fifth of what he claimed to own in our interviews. This time, his aide urged me on the phone, I needed to understand just how loaded Trump really was.
The official was John Barron — a name we now know as an alter ego of Trump himself. When I recently rediscovered and listened, for the first time since that year, to the tapes I made of this and other phone calls, I was amazed that I didn’t see through the ruse: Although Trump altered some cadences and affected a slightly stronger New York accent, it was clearly him. “Barron” told me that Trump had taken possession of the business he ran with his father, Fred. “Most of the assets have been consolidated to Mr. Trump,” he said. “You have down Fred Trump [as half owner] . . . but I think you can really use Donald Trump now.” Trump, through this sockpuppet, was telling me he owned “in excess of 90 percent” of his family’s business. With all the home runs Trump was hitting in real estate, Barron told me, he should be called a billionaire.
At the time, I suspected that some of this was untrue. I ran Trump’s assertions to ground, and for many years I was proud of the fact that Forbes had called him on his distortions and based his net worth on what I thought was solid research.
READ ENTIRE ARTICLE AT THE WASHINGTON POST
A “Malicious Fabrication” by a “Mendacious Scribbler for the ‘New York Times’”
The Times, as a “venomous Abolition Journal” could not be trusted to provide the truth for a white, slave-owning southerner.

Sounds familiar, doesn’t it? Okay, maybe we don’t typically use the word “mendacious” (which means lying) much anymore, but this quote sounds like it could be about a current headline. It reflects an incredibly divided country in which words were a weapon used to condemn everyone and everything on the other side. This quote, though, is from December 1860 and refers to an account of a mob meeting the Prince of Wales during his visit to Richmond, Virginia. The letter was written by John Rutherfoord, a prominent political figure in Virginia, in response to his English cousin’s question about the reports he had heard that a mob met the prince in Richmond.
The report in question was an article published in the New York Times on “The Prince’s Visit to the United States. The Richmond Mob and the Irish Insult.” Despite Rutherfoord’s furious denials of the report as an outright “malicious fabrication” by a reporter for the “venomous Abolition Journal” the New York Times, the article was actually originally published in the London Times and focuses primarily on anti-Irish prejudice. The author assured his readers that most Americans were disgusted by “the Richmond mob” and had “no sympathy with the acts of Irish emigrants in New-York.” Even the author’s description of events in Richmond revolved around anti-Irish sentiment: he thought the mob could have been “stirred up by some Irish or semi-Irish demagogue.” According to the author, the “disorderly mob” pressed constantly on the prince and his party and threw insults at them. After saying that lower-class southern whites were the most “ruffianly and depraved” in America, he then painted a picture of the mob for his readers: “Fancy a mob of four or five hundred slave-dealers, horse-dealers, small planters, liquor-store keepers, and loungers, together with, probably, a large sprinkling of blackguardism from Ireland.” The rest of the article had a similar anti-Irish tone, focusing on the response to fears of a similar situation with Irish immigrants in New York City.
READ ENTIRE ARTICLE AT THE DEVIL’S TALE, DUKE UNIVERSITY
Ari Fleischer Lied, and People Died
The former Bush mouthpiece had more to do personally with the Iraq WMD catastrophe than he wants us to believe.

Ari Fleischer, the former White House Press Secretary under President George W. Bush, ignited a firestorm of controversy Wednesday when, while commenting on the 16th anniversary of the U.S. invasion of Iraq, he sought to defend the reputation of his boss when it came to the veracity of the claims about Iraqi Weapons of Mass Destruction (WMD) that underpinned President Bush’s case for war.
“The Iraq war began sixteen years ago tomorrow,” Fleischer tweeted on March 19. “There is a myth about the war that I have been meaning to set straight for years. After no WMDs were found, the left claimed ‘Bush lied. People died.’ This accusation itself is a lie. It’s time to put it to rest.”
Fleischer goes on to declare that “The fact is that President Bush (and I as press secretary) faithfully and accurately reported to the public what the intelligence community concluded,” before noting that “The CIA, along with the intelligence services of Egypt, France, Israel and others concluded that Saddam had WMD. We all turned out to be wrong. That is very different from lying.”
As a Chief Weapons Inspector with the United Nations Special Commission (UNSCOM) in Iraq from 1991 through 1998, I was intimately familiar with the intelligence used by the U.S. Intelligence Community to underpin the case for war (which I debunked in June 2002 in an article published in Arms Control Today). Armed with the unique insights that came from this experience, I can state clearly and without any reservation that Ari Fleischer, once again, has misrepresented the facts when it comes to the Bush administration’s decision to invade Iraq in March 2003.
The fact is, the Iraq War was never about WMD. Rather, it was waged for one purpose and one purpose only—regime change. Getting rid of Saddam Hussein was the sole focus of this effort, and the so-called “intelligence” used to justify this act was merely an excuse for action. Ari Fleischer knows this, and to contend otherwise—as he does via Twitter—is simply a continuation of the lies he told from the very beginning about the U.S. case for war with Iraq.
READ ENTIRE ARTICLE AT THE AMERICAN CONSERVATIVE
‘The Temperature in Saigon Is 105 and Rising’
What I learned about American power watching the U.S. leave Vietnam — and then Afghanistan decades later.

As a Marine Corps veteran of Vietnam (1965-1966), a reporter who was among the last to be evacuated from Saigon by helicopter (1975) and a correspondent who covered the Soviet invasion of Afghanistan from the Afghan side (1980), I can say with authority that I agree wholeheartedly with Secretary of State Antony Blinken’s statement, “This is not Saigon.”
It’s worse.
Compared to what’s happening now in Kabul, the chaotic U.S. exodus from Saigon seems in retrospect to have been as orderly as the exit of an audience from an opera.
But there are similarities that can’t be ignored. The news and images from Kabul — the thud of helicopters, the roar of transport planes landing and taking off, along with footage of civilians mobbing the planes, desperate to get on — summon my memories of April 29, 1975, when, trapped in a city under siege, my colleague Ron Yates summed up the uniquely American feeling of empire at sundown.
“Know how I feel? The way you do at a football game when it’s the last two minutes of the fourth quarter and the score’s fifty-six to zip and your side’s the one with the zip,” said Yates, who was the Far East correspondent for the Chicago Tribune at the time.
Yates and I were huddled with around two dozen other foreigners, mostly war correspondents, in the first-floor corridor of the Continental Palace hotel in Saigon as the North Vietnamese Army rolled south. The building shimmied and shook as NVA artillery pummeled Saigon.
It was ten-thirty in the morning, and the shells had been falling for six hours. Enemy tanks had penetrated the city’s outer defenses. The day before, we had been given our instructions and assignments to evac teams, each of which was issued a Citizens Band radio to monitor coded traffic over the airwaves.
The code words we waited to hear were: “The temperature in Saigon is one hundred and five and rising,” which were to be followed by a few bars from Bing Crosby’s “White Christmas.”
READ ENTIRE ARTICLE AT POLITICO
The 19th-Century Swill Milk Scandal That Poisoned Infants with Whiskey Runoff
Vendors hawked the swill as “Pure Country Milk.”

In the 1850s, New York City babies were being mysteriously poisoned.
Nearly 8,000 babies a year shriveled to death from uncontrollable diarrhea, as reported by The New York Times. Without the luxury of advanced medical diagnostics, doctors struggled to identify the culprit. The public floated theories—nutritional and digestive diseases like cholera infantum and marasmus, to give a name to the epidemic—but with little evidence, they ultimately gave a collective shrug. That is, until 1858, when an enterprising journalist named Frank Leslie unveiled the offender in a series of scathing exposés: milk.
Swill milk, to be exact—the tainted result of miasmic dairy cows being fed leftover mash from Manhattan and Brooklyn whiskey distilleries, an arrangement that let distillers profit from their leftover grain. It was an especially lucrative era to be producing cow milk: Americans at the time considered cow milk to be highly nutritious and an effective substitute for breastmilk. Back then, economic and societal pressures pushed women to wean their babies sooner. In her book Taming Manhattan: Environmental Battles in the Antebellum City, Dr. Catherine McNeur writes that vendors sometimes sold swill milk for as little as six cents per quart, which especially appealed to lower-class mothers who needed to wean early so they could return to work. But the poor weren’t the only ones looking for a solution.
“[Middle-class women] were having numerous children, but also had to fulfill the obligations of politeness of middle class society, which required them to be available for visiting, and leading the household. In order to do those things, it was necessary they not be breastfeeding all the time,” says Dr. Melanie Dupuis, author of Dangerous Digestion: The Politics of American Dietary Advice. “Other theories for why women were weaning earlier include corsets, and women’s health during that time period, which tended to be poor. Also, men didn’t like their women to be breast feeding all the time. Some argue it was modesty.”
READ ENTIRE ARTICLE AT ATLAS OBSCURA
The Intelligence Coup of the Century
For decades, the CIA read the encrypted communications of allies and adversaries.

For more than half a century, governments all over the world trusted a single company to keep the communications of their spies, soldiers and diplomats secret.
The company, Crypto AG, got its first break with a contract to build code-making machines for U.S. troops during World War II. Flush with cash, it became a dominant maker of encryption devices for decades, navigating waves of technology from mechanical gears to electronic circuits and, finally, silicon chips and software.
The Swiss firm made millions of dollars selling equipment to more than 120 countries well into the 21st century. Its clients included Iran, military juntas in Latin America, nuclear rivals India and Pakistan, and even the Vatican.
But what none of its customers ever knew was that Crypto AG was secretly owned by the CIA in a highly classified partnership with West German intelligence. These spy agencies rigged the company’s devices so they could easily break the codes that countries used to send encrypted messages.
READ ENTIRE ARTICLE AT THE WASHINGTON POST
How the U.S. Lost Its Mind
Make America reality-based again.
Each of us is on a spectrum somewhere between the poles of rational and irrational. We all have hunches we can’t prove and superstitions that make no sense. Some of my best friends are very religious, and others believe in dubious conspiracy theories. What’s problematic is going overboard—letting the subjective entirely override the objective; thinking and acting as if opinions and feelings are just as true as facts. The American experiment, the original embodiment of the great Enlightenment idea of intellectual freedom, whereby every individual is welcome to believe anything she wishes, has metastasized out of control. From the start, our ultra-individualism was attached to epic dreams, sometimes epic fantasies—every American one of God’s chosen people building a custom-made utopia, all of us free to reinvent ourselves by imagination and will. In America nowadays, those more exciting parts of the Enlightenment idea have swamped the sober, rational, empirical parts. Little by little for centuries, then more and more and faster and faster during the past half century, we Americans have given ourselves over to all kinds of magical thinking, anything-goes relativism, and belief in fanciful explanation—small and large fantasies that console or thrill or terrify us. And most of us haven’t realized how far-reaching our strange new normal has become.
Much more than the other billion or so people in the developed world, we Americans believe—really believe—in the supernatural and the miraculous, in Satan on Earth, in reports of recent trips to and from heaven, and in a story of life’s instantaneous creation several thousand years ago.
We believe that the government and its co-conspirators are hiding all sorts of monstrous and shocking truths from us, concerning assassinations, extraterrestrials, the genesis of AIDS, the 9/11 attacks, the dangers of vaccines, and so much more.
And this was all true before we became familiar with the terms post-factual and post-truth, before we elected a president with an astoundingly open mind about conspiracy theories, what’s true and what’s false, the nature of reality.
We have passed through the looking glass and down the rabbit hole. America has mutated into Fantasyland.
READ ENTIRE ARTICLE AT THE ATLANTIC
Watergate’s Ironic Legacy
Amidst the January 6 hearings, the fiftieth anniversary of Nixon’s scandal reminds us that it has only gotten harder to hold presidents accountable.

As the House January 6 committee lays out its case against Donald Trump for his part in an attempted coup, Americans may be drawn to reflect on the presidential scandal that began fifty years ago. In the early morning hours of June 17, 1972, police arrested five men at Democratic National Committee headquarters in the Watergate Office Building in Washington, D.C., triggering the Watergate saga. Over two years later, President Richard M. Nixon resigned, the only time in U.S. history that a president was forced out of office under the threat of impeachment. Following his resignation, his successor, President Gerald R. Ford, declared that the “Constitution works.” This has been the consensus view ever since.
However, a half-century of experience suggests that the Constitution does not work so well to check abuse of power in the presidency. Looking back at how Watergate progressed, it remains unclear how much we can attribute Nixon’s undoing to the Constitution working or how much it owes to lucky breaks and the unique collection of personalities involved. Even if the rule of law ultimately prevailed in Watergate, it has become more difficult to contest presidential power since. Paradoxically, in the process of dealing with President Nixon, the institutions involved—Congress, the Supreme Court, and the special prosecutor—set precedents that made it harder to check a runaway presidency. Changes in the political dynamics in the intervening years have added to the difficulties. Americans may soon have a better understanding of how well the Constitution works today, depending on the outcome of the January 6 committee’s investigation of President Trump.
On June 22, 1972, a few days after the Watergate break-in, President Nixon met with H. R. Haldeman, his chief of staff. “It sounds like a comic opera,” Nixon said, so poorly executed that no one would think “we could have done it.” Haldeman agreed, picturing well-dressed men installing wiretaps with rubber gloves, “their hands up and shouting ‘Don’t shoot’ when the police come in.” Yet the arrests raised concerns at the White House. With less than five months before Election Day, Nixon and his advisers worried that the FBI investigation of the break-in might reveal other illegal activities.
They had cause for concern. Warrantless wiretapping and burglaries run out of the White House were not unprecedented for the Nixon administration. Indeed, some of those involved in the Watergate operation had previously broken into a Los Angeles psychiatrist’s office, seeking information to discredit a defense analyst who had leaked the Pentagon Papers. The publication of this classified history led President Nixon to establish an off-the-books investigative unit known to White House insiders as the Plumbers because they would plug leaks. The administration also used “dirty tricks,” mostly juvenile tactics, such as stink bombs at rallies to disrupt the campaigns of Democratic presidential candidates. The most successful was a letter smearing the wife of Senator Edmund Muskie, which sparked an emotional reaction from Muskie that knocked him out of the primaries. At the time, he was running even with Nixon. Then there were financial improprieties. International Telephone and Telegraph Corporation gave $400,000 to the Republican Party while engaged in negotiations with the Justice Department over an antitrust suit. Nixon decided to raise price supports for milk after dairy industry lobbyists agreed to contribute $2 million to his reelection campaign. Ambassadorships were for sale. Meanwhile, the president was using public funds for improvements on his own homes.
READ ENTIRE ARTICLE AT BOSTON REVIEW
Why We’re Still Obsessed with Watergate
The reasons that Nixon’s scandal endures when other presidents’ disgraces have not.

As we commemorate, on its 50th anniversary, the Watergate crisis that brought down Richard Nixon’s presidency, it is worth remarking on the extraordinary fact that we’re still commemorating it at all.
Presidential scandals, after all, don’t age well. Few people can tell you much if anything about the Whiskey Ring that tarred Ulysses Grant’s presidency, the corruption flaps that rocked Harry Truman’s and Dwight Eisenhower’s White Houses, or even the Teapot Dome affair, which stood as the benchmark for presidential malfeasance until Nixon came along a half-century later. Even controversies of recent vintage that seemed like a very big deal at the time — Iran-contra, the Lewinsky affair, George Bush Sr.’s Christmas Eve pardons, George Bush Jr.’s U.S. attorneys scandal — didn’t occasion national reflection on their fifth or 10th or 25th anniversaries on anything like the scale that Watergate did.
One wouldn’t necessarily have predicted this. For a time, in the 1980s especially, journalists became enamored of a narrative that imagined Nixon staging a “comeback” or “rehabilitating” himself, as the overstocked smorgasbord of presidential crimes — the Kissinger wiretaps and the White House tapes, the illegal break-ins and the FBI, CIA, and IRS abuses, the hush money and the serial lying — receded into history. Newsweek put Nixon on its cover, declaring, “He’s Back”; pundits whom he wined and dined gushed about his command of world affairs. One historian ventured the colossally mistaken judgment that Watergate was destined to become a “dim and distant curiosity.”
But Watergate never faded. Younger generations, it’s true, may no longer get references to the Rose Mary Stretch, puns about Hugh Sloan and the Hughes Loan, or allusions to the Prussian Guard. But we still talk about political smoking guns; the Enemies List and the Saturday Night Massacre remain reference points in our political culture; and we continue to name new scandals with the suffix “-gate.” Watergate has remained the scandal by which subsequent ones are measured. Textbooks and high school teachers still explain it as the defining event of the Nixon years — and, indeed, a watershed in American public life that put an end to our heroic view of the presidency.
READ ENTIRE ARTICLE AT POLITICO
What Really Happened to JFK?
One thing’s for sure: The CIA doesn’t want you to know.

In 1988, in an elevator at a film festival in Havana, the director Oliver Stone was handed a copy of On the Trail of the Assassins, a newly published account of the murder of President John F. Kennedy. Stone admired Kennedy with an almost spiritual intensity and viewed his death on November 22, 1963 — 60 years ago this month — as a hard line in American history: the “before” hopeful and good; the “after” catastrophic. Yet he had never given much thought to the particulars of the assassination. “I believed that Lee Oswald shot the president,” he said. “I had no problem with that.” On the Trail of the Assassins, written by the Louisiana appellate judge Jim Garrison, proposed something darker. In 1963, Garrison had been district attorney of New Orleans, Oswald’s home in the months before the killing. He began an investigation and had soon traced the contours of a vast government conspiracy orchestrated by the CIA; Oswald was the “patsy” he famously claimed to be. Stone read Garrison’s book three times, bought the film rights, and took them to Warner Bros. “I was hot at the time,” Stone told me. “I could write my own ticket, within reason.” The studio gave him $40 million to make a movie.
The resulting film, JFK, was a scandal well before it came anywhere near a theater. “Some insults to intelligence and decency rise (sink?) far enough to warrant objection,” the Chicago Tribune columnist Jon Margolis wrote just as shooting began. “Such an insult now looms. It is JFK.” Newsweek called the film “a work of propaganda,” as did Jack Valenti, the head of the Motion Picture Association of America, who specifically likened Stone to the Nazi filmmaker Leni Riefenstahl. “It could spoil a generation of American politics,” Senator Daniel Patrick Moynihan wrote in the Washington Post.
READ ENTIRE ARTICLE AT INTELLIGENCER
How Israel Is Borrowing From the U.S. Playbook in Vietnam
Justifying civilian casualties has a long history.

Gen. William Westmoreland, commander of the US forces in Vietnam, deemed accidental civilian deaths “a great problem” in 1966, but similarly charged that the war was “designed by the insurgents and the aggressors to be fought among the people,” suggesting that the Viet Cong were ultimately responsible for civilian deaths. Westmoreland had established rules that were supposedly meant to limit civilian casualties: Residents of a village suspected of hiding Viet Cong first had to be warned via leaflet or loudspeaker planes before an air strike was carried out, unless the village was under total communist control, in which case it was a “specified strike zone”—later known as a “free-fire zone”—and US forces could bomb whatever they wanted. In one such case, US forces killed 20 civilians and wounded 32.
“Some of this is just human failure, bad judgment,” one high-ranking military figure commented. Much like Israeli leaflets telling Gazans to evacuate today, these warnings were a thin justification for what followed, as when troops spent hours telling civilians via loudspeaker to leave a communist-controlled area in one 1968 assault, but “for some reason they didn’t leave before US forces attacked with napalm,” leaving 17 dead, according to the Associated Press.
Vietnam-era US officials even made use of an allegation that has become ubiquitous in the current war: enemy forces using innocent people as human shields. In one January 1967 incident, South Vietnamese forces shelled a village, killing 10 children and wounding 16, only for a US spokesperson to claim—falsely, it soon turned out—the Viet Cong had “herded” civilians in front of them as they advanced. “These civilian casualties are very regrettable and are directly attributable to the callous use of civilians by the Viet Cong in military operations,” the spokesperson said. Such claims abounded throughout the war.
In reality, as journalist Nick Turse uncovered decades later when he went digging through archives and interviewing ex-GIs, all of these US statements papered over a far more brutal truth: That, far from being accidents or the tragic ugliness of war, the high number of Vietnamese civilian deaths resulted from deliberate policy set at the top and enacted by US troops on the ground, rooted in officials’ emphasis on “body count” and a view of Vietnamese as “animals,” all of whom, even women and children, were potential threats. Even the infamous My Lai massacre was at first publicized by the US government as a grand military triumph over enemy fighters.
READ ENTIRE ARTICLE AT THE NATION
Kissinger, Me, and the Lies of the Master
‘Off off the record’ with the man who secretly taped our telephone calls.

My dance with Kissinger did not begin until early 1972 when I was asked by Abe Rosenthal, the executive editor of the Times, to join the newspaper’s staff in Washington and write what I wanted as an investigative reporter about the Vietnam War—with the proviso that I had better be damn sure I was right. By then, I had won lots of prizes, including the Pulitzer, for my reporting on the My Lai massacre in Vietnam and published two books, enough to land me a job at the best place in the world for a writer: as a reporter for the New Yorker. But Rosenthal’s offer and my hatred for the war led me to leave the magazine for the daily rush of a newspaper.
When I arrived at the Washington bureau in the spring of 1972, my desk was directly across from the paper’s main foreign policy reporter, a skilled journalist who was a master at writing coherent stories for the front page on deadline. I learned that around 5 pm on days when there were stories to be written about the war or disarmament—Kissinger’s wheelhouse—the bureau chief’s secretary would tell my colleague that “Henry” was on the phone with the bureau chief and would soon call him. Sure enough, the call would come and my colleague would frantically take notes and then produce a coherent piece reflecting what he had been told, which would invariably be the lead story in the next morning’s paper. After a week or two of observing this, I asked the reporter if he ever checked what Kissinger had told him—the stories he turned out never cited Kissinger by name but quoted senior Nixon administration officials—by calling and conferring on background with William Rogers, the secretary of state, or Melvin Laird, the secretary of defense.
“Of course not,” my colleague told me. “If I did that, Henry would no longer deal with us.”
Please understand—I am not making this up.
READ ENTIRE ARTICLE BY SEYMOUR HERSH AT SUBSTACK
Blaming the Media
A Century Ago, Progressives Were the Ones Shouting ‘Fake News’
The term “fake news” dates back to the end of the 19th century.

Donald Trump may well be remembered as the president who cried “fake news.”
It started right after he took office, when he used it to discredit stories about the size of the crowd at his inauguration. He hasn’t let up since, labeling any criticism and negative coverage as “fake.” Just in time for awards season, he rolled out his “Fake News Awards” and, in true Trumpian fashion, it appears he is convinced that he invented the term.
He didn’t. As a rhetorical strategy for eroding trust in the media, the term dates back to the end of the 19th century.
Then – as now – the term became shorthand for stories that would emerge from what we would now call the mainstream media. The only difference is that righteous muckrakers were usually the ones deploying the term. They had good reason: They sought to challenge the growing numbers of powerful newspapers that were concocting fake stories to either sell papers or advance the interests of their corporate benefactors.
READ ENTIRE ARTICLE AT THE CONVERSATION
When Richard Nixon Declared War on the Media
Like Nixon, Trump has managed to marginalize the media, creating an effective foil.

More than 40 years ago, Richard Nixon subtly changed the modern presidency. During past administrations, the American news media had always been referred to as “the press,” but Nixon, whose contentious relationship with the nation’s newsrooms was longstanding, tweaked that convention and began labeling the press as “the media,” a term he felt sounded more ominous and less favorable. As Jon Marshall wrote in 2014 for The Atlantic, Nixon was the first president to exclusively use this term, and while subsequent presidents were similarly at odds with those whose job it is to hold the country’s chief executive in check, none were as vitriolic as Nixon.
Donald Trump has come the closest, evidenced by this week’s post-midterm election press conference — his first in months — which quickly went off the rails moments after his opening remarks, devolving into presidential rants accusing those assembled of perpetuating hoaxes while advancing bogus claims of “racist” questions peddled by the “fake media.” The surreality of the conference was part carnival, part Grand Guignol, but it wasn’t without historical precedent: Asked how to “change” the tone of the country, Trump claimed that it “begins with the media — we used to call it the press.” He didn’t credit Nixon, but the connection was readily apparent, even more so when the administration followed up by barring CNN reporter Jim Acosta from the White House, revoking the press credentials of the veteran reporter and gadfly.
Acosta had the temerity to first question the dog-whistle issue of the caravan and then follow up by asking about Robert Mueller’s probe; in the process, he shielded the microphone from a White House aide attempting to censor him. The government’s decision was outlandish, perhaps, but not all that surprising: first, Masha Gessen had predicted a similar action in the New York Review of Books just two years earlier, and second, Nixon kept a master list of his “enemies,” dozens of whom were members of the press and one of whom he specifically banned.
When Stuart Loory wrote a January 1971 article for the Los Angeles Times about the cost to taxpayers of maintaining Nixon’s Western White House in San Clemente and a vacation property in Key Biscayne (which, between the years of 1969 and 1972, would later be revealed to amount to $218,676 — or $1.3 million in 2018), he likely expected pushback from the White House. After all, Loory had written a column for the paper just six months earlier examining the supposed effectiveness of Henry Kissinger, then the special presidential advisor on foreign affairs (as Louis Liebovich recounts in Richard Nixon, Watergate, and the Press, aide Herbert Klein complained about the coverage to then publisher Otis Chandler). What Loory didn’t expect, though, was to be banned from the White House, which, despite his reputation — the reporter had an illustrious career, working for the New York Herald Tribune, the New York Times, the Los Angeles Times, and finally CNN — he summarily was.
READ ENTIRE ARTICLE AT LONGREADS
When the War on the Press Turns Violent, Democracy Itself Is at Risk
The bloody history of attacks on American journalists.

The constant churn among President Trump’s communications staff — including the abrupt ouster of Anthony Scaramucci, who days ago promised to bring “an era of a new good feeling” to press relations in his role as communications director — has not obscured the underlying principle driving their media efforts: unremitting hostility toward a free and objective press.
Even newly appointed Chief of Staff John F. Kelly appears to scorn the news media. In May, when a ceremonial saber was given to Trump at the U.S. Coast Guard Academy graduation ceremony, Kelly suggested that Trump might “use it on the press.” The president seemed to like the idea, as well he might. Last month, he retweeted a GIF depicting his appearance on “WrestleMania” with a CNN logo superimposed on the face of the man he pummeled.
While both Trump and Kelly passed off their comments as jokes, the increasingly commonplace threats of violence directed at the press ought to be taken quite seriously.
READ ENTIRE ARTICLE AT THE WASHINGTON POST
Why Trump’s Assault on NBC and “Fake News” Threatens Freedom of the Press
Restricting the press backfires politically.

Yesterday it was NBC that bore the brunt of President Trump’s most recent harangue about fake news. And this time, he didn’t just attack NBC’s reporting, he also threatened to revoke the network’s broadcast license, hinting at a more aggressive political maneuver to come if needed to silence the opposition.
Despite Trump’s recent proclamation that he invented the moniker “fake news,” efforts to delegitimize opponents in the press have actually been deeply ingrained in American politics since the nation’s founding, and they have proved politically dangerous. While the First Amendment protects freedom of the press, in 1798 the specter of “fake news” fueled passage of the Sedition Act, which limited the scope of legal criticism of the government. Like John Adams before him, Trump threatens to stifle and undermine the press with his attacks, but if he acts on them, they also might just blow up in his face.
In the 18th century, fake news was out of control. The earliest U.S. newspapers shared news with little regard for accuracy. In 1731, Benjamin Franklin, who was then the editor of the Pennsylvania Gazette, expressed a widely-held sentiment when he suggested that printers should simply print whatever came in over the transom in hopes that “when Truth and Error have fair Play, the former is always an overmatch for the latter.”
READ ENTIRE ARTICLE AT THE WASHINGTON POST
What Facebook Did to American Democracy
And why it was so hard to see it coming.

In the media world, as in so many other realms, there is a sharp discontinuity in the timeline: before the 2016 election, and after.
Things we thought we understood—narratives, data, software, news events—have had to be reinterpreted in light of Donald Trump’s surprising win as well as the continuing questions about the role that misinformation and disinformation played in his election.
Tech journalists covering Facebook had a duty to cover what was happening before, during, and after the election. Reporters tried to see past their often liberal political orientations and the unprecedented actions of Donald Trump to see how 2016 was playing out on the internet. Every component of the chaotic digital campaign has been reported on, here at The Atlantic, and elsewhere: Facebook’s enormous distribution power for political information, rapacious partisanship reinforced by distinct media information spheres, the increasing scourge of “viral” hoaxes and other kinds of misinformation that could propagate through those networks, and the Russian information ops agency.
But no one delivered the synthesis that could have tied together all these disparate threads. It’s not that this hypothetical perfect story would have changed the outcome of the election. The real problem—for all political stripes—is understanding the set of conditions that led to Trump’s victory. The informational underpinnings of democracy have eroded, and no one has explained precisely how.
READ ENTIRE ARTICLE AT THE ATLANTIC
Don’t Look For Patient Zeros
Naming the first people to fall sick often leads to abuse.

On March 30, the New York Times flagship podcast, The Daily, released an episode titled “New Jersey’s Patient Zero.” What followed was a wrenching and compassionate story about New Jersey’s first confirmed coronavirus patient, who was identified in the story by name. Early that morning, the show’s host, Michael Barbaro, promoted that day’s episode as “the story of what one hospital, in N.J., learned from its patient zero back in early March.” Yet not everyone was pleased by this framing. The next day, the patient tweeted back, “Hi, Michael. I really appreciate your hard work. One thing is that I am not a patient zero.”
As cases of the coronavirus have mounted over the last month, the term “patient zero” has begun popping up with astounding regularity. Sources from CNN to the BBC, The Washington Post to People, the New York Post to New York Governor Andrew Cuomo, have reported on supposed “patient zeros,” nearly all of the stories identifying these supposed first patients by name. This pattern has also spread beyond traditional sources of media. One fake news website ran an article titled “COVID-19: Chinese Health Authorities Confirm Patient Zero ‘Had Sex With Bats,’” which was promptly shared on social media some 30,000 times. The Daily Show called Donald Trump the “patient zero” of a looming “pandumbic.” Such rhetoric can be dangerous: What might happen if a reader or a listener decided to enact some form of revenge on one of these “patient zeros,” helpfully identified by name? Reporting from Kenya has shown that the woman labeled that country’s “patient zero” has been subjected to relentless online bullying.
READ ENTIRE ARTICLE AT THE NEW REPUBLIC
This 1874 New York Herald Feature Sent Manhattanites Running for Their Lives
James Gordon Bennett Jr.’s most eccentric public service announcement.

Now, on this early November morning, the Herald’s night editor must have been cringing as he had the still-warm draft of the first edition sent to his mercurial boss. The Herald contained a lead story that, if executed properly, was guaranteed to cause the kind of stir that Gordon Bennett delighted in. It was one of the most incredible and tragic news exclusives that had ever run in the Herald’s pages. The story was headlined, “A Shocking Sabbath Carnival of Death.”
The Commodore scanned the paper and began to take in the horrifying details: Late that Sunday afternoon, right around closing time at the zoo in the middle of Central Park, a rhinoceros had managed to escape from its cage. It had then rampaged through the grounds, killing one of its keepers—goring him almost beyond recognition. Other zookeepers, who had been in the midst of feeding the animals, rushed to the scene, and somehow in the confusion, a succession of carnivorous beasts—including a polar bear, a panther, a Numidian lion, several hyenas, and a Bengal tiger—had slipped from their pens. What happened next made for difficult reading. The animals, some of which had first attacked each other, then turned on nearby pedestrians who happened to be strolling through Central Park. People had been trampled, mauled, dismembered—and worse.
In 1918 and 2020, Race Colors America’s Response to Epidemics
A look at how Jim Crow affected the treatment of African Americans fighting the Spanish flu.

In American epidemics, race is a preexisting condition.
Whether it’s the influenza pandemic of 1918 or COVID-19 over a century later, race and ethnicity have been, and continue to be, enormous factors in determining whether people will receive medical attention when they become ill, and the sort of attention they will receive.
In “The 1919 Influenza Blues,” Essie Jenkins documented the toll the flu took on the country, noting that viruses don’t discriminate when it comes to their victims. She sang:
“People died everywhere
death went creepin’ through the air
and the groans of the rich
sure were sad
But it was God’s own mighty plan
He’s judging this old land
North and South, East and West
can be seen
He killed rich and poor
and he’s going to
kill some more …”
According to Centers for Disease Control and Prevention estimates, the 1918 flu infected 500 million people worldwide and resulted in 50 million deaths around the globe, 675,000 of them in the United States. But while viruses don’t discriminate, people do. In cities across the nation, black people struck by the flu were often left to fend for themselves. They received substandard care in segregated hospitals, where they could be relegated to close quarters in basements, or they were allowed admittance only to black-only hospitals. Even in death, black bodies were neglected by white public infrastructure. In Baltimore that year, white sanitation department employees refused to dig graves for black flu victims after the city’s only black cemetery, Mount Auburn, could not accommodate any more graves.
READ ENTIRE ARTICLE AT ANDSCAPE
How Media Was Social in the 1790s
What would the French Revolution have looked like on Twitter?

What would the French Revolution have looked like on Twitter? It’s not hard to imagine. Embedded videos of the Bastille’s storming. “#foreignplot” trending on the sidebar. Raynal’s threads going viral. “Antoinette was innocent! RT if you agree!” It’s an enjoyable thought experiment, and maybe even a useful teaching exercise. Every so often, historians joke their way through a viral Twitter thread with some version of that prompt. Recently, there was “Name a historical figure that never had access to Twitter but would have been great at it.” A couple of days later, it was (much more sensibly) “Name a historical figure you are extremely grateful never had access to Twitter.”
But maybe we don’t have to wonder.
It’s tempting to think of social media as a fundamentally different vehicle for communication than the media landscapes of the past. What could Instagram have in common with the telegraph, or Facebook with newspapers? But little is actually new about these Internet media. Those who lived through the late eighteenth century would have recognized many of the problems that beset the twenty-first century’s news media.
Our familiar challenges with verification, fake news, irresponsible sharing, and partisan media would have been familiar to those who lived through the tumultuous 1790s. Indeed, North Americans living through the French Revolution experienced what my recent Journal of the Early Republic article calls a “Reign of Error.” They might not have witnessed the French Revolution through Twitter, but their experience with news media wasn’t as far off from our social media as we might imagine. Robert Darnton once wrote, “every age was an age of information, each in its own way.” We might add that all media are social, each in its own way.
Spend an hour with the newspapers of the 1790s and it will be easy to spot their similarities with our present media landscape. In North America, newspaper printers relied on letters, ship captains’ reports, and foreign newspaper accounts for news about the French Revolution and the wars that it inspired. They took what they could get. They had almost no way to verify news, other than waiting to see what came next. As a result, they shared many, many falsehoods. In early 1794, for example, a single inaccurate letter sparked a rumor that an English military commander had been taken prisoner in France. This caused such a “sensation” when news reached the U.S. Congress that it was forced to adjourn for the day. But after a short time, the arrival of more reports convinced everyone that this was false. New York printer Thomas Greenleaf dolefully commented that the rumors which had “lately crossed the Atlantic in various directions” had “caused us to be sharers in the general deception.”
READ ENTIRE ARTICLE AT THE PANORAMA
20 Years Later, Columbine Is The Spectacle The Shooters Wanted
Searching for meaning in the shooters’ infamous “basement tapes.”

SEE ENTIRE INFOGRAPHIC AT THE NIB
The Tyranny Of The Map: Rethinking Redlining
In trying to understand one of the key aspects of structural racism, have we constructed a new moralistic story that obscures more than it illuminates?

Teaching the history of racism in America can be a difficult thing. Not because students deny it, but because it is something that is so ubiquitous, so all encompassing, that many (particularly white) students let the idea roll over and past them. They know racism existed in the past and people did racist things, but they have difficulty understanding how it changes over time, morphs and moves, taking on a different character and structure depending on time and place.
This makes any document, evidence, or concept that can explain racism and white supremacy so valuable in every American history classroom. Violence—of slavery, Jim Crow, and mass incarceration—is usually the easiest to illustrate and explain, because it’s the most horrific. It’s also the easiest because generations of Americans have been taught that racism is only about violence that stems from hatred, primarily from sneering Southern plantation owners and rednecks in white hoods. Racism is about bigoted interpersonal interactions, we are told, and thus rooted in our own morality. Structural racism, especially as it relates to American cities, can be more difficult to communicate. Census records that show neighborhood segregation, numbers and charts of arrest rates or homeownership—all of these have been tabulated by generations of researchers eager to document and quantify the often-mundane world of structural inequality. This data is easy to compile in a handout or PowerPoint slide. But it is often best at showing the results of racist practices, not the policies themselves. And it doesn’t do the work of helping students understand the impact on individual lives, families, and communities.
But then you find the maps: the Home Owners’ Loan Corporation Residential Security maps (HOLC maps). They are racism and white supremacy in full 1930s, Wizard of Oz technicolor. Detailed and specific, they are all the more valuable because almost every major American city had one, and students can find places and neighborhoods they know, have lived in, had family from. For a scholar of twentieth-century United States history, especially one who researches and teaches urban and environmental history, the HOLC maps have long been a keystone slide in my teaching presentations, something around which to build an entire lecture, unit, or even course. This is what structural racism means: the state and its private-capital partners deciding winners and losers, building generational wealth in one community, and condemning other groups to be starved of resources for decades.
I first started using HOLC maps in 2008, right out of graduate school. Google’s image search was relatively new, but it was easy to find scanned versions of the maps on various websites and repositories. They were always best paired with the actual HOLC research reports, which provided detailed descriptions of each neighborhood, justifying why investment there was high risk because of “Negro incursion” or “low-class Italians.” They were also extraordinarily helpful for introducing the concept of “redlining,” the practice of denying mortgage credit to a community because of its racial and sometimes ethnic makeup. I always emphasized that redlining originated before the 1930s and continued long after, but that it became codified and federally sanctioned during the New Deal.
READ ENTIRE ARTICLE AT THE METROPOLE
Typhoid Mary Was a Maligned Immigrant Who Got a Bum Rap
Now, she’s become hashtag shorthand for people who defy social distancing orders.

The country’s most notable healthy carrier of a deadly disease, Mary Mallon, is back – not in person, but as a hashtag: #TyphoidMary.
In the current pandemic, people may unknowingly harbor and spread the coronavirus before they feel sick, largely because it has an incubation period of between two and 14 days. The Centers for Disease Control and Prevention now says that one in four people could be asymptomatic carriers, never showing symptoms even as they infect others.
But there are also those who, knowing they could be carriers, refuse to cover their mouths or practice social distancing. They include the spring breakers who crowded Florida beaches and the protesters gathering in some state capitals.
Mary Mallon, known as Typhoid Mary, was until now the most prominent example in the U.S. of the unknowing disease carrier. She spread typhoid fever to at least 53 people, causing three deaths between 1900 and 1915.
But Mallon has long been unfairly characterized as knowingly spreading the deadly disease she carried. Her memory has been resurrected recently, largely on Twitter, as a shorthand description of those who intentionally infect others with the coronavirus, #TyphoidMary.
As the author of “Constructing the Outbreak: Epidemics in Media and Collective Memory,” I can attest to the media’s past and continuing distortion of the Mary Mallon case. It’s unfair to Mallon to attach her name to such consciously bad behavior.
READ ENTIRE ARTICLE AT THE CONVERSATION
Captured Confederate Flags and Fake News in Civil War Memory
Fake news has been central to the Lost Cause narrative since its inception, employed to justify and amplify the symbolism of Confederate monuments and flags.

Earlier this summer, after a decades-long fight that gained traction over the past four years, the city of Charlottesville finally removed its infamous statues of Robert E. Lee and Stonewall Jackson. In doing so, Charlottesville joined the ranks of cities like New Orleans, Baltimore, and Richmond, southern cities that have removed their Confederate monuments in the last decade. When his city began its monument removal, Richmond Mayor Levar Stoney noted that statues of Confederate soldiers and leaders create a false history of the Civil War and its aftermath by honoring men who committed treason: “It’s the fake news of their time.” Fake news has been inherent to the Lost Cause narrative since its inception, employed to justify the construction of Confederate statues and the deification of Confederates across the country. It has also played a key role in the history of another, often overlooked, variety of Confederate monument: captured Confederate flags.
During the Civil War, Federal regiments captured and brought home hundreds of Confederate battle flags. Some are maintained by Northern states to this day, which has made them a unique type of monument. Minnesota, for example, famously refuses to return a flag captured from the 28th Virginia Infantry at Gettysburg, creating a strange situation in which a northern state owns, and has occasionally displayed, a Confederate banner. In the immediate aftermath of the war, little attention was paid to these trophies. Indeed, some captured flags vanished from the historical record almost immediately after their capture. By the 1880s, however, as the nation emerged from Reconstruction, some legislators began to understand the symbolic significance of flag returns and, by the end of the decade, it was a controversy making headlines across the country. The heart of the conflict was an 1887 executive order issued by President Grover Cleveland that mandated that battle flags held in federal custody be returned to the states from which they originated. In theory, returning captured flags would strike a reconciliatory tone, signaling to both sides that wartime was over. In practice, it was met with near-universal disdain. Northerners saw it as a cowardly concession to a South they had not yet forgiven; ex-Confederates considered it an insult, a political misstep, or simply an empty gesture.
Then, as now, high tempers led to reckless journalism. Both Southerners who wanted their flags back and Northerners who vehemently opposed the idea claimed the endorsement of a particularly polarizing figure: Jefferson Davis. Soon after the original order was issued by the Cleveland administration, the New York Sun reported on a letter allegedly sent by Davis in which he argued that “the order of the War Department to return the captured flags to the late Confederate states was a violation of all known military precedents,” and went on to say that the flags should be returned to the states that captured them. This was a fairly radical stance for an ex-Confederate to take; even most conservative Southern newspapers had admitted to wanting the flags back if they could get them. For Davis to come forward with the notion that the captured flags belonged to the victors was news indeed. There was only one problem: the letter was fake news.
From the beginning, there was some doubt as to its authenticity. One Minnesota newspaper ran the letter with a note stating, “it may be that the above letter is not authentic.” Soon, the public caught on to the fabrication, and Southerners were, predictably, outraged. The Staunton Spectator wrote that “such conduct is unpardonable,” and, a few days later, published a Davis letter of its own. In that letter—which was also carried by the Sun—Davis wrote that “to retain as a point of pride a flag captured in battle by either the Union or Confederate soldiers would be equivalent to renewed exultation of triumph by one or the other, and surely not a step toward the restoration of peace.” Davis, echoing other Southern papers, noted that the South had not requested the return of the flags, but still saw the gesture as one of goodwill and reconciliation, a potential recognition of the “nobility” of the Confederate cause.
READ ENTIRE ARTICLE AT MUSTER: THE JOURNAL FOR THE CIVIL WAR ERA
The Fake-News Fallacy
Old fights about radio have lessons for new fights about the Internet.

Donald Trump’s victory has been a demonstration, for many people, of how the Internet can be used to achieve those very ends. Trump used Twitter less as a communication device than as a weapon of information warfare, rallying his supporters and attacking opponents with hundred-and-forty-character barrages. “I wouldn’t be here without Twitter,” he declared on Fox News in March. Yet the Internet didn’t just give him a megaphone. It also helped him peddle his lies through a profusion of unreliable media sources that undermined the old providers of established fact. Throughout the campaign, fake-news stories, conspiracy theories, and other forms of propaganda were reported to be flooding social networks. The stories were overwhelmingly pro-Trump, and the spread of whoppers like “Pope Francis Shocks World, Endorses Donald Trump for President”—hardly more believable than a Martian invasion—seemed to suggest that huge numbers of Trump supporters were being duped by online lies. This was not the first campaign to be marred by misinformation, of course. But the sheer outlandishness of the claims being made, and believed, suggested to many that the Internet had brought about a fundamental devaluing of the truth. Many pundits argued that the “hyper-democratizing” force of the Internet had helped usher in a “post-truth” world, where people based their opinions not on facts or reason but on passion and prejudice.
Yet, even among this information anarchy, there remains an authority of sorts. Facebook and Google now define the experience of the Internet for most people, and in many ways they play the role of regulators. In the weeks after the election, they faced enormous criticism for their failure to halt the spread of fake news and misinformation on their services. The problem was not simply that people had been able to spread lies but that the digital platforms were set up in ways that made them especially potent. The “share” button sends lies flying around the Web faster than fact checkers can debunk them. The supposedly neutral platforms use personalized algorithms to feed us information based on precise data models of our preferences, trapping us in “filter bubbles” that cripple critical thinking and increase polarization. The threat of fake news was compounded by this sense that the role of the press had been ceded to an arcane algorithmic system created by private companies that care only about the bottom line.
READ ENTIRE ARTICLE AT THE NEW YORKER
The Nation Is Imperfect. The Constitution Is Still a ‘Glorious Liberty Document.’
As part of its “1619” inquiry into slavery’s legacy, The New York Times revives 19th century revisionist history on the founding.

Across the map of the United States, the borders of Tennessee, Oklahoma, New Mexico, and Arizona draw a distinct line. It’s the 36º30′ line, a remnant of the boundary between free and slave states drawn in 1820. It is a scar across the belly of America, and a vivid symbol of the ways in which slavery still touches nearly every facet of American history.
That pervasive legacy is the subject of a series of articles in The New York Times titled “The 1619 Project.” To cover the history of slavery and its modern effects is certainly a worthy goal, and much of the Project achieves that goal effectively. Khalil Gibran Muhammad’s portrait of the Louisiana sugar industry, for instance, vividly covers a region that its victims considered the worst of all of slavery’s forms. Even better is Nikole Hannah-Jones’s celebration of black-led political movements. She is certainly correct that “without the idealistic, strenuous and patriotic efforts of black Americans, our democracy today would most likely look very different” and “might not be a democracy at all.”
Where the 1619 articles go wrong is in a persistent and off-key theme: an effort to prove that slavery “is the country’s very origin,” that slavery is the source of “nearly everything that has truly made America exceptional,” and that, in Hannah-Jones’s words, the founders “used” “racist ideology” “at the nation’s founding.” In this, the Times steps beyond history and into political polemic—one based on a falsehood, and one that, in an essential way, repudiates the work of countless people of all races, including those Hannah-Jones celebrates, who have believed that what makes America “exceptional” is the proposition that all men are created equal.
For one thing, the idea that, in Hannah-Jones’ words, the “white men” who wrote the Declaration of Independence “did not believe” its words applied to black people is simply false. John Adams, James Madison, George Washington, Thomas Jefferson, and others said at the time that the doctrine of equality rendered slavery anathema. True, Jefferson also wrote the infamous passages suggesting that “the blacks…are inferior to the whites in the endowments both of body and mind,” but he thought even that was irrelevant to the question of slavery’s immorality. “Whatever be their degree of talent,” Jefferson wrote, “it is no measure of their rights. Because Sir Isaac Newton was superior to others in understanding, he was not therefore lord of the person or property of others.”
What War of the Worlds Did
The uncanny realism of Orson Welles’s radio play crystallised a fear of communication technology that haunts us today.
Just past 8pm on 30 October 1938, a 20-year-old college student named Robbins turned on his car radio. He was driving a friend towards their campus in New Jersey after a visit to a women’s college. Ramón Raquello and his Orchestra were beaming in from the Meridian Room in New York via CBS. Suddenly, the music broke off, and an announcer delivered a special bulletin – scientists had observed several incandescent gas explosions on Mars. The music returned, but soon there was a new interruption – an interview with a noted astronomer who dismissed as coincidence the fact that a large meteor had landed in New Jersey soon after the explosions were noted. Robbins thought he was still listening to a music programme when the bulletins placed the meteor on a farm 22 miles from where he was headed. But as Robbins would soon hear, it was no meteor.
Instead, a New Jersey town had been invaded by a race of Martians bent on systematically destroying everything in their path. At least 40 people had died when the meteor revealed itself to be a Martian ship containing extraterrestrials capable of shooting heat rays. Various voices of authority took to the radio; a brigadier general named Montgomery Smith even said he put several counties under martial law and closed the very road to Trenton that Robbins was travelling on at the time.
Robbins pulled off at a drugstore on the side of the highway. There were four people there, and he told them what was happening. He tried to call his family, but the lines were busy. As the other shoppers began to panic, he and his friend decided to return to the women’s college to rescue his friend’s girlfriend. But she was in no danger. According to legend, Robbins and more than a million other Americans had been panicked by a radio play – Orson Welles’s infamous adaptation of H G Wells’s novel The War of the Worlds (1898).
The Problem with “Reagan Democrats”
Does the trope obscure more than it illuminates about the 2016 election?

In the wake of the 2016 election, the media’s attention focused almost immediately on what had, only days earlier, been referred to as Hillary Clinton’s “firewall states.” Wisconsin, Michigan, and Pennsylvania had all turned red. Reporters were quickly dispatched to bars and diners across the Rustbelt to divine the mood of Trump voters over bacon, eggs and rounds of Budweisers. It didn’t take long before an old political trope was once again coloring their analyses – that of the so-called “Reagan Democrat.”
The construct dates back to 1980, when a cadre of longtime Democrats – mostly white union men, a majority without college degrees, and most situated within the relative comfort of older suburbs surrounding industrial cities – broke to the right and handed Reagan his commanding presidential victory. Ever since, “Reagan Democrats” have been treated as the essential electoral constituency – the sleeping dog at the center of American presidential politics.
Hillary Clinton has been widely criticized for neglecting this political phenomenon – attempting to campaign around the “Reagan Democrats,” and let the sleeping dog lie. Trump, meanwhile, is credited with updating Reagan’s political alchemy for a digital age. Reagan’s calls to “Drain the Swamp” of overbearing bureaucracy and regulation were echoed in Trump’s fulminations against Washington’s swamp of special interests; Reagan’s breezy confidence in simple solutions for thorny problems anticipated Trump’s sense that easy solutions were just a well-bargained deal away; and Reagan’s racialized and racist “Welfare Queen” trope morphed into Trump’s tirades on birtherism and “illegals.”
Considering the similarities between these two winning scripts, it can be tempting to attribute the success of Trump to the same white, working-class voting bloc that elected Reagan a generation earlier. Historian Bruce Schulman points out that this narrative predates the Gipper himself. “Nixon’s ‘Silent Majority,’ Scammon and Wattenberg’s ‘New Majority,’ ‘Reagan Democrats,’ Carville and Stephanopoulos’s ‘It’s the Economy Stupid’ (and their attacks on Jesse Jackson and Sister Souljah), Bush’s ‘Compassionate Conservatism,’ and the archetypal ‘Trump voter’: these all constitute different names for what pundits classify as the same phenomenon.”
But Schulman is one of several political historians we spoke to for this article who worry about an overreliance on this analytical frame. The “construct is obviously problematic,” he says, “not least because the folks that threw Michigan to Trump in 2016 were largely not the same people who voted for Reagan.” Indeed, critical differences exist between the outlooks and experiences of these two sets of voters that remain woefully under-examined. Scholars and journalists became hyper-focused on the first generation of “Reagan Democrats” without exploring those voters’ underlying demographics, the deeper historical roots of their lurch to the right, or the paths they and their successors have followed since the 1980s. Despite the ubiquity of the concept, we know far less than we presume about the Reagan Democrats’ evolving political views; their relationship to groups like the Tea Party movement; or the demographic differences between them and the Rustbelt voters who elected Trump. Nevertheless, closer examination of the “Reagan Democrat” phenomenon reveals a good deal about American politics and media more broadly.
Here’s what we do know about the Reagan Democrats. The pollster Stanley B. Greenberg coined the term over four decades ago. Greenberg zeroed in on Macomb County, Michigan, a white working and middle-class suburb north of Detroit that flourished along with the region’s automobile economy. In 1960, 63 percent of Macomb County voters chose John F. Kennedy, making Macomb the most Democratic suburb in the country. Lyndon B. Johnson did even better, earning 74 percent in 1964. Richard M. Nixon lost Macomb in 1968 before winning it in 1972. But in 1980 Ronald Reagan decisively reversed the liberal tide, and did even better in 1984 when he won 67 percent of Macomb’s votes. A central theme that emerged from Greenberg’s interviews of Macomb residents was a sense of outrage among working and middle-class whites about “black privilege.” As one white Reagan voter reported, “We really don’t have much representation because Republicans are for big business, and the Democrats are for the giveaways, and we are the ones that pay all the taxes.”
This sense of racial grievance and white victimhood certainly has echoes today. But the term “Reagan Democrats” too often obviates careful parsing of voters’ motivations. As the public policy scholar Theodore Johnson, III, notes, the moniker “Reagan Democrats” offers a “useful euphemism to quell the anxiety” about describing “a group of familiar people in unflattering terms.” But the label risks overemphasizing “the impact Reagan’s economic and security platform had on this group while minimizing the role that race relations played in its electoral calculus.”
Indeed, the political outlooks of the voters who won Michigan for Reagan in 1980 and 1984 and for Trump in 2016 are different in important ways. While Reagan ran explicitly antigovernment campaigns and proposed dismantling programs like Medicaid and Medicare, Trump’s economic message was often to the left of Hillary Clinton’s. Reagan tied his racial demagoguery to undoing government programs. But, as Slate‘s Jamelle Bouie has pointed out, Trump yoked “his racial demagoguery to a liberal-sounding economic message, activating racial resentment while promising jobs, entitlements, and assistance.” Voters in liberal “firewall states” who had prioritized economic policy over racial populism in recent elections – many of whom had voted for Barack Obama – now had a candidate who didn’t force them to compromise. In states like Michigan, Wisconsin, and Pennsylvania, that sliver of the electorate made a big difference.
Which is to say that historical context matters greatly. Many of Reagan’s white, working-class voters clung to private-sector union jobs, took for granted the broader, often invisible social safety net, and construed their tax burdens in zero-sum competition with public programs and services they believed disproportionately benefited minority voters. Fast-forward to 2016, and members of the white working class were still as likely to think in racialized zero-sum terms about public welfare programs. But they were also less likely to be members of unions than were black workers (thanks largely to the relative rise of public-sector unions) – meaning their employment may well have been more precarious. And their evolving views on visible programs like Medicaid and Obamacare suggested a deeper if grudging appreciation of the social safety net at a moment when secure working-class employment is more and more a thing of the past. In short, lumping working-class white voters of very different political and economic historical eras together under the same name risks flattening crucial differences in outlook and historical context.
But the widespread use of the Reagan Democrat trope among scholars and pundits limits our understanding of American politics in other ways. Though the phenomenon was real enough, the act of naming the concept offered squeamish pundits a useful abstraction that conflated a range of voters’ motivations. Greenberg’s interviews made clear that racist grievances drove many Reagan Democrats. But when journalists deployed the “Reagan Democrat” trope, they deemphasized overt racism in favor of a more agreeable mish-mash of concerns about the government and the economy that sometimes flared up along “racial” (never “racist”) lines. Almost immediately, the “Reagan Democrat” trope offered the pundit class a convenient shorthand for deflecting more sustained consideration of the motivations of white working-class voters – an important segment of the media’s viewers, readers, and listeners, it should be noted.
The Reagan Democrat phenomenon also obscured other crucial transformations in American politics, particularly changes within the Democratic Party itself. Lily Geismer, an historian of the “New Democrats” of the 1970s and 1980s, says that fixating on the white working class led many to miss the centrality of “suburban liberals, the Atari Democrats and the New Democrats of the Democratic Leadership Council… in shaping the political landscape.” It was these Democrats, not working-class swing voters, who best exemplified “the strengths and limits of the Democratic Party” beginning in the 1980s and into the Clinton era and beyond.
While journalists often used the Reagan Democrats trope in ways that submerged the importance of race and racism, the opposite proved to be the case when the trope was deployed by the “New Democrats.” For many middle and upper class liberals, the trope provided useful political cover, enabling them to place their Party’s electoral failures at the feet of white working-class voters: it was the Reagan Democrats’ exceptional racism that blocked progressive policies. As they turned up their noses at the white working class, these liberals deflected their culpability in ongoing social crises. As historian Matthew Lassiter puts it, the “trope allows white liberals and intellectuals to de-emphasize the responsibility of many Democrats, upper middle-class types… in producing and sustaining racial and other forms of inequality.”
In fact, this hypocrisy on the part of white liberals who considered themselves socially progressive played an often-overlooked role in driving the Reagan Democrats to the Republicans. This tension was evident long before these Democrats voted for Reagan. Indeed, working-class whites and their allies (often in Catholic dioceses) were among the first to recognize upper middle class suburban liberals’ hypocrisy when it came to race and the growing urban crises of the 1960s and 1970s — an important break in the liberal coalition that would push many of these voters into Reagan’s arms. In 1970, Cleveland, Ohio’s Bishop William Cosgrove savaged the city’s suburban liberal elite for decamping from the city, “parasitically” controlling the city’s politics, and pitting white and black neighborhoods “against each other.” As the Catholic syndicated columnist Andrew Greeley put it, “You,” elite liberals “can have a [Black] Panther to supper and persuade yourself that you’re not a racist. But those [working-class white] Poles, they certainly are.”
In Cleveland, a working-class white “ethnic” named Ralph Perk sensed the unrest in his community and mounted an insurgent campaign to seize the city’s dormant Republican Party apparatus. He was elected Cleveland’s mayor in 1971, becoming the only Republican in charge of a large American city. Across the Rust Belt – from Philadelphia to Macomb County, Michigan – working-class whites pulled the lever for Richard Nixon in 1972. Nixon reciprocated by cultivating a political strategy that targeted blue-collar white voters, and sought to develop ties with mayors like Perk and Philadelphia’s conservative Democratic Mayor Frank Rizzo (who later became a Republican).
This tension between working-class whites and suburban liberals was perhaps best captured by the epithet “limousine liberals” (and its updated version, “latte liberals”). Mario Procaccino coined the famous phrase in his failed 1969 bid for the mayoralty of New York City. He charged liberals like Mayor John Lindsay (a liberal Republican who switched his party affiliation in 1971) with creating social programs to benefit the poor, foisting responsibility for their funding on whites like Procaccino, and then casting aspersions about white working-class racism as their limousines sped them home to lily-white suburbs. Long before Reagan claimed these Democrats, they had begun a move to the right spurred in equal measure by revulsion for liberal elites and support for the kind of law-and-order, low tax/high service politics offered by politicians like Perk, Nixon, and Rizzo.
Ultimately, as Theodore Johnson argues, “if we are to understand the convergence of conservatism, securitization, and white racial anxiety, we must understand the Reagan Democrats.” But understanding the Reagan Democrats requires at once broadening our historical frame and being much more specific about demography and generational change. It requires a much more nuanced understanding of the evolving American working class. Indeed, pundits’ and scholars’ overwhelming attention to the white, male working class – the residue of the Reagan Democrat phenomenon – has subtly abetted a white, conservative identity politics that travels under the more acceptable flags of class grievance and populism. And so it is our hope that the concept of “identity politics” is the subject of further scholarly inquiry here at Bunk and elsewhere.
How the Murder of a CIA Officer Was Used to Silence the Agency’s Greatest Critic
A new account sheds light on the Ford administration’s war against Sen. Frank Church and his landmark effort to rein in a lawless intelligence community.

Welch’s assassination was huge news and struck a painful political nerve in Washington, coming at the end of a year of stunning disclosures about the CIA and the rest of the U.S. intelligence community by the Senate’s Church Committee, which, throughout 1975, had been conducting the first major congressional investigation of the CIA. The Church Committee uncovered so many secrets and generated so many headlines that pundits were already calling 1975 “the Year of Intelligence.”
Before the Church Committee was created in January 1975, there had been no real congressional oversight of the CIA. The House and Senate Intelligence Committees did not yet exist, and the Church Committee’s unprecedented investigation marked the first effort by Congress to unearth decades of abusive and illegal acts secretly committed by the CIA — and to curb its power.
Sen. Frank Church, the liberal Democrat from Idaho who chaired the committee, had come to believe that the future of American democracy was threatened by the rise of a permanent and largely unaccountable national security state, and he sensed that at the heart of that secret government was a lawless intelligence community. Church was convinced it had to be reined in to save the nation.
To a great degree, he succeeded. By disclosing a series of shocking abuses of power and spearheading wide-ranging reforms, Church and his Committee created rules of the road for the intelligence community that largely remain in place today. More than anyone else in American history, Church is responsible for bringing the CIA, the FBI, the National Security Agency, and the rest of the government’s intelligence apparatus under the rule of law.
But first, Church and his committee had to withstand a brutal counterattack launched by a Republican White House and the CIA, both of which wanted to blunt Church’s reform efforts. The White House and CIA quickly realized that the Welch killing, which occurred just as the Church Committee was finishing its investigations and preparing its final report and recommendations for reform, could be used as a political weapon. President Gerald Ford’s White House and the agency falsely sought to blame the Church Committee for Welch’s murder, claiming, without any evidence, that its investigations had somehow exposed Welch’s identity and left him vulnerable to assassination.
There was absolutely no truth to the claims, but the disinformation campaign was effective. The Ford administration’s use of the Welch murder to discredit the Church Committee was a model of propaganda and disinformation; an internal CIA history later praised the “skillful steps” that the agency and the White House “took to exploit the Welch murder to U.S. intelligence benefit.”
READ ENTIRE ARTICLE AT THE INTERCEPT
Yellow Journalism: The “Fake News” of the 19th Century
Peddling lies goes back to antiquity, but during the Tabloid Wars of the 19th century it reached the widespread outcry and fever pitch of scandal familiar today.

It is perhaps not so surprising to hear that the problem of “fake news” — media outlets adopting sensationalism to the point of fantasy — is nothing new. Although, as Robert Darnton explained in the NYRB recently, the peddling of public lies for political gain (or simply financial profit) can be found in most periods of history dating back to antiquity, it is in the late 19th-century phenomenon of “Yellow Journalism” that it first seems to reach the widespread outcry and fever pitch of scandal familiar today. Why yellow? The reasons are not totally clear. Some sources point to the yellow ink the publications would sometimes use, though it more likely stems from the popular Yellow Kid cartoon that first ran in Joseph Pulitzer’s New York World, and later William Randolph Hearst’s New York Journal, the two newspapers engaged in the circulation war at the heart of the furore.
Although these days his name is somewhat synonymous with journalism of the highest standards, through association with the Pulitzer Prize established by provisions in his will, Joseph Pulitzer had a very different reputation while alive. After purchasing The New York World in 1883 and rapidly increasing circulation through the publication of sensationalist stories, he earned the dubious honour of being the pioneer of tabloid journalism. He soon had a competitor in the field when his rival William Randolph Hearst acquired The New York Journal in 1895 (originally begun by Joseph’s brother Albert). The rivalry was fierce, each trying to outdo the other with ever more sensational and salacious stories. At a meeting of prominent journalists in 1889, Florida Daily Citizen editor Lorettus Metcalf claimed that due to such competition “the evil grew until publishers all over the country began to think that perhaps at heart the public might really prefer vulgarity”.
The phenomenon can be seen to reach its most rampant heights, and most exemplary period, in the lead-up to the Spanish-American War — a conflict that some dubbed “The Journal’s War” due to Hearst’s immense influence in stoking the fires of anti-Spanish sentiment in the U.S. Much of the coverage by both The New York World and The New York Journal was tainted by unsubstantiated claims, sensationalist propaganda, and outright factual errors. When the USS Maine exploded and sank in Havana Harbor on the evening of 15 February 1898, huge headlines in the Journal blamed Spain with no evidence at all. The phrase “remember the Maine, to Hell with Spain” became a rousing populist call to action. The Spanish–American War began later that year.
READ ENTIRE ARTICLE AT THE PUBLIC DOMAIN REVIEW
Media Ownership
The Answer to the Media Industry’s Woes? Publicly Owned Newspapers.
Newspapers must be for the people. It’s worth investing our tax dollars in them.

As the economic fallout from the coronavirus further decimates financially struggling small-town and city newspapers — still Americans’ main source for original local journalism — a desperate search is underway for alternative models. Analysts are looking around the world and back through history for examples of news media that don’t depend on advertising revenue — a collapsing business model that is unlikely to ever return. Ideas range from starting donor-funded nonprofit organizations to repurposing public broadcasting systems. But one intriguing experiment from American history has been almost entirely forgotten: the municipal newspaper.
During the Progressive era, public outrage grew over commercial excesses such as yellow journalism and propaganda — the “clickbait” and “fake news” of the early 20th century. A nonprofit, municipal-owned newspaper seemed like an idea whose time had come. George H. Dunlop, a “good government” progressive and former Hollywood mayor, conducted a successful petition, and Los Angeles became a test case for this experiment.
In a December 1911 vote, a majority supported the proposal to establish a taxpayer-funded paper, and the Los Angeles Municipal News launched in April 1912. With a government-guaranteed annual subsidy of $36,000 (worth nearly $1 million today), the city helped finance the distribution of up to 60,000 copies. To ensure accountability, the mayor appointed a commission of three citizen volunteers to govern the paper. They served four-year terms but were subject to recall by voters at any time. Dunlop, the newspaper’s original architect, was chosen as one of the commissioners.
READ ENTIRE ARTICLE AT THE WASHINGTON POST
Does Journalism Have a Future?
In an era of social media and fake news, journalists who have survived the print plunge have new foes to face.

Jonah Peretti started out soaking up postmodern theory at U.C. Santa Cruz in the mid-nineteen-nineties, and later published a scholarly journal article about the scrambled, disjointed, and incoherent way of thinking produced by accelerated visual experiences under late capitalism. Or something like that. Imagine an article written by that American Studies professor in Don DeLillo’s “White Noise.” Peretti thought that watching a lot of MTV can mess with your head—“The rapid fire succession of signifiers in MTV style media erodes the viewer’s sense of temporal continuity”—leaving you confused, stupid, and lonely. “Capitalism needs schizophrenia, but it also needs egos,” Peretti wrote. “The contradiction is resolved through the acceleration of the temporal rhythm of late capitalist visual culture. This type of acceleration encourages weak egos that are easily formed, and fade away just as easily.” Voilà, a business plan!
Peretti’s career in viral content began in 2001, with a prank involving e-mail and Nike sneakers while he was a graduate student at the M.I.T. Media Lab. (Peretti ordered custom sneakers embroidered with the word “sweatshop” and then circulated Nike’s reply.) In 2005, a year the New York Times Company laid off five hundred employees and the Post began paying people to retire early, Peretti joined Andrew Breitbart, a Matt Drudge acolyte, and Ken Lerer, a former P.R. guy at AOL Time Warner, in helping Arianna Huffington, a millionaire and a former anti-feminist polemicist, launch the Huffington Post. Peretti was in charge of innovations that included a click-o-meter. Within a couple of years, the Huffington Post had more Web traffic than the Los Angeles Times, the Washington Post, and the Wall Street Journal. Its business was banditry. Abramson writes that when the Times published a deeply reported exclusive story about WikiLeaks, which took months of investigative work and a great deal of money, the Huffington Post published its own version of the story, using the same headline—and beat out the Times story in Google rankings. “We were learning that the internet behaved like a clattering of jackdaws,” Rusbridger writes. “Nothing remained exclusive for more than two minutes.”
Pretty soon, there were jackdaws all over the place, with their schizophrenic late-capitalist accelerated signifiers. Breitbart left the Huffington Post and started Breitbart News around the same time that Peretti left to focus on his own company, Contagious Media, from which he launched BuzzFeed, where he tested the limits of virality with offerings like the seven best links about gay penguins and “YouTube Porn Hacks.” He explained his methods in a pitch to venture capitalists: “Raw buzz is automatically published the moment it is detected by our algorithm,” and “the future of the industry is advertising as content.”
READ ENTIRE ARTICLE AT THE NEW YORKER
Richard Nixon Probably Would Not Have Been Saved by Fox News
The 37th president used methods of media manipulation that Donald Trump can only fantasize about.

What would Richard Nixon’s fate have been if the Watergate break-in had happened in 2016 instead of 1972? For some political and journalistic commentators, the historical speculation is too tantalizing to resist. “There’s more likelihood he might have survived,” former Nixon legal counsel John Dean mused five months ago, “if there’d been a Fox News.”
The Nixon-Fox escape is an increasingly popular theory these days. It’s a way to analogize (prematurely) President Donald Trump’s murky Russia-related behavior to the impeachable sins of Watergate, and it’s a way to bemoan the power-aggrandizing feedback loop within the contemporary conservative media bubble. “Fox News might save Trump from another Watergate,” posited Vox last week. “Nixon never would have been forced to resign if you existed in your current state back in 1972, ’73, ’74,” Geraldo Rivera told Sean Hannity on the latter’s radio show in February.
At best, these counterfactuals do not take into consideration the ways that the more tightly regulated media landscape of the early 1970s played directly into Nixon’s dirty hands. At worst, they morph into calls for reviving the Fairness Doctrine and strengthening Public Interest requirements—constitutionally questionable regulatory tools that Nixon enjoyed, Trump envies, and too many in the media pine for.
The Last Days of Time Inc.
An oral history of how the pre-eminent media organization of the 20th century ended up on the scrap heap.

It was once an empire. Now it is being sold for parts.
Time Inc. began, in 1922, with a simple but revolutionary idea hatched by Henry R. Luce and Briton Hadden. The two men, graduates of Yale University, were rookie reporters at The Baltimore News when they drew up a prospectus for something called a “news magazine.” After raising $86,000, Mr. Hadden and Mr. Luce quit their jobs. On March 3, 1923, they published the first issue of Time: The Weekly News-Magazine.
In 1929, the year of Mr. Hadden’s sudden death, Mr. Luce started Fortune. In 1936, he bought a small-circulation humor publication, Life, and transformed it into a wide-ranging, large-format weekly. Later came Sports Illustrated, Money, People and InStyle. By 1989, with more than 100 publications in its fold, as well as significant holdings in television and radio, Time Inc. was rich enough to shell out $14.9 billion for 51 percent of Warner Communications, thus forming Time Warner.
The flush times went on for a while. But then, starting about a decade ago, the company began a slow decline that, in 2018, resulted in the Meredith Corporation, a Des Moines, Iowa, media company heavy on lifestyle monthlies like Better Homes and Gardens, completing its purchase of the once-grand Time Inc. in a deal that valued the company at $2.8 billion. The new owner wasted no time in prying the Time Inc. logo from the facade of its Lower Manhattan offices and announcing that it would seek buyers for Time, Fortune, Sports Illustrated and Money. The deadline for first-round bids was May 11.
We reached out to more than two dozen editors and writers who worked at Time Inc., asking them to reflect on the heyday of this former epicenter of power and influence, as well as its decline. These interviews have been condensed and edited.
READ ENTIRE ARTICLE AT THE NEW YORK TIMES
The Irony of Complaints About Nikole Hannah-Jones’s Advocacy Journalism
The White press helped destroy democracy in the South. Black journalists developed an activist tradition because they had to.

In the mid-20th century, Walter Hussman Sr. transformed a handful of small newspapers in southern Arkansas into one of the South’s most profitable media companies. His son, Walter Hussman Jr., joined the family business in the 1970s and helped it gain control of two historically important Southern newspapers — the Arkansas Democrat and the Arkansas Gazette. Now, as local and regional news outlets struggle to survive, the younger Hussman is giving back. He is donating $25 million to the University of North Carolina’s school of journalism and media, a generous investment in the future of journalism that should have cemented the Hussman family’s honored place in Southern media.
But Hussman’s objection to the school’s decision to hire the Black journalist Nikole Hannah-Jones with tenure highlights a different legacy, one that pitted Southern newspapers like the Gazette and the Democrat against African American journalists like Hannah-Jones.
Hussman reportedly worried that Hannah-Jones’s investigative work appeared to be “trying to push an agenda.” And it is. But her agenda — to focus broad public attention on the enduring impact of slavery, Jim Crow and anti-Black racism — is merely a continuation of the long legacy of the Black press and its battle to save democracy in the United States.
READ ENTIRE ARTICLE AT THE WASHINGTON POST
Examining Katharine Graham’s Groundbreaking Life
“Cover Story,” at the New-York Historical Society, illustrates the Washington Post publisher’s courage and tenacity as the first woman to lead a Fortune 500 company.

Giving a luxurious designer evening gown the central position in an exhibition about Katharine Graham, former chief executive of the Washington Post Company, may at first seem as logical as highlighting a pair of aviator glasses in a show about Gloria Steinem. Yes, each fashion choice was worn by a famous woman, but did it really have much to do with her power and influence?
For Mrs. Graham, the answer is a resounding yes.
That is one of the revelations in “Cover Story: Katharine Graham, CEO,” an exhibition at the New-York Historical Society that illustrates both her courage and tenacity as the first woman to lead a Fortune 500 company and the museum’s reinvention as an institution committed to women’s history.
Mrs. Graham, who died in 2001 at 84, wore the gown, a beaded Balmain design, as the guest of honor at Truman Capote’s Black and White Ball at the Plaza Hotel in Manhattan in November 1966. The historical society show, which opened on Friday and runs through Oct. 3 in the museum’s Center for Women’s History, presents that masquerade party as a pivotal event in her evolution from a society matron to “the most powerful woman in America,” as Ms. Magazine described her in 1974.
READ ENTIRE ARTICLE AT THE NEW YORK TIMES
Over Three Decades, Tech Obliterated Media
A front-row seat to a slow-moving catastrophe. How tech both helps and hurts our world.

In 1995, a quirky programmer in San Francisco named Craig Newmark started emailing friends a list of local events, job opportunities, and things for sale. The next year, he turned Craigslist into a web-based service and eventually started expanding it all over the country and the world.
It was clear this list was a giant killer, and I told everyone who would listen to me at the Post that we needed to put all the money, all the people, and all the incentives into digital. I insisted that the bosses had to make readers feel like digital was the most important thing. But the bosses never did because the business they knew was the physical paper. I relayed my worries about the turtle pace of digital change many times to the Washington Post Company’s affable CEO, Don Graham, the son of legendary publisher and surprisingly entertaining badass Katharine Graham. Don Graham was inexplicably humble and even sheepish about his power. The very worst thing that Graham — always apologetic for having interrupted me, as I strafed big retail advertisers in my stories about the sector’s decline locally — would say to me was “Ouch.” Then he would saunter away from my desk with a jaunty wave. And while Graham was interested when I talked about what Newmark was doing, he laughed when I told him that Craigslist would wipe out his classifieds business.
“You charge too much, the customer service sucks, it’s static, and most of all, it doesn’t work,” I lectured him about this business, which was crucial to his bottom line. “It will disappear as an analog product, since it is a perfect target for digital destruction. You’re going to die by the cell and not even know it until it’s over and you’re dead on the ground.”
Don smiled at me with a kindness I certainly did not deserve at that moment. “Ouch,” he said.
The Post, of course, is now owned by a tech mogul, Jeff Bezos, and other Silicon Valley machers have taken over or invested heavily in legacy media, but they have not prevented its relentless decline, or the hemorrhaging of thousands of jobs from the industry in just the past few years, as the digital world has both sucked up and diminished print business models. Graham, who retired from the Post in 2015, did make a number of energetic digital efforts to keep up (and also was on the board of Facebook), most of which did not stanch the bleeding. Most other media executives seemed to have a genetic predisposition to oppose change and innovation and spent many years refusing to bend to the coming disaster to their bottom lines and their fleets of Town Cars (which would, of course, go last).
READ ENTIRE ARTICLE AT THE INTELLIGENCER
The Battlefields of Cable
How cable TV transformed politics—and how politics transformed cable TV.

When C-SPAN’s cameras came to Congress in 1979, a wave of bloodless guerrilla maneuvers followed. Newt Gingrich, in those days a brash young Georgia congressman, led an insurgent cell of Republicans looking for ways to make the cameras work for them. One of their techniques was to deliver so-called “special order” speeches when the day’s business was over, which in addition to airing live on cable could be shared with local channels back home.
“These speeches frequently called out the Democratic opposition directly, daring them to respond,” the Purdue historian Kathryn Cramer Brownell writes in 24/7 Politics, a new history of cable news and the regulatory forces that shaped it. “But nobody did,” she adds, “because the legislative session had ended and everyone was gone.” Not that you could tell that by watching the show, since the camera’s eye stayed fixed on the person speaking.
On May 10, 1984, House Speaker Tip O’Neill (D–Mass.) launched a counterinsurgency. As part of the deal that had allowed C-SPAN into the legislature, the channel had agreed to let the House have control of the cameras. And so as Rep. Bob Walker (R–Penn.) was delivering an address accusing Democratic staffers of altering congressional records to distort or conceal events, the camera suddenly started to pan across the floor, showing that what might have seemed like one side of a heated debate was in fact a man speaking to an almost entirely empty room.
Once Walker realized what was happening, he started to huff and puff about it, declaring on live television that this was “one more example of how this body is run, the kind of arrogance of power.” One need not admire the congressional leadership of the 1980s to recognize that those shots of the House floor were themselves an act of transparency—the very thing that Walker had theoretically been demanding—and that reacting this way to being caught only made the congressman look sillier.
What History Tells us About the Dangers of Media Ownership
Is media bias attributable to corporate power or personal psychology? Upton Sinclair and Walter Lippmann disagreed.

‘Mr Upton Sinclair speaks for a large body of opinion in America,’ Lippmann acknowledged in Public Opinion. Sinclair was far from the first or only critic who took his era’s media to task for what many perceived as its pro-business, anti-labour leanings. As the historian Sam Lebovic has argued, most early 20th-century commentators agreed that the major metropolitan newspapers served the interests of the elite.
Sinclair argued that the news was a commodity, produced by hierarchical organisations working to maximise profit and squeeze labour
But Sinclair’s The Brass Check went further, arguing that even liberal-leaning city papers systematically ignored or denigrated labour unrest and radical social movements. Sinclair believed that this was an inevitable result of the media industry’s financial structure. In the 19th century, most newspapers were party organs: funded by political machines and expected by owners and readers alike to adhere to their dogmas. By Sinclair’s time, though, a decline in partisanship and a rise in literacy rates had spurred the rise of a thriving commercial press. Sinclair believed this press, increasingly consolidated under the ownership of a few very wealthy men, was beholden to its owners’ class interests.
Newspapers’ claims of editorial autonomy defied what Sinclair saw as the power dynamics inherent to any industry. To start a successful newspaper, you must develop ‘a large and complex institution, fighting day and night for the attention of the public … [and] the pennies of the populace,’ the muckraker wrote in The Brass Check. ‘You have foremen and managers … precisely as if you were a steel-mill or a coal mine.’ Like steel or coal or meat, the news was a commodity, produced, like any other in a capitalist economy, by hierarchical organisations working on an imperative to maximise profit and squeeze labour. Only the most ruthless would survive. The Carnegies and Rockefellers of the media industry were the newspaper chain owners William Randolph Hearst, Edward Scripps, and the Chicago Tribune’s Robert McCormick.
Lippmann criticised what he saw as Sinclair’s reductive attribution of the media’s problems to ‘a more or less conscious conspiracy of the rich owners of newspapers’. Sinclair was, to be sure, unsubtle and somewhat conspiratorial, and it didn’t help his case that he devoted much of The Brass Check to hashing out personal grievances with editors. But Sinclair also got something that Lippmann didn’t want to admit: the agendas of a few men did shape the operations of complex organisations, if imperfectly, indirectly, and with inconsistent results. Newspaper owners, in Sinclair’s view, had no more or less power than any other bosses. They conveyed what and how their workers should produce by way of diffuse directives transmitted through layers of managerial hierarchy. The only source of their tyranny was what the 19th-century jurist Ferdinand Lassalle had called ‘the iron law of wages’.
Why Ajit Pai is Wrong about Net Neutrality
FCC regulations have long promoted innovation that benefits consumers, not stifled it.

The Federal Communications Commission just changed the Internet.
The commission voted Thursday afternoon to officially end the series of rules structuring what’s known as “net neutrality.” In its most basic formulation, net neutrality ensures that telecom providers don’t discriminate in providing Internet service to consumers by price or content. FCC Chairman Ajit Pai argues that these rules have restricted both competition and investment in broadband expansion, and favor content companies like Netflix over actual providers of Internet service (like Verizon and Comcast). Pai contends that the FCC must regulate less and collaborate more with the industries it oversees.
But when the FCC has previously allowed such “corporate capture” of communication regulation to take shape, it stifled innovation and reduced competition. For example, the Radio Corporation of America (RCA) went to great lengths to (successfully) persuade the FCC to needlessly complicate the implementation of FM radio in order to protect its AM network from competition. FM radio, which was ready in the 1930s, didn’t ultimately become widespread until the 1960s.
READ ENTIRE ARTICLE AT THE WASHINGTON POST
How Capitalism—Not a Few Bad Actors—Destroyed the Internet
Twenty-five years of neoliberal political economy are to blame for today’s regime of surveillance advertising, and only public policy can undo it.

The race to commercialize the Internet is over, and advertising is the big winner. This is excellent news if you are an executive or major shareholder of one of the handful of companies that dominate the $600 billion global digital advertising economy. For almost everyone else, advertising’s good fortunes have meant the erosion of privacy, autonomy, and security, as well as a weakening of the collective means to hold power accountable.
This is because the industry’s economic success is rooted in its virtually unrestrained monetization of consumer surveillance. Digital advertising technologies are widely distributed but largely operate under the control of a few giant companies whose monopoly-like market power has, among other ills, unleashed a wave of manipulative communication and deepened a revenue crisis among the nation’s most important journalism outlets. For the ownership class of Silicon Valley, digital advertising has been a gold mine of epic proportions. For democratic society, it is gasoline on a fire.
The deep problem is surveillance advertising: a business model based on persistent and invasive data collection. At its core, surveillance advertising uses data to try to find ever more effective ways to predict and influence people’s behaviors and attitudes. Of course, advertising is old; companies, politicians, and other groups have long been interested in knowing and influencing many kinds of publics. Today’s regime of surveillance advertising on the Internet is not so much a new development as an acceleration of long-standing social trends at the intersection of technology, marketing, politics, and capitalism at large.
That acceleration has been in the works for decades. Though widespread popular scrutiny of Internet tech companies has exploded only in recent years, the key moments in the historical construction of surveillance advertising unfolded in the mid-1990s, when the new technology of the World Wide Web was transformed from an outpost on the fringes of business to a central nervous system for commercial monitoring. To paraphrase Thomas Streeter, surveillance advertising is not something that happened; it is something that was done. In other words, the massive data collection infrastructure that undergirds the Internet today is the result of twenty-five years of technical and political economic engineering. Surveillance advertising was created by marketers, technology start-ups, investors, and politicians, a coalition bound by the desire to commercialize the web as quickly as possible. Through bouts of competition and collaboration, private and public sector interests steered digital networks toward maximizing their monitoring and influence capacities, tilling the soil for all manner of deceptive communication practices and wreaking havoc on less invasive media business models. The legacy of this period is the concentration of surveillance capacity in corporate hands and the normalization of consumer monitoring across all digital media platforms we have come to know today.
READ ENTIRE ARTICLE AT BOSTON REVIEW
William Randolph Hearst for President
Another news cycle, another media mogul stirring up electoral buzz.

In the spring of 1904, a letter to a New York newspaper made the case for a new kind of presidential candidate. “The American people—like all people—are interested in PERSONALITY,” the writer noted. “He appeals to the people—not to a corporation. Not even the most venal of newspapers has suggested that anyone owns [him], or that he would be influenced by anything save the will of the people in the event of the election.”
The letter was referring to William Randolph Hearst, who owned several large newspapers himself. This vast media empire was matched only by Hearst’s ego, which was on rich display during his failed quest for the Democratic presidential nomination that summer. “It is not simply that we revolt at Hearst’s huge vulgarity; at his front of bronze; at his shrieking unfitness mentally, for the office he sets out to buy,” one editorialist explained in a rival newspaper. “There never has been a case of a man of such slender intellectual equipment, absolutely without experience in office, impudently flaunting his wealth before the eyes of the people and saying, ‘Make me President.’”
Hearst lost the Democratic contest to Alton J. Parker, who would go on to lose to GOP incumbent Theodore Roosevelt. In recent weeks, some Democrats have put forth their own favored media empire-builder, Oprah Winfrey, as a possible challenger to President Trump in 2020. But that was a bad idea in 1904, and it’s a bad idea now. Hearst’s story should remind us of the dangers of promoting media titans for the executive branch, whether you share their ideas or not.
In his early years, Hearst’s politics were so progressive that critics called him a socialist. Elected to Congress in 1902, he put forth bills to establish stronger railroad regulations, an eight-hour day for government workers, and the nationalization of the telegraph industry. But he rarely followed up on his bills or bothered to appear in Congress at all: of two hundred roll calls in his first year, he missed all but four. The day-to-day grind of politics bored him.
READ ENTIRE ARTICLE AT LAPHAM’S QUARTERLY
‘Mrs. Frank Leslie’ Ran a Media Empire and Bankrolled the Suffragist Movement
A new book tells the scandalous secrets of a forgotten 19th-century tycoon, Miriam Follin Peacock Squier Leslie Wilde, also known as Mrs. Frank Leslie.

Searching newspaper archives for famous women can be tough, because for a long time women were generally referred to by their husbands’ names. For example, “Mrs. George Putnam” was actually Amelia Earhart, and Whitney Museum founder Gertrude Vanderbilt Whitney was called “Mrs. Harry Payne Whitney” long after her husband had died.
That isn’t the case with Mrs. Frank Leslie. When she inherited her husband’s newspaper business in 1880, she also legally changed her name to her late husband’s, Frank Leslie. “Mrs. Frank Leslie” was her actual name, and with it she rebuilt a publishing empire and became one of the most famous and notorious women of the 19th century.
Leslie has been largely forgotten today, but author Betsy Prioleau recounts her life in the new book “Diamonds and Deadlines: A Tale of Greed, Deceit, and a Female Tycoon in the Gilded Age.” There are many startling revelations: Leslie was probably, and secretly, biracial, had been a prostitute in her youth, was both a writer and a frequent subject of gossip and fashion columns, and was four times married and three times divorced in an age when one divorce could spell public ruin.
READ ENTIRE ARTICLE AT RETROPOLIS, THE WASHINGTON POST
How ‘the Kingfish’ Turned Corporations into People
Seventy-five years before Citizens United, the Supreme Court ruled that newspapers were entitled to First Amendment protections.

When the Supreme Court first began to breathe life into the First Amendment in the early twentieth century, the speakers who inspired the newfound protections were politically persecuted minorities: socialists, anarchists, radicals, and labor agitators. Today, however, in the aftermath of the 2010 Supreme Court ruling in Citizens United v. Federal Election Commission, which held that corporations have the same right as individuals to influence elections, the First Amendment is used by wealthy and powerful business interests seeking to overturn food-labeling laws, securities disclosure laws, and campaign finance regulations. Yet the seeds of this transformation were planted decades ago in a different Supreme Court case—though one eerily evocative of the Trump era—involving a blustery, dough-faced politician who railed against “fake news.”
Huey Long was Trump before Trump. The fiery populist governor elected on the eve of the Great Depression had an aggressive agenda to make Louisiana great again—and little tolerance for dissent. Long set up a state board to censor newsreels and another to decide which newspapers would be allowed to print profitable government notices. When the student paper at Louisiana State University published an unflattering editorial about him, an outraged Long—referring to himself, as autocrats often do, in the third person—sent in the state police to seize copies, saying he wasn’t “going to stand for any students criticizing Huey Long.”
After Louisiana’s larger daily newspapers came out against him, “the Kingfish” declared war. “The daily newspapers have been against every progressive step in the state,” Long said, “and the only way for the people of Louisiana to get ahead is to stomp them flat.” To do so, in 1934 Long’s allies enacted a 2 percent tax on the advertising revenue of the state’s largest-circulation newspapers. Long said the tax “should be called a tax on lying, two cents per lie.”
READ ENTIRE ARTICLE AT THE NEW YORK REVIEW
‘Anyone Ever Seen Cocaine?’ What We Found in the Archives of Bernie Sanders’s TV Show.
What a forgotten trove of videotapes reveals about the man who rewrote America’s political script.
Dozens of children scurry on the screen across Ethan Allen Park in Burlington, Vermont, bobbing for apples and running three-legged races. It is a beaming July day, and they’re at a summer camp for kids who live in local housing projects. The video is washed in a yellow light, like a newspaper left too long in the sun. The year is 1987. Atop a wooden picnic table nearby sits a man, clasping a microphone with both hands as he hunches with his elbows on his knees like a camp counselor. He’s wearing gray slacks and a short-sleeved white button-down, and he looks like he’s been on this earth for far longer than a half-century, but he’s only 45.
This is Bernie Sanders, the city’s socialist mayor, and for whatever reason, he wants to talk about drugs.
“Do any of the older kids you know have some problems with drugs?” Sanders asks. “Who wants to talk to me about that? What about drugs? Is that a problem?”
“I like coke!” a little boy who looks 10 or 12 exclaims.
“Tell me about that,” Sanders says.
“I like Coca-Cola!” the boy clarifies.
“Oh, Coca-Cola. Alright, but who knows about cocaine?” Sanders asks. “Anyone ever seen cocaine?” Do any of the kids know people who use drugs like that? “You don’t have to tell me who,” he says, “but I bet you do.”
This scene is taking place at the height of “Just Say No,” the national ad campaign and moral panic fanned by then-first lady Nancy Reagan. After a few children tell him they’ve maybe gotten a look at cocaine, Sanders warns them, abruptly, “Hold it!” before adding in a warmer tone that it “screws up your mind.” They nod along. Sanders changes the subject to cigarettes. “Who here smokes?” he asks. “Come on, raise your hand.” A child, sitting in an adult’s lap, responds: “I don’t smoke because I’m a little kid. I’m only 5 years old.” At another point, a kid asks Sanders, “Did you know you look like somebody on Back to the Future?”
READ ENTIRE ARTICLE AT POLITICO MAGAZINE
Taking Responsibility
Maligned in Black and White
Southern newspapers played a major role in racial violence. Do they owe their communities an apology?

Following the Civil War up until the Civil Rights Movement — and beyond — white-owned newspapers across the South served as cheerleaders for white supremacy. Their racist coverage had sometimes fatal consequences for African Americans. Now, some of these papers are accepting responsibility for this coverage and apologizing for it.
In January, the Orlando Sentinel posted an impassioned apology for inflaming racial tensions decades ago in a 1949 case called “The Groveland Four.” Four young African American men were accused of raping a white woman, charges that resulted in the extra-judicial killings of two of them and the lengthy imprisonment of two others.
The occasion for the editorial was the anticipated official pardon by the State of Florida’s Clemency Board, chaired by newly elected Gov. Ron DeSantis, who characterized the case as a “miscarriage of justice.” This followed a vote two years ago by the Florida legislature for a “heartfelt apology” to the families of the four men.
In recent years, a handful of the region’s newspapers have stepped forward to accept responsibility for biased reporting and editorials, shouldering their share of the burden of racist Southern history. They are acknowledging — belatedly — what their forebears did and did not do in covering racism, white supremacy, terror and segregation over the past 150 years. Some newspapers, including the Sentinel, had especially grievous sins to confess.
READ ENTIRE ARTICLE AT POYNTER
On Atonement
News outlets have apologized for past racism. That should only be the start.

In 1898, a midterm election year, white North Carolinians plotted an overthrow. In March, Josephus Daniels, the editor and publisher of the News & Observer, the state’s most powerful daily newspaper, traveled from Raleigh, the capital, to New Bern, a port city on the Neuse and Trent Rivers in an area he called “the Negroized East.” There, he took a meeting with Furnifold Simmons, a white supremacist who was the state chairman of the Democratic Party, to discuss “the Negro Problem.”
The News & Observer was a reliable mouthpiece for Democrats. A story line of particular interest was how Black people had gained footing during Reconstruction; Black male suffrage had given way to a Black-majority vote in North Carolina’s eastern counties, which left the Democratic Party (until 1876 called the Conservative Party) feeling insecure. Its elite members, mostly bankers and railroad men, had lost support from poor white people as they suffered through a deep recession: when the poor whites asked for reforms, and none came, they’d broken off to form a new political organization—the Populist, or People’s, Party—aligning themselves with Republicans, the party of Lincoln, which included Black voters. The result was what David Zucchino, in his recent book Wilmington’s Lie, calls “an uneasy political and racial alliance” known as Fusion—the most successful multiracial political experiment in the post-Reconstruction South. By 1894, they’d taken the state.
White Fusionists still held the majority of elected positions, having ceded only a fraction of their power to Black collaborators. But in Wilmington, North Carolina’s largest city at the time, a multiracial government emerged. Black people served as magistrates, mail clerks, even police officers with jurisdiction over whites. And as the years went by, Wilmington, which was 56 percent Black, became home to a budding Black middle class; in some neighborhoods, Black and white people lived next door to each other. That made the city, in the minds of Daniels and Simmons, North Carolina’s worst violator of the racist social order to which they were committed.
In New Bern, Daniels and Simmons devised a coup. They would need “men who could ride”—armed white vigilantes—as well as “men who could write.” Daniels got to work. The News & Observer waged an anti-Black propaganda campaign that catered to the racist core of the white people who had abandoned the Democratic Party: There were sensationalist articles and fabricated headlines; a story claimed that Black men with “big feet” were standing in front of white people on trains. Daniels hired a cartoonist named Norman Jennett to draw racist caricatures of Black people that ran on the front page; one featured a large boot, labeled “The Negro,” stepping on a white man. Other newspapers joined in the vitriol; a frequent subject was Black crime, especially against white women. For the News & Observer, Jennett drew a large bat with the face of a Black man, a white woman beneath his claws; on the wings were the words “Negro Rule.” Alexander Manly, the editor of a Black newspaper called the Daily Record, weighed in on the Black-men-are-brutes trope, which had been used repeatedly as a pretext for lynchings: “You set yourselves down as a lot of carping hypocrites in that you cry aloud for the virtue of your women while you seek to destroy the morality of ours.”
READ ENTIRE ARTICLE AT COLUMBIA JOURNALISM REVIEW
American Journalism’s Role in Promoting Racist Terror
History must be acknowledged before justice can be done.

Although it’s rarely discussed, even at elite journalism schools like the one where I teach, a significant portion of America’s existing news business was built on slavery and other forms of racist terror. While the Orleans Gazette ceased publication in the 19th century, there still exist numerous newspapers that once sustained themselves by selling advertising space to slaveholders who wanted to recapture runaways or who sought to sell their human property at auctions like the one where my ancestor Stephen once found himself. These outlets include some of America’s oldest periodicals, both Northern and Southern: The Baltimore Sun, the New York Post, The Times-Picayune (New Orleans), The Augusta Chronicle (Ga.), the Richmond Times-Dispatch (Va.), The Commercial Appeal (Memphis), and The Fayetteville Observer (N.C.), among others. A typical example of such an advertisement is one printed in the July 3, 1805, edition of the New-York Evening Post, the predecessor title of today’s New York Post, which reads, “TWENTY DOLLARS REWARD. RAN-AWAY from the subscriber on the 30th of June, a negro man named Joseph, aged about 30 years.” Or this one printed on April 18, 1854, in The Daily Picayune, the predecessor title of the present-day Times-Picayune, which reads, “For Sale…A likely lot of SLAVES, consisting of men, women and children, which I will sell low for cash or city acceptance.”
There is no known record of the sights and sounds of the 1820 auction where Stephen was sold, but descriptions of similar auctions were published elsewhere. Such accounts were usually provided by Northern abolitionist outlets like Frederick Douglass’ Paper and The Anti-Slavery Bugle, but in a letter published in The Washington Union on November 14, 1857, the pro-slavery journalist Edward A. Pollard describes witnessing the auction of several families in Macon, Ga., and even praises the virtue—the “humanity”—of the slaveholders:
During the sale referred to, a lot was put up consisting of a woman and her two sons, one of whom was epileptic (classified by the crier as “fittified”). It was stated that the owner would not sell them unless the epileptic boy was taken along at the nominal price of one dollar, as he wished him provided for. Some of the bidders expressed their dissatisfaction at this, and a trader offered to give two hundred dollars more on condition that the epileptic boy should be thrown out. But the temptation was unheeded, and the poor boy was sold with his mother. There are frequent instances at the auction-block of such humanity as this on the part of masters.
Anti-Blackness has been at the heart of American society since this nation’s founding. It was codified in the original US Constitution, which stated that an enslaved person would count as three-fifths of a white person for the purposes of taxation and representation. Two decades into the 21st century, more than a century and a half since the legal abolition of slavery, Black Americans continue to experience the long-term impact of slavery and other forms of racist terror—much of it promoted and perpetuated by the news media.
READ ENTIRE ARTICLE AT THE NATION
A Conservative Activist’s Quest to Preserve All Network News Broadcasts
Convinced of rampant bias on the evening news, Paul Simpson founded the Vanderbilt Television News Archive.

Fifty years ago, in the middle of a typically hot and humid Nashville summer, a Metropolitan Life insurance manager named Paul Simpson sat with Frank Grisham, the director of the Vanderbilt University Library, in the rare books room of the main library building.
Using three Ampex video recording machines, three television sets and $4,000 of Simpson’s own money, they began what they thought would be a 90-day experiment: From then until election night in November, they would record the ABC, NBC and CBS evening news broadcasts, which usually aired at the same time.
The day Simpson and Grisham started taping, August 5, 1968, was an eventful one. The Republican Convention began, and Ronald Reagan officially announced his candidacy for the presidential nomination, joining with liberal Republican Nelson Rockefeller in an attempt to stop Richard Nixon’s hopes of a first-ballot nomination.
The news broadcasts also included the era’s biggest stories: fighting in Vietnam, communist leaders meeting in Eastern Europe and the civil war in Nigeria. Other reports from that day sound hauntingly familiar: an Israeli strike into Jordan and a violent incident at the Korean Demilitarized Zone, in which an American and North Korean soldier were killed.
READ ENTIRE ARTICLE AT THE CONVERSATION
The Truth in Black and White: An Apology From the Kansas City Star
Today we are telling the story of a powerful local business that has done wrong.

Today we are telling the story of a powerful local business that has done wrong.
For 140 years, it has been one of the most influential forces in shaping Kansas City and the region. And yet for much of its early history — through sins of both commission and omission — it disenfranchised, ignored and scorned generations of Black Kansas Citians. It reinforced Jim Crow laws and redlining. Decade after early decade it robbed an entire community of opportunity, dignity, justice and recognition.
That business is The Kansas City Star.
Before I say more, I feel it to be my moral obligation to express what is in the hearts and minds of the leadership and staff of an organization that is nearly as old as the city it loves and covers:
We are sorry.
The Kansas City Star prides itself on holding power to account. Today we hold up the mirror to ourselves to see the historic role we have played, through both action and inaction, in shaping and misshaping Kansas City’s landscape.
It is time that we own our history.
It is well past time for an apology, acknowledging, as we do so, that the sins of our past still reverberate today.
This spring, the Memorial Day death of George Floyd in Minneapolis beneath the knee of a white police officer ignited protests worldwide over racial injustice. In doing so, it has forced institutions to look inward.
Inside The Star, reporters and editors discussed how an honest examination of our own past might help us move forward. What started as a suggestion from reporter Mará Rose Williams quickly turned into a full-blown examination of The Star’s coverage of race and the Black community dating to our founding in 1880.
Today The Star presents a six-part package. It is the result of a team of reporters who dug deeply into the archives of The Star and what was once its sister paper, The Kansas City Times. They pored over thousands of pages of digitized and microfilmed stories, comparing the coverage to how those same events were covered in the Black press — most notably by The Kansas City Call and The Kansas City Sun, each of which chronicled critical stories the white dailies ignored or gave short shrift.
READ ENTIRE ARTICLE AT THE KANSAS CITY STAR
Video of the Police Assault of Rodney King Shocked Us. But What Did It Change?
Thirty years after the police beating of Rodney King, it’s clear that shock and anger don’t translate into meaningful reform.

Today is the 30th anniversary of the night Los Angeles police officers nearly killed 25-year-old Rodney King. On a road near the San Fernando Valley freeway, more than a dozen officers surrounded King while at least four Tasered, clubbed and kicked the young Black man until he appeared to them to be dead. They threw a sheet over his head when they were finished.
The assault on King became national news because Lake View Terrace resident George Holliday captured nine minutes of it on his camcorder. That video footage became evidence in the high-profile 1992 trial against four LAPD officers. And, of course, the juxtaposition between the horrific scenes on that videotape and the jury acquittal of the officers was a catalyst for that year’s Los Angeles riots.
But that moment between the traffic stop and the trial verdict is important. The powerful visual evidence of racism and brutality convinced Black activists and political leaders nationwide that there might finally be accountability. And it heartened survivors of police violence. Although it was a window of radical possibility that closed with extraordinary disappointment, Black activists channeled thwarted hope into the work of explaining the insidiousness of racial violence to a public that didn’t yet get it.
READ ENTIRE ARTICLE AT THE WASHINGTON POST
The Trigger Presidency
How shock jock comedy gave way to Donald Trump’s Republican Party.

Shock radio is a broadcast genre in which extreme, aggressive, explicit talk is meant to outrage the mainstream public while making the show’s devoted fans laugh. It took off nationally in the 1980s and ’90s, when the term “shock jock” applied interchangeably to a handful of radio personalities—Bubba, Don Imus, Rush Limbaugh, Howard Stern among them—who had distinct audiences and often loathed one another, but shared a taste for the wildly irreverent. Since then, however, the spirit of shock radio has come to animate the political language of modern times, transforming from fart jokes into today’s “triggering the libs” culture, which ranges from anti-Semitic 4Chan memes of concentration camp ovens to President Trump’s relentless trolling of Kim Jong-un, LeBron James, and pretty much everyone else. And no one has mastered that language like our troll-in-chief. How the merger of right-wing politics and dick-joke comedy came to pass is a story about Donald Trump, yes, but it has its roots in the late days of the Reagan administration.
While local shock jocks like Chicago’s Steve Dahl were already established in the 1970s, the shock jock era truly began in August 1987, when the Reagan Federal Communications Commission (FCC) made two changes that literally rewrote the rules of American media and comedy. First, it repealed the Fairness Doctrine, which required radio stations to balance controversial views with an opposing point of view. Rush Limbaugh was already the top-rated show at Sacramento’s KFBK and a veteran of “insult radio” when the doctrine was overturned, finally freeing him to do the show he wanted.
Limbaugh could now be as controversial as he liked without liberal pushback (or context, or fact-checking), and his now familiar menu of right-wing satire and conservative dogma quickly connected to likeminded listeners. Within a year, WABC brought him to New York to begin his national show. To break up his daily three hours of super-villain monologuing, Limbaugh introduced news about Ted Kennedy with a song called “The Philanderer,” a parody of Dion’s “The Wanderer.” And in an early example of a conservative mocking liberals as the real fascists, he began referring to feminists as “feminazis.”
READ ENTIRE ARTICLE AT THE NEW REPUBLIC
Remember El Mozote
On December 11, 1981, El Salvador’s US-backed soldiers carried out one of the worst massacres in the history of the Americas at El Mozote.

“Which son of a bitch says that?” It was December 11, 1981, in El Mozote, a small town in El Salvador, and the major wanted to know which one of his men had refused to kill the children.
The military had just spent an entire day murdering its hundreds of inhabitants. Now, just the town’s kids were left. Gathered outside a schoolhouse in which a number of the children were being kept, the soldiers had had an argument. Some didn’t want to kill the kids, many of whom were under twelve years old and some of whom were infants. The major, without hesitation, walked over, scooped a little boy from a crowd of kids, flung him into the air, and speared him with a bayonet as he came back down. There was no more arguing.
The boy was one of over eight hundred slaughtered that day and the next, thirty-five years ago.
El Mozote was neither the first nor the last mass atrocity in El Salvador’s nightmarish civil war. The rape and murder of four US churchwomen by the National Guard, the assassination of Archbishop Oscar Romero while he celebrated Mass, the massacre of at least three hundred civilians at Sumpúl River, a similar mass killing a year later at Lempa River, the execution-style murder of six Jesuit priests, their housekeeper, and her daughter at the University of Central America — the list of horrors goes on and on, and is so long and brutal that it risks overshadowing the daily dumpings of bullet- and torture-riddled bodies of those who dared to speak out against the hard-right government on city streets and in public parks during the Salvadoran Civil War, which stretched from 1980 to 1992.
The overwhelming majority of these atrocities were carried out by the Salvadoran National Guard and the death squads to which many of their soldiers and other sympathizers belonged. Their aim was to destroy the Farabundo Martí National Liberation Front (FMLN in its Spanish acronym), a coalition of leftist guerrillas with strong support throughout the country; any workers, peasants, and religious workers who sympathized with them; and any other dissenters who disagreed with the program of the corrupt, right-wing government, which could not have existed or endured without US backing.
READ ENTIRE ARTICLE AT JACOBIN
True Stories about the Great Fire
A movement’s early days as told by those who rose up, those who bore witness, those who grieved, and those who hoped.

George Floyd, 46, is killed while being arrested by four Minneapolis police officers on Monday, May 25. Bystander footage of the incident later shows Officer Derek Chauvin pinning his knee to Floyd’s neck for more than eight excruciating minutes.
ANDREA JENKINS
member, Minneapolis City Council
I’m sitting up watching Netflix, and the message was, “Have you talked to the mayor yet?” That’s all it said. I looked at it but I didn’t answer. Then I take the call from the mayor, and he goes, “There’s been a police-involved death, but there were no guns involved.” As if to say, “The police didn’t shoot anybody.” He explained that he thought that it had to do with some kind of forgery.
STEPHEN JACKSON
former NBA player, Houston-area native, longtime close friend of Floyd’s
I was taking a nap on the couch with my six-year-old daughter that Tuesday. When I woke up, I saw that my girlfriend had sent me a video. She and I talk about police brutality all the time. I looked at it briefly and thought, Damn, they killed another brother. Then I saw I had about 50 messages. I opened one from my friend Bun B, and it said, “You see what they did to your twin in Minnesota?” That’s when I realized it was Floyd. I lost it. Started screaming, throwing stuff around. Scared the mess out of my daughter.
KEITH ELLISON
Minnesota attorney general
I usually get up around 5:30 and look at my phone. And I started getting emails and texts containing the video from people I knew. I’m sitting there with my legs dangling over my bed in my boxers and my T-shirt, and I’m reading this stuff and I’m like, Oh, my goodness. Here we go again.
MURIEL BOWSER
mayor, Washington, D.C.
I was struck by not just the officer who choked Mr. Floyd, but the other officers watching. You have an officer that is applying deadly force and other officers watching. All of it is so hard to fathom, that all of those people were watching this as he was begging for his life. And I was thinking how scary that must be for this child who was doing the recording.
SHERRILYN IFILL
president, NAACP Legal Defense and Educational Fund
You’re listening to the residents, who are saying what they’re saying, “Stop it. You’re killing him. Are you going to just allow this to happen? Take his pulse,” and so forth. So, you’re hearing all of that. There is another video where there’s just engagement with one of the officers on the scene and basically the crowd kind of exhorting him to find his humanity. So there are lots of ways to view this, and each one of them is actually very, very important to understand the sickness that we have allowed to pervade our system of law enforcement in this country.
JAEL KERANDI
former student body president, University of Minnesota
I saw the video, but I didn’t want to watch it right away. You could see the captions. I knew what I was waiting to see.
READ ENTIRE ARTICLE AT VANITY FAIR
Tulsa, 1921
On the 100th anniversary of the riot in that city, we commemorate the report written for this magazine by a remarkable journalist.

It is highly doubtful if the exact number of casualties will ever be known. The figures originally given in the press estimate the number at 100. The number buried by local undertakers and given out by city officials is ten white and twenty-one colored. For obvious reasons these officials wish to keep the number published as low as possible, but the figures obtained in Tulsa are far higher. Fifty whites and between 150 and 200 Negroes is much nearer the actual number of deaths. Ten whites were killed during the first hour of fighting on Tuesday night. Six white men drove into the colored section in a car on Wednesday morning and never came out. Thirteen whites were killed between 5:30 a.m. and 6:30 a.m. Wednesday. O.T. Johnson, commandant of the Tulsa Citadel of the Salvation Army, stated that on Wednesday and Thursday the Salvation Army fed thirty-seven Negroes employed as grave diggers and twenty on Friday and Saturday. During the first two days these men dug 120 graves in each of which a dead Negro was buried. No coffins were used. The bodies were dumped into the holes and covered over with dirt. Added to the number accounted for were numbers of others–men, women, and children–who were incinerated in the burning houses in the Negro settlement. One story was told me by an eye-witness of five colored men trapped in a burning house. Four burned to death. A fifth attempted to flee, was shot to death as he emerged from the burning structure, and his body was thrown back into the flames. There was an unconfirmed rumor afloat in Tulsa of two truck loads of dead Negroes being dumped into the Arkansas River, but that story could not be confirmed.
What is America going to do after such a horrible carnage–one that for sheer brutality and murderous anarchy cannot be surpassed by any of the crimes now being charged to the Bolsheviki in Russia? How much longer will America allow these pogroms to continue unchecked? There is a lesson in the Tulsa affair for every American who fatuously believes that Negroes will always be the meek and submissive creatures that circumstances have forced them to be during the past three hundred years. Dick Rowland was only an ordinary bootblack with no standing in the community. But when his life was threatened by a mob of whites, every one of the 15,000 Negroes of Tulsa, rich and poor, educated and illiterate, was willing to die to protect Dick Rowland. Perhaps America is waiting for a nationwide Tulsa to wake her. Who knows?
READ ENTIRE ARTICLE AT THE NATION
How Northern Newspapers Covered Lynchings
They became more critical in the 1890s when activists like Ida B. Wells made lynching an international embarrassment.

From the late 1800s well into the 20th century, thousands of people, mostly black and poor, were murdered by lynch mobs that sometimes burned their victims alive, castrated them or cut their bodies up into little pieces that were passed around as souvenirs.
Southern newspapers justified these horrors by calling lynching victims “fiends,” “brutes” or “ravishers,” leaving their guilt unquestioned. Lurid details of supposed rapes of white women by black men, often entirely fabricated, were recounted in Southern papers to justify, or even to incite, lynchings.
In a landmark move, The Montgomery Advertiser recently apologized for its role in justifying and promoting lynching. But many Northern papers were just as complicit.
Consider these headlines about Southern lynchings from The Washington Post and The Chicago Tribune, respectively: “Negro Brute Lynched: Attempted Assault on Young Daughter of a Farmer,” “Criminal Calendar: Two Murderous and Thieving Negroes Lynched by a Kentucky Mob.”
READ ENTIRE ARTICLE AT THE NEW YORK TIMES
Montgomery’s Shame and Sins of the Past
The Montgomery Advertiser recognizes its own place in the history of racial violence in its own community.
We were wrong.
On the day when people from across the globe come to our capital city to consider the sordid history of slavery and lynching and try to reconcile the horrors of our past, the Montgomery Advertiser recognizes its own shameful place in the history of these dastardly, murderous deeds.
We take responsibility for our proliferation of a false narrative regarding the treatment of African-Americans in those disgraceful days.
The Advertiser was careless in how it covered mob violence and the terror foisted upon African-Americans from Reconstruction through the 1950s. We dehumanized human beings. Too often we characterized lynching victims as guilty before proven so and often assumed they committed the crime.
Take Oliver Jackson, for example. Accused of killing someone in 1894, a group of masked “men” took him from a buggy and shot him to death on the side of the road in Montgomery County. After detailing his death, the Advertiser wrote that Jackson was a born murderer, despite the fact his parents were quiet and hard working.
After Riley Webb was mobbed and killed in Selma in 1892, the first line of the story included the rarely used exclamation point, “He has been caught!” Later, we recognized his burial, writing he was “planted” in the ground, as if he was something less than other men, as if he was a shrub.
Part of our responsibility as the press is to explore who we are, how we live together and analyze what impacts us. We are supposed to hold people accountable for their wrongs, and not with a wink and a nod. We went along with the 19th- and early 20th-century lies that African-Americans were inferior. We propagated a world view rooted in racism and the sickening myth of racial superiority.
The coverage we have done this past week related to the memorial and museum openings and our daily stories describing the years of lynching and horror that African-Americans endured has prompted numerous calls from readers. Some calls have been complimentary while others wish we would leave the past in the past.
We can’t do that. There are thousands of names on the memorial of people we don’t know enough about. People who never received a chance to live their lives free from the fear of being killed just because someone else didn’t like the way they looked.
READ ENTIRE ARTICLE AT THE MONTGOMERY ADVERTISER
The Battle over Memory at El Mozote
Four decades on, the perpetrators of the El Mozote massacre have not been held to account.

No one in El Salvador told the truth about the massacre. The news was first picked up by a clandestine guerrilla radio, Radio Venceremos, but it did not make the mainstream media. La Prensa Gráfica, a major print outlet, reported on the Mozote operation with a picture of kids greeting troops. The paper said that the population had happily welcomed the army as it entered to recover territories that were under “terrorist” control. Neighboring countries, also under military regimes, didn’t report on El Mozote either.
It was in this context of a general media blackout—of state-sanctioned denial—that survivors went to the court in 1990. Their path through the legal system was as intricate as the henequen plantations under which many had hidden. In 1977, a legal aid office, Socorro Juridico, had been opened in the Catholic Church by Oscar Romero, the archbishop of San Salvador whose assassination in 1980 marked for some the start of the civil war. Romero was a human rights leader and the country’s foremost voice against social injustice and repression; a symbol of liberation theology, which tried to bring the church closer to the poor and oppressed, he was made a Catholic saint in 2018. Socorro Juridico, which was renamed Tutela Legal in 1982, started gathering survivors’ testimonies soon after the massacre. Victims left the El Mozote area and scattered over the country and into refugee camps in Honduras. As survivors came back to El Mozote, Tutela Legal stepped up their investigations and got enough victims on board to present the case in October 1990.
The military then still had a strong grip on the country’s institutions. The Attorney General’s office and the Supreme Court were filled with military allies. The president of the Court opposed the investigation in El Mozote, claiming that “only dead guerillas were buried” there. But after more persistence from Tutela Legal, the excavations began in El Mozote’s main square in October 1992, and the earth unveiled the truth: thousands of bones were dug up; more than two hundred of them were of children. Yet in 1993, six days after the report of a UN Truth Commission on the war was published, the Legislative Assembly passed a broad amnesty law. There would be no trial in El Mozote.
READ ENTIRE ARTICLE AT THE BAFFLER
Lessons From the El Mozote Massacre
A conversation with two journalists who were among the first to uncover evidence of a deadly rampage.

The carnage was unspeakable. Across four days in December 1981, during El Salvador’s long civil war, American-trained and -equipped soldiers slaughtered nearly 1,000 civilians in and near El Mozote, a village in the country’s northeast. It was the largest massacre in recent Latin American history. Among the victims were hundreds of children.
Raymond Bonner, then a New York Times correspondent, was one of the first journalists to bear witness to El Mozote’s torment, along with the photographer Susan Meiselas. His reporting was roundly – and wrongly – assailed by the Ronald Reagan administration and others on the American right, but history has borne out the truth of his first-hand accounts.
As part of a team from Retro Report, Mr. Bonner returned to El Mozote to see what had changed over the years. Perhaps most important, those accused of responsibility for the massacre have still not been held to account: Long-dormant trials of retired military commanders that were revived in 2016 have been placed on hold by Nayib Bukele, the president of El Salvador since 2019.
The shadows cast by the events at El Mozote bear out William Faulkner’s oft-quoted observation, “The past is never dead. It’s not even past.” With that in mind, we talked with Mr. Bonner about his reporting.
READ ENTIRE ARTICLE AT RETROREPORT
We Didn’t Vanquish Polio. What Does That Mean for Covid-19?
The world is still reeling from the pandemic, but another scourge we thought we’d eliminated has reemerged.

In 2005, I published a memoir about the epidemic as I experienced it, called The Broken Boy. I described my experiences in the context of my family and of Ireland in the 1950s. The title was something of a misnomer, since I felt singularly unbroken, but it did at least tell the reader that the book was about the suffering of a small child.
I am glad I researched and wrote the book when I did, because many of the best-informed witnesses died soon after its publication. Much of the text made gloomy reading, but it ended on an upbeat note. At the end of the final chapter, I had written dismissively of the last prophetic line in Albert Camus’s novel The Plague, in which he wrote that “perhaps the day would come when, for the bane and the enlightening of men, it would rouse up its rats again and send them forth to die in a happy city.” I found this a bit portentous and out-of-date, writing that polio might have been among the last of the life-threatening plagues, such as leprosy, cholera, tuberculosis, typhus, measles, malaria, and yellow fever, to be eliminated or brought under control during the 20th century. That turned out to be overoptimistic.
Polio epidemics had a surprisingly short career: less than 70 years between the end of natural immunity and the widespread use of the Salk vaccine. It was a story with a seemingly happy ending—and this was the topic of my original book. Few people realized—certainly I didn’t—that if polio epidemics were a product of modernity and not of backwardness, then the way might be open for other epidemics of equal or greater severity to appear.
I was surprised but not very alarmed when Covid-19 was identified in Wuhan at the end of 2019, because previous coronavirus outbreaks, such as SARS 1 and MERS, had not spread far and had been suppressed. As more information about the virus emerged in the early months of 2020, it struck me that in some respects the pandemic more resembled a polio epidemic on a world scale than the 1918-19 Spanish flu outbreak to which it was often compared. Covid-19 and poliomyelitis (to give it its full name) are alike in being highly infectious—and because most of those infected have few if any symptoms and swiftly recover. But they become carriers all the same, infecting others, some of whom may belong to the unlucky 1 or 2 percent—there is great dispute about the fatality rate among victims of Covid-19—who will feel the virus’s full destructive impact.
READ ENTIRE ARTICLE AT THE NATION
Guardian Owner Apologises for Founders’ Links to Transatlantic Slavery
Scott Trust to invest in decade-long programme of restorative justice after academic research into newspaper’s origins.
The owner of the Guardian has issued an apology for the role the newspaper’s founders had in transatlantic slavery and announced a decade-long programme of restorative justice.
The Scott Trust said it expected to invest more than £10m (US$12.3m, A$18.4m), with millions dedicated specifically to descendant communities linked to the Guardian’s 19th-century founders.
It follows independent academic research commissioned in 2020 to investigate whether there was any historical connection between chattel slavery and John Edward Taylor, the journalist and cotton merchant who founded the newspaper in 1821, and the other Manchester businessmen who funded its creation.
The Scott Trust Legacies of Enslavement report, published on Tuesday, revealed that Taylor, and at least nine of his 11 backers, had links to slavery, principally through the textile industry. Taylor had multiple links through partnerships in the cotton manufacturing firm Oakden & Taylor, and the cotton merchant company Shuttleworth, Taylor & Co, which imported vast amounts of raw cotton produced by enslaved people in the Americas.
READ ENTIRE ARTICLE AT THE GUARDIAN
Police and the License to Kill
Detroit police killed hundreds of unarmed Blacks during the civil rights movement. Their ability to get away with it shows why most proposals for police reform are bound to fail.

Clifford “Chucky” Howell, a thirteen-year-old Black male, was walking home from playing at a friend’s house when a white Detroit police officer shot him near his own backyard on the evening of Sept. 13, 1969. The patrol team did not summon medical assistance for at least forty minutes, and Chucky died later at the hospital. Officers on the scene claimed that he had been fleeing the burglary of a white family’s home, a felony, and that it was therefore appropriate to shoot him. Numerous eyewitness accounts, however, insisted Chucky had been an oblivious bystander. His parents and local Black organizations protested, but law enforcement agencies refused their requests to examine the evidence. Through a secretive internal investigative process, the Wayne County Prosecutor’s Office found that “all facts and circumstances indicate justifiable action” in the officer’s use of fatal force.
Chucky was one of more than a hundred unarmed people killed by Detroit law enforcement officers between 1967 and 1973, the majority of whom were young Black males allegedly fleeing property crimes or robberies, all declared justifiable homicides by Wayne County prosecutor William Cahalan. At the time, the Detroit Police Department’s (DPD) use-of-firearms policy empowered officers to prevent the escape of “fleeing felons” with deadly force and located this power in officers’ own “sound discretion,” which effectively provided a license to kill insulated from legal consequences. This policy facilitated an extraordinary degree of police impunity, which the DPD used to commit violence against Black youth alleged to have committed low-level property crimes. It also provided an advance script for law enforcement officers to self-exonerate for any murders or otherwise wrongful shootings they committed by framing the victims: all they had to do was say that they knew that the person had committed a burglary and that—in their split-second judgment—opening fire was necessary to apprehend the suspect.
Still, the degree to which the DPD availed itself of this license to kill is astounding. Fatal force against unarmed and fleeing Black teenagers and young adults represented the largest category of law enforcement homicides in Detroit in the late 1960s and early 1970s, when the DPD was the deadliest police department per capita in the nation. Civil rights and Black Power groups in the city organized sustained protests against this policy, and many other urban police departments nationwide began banning use of firearms in the unarmed “fleeing felon” scenario during the 1970s, especially against juveniles. In 1985 the U.S. Supreme Court finally ruled in Tennessee v. Garner that state laws authorizing law enforcement to use deadly force against unarmed and fleeing suspects who posed no direct threat were unconstitutional.
However, while Tennessee v. Garner significantly reduced the number of fatal force incidents in such circumstances nationwide, the pattern of questionable police homicides just shifted to other situations still dubiously permissible under state and local laws and police regulations: home invasions during drug raids, armed responses to mental health crises, and, most common of all, escalation of low-level traffic stops based on racial profiling. This is largely thanks to a 1989 Supreme Court ruling, Graham v. Connor, which effectively insulated these and other scenarios from legal accountability. The case established a subjective use-of-force standard based on the “perspective of a reasonable officer on the scene” and made it almost impossible to second-guess “split-second decisions” that resulted in fatal shootings by law enforcement.
Yet the history of police violence in Detroit shows that this problem cannot be solved simply by tightening use-of-force regulations or prosecuting individual officers whose actions most clearly violate the laws. Despite decades of trying to end police killings through precisely such reforms, there remains a fundamental continuity between the 1960s and ’70s and our current moment: racially targeted and discretionary policing continues to be excused by deliberate, and deliberately secretive, internal processes under which law enforcement agencies self-investigate their own violence and cover their tracks.
READ ENTIRE ARTICLE AT BOSTON REVIEW
What Trump Could Learn from America’s Long History of Sex Scandals
Too bad Trump isn’t a student of history.

Although it often feels like every tale out of the White House is shocking and unprecedented, one aspect of Trump’s presidency is actually fairly normal: A lot of presidents have been at the center of sex scandals. From stories about “Mr. Jefferson’s Congo Harem” at the dawn of the 19th century—a reference to Thomas Jefferson’s serial rape of his slave (and dead wife’s half-sister), Sally Hemings—to whispers about JFK’s star-studded intrigues to Bill Clinton’s many affairs and alleged assaults, sex scandals have been a feature of presidential politics for nearly as long as the office has existed. John Quincy Adams, Andrew Jackson, James Buchanan, James Garfield, Grover Cleveland, FDR, Dwight Eisenhower, LBJ, Richard Nixon, Ronald Reagan, and both Bushes faced some level of sex scandal in the lead-up to, during, or after their terms. Less concrete rumors and speculation have hovered over about a half dozen other presidents.
None of these past scandals look exactly like the Stormy Daniels affair, which is dominating the news thanks to the 60 Minutes interview where she talked about having sex with Trump in 2006 and later being threatened by a mysterious man after dishing about the encounter to a tabloid. Many presidential sex scandals were even more absurd than the current moment. For instance, Warren G. Harding allegedly had trysts in a White House closet, relying on the Secret Service to tip him off when his wife was walking down the hall. Yet only a few sex scandals have picked up real steam and persistently dogged presidents for months or years on end, even in the modern era. To wit, while we all know about Monica Lewinsky, some of Clinton’s earlier affairs remain obscure to most Americans. And while both George H.W. and George W. Bush were accused of indiscretions (and in W.’s case, rape), those allegations never got nearly as much air time as those leveled against Clinton.
So why has the Stormy Daniels story gained so much momentum? Looking back at history, it seems most presidents came up with tools and tactics to control the flow of information about their sex lives, or at least to mitigate the spread and damage of stories that leaked out. Not all of these tactics apply in the modern era. But many could have been of use to Trump and his team even today, if he’d taken the time to look back at his predecessors for (sneaky, sometimes morally bankrupt) insights.
Locked Up: The Prison Labor That Built Business Empires
Companies across the South profited off the forced labor of people in prison after the Civil War – a racist system known as convict leasing.

More than 150 years ago, a prison complex known as the Lone Rock stockade operated at one of the biggest coal mines in Tennessee.
It was powered largely by African American men who had been arrested for minor offenses — like stealing a hog — if they committed any crime at all. Women and children, some as young as 12, were sent there as well.
The work, dangerous and sometimes deadly, was their punishment.
The state was leasing these prisoners out to private companies for a fee, in a practice known all across the South as convict leasing. In states like Texas, Florida, Georgia and Alabama, prisoners were also used to help build railroads, cut timber, make bricks, pick cotton and grow sugar on plantations.
READ ENTIRE ARTICLE AT AP NEWS
The Dark Legacy of Henry Ford’s Anti-Semitism
The Dearborn Independent, a newspaper Ford owned, regularly supported and spread anti-Semitic conspiracy theories.

Mayor Randy Henderson of Fort Myers, Fla., recently had to withdraw a proposal to rename a local bridge in honor of Henry Ford. In 1915, Ford built a winter residence there, next door to Thomas Edison’s house, and today the two homes are a popular tourist attraction.
But Alan Isaacs, the director of the local Jewish Federation, and others criticized the idea: “He (Ford) really has a very dark history as far as the Jewish community and Jews are concerned.”
Henderson said he hadn’t known about Ford’s anti-Semitic history before the controversy, but acknowledged that “there’s a legitimate outcry against this and it doesn’t take a rocket scientist to read Mr. Ford’s background.”
Ford’s “background” is indeed “dark.” Let me count the ways.
In 1919, Ford purchased The Dearborn Independent, then an obscure newspaper published in the Michigan city that was the headquarters of his automobile company. For the next eight years, the weekly publication reflected his bigoted views.
READ ENTIRE ARTICLE AT THE WASHINGTON POST
Collected and curated by Bunk History