Monday, April 21, 2008

The Lost Art of Writing About Art (by Eric Gibson, the Wall Street Journal)

Caption: Olaf Breuning’s installation, “The Army” (2008), is on view at the Park Avenue Armory. (Ruby Washington/The New York Times)

Caption: The installation for "Cheese" (2008), a multichannel video piece by Mika Rottenberg, at the Whitney Museum of American Art. (Librado Romero/The New York Times)

April 18, 2008; Page W13

In certain circles, the Whitney Museum's Biennial exhibition of contemporary art is known as "the show everybody loves to hate." Usually the criticism comes in the form of negative reviews. But this year it's different, with the brickbats directed at the exhibition's accompanying commentary instead of the art itself. Texts written by the Whitney's curators and outside contributors are being widely (and accurately) dismissed as unalloyed gibberish.

What makes this complaint particularly significant is that it comes not from the public, whom the museum might privately dismiss as benighted philistines, but from insiders -- artists and critics who know their stuff and are generally well-disposed toward the museum and its efforts.

When the show opened last month, artist and critic Carol Diehl blogged about the "impenetrable prose from the Whitney Biennial." As examples, she offered "random quotes" about individual artists and their work taken from the exhibition's wall texts and catalog. Among the gems:

 ". . . invents puzzles out of nonsequiturs to seek congruence in seemingly incongruous situations, whether visual or spatial . . . inhabits those interstitial spaces between understanding and confusion."

 "Bove's 'settings' draw on the style, and substance, of certain time-specific materials to resuscitate their referential possibilities, to pull them out of historical stasis and return them to active symbolic duty, where new adjacencies might reactivate latent meanings."

Ms. Diehl's complaint was quickly taken up by others. Richard Lacayo, on a Time magazine blog, likened reading the show's introductory wall text ("Many of the projects . . . explore fluid communication structures and systems of exchange") to "being smacked in the face with a spitball." To combat such verbiage, he recommended banning five words long popular with critics that nonetheless say nothing: "interrogates," "problematizes," "references" (as a verb), "transgressive" and "inverts."

On his Modern Art Notes blog, Tyler Green dismissed the Whitney prose as an "embarrassment" and suggested that every candidate for a contemporary-art curatorship be required to pass a writing test. And an art blogger known only as C-Monster pleaded simply for "smart writing that is precise and unmuddled," adding plaintively: "Making it enjoyable to read wouldn't hurt."

Once upon a time, art writing was all those things. Critics of an earlier age, such as John Ruskin, had no problem making themselves understood, and they are still read today. The same is true of the great art historians of the postwar era, such as Erwin Panofsky and Ernst Gombrich. Panofsky, among whose books was the definitive study of Albrecht Dürer, was a supremely elegant prose stylist. Gombrich's 1950 survey, "The Story of Art," has sold six million copies and been translated into 23 languages. By the way, English was the second language for both men. And Alfred Barr, founding director of the Museum of Modern Art, wrote catalogs on topics ranging from Matisse to Surrealism that made the mysteries of modern art accessible to the American public.

It was Marcel Duchamp who unwittingly launched art criticism on its current path of willful obscurantism. His "Readymade" art -- mass-produced commercial objects (most famously a urinal) that the artist removed from everyday utilitarian contexts and displayed in a museum -- almost required this development.

Until Duchamp, criticism was aesthetically based. The critic talked about a painting's subject, the way the artist handled color, drawing, composition and the like. With Readymades, the object's appearance and beauty were no longer the issue -- indeed, they were irrelevant. What mattered was the idea behind the work -- the point the artist was trying to make. So art criticism moved from the realm of visual experience to that of philosophy. The writer no longer had to base his critical observations on a close scrutiny of the work of art. He could simply riff.

Conceptual art like Duchamp's took a while to catch on, but by the 1980s it had become mainstream. Around that time, academics and critics drove another nail into the coffin of accessible writing. They turned to areas outside of art and aesthetics -- disciplines such as linguistics and ideologies such as Marxism and feminism -- to interpret art.

From the late 19th century to just after World War II, writing about modern art was clear. It had to be. Critics from Émile Zola to Clement Greenberg were trying to explain new and strange art forms to a public that was often hostile to the avant-garde. To have a hope of making their case, these writers couldn't afford to obfuscate. Today, when curators and critics can count on a large audience willing to embrace new art simply because it is new, they don't have to try as hard.

Still, there is no excuse for a museum letting nonsense of the sort quoted above out in the open, particularly an institution whose mission includes educating the public. If the Whitney continues to snub this public -- its core audience -- by "explaining" art with incomprehensible drivel, it shouldn't be surprised if people decide to return the favor and walk away.

Mr. Gibson is the Journal's Leisure & Arts features editor.

Caption: M K Guth’s “Ties of Protection and Safekeeping” (2007-8). (Ruby Washington/The New York Times)

March 7, 2008
Art Review | Whitney Biennial 2008
Art’s Economic Indicator

Advertisements for the 2008 Whitney Biennial promise a show that will tell us “where American art stands today,” although we basically already know. A lot of new art stands in the booths of international art fairs, where styles change fast, and one high-polish item instantly replaces another. The turnover is great for business, but it has made time-lag surveys like the biennial irrelevant as news.

Maybe this is changing with the iffy economy. Several fairs, including Pulse in London, have recently suspended operation. And this year we have a Whitney show that takes lowered expectations — lessness, slowness, ephemerality, failure (in the words of its young curators, Henriette Huldisch and Shamim M. Momin) — as its theme.

A biennial for a recession-bound time? That’s one impression it gives. With more than 80 artists, this is the smallest edition of the show in a while, and it feels that way, sparsely populated, even as it fills three floors and more of the museum and continues at the Park Avenue Armory, that moldering pile at 67th Street, with an ambitious program of performance art (through March 23).

Past biennials have had a festive, party-time air. The 2004 show was all bright, pop fizz; the one two years ago exuded a sexy, punk perfume. The 2008 edition is, by contrast, an unglamorous, even prosaic affair. The installation is plain and focused, with many artists given niches of their own. The catalog is modest in design, with a long, idea-filled essay by Ms. Momin, hard-working, but with hardly a stylistic grace note in sight. A lot of the art is like this too: uncharismatic surfaces, complicated back stories.

There are certainly dynamic elements. A saggy, elephantine black vinyl sculpture by the Los Angeles artist Rodney McMillian is one. Phoebe Washburn’s floral ecosystem is another. Spike Lee’s enthralling, appalling HBO film about Katrina-wrecked New Orleans is a third. In addition, certain armory performances — a 40-part vocal performance organized by Marina Rosenfeld; Kembra Pfahler and her group, the Voluptuous Horror of Karen Black, commandeering the Drill Hall — should make a splash.

But again, the overall tenor of the show is low-key, with work that seems to be in a transitional, questioning mode, art as conversation rather than as statement, testing this, trying that. Assemblage and collage are popular. Collaboration is common. So are down-market materials — plastic, plywood, plexiglass — and all kinds of found and recycled ingredients, otherwise known as trash.

Jedediah Caesar, one of the show’s 29 West Coast artists, encases studio refuse — wood scraps, disposable coffee cups, old socks — in blocks of resin for display. Charles Long makes spidery, Giacometti-esque sculptures — the shapes are based on traces of bird droppings — from plaster-covered debris. Cheyney Thompson cannibalizes his own gallery shows to make new work. With thread and a box of nails Ry Rocklen transforms an abandoned box spring into a bejeweled thing, iridescent if the light is right.

Devotees of painting will be on a near-starvation diet, with the work of only Joe Bradley, Mary Heilmann, Karen Kilimnik, Olivier Mosset and (maybe) Mr. Thompson to sustain them. Hard-line believers in art as visual pleasure will have, poor things, a bitter slog. But if the show is heedless of traditional beauty, it is also firm in its faith in artists as thinkers and makers rather than production-line workers meeting market demands.

Not so long ago, Whitney biennials were little more than edited recaps of gallery seasons. Much of the art in them had already been exhibited in galleries and commercially preapproved. By contrast, the Whitney commissioned the bulk of what appears in the 2008 biennial expressly for the occasion. If some artists failed to meet curatorial hopes, others seized the chance to push in new directions. Whatever the outcome, the demonstration of institutional faith was important. It means that, for better or worse, the new art in this show is genuinely new.

And new comes out of old. Almost every biennial includes a contingent of influential elders. This one does. Ms. Heilmann is one. Her pop-inflected, rigorously casual abstraction is a natural reference point for Ms. Kilimnik’s brushy historical fantasies, for Frances Stark’s free-associative collages, and for a very Heilmann-esque Rachel Harrison piece that includes a harlequin-patterned sculpture and the film “Pirates of the Caribbean” projected on the gallery wall. (Work by Ms. Harrison is also in the New Museum’s “Unmonumental: The Object in the 21st Century,” a show that overlaps the biennial’s sensibility.)

The California Conceptualist John Baldessari — born in 1931 and deeply networked into the art world — generates another, even wider sphere of influence. His hybrid forms — not painting, not sculpture, not photography, but some of each — offer a permissive model for a lot of new art, from Mr. Bradley’s figure-shaped abstract paintings to Patrick Hill’s tie-dyed sculptures to a multimedia installation by Mika Tajima who, with Howie Chen, goes by the collaborative moniker New Humans.

Mr. Baldessari’s use of fragmented Hollywood film stills in his work has opened new paths for artists exploring narrative. And there’s a wealth of narrative in this biennial, much of it in film.

The video “Can’t Swallow It, Can’t Spit It Out,” by Harry (Harriet) Dodge and Stanya Kahn, is a kind of lunatic’s tour of an abject and empty Los Angeles. Amy Granat and Drew Heitzler turn Goethe’s “Sorrows of Young Werther” into an Earth Art road trip. In a multichannel video piece called “Cheese,” with an elaborate, barnlike setting, Mika Rottenberg updates a 19th-century story of seven sisters who turned their freakishly long hair to enterprising ends.

And there’s a beautiful new film by Javier Téllez, produced by Creative Time, that dramatizes an old Indian parable about the uncertainties of perception. In the film the artist introduces six blind New Yorkers to a live elephant and records their impressions, derived through touch. The encounters take place in what looks like the open, empty plaza in front of a temple or church, though the building is actually the vacant Depression-era bathhouse of the McCarren Park swimming pool in Williamsburg, Brooklyn.

Architecture and design form a subcategory of motifs in the biennial, partly as a sendup of the luxe environments that much new art is destined to inhabit, but also in line with the show’s concern with transience and ruin. Alice Könitz’s faux-modernist furniture sculpture, Matthew Brannon’s wraparound graphics display, and Amanda Ross-Ho’s fiercely busy domestic ensembles all mine this critical vein.

But William Cordova’s “House That Frank Lloyd Wright Built 4 Fred Hampton and Mark Clark” makes a specific historical reference. An openwork maze of wood risers, it may look unfinished, but it’s as complete as it needs to be: its basic outline replicates the footprint of the Chicago apartment where two Black Panthers were ambushed and killed in a predawn police raid in 1969. Here the scene of a stealth attack is open for the world to see.

The passing of baldly political art from market fashion has been much noted during the past decade. But the 2008 Biennial is a political show, at least if you define politics, as Ms. Huldisch and Ms. Momin do, in terms of indirection, ambiguity; questions asked, not answered; truth that is and is not true.

An assemblage by Adler Guerrier impressionistically documents an explosion of racial violence that scarred Miami Beach, near his home, in 1968. While Mr. Guerrier attributes the piece to a fictional collective of African-American artists active around Miami at the time, the collective, like the piece itself, is entirely his invention.

Omer Fast weaves together sex, lies, and a civilian shooting in Iraq in a film-within-a-film based on actor-improvised memories. William E. Jones takes a very personal tack on the subject of civilian surveillance by recycling an old police video of illicit homosexual activity shot in an Ohio men’s room. The video dates from 1962, the year the artist, who is gay, was born, and the police sting triggered a wave of antigay sentiment in the town where he grew up.

There’s more: videos by Natalia Almada and Robert Fenz dramatize, in utterly different ways, the border politics of Mexican-United States immigration. One of the show’s largest pieces, “Divine Violence,” by Daniel Joseph Martinez, fills a substantial room with hundreds of gilded plaques carrying the names of what Mr. Martinez labels terrorist organizations, from Al Qaeda to tiny nationalist and religious groups.

Mr. Martinez, an extremely interesting artist, is making a return biennial appearance. He contributed metal museum-admission tags reading “I Can’t Imagine Ever Wanting to Be White” to the famously political biennial in 1993. (One of that show’s curators, Thelma Golden, now director of the Studio Museum in Harlem, is an adviser to the current exhibition, along with Bill Horrigan of the Wexner Center for the Arts at Ohio State University and Linda Norden, an independent curator.)

For a total immersion in the political and the personal, there’s nothing quite like Mr. Lee’s television film “When the Levees Broke,” which is on continuous view in the show, though for me Coco Fusco’s hourlong video “Operation Atropos” is almost as powerful. For this exercise in creative nonfiction, Ms. Fusco and six other women submitted to a “prisoner-of-war interrogation-resistance program” conducted by former United States military personnel. Technically, the whole program is a species of docudrama performance, a highly specialized endurance challenge. Even knowing that, the sight of men making women gradually break down under pressure is hair-raising, as is a follow-up scene of the women being briefed on how they can do the same to others.

The growing presence of women as military interrogators will be the subject of a live performance by Ms. Fusco at the armory, the ideal setting for it. And under the auspices of the nonprofit Art Production Fund, several other biennial artists have made site-specific works in the building’s outsize, baronial, wood-paneled halls.

In one, Olaf Breuning has mustered a cute army of teapots with lava-lamp heads. Mario Ybarra Jr.’s “Scarface Museum,” composed entirely of memorabilia related to Brian De Palma’s 1983 remake of that 1932 gangster film, is in another. In a third, M K Guth, an artist from Portland, Ore., invites visitors to participate in therapeutic hair-braiding sessions, the hair being fake, the psychological benefits presumably not.

Ms. Guth’s project has a sweet, New Agey expansiveness that is atypical for this year’s hermetic, uningratiating show. Ms. Pfahler and the Voluptuous Horror of Karen Black, with their teased wigs, low-budget props and friends-of-friends underground roots, are firmly in the 2008 picture. Ms. Pfahler’s Biennial stint will include a seminar on an art movement she recently founded. Based on the idea of the attraction of abjection, it is called “Beautalism,” and a fair amount of what is in the Whitney show qualifies for inclusion.

'My Young Years': Rubinstein's Enchanting Prelude (by Jonathan Yardley, the Washington Post)

Caption: The classical pianist wrote the memoir of his early life when he was in his 80s. (1967 Photo By Eddie Adams)

Saturday, April 19, 2008; Page C01

An occasional series in which The Post's book critic reconsiders notable and/or neglected books from the past.

Midway through "My Young Years," his memoir of the first three decades of what turned out to be an exceptionally long life, the incomparable classical pianist Arthur Rubinstein recalls an anecdote about two cousins, one of them "the greatest Don Juan of his time," who became involved with the same beautiful woman but whose friendship managed to survive this rather extreme complication. Rubinstein tells the tale and then shrugs: "Even if it were only half-true, it was a good story."

That is exactly how I feel about "My Young Years." How much of it is true and how much mere invention no one now can say -- Rubinstein died a quarter-century ago at the age of 95, and all his contemporaries are long since gone -- but veracity in this case really matters less than the unflagging zest with which Rubinstein recalls those years between 1887, when he was born in Poland, and 1917, when his career as a concert pianist finally began to achieve the success that had been predicted for him since he was a boy. Published in 1973, and followed seven years later by the rather less interesting "My Many Years," "My Young Years" was an international bestseller. It now is out of print, a puzzling development when one considers that Rubinstein's recordings, especially of Chopin, continue to be played and admired.

Whatever the explanation for its disappearance from the bookstores, "My Young Years" remains a classic autobiography in the grand manner. Unlike the memoirs that now crowd the bookshelves, exercises in self-administered therapy in which narcissistic narrators of no apparent accomplishment whine ad nauseam about real or imagined angst, this is an exuberant account of what Rubinstein calls, in his brief foreword, "the struggles, the mistakes, the adventures, and . . . the miraculous beauty and happiness of my young years." His was a life lived to the full, with triumphs and disappointments galore, and by the time he reached his 80s and began to write this book, Rubinstein had such great stature that his story virtually commanded readers' attention.

It was written in English, one of several languages in which Rubinstein was fluent, and it is written remarkably well, with scarcely a trace of the diction of his native Polish or the other languages (Russian, German, French) he spoke during his youth. I first read it about 30 years ago -- my copy is the third printing of the 1973 paperback -- when I was in the midst of a Rubinstein binge, gobbling up his recordings of Chopin, his fellow Pole, one after the other. I make no claim to particular knowledge of classical music, but I was drawn then (as I am now) to the lyricism and abundant feeling of Rubinstein's Chopin, and I simply wanted to know more about the man who made the music. I was enchanted by the book then, and I remain enchanted by it today.

Rubinstein says, in the same foreword, "I have never kept a diary, and even if I had, it would have been lost with all the rest of my belongings in the two world wars. But, it is my good fortune to be endowed with an uncanny memory which allows me to trace my whole long life almost day by day." This is why the reader does well to approach the book with a certain amount of friendly skepticism, especially with regard to the author's accounts of his numerous youthful amours, but the overall impression it conveys is that veracity wins out over invention. No doubt the many conversations Rubinstein recalls fall considerably short of total accuracy, but they have the clear ring of truth, a sense that is heightened by Rubinstein's willingness to portray himself in an unflattering light when circumstances call for it and by the mixture of pride and self-deprecation with which he describes his formative years.

He was born in Lodz into a relatively prosperous family. His musical gifts became apparent when he was very young, and he was taken to Berlin to undergo the scrutiny of the celebrated violinist Joseph Joachim, who "took it upon himself to direct my musical and cultural education," not as his teacher but as his mentor. At the outset, "one important stipulation that Professor Joachim made was that my mother had to promise not to exploit me as a child prodigy" and he "insisted that I should get a full education until I was artistically mature." Rubinstein seems to have been less a supervised student than an autodidact whose learning was scattershot, but he became a deeply cultured man with passionate opinions across a broad range of subjects.

He had more than a little bit of a lazy streak -- he had a "capacity to work well only if there was something special to work for, like a concert, or, later, my recordings" -- and it became a problem as his musical education proceeded. By the time he was well into his 20s he had begun to accumulate a reputation in Europe and had made his first tour of the United States, but his "repertoire needed expansion." He writes:

"Two major Beethoven sonatas, short pieces by Brahms and Schumann, and the great B minor Sonata of Chopin were added to it in less than two weeks. As before, and as would prove true for many years after, the processes of my means of approach to the music at hand were made up of a peculiar combination: a clear conception of the structure of a composition and complete empathy with the composer's intentions were always within my reach, but because of my lazy habits, I would neglect to pay attention to detail and to a finished and articulate performance of difficult passages that I hated to practice. I used to put the whole weight on the inner message of the music."

Doubtless his laziness was aided and abetted by his sheer precocity. The piano came so naturally and easily to him that he could get by with half an effort where lesser performers would have had to practice endlessly and still would have come up short. He also, notwithstanding all the depth of his love for music, had a somewhat cynical attitude toward audiences: "I learned . . . that a loud, smashing performance, even the worst from a musical standpoint, will always get an enthusiastic reception by the uninitiated, unmusical part of the audience, and I exploited this knowledge, I admit it with shame, in many concerts to come." Beyond that, he was as much a born playboy as a born pianist. He began having affairs, mostly with older women, when he was barely out of short pants, and he was always good for a party, a game of pool or poker, a boisterous conversation into the smallest hours of the morning.

Not to mince words, he could be childish and irresponsible. He was "totally devoid of a sense of economy -- a failure that has proved fatal for most of my life" -- and seems to have felt a deep sense of entitlement where other people's money was concerned. Sometime early in the 20th century (he is not great about supplying dates), while still a teenager, he found himself down and out in Paris, living "the excruciating life of someone constantly short of money, constantly in debt," a period that "was typical of my life for many years, consisting as it did of the discrepancy between the daily struggle for survival and the frequent escapes into [the] most refined luxuries," escapes that were made possible by friends, of whom he had many, and by music lovers eager to be in his company.

He could be totally shameless. Once he persuaded a friend to tide him over with a large amount of money. When the friend agreed, Rubinstein immediately proposed that they blow it all on a trip to Paris, London and other stops on the glitterati trail, which is exactly what they did. He doesn't really seem to have been spoiled -- by the time he was in his teens, he was pretty much estranged from his family -- but was merely willful and self-indulgent. There are moments when one wants to wring his neck, but the candor with which he confesses his youthful misadventures is so free and unaffected that these moments soon pass. Obviously he was immensely likable. He had many friends who were, or would become, famous in musical and artistic circles -- Pablo Casals, Fyodor Chaliapin, Karol Szymanowski, Paul Dukas, Igor Stravinsky, Pablo Picasso, Sergei Diaghilev -- but his accounts of these friendships never sound like mere name-dropping. These simply were the circles in which he traveled.

Much of his time in those early years was spent in the salons of the wealthy, the titled and the privileged. Hanging around with these sublimely boring people doesn't seem to have bothered him -- after all, they brought a fair amount of money his way -- but one of his best stories is at their expense. The great Polish pianist and patriot Ignace Jan Paderewski was asked to perform privately for an English duchess. He "demanded a very large sum of money which was readily granted." Then "he received a letter from the Duchess: 'Dear Maestro, accept my regrets for not inviting you to the dinner. As a professional artist, you will be more at ease in a nice room where you can rest before the concert. Yours, etc.' " Paderewski replied: "Dear Duchess: thanks for your letter. As you so kindly inform me that I am not obliged to be present at your dinner, I shall be satisfied with half of my fee. Yours, etc."

There are many other delicious stories in this book's nearly 500 pages. There is also a pervasive sense of the lost world of pre-World War I Europe, "the long era of the easy, peaceful intercourse between nations, of gracious living, of good taste, of good manners, of prosperity," a world that, with the war's onset, "was gone forever." Thus for all the happiness with which this book is imbued -- his "secret of happiness," Rubinstein writes, is, "Love life for better or for worse, without conditions" -- there is also an undercurrent of sadness, of grief not merely for the author's youth but for the world in which he lived it. All in all "My Young Years" is a lovely book, and it's a real pity that prospective readers must go hunting for it in used bookstores and libraries or buy it online.

Tuesday, April 01, 2008

Out of Print (by Eric Alterman, the New Yorker)

Caption: Arianna Huffington questions newspapers’ “veneer of unassailable trustworthiness.”

The News Business
Out of Print
The death and life of the American newspaper.
by Eric Alterman March 31, 2008

The American newspaper has been around for approximately three hundred years. Benjamin Harris’s spirited Publick Occurrences, Both Forreign and Domestick managed just one issue, in 1690, before the Massachusetts authorities closed it down. Harris had suggested a politically incorrect hard line on Indian removal and shocked local sensibilities by reporting that the King of France had been taking liberties with the Prince’s wife.

It really was not until 1721, when the printer James Franklin launched the New England Courant, that any of Britain’s North American colonies saw what we might recognize today as a real newspaper. Franklin, Benjamin’s older brother, refused to adhere to customary licensing arrangements and constantly attacked the ruling powers of New England, thereby achieving both editorial independence and commercial success. He filled his paper with crusades (on everything from pirates to the power of Cotton and Increase Mather), literary essays by Addison and Steele, character sketches, and assorted philosophical ruminations.

Three centuries after the appearance of Franklin’s Courant, it no longer requires a dystopic imagination to wonder who will have the dubious distinction of publishing America’s last genuine newspaper. Few believe that newspapers in their current printed form will survive. Newspaper companies are losing advertisers, readers, market value, and, in some cases, their sense of mission at a pace that would have been barely imaginable just four years ago. Bill Keller, the executive editor of the Times, said recently in a speech in London, “At places where editors and publishers gather, the mood these days is funereal. Editors ask one another, ‘How are you?,’ in that sober tone one employs with friends who have just emerged from rehab or a messy divorce.” Keller’s speech appeared on the Web site of its sponsor, the Guardian, under the headline “NOT DEAD YET.”

Perhaps not, but trends in circulation and advertising -- the rise of the Internet, which has made the daily newspaper look slow and unresponsive; the advent of Craigslist, which is wiping out classified advertising -- have created a palpable sense of doom. Independent, publicly traded American newspapers have lost forty-two per cent of their market value in the past three years, according to the media entrepreneur Alan Mutter. Few corporations have been punished on Wall Street the way those who dare to invest in the newspaper business have. The McClatchy Company, which was the only company to bid on the Knight Ridder chain when, in 2005, it was put on the auction block, has surrendered more than eighty per cent of its stock value since making the $6.5-billion purchase. Lee Enterprises’ stock is down by three-quarters since it bought out the Pulitzer chain, the same year. America’s most prized journalistic possessions are suddenly looking like corporate millstones. Rather than compete in an era of merciless transformation, the families that owned the Los Angeles Times and the Wall Street Journal sold off the majority of their holdings. The New York Times Company has seen its stock decline by fifty-four per cent since the end of 2004, with much of the loss coming in the past year; in late February, an analyst at Deutsche Bank recommended that clients sell off their Times stock. The Washington Post Company has avoided a similar fate only by rebranding itself an “education and media company”; its testing and prep company, Kaplan, now brings in at least half the company’s revenue.

Until recently, newspapers were accustomed to operating as high-margin monopolies. To own the dominant, or only, newspaper in a mid-sized American city was, for many decades, a kind of license to print money. In the Internet age, however, no one has figured out how to rescue the newspaper in the United States or abroad. Newspapers have created Web sites that benefit from the growth of online advertising, but the sums are not nearly enough to replace the loss in revenue from circulation and print ads.

Most managers in the industry have reacted to the collapse of their business model with a spiral of budget cuts, bureau closings, buyouts, layoffs, and reductions in page size and column inches. Since 1990, a quarter of all American newspaper jobs have disappeared. The columnist Molly Ivins complained, shortly before her death, that the newspaper companies’ solution to their problem was to make “our product smaller and less helpful and less interesting.” That may help explain why the dwindling number of Americans who buy and read a daily paper are spending less time with it; the average is down to less than fifteen hours a month. Only nineteen per cent of Americans between the ages of eighteen and thirty-four claim even to look at a daily newspaper. The average age of the American newspaper reader is fifty-five and rising.

Philip Meyer, in his book “The Vanishing Newspaper” (2004), predicts that the final copy of the final newspaper will appear on somebody’s doorstep one day in 2043. It may be unkind to point out that all these parlous trends coincide with the opening, this spring, of the $450-million Newseum, in Washington, D.C., but, more and more, what Bill Keller calls “that lovable old-fashioned bundle of ink and cellulose” is starting to feel like an artifact ready for display under glass.

Taking its place, of course, is the Internet, which is about to pass newspapers as a source of political news for American readers. For young people, and for the most politically engaged, it has already done so. As early as May, 2004, newspapers had become the least preferred source for news among younger people. According to “Abandoning the News,” published by the Carnegie Corporation, thirty-nine per cent of respondents under the age of thirty-five told researchers that they expected to use the Internet in the future for news purposes; just eight per cent said that they would rely on a newspaper. It is a point of ironic injustice, perhaps, that when a reader surfs the Web in search of political news he frequently ends up at a site that is merely aggregating journalistic work that originated in a newspaper, but that fact is not likely to save any newspaper jobs or increase papers’ stock valuation.

Among the most significant aspects of the transition from “dead tree” newspapers to a world of digital information lies in the nature of “news” itself. The American newspaper (and the nightly newscast) is designed to appeal to a broad audience, with conflicting values and opinions, by virtue of its commitment to the goal of objectivity. Many newspapers, in their eagerness to demonstrate a sense of balance and impartiality, do not allow reporters to voice their opinions publicly, march in demonstrations, volunteer in political campaigns, wear political buttons, or attach bumper stickers to their cars.

In private conversation, reporters and editors concede that objectivity is an ideal, an unreachable horizon, but journalists belong to a remarkably thin-skinned fraternity, and few of them will publicly admit to betraying in print even a trace of bias. They discount the notion that their beliefs could interfere with their ability to report a story with perfect balance. As the venerable “dean” of the Washington press corps, David Broder, of the Post, puts it, “There just isn’t enough ideology in the average reporter to fill a thimble.”

Meanwhile, public trust in newspapers has been slipping at least as quickly as the bottom line. A recent study published by Sacred Heart University found that fewer than twenty per cent of Americans said they could believe “all or most” media reporting, a figure that has fallen from more than twenty-seven per cent just five years ago. “Less than one in five believe what they read in print,” the 2007 “State of the News Media” report, issued by the Project for Excellence in Journalism, concluded. “CNN is not really more trusted than Fox, or ABC than NBC. The local paper is not viewed much differently than the New York Times.” Vastly more Americans believe in flying saucers and 9/11 conspiracy theories than believe in the notion of balanced—much less “objective”—mainstream news media. Nearly nine in ten Americans, according to the Sacred Heart study, say that the media consciously seek to influence public policies, though they disagree about whether the bias is liberal or conservative.

No less challenging is the rapid transformation that has taken place in the public’s understanding of, and demand for, “news” itself. Rupert Murdoch, in a speech to the American Society of Newspaper Editors, in April, 2005—two years before his five-billion-dollar takeover of Dow Jones & Co. and the Wall Street Journal—warned the industry’s top editors and publishers that the days when “news and information were tightly controlled by a few editors, who deigned to tell us what we could and should know,” were over. No longer would people accept “a godlike figure from above” presenting the news as “gospel.” Today’s consumers “want news on demand, continuously updated. They want a point of view about not just what happened but why it happened. . . . And finally, they want to be able to use the information in a larger community—to talk about, to debate, to question, and even to meet people who think about the world in similar or different ways.”

One month after Murdoch’s speech, a thirty-one-year-old computer whiz, Jonah Peretti, and a former A.O.L. executive, Kenneth Lerer, joined the ubiquitous commentator-candidate-activist Arianna Huffington to launch a new Web site, which they called the Huffington Post. First envisaged as a liberal alternative to the Drudge Report, the Huffington Post started out by aggregating political news and gossip; it also organized a group blog, with writers drawn largely from Huffington’s alarmingly vast array of friends and connections. Huffington had accumulated that network during years as a writer on topics from Greek philosophy to the life of Picasso, as the spouse of a wealthy Republican congressman in California, and now, after a divorce and an ideological conversion, as a Los Angeles-based liberal commentator and failed gubernatorial candidate.

Almost by accident, however, the owners of the Huffington Post had discovered a formula that capitalized on the problems confronting newspapers in the Internet era, and they are convinced that they are ready to reinvent the American newspaper. “Early on, we saw that the key to this enterprise was not aping Drudge,” Lerer recalls. “It was taking advantage of our community. And the key was to think of what we were doing through the community’s eyes.”

On the Huffington Post, Peretti explains, news is not something handed down from above but “a shared enterprise between its producer and its consumer.” Echoing Murdoch, he says that the Internet offers editors “immediate information” about which stories interest readers, provoke comments, are shared with friends, and generate the greatest number of Web searches. An Internet-based news site, Peretti contends, is therefore “alive in a way that is impossible for paper and ink.”

Though Huffington has a news staff (it is tiny, but the hope is to expand in the future), the vast majority of the stories that it features originate elsewhere, whether in print, on television, or on someone’s video camera or cell phone. The editors link to whatever they believe to be the best story on a given topic. Then they repurpose it with a catchy, often liberal-leaning headline and provide a comment section beneath it, where readers can chime in. Surrounding the news articles are the highly opinionated posts of an apparently endless army of both celebrity (Nora Ephron, Larry David) and non-celebrity bloggers—more than eighteen hundred so far. The bloggers are not paid. The over-all effect may appear chaotic and confusing, but, Lerer argues, “this new way of thinking about, and presenting, the news is transforming news as much as CNN did thirty years ago.” Arianna Huffington and her partners believe that their model points to where the news business is heading. “People love to talk about the death of newspapers, as if it’s a foregone conclusion. I think that’s ridiculous,” she says. “Traditional media just need to realize that the online world isn’t the enemy. In fact, it’s the thing that will save them, if they fully embrace it.”

It’s an almost comically audacious ambition for an operation with only forty-six full-time employees—many of whom are barely old enough to rent a car. But, with about eleven million dollars at its disposal, the site is poised to break even on advertising revenue of somewhere between six and ten million dollars annually. What most impresses advertisers—and depresses newspaper-company executives—is the site’s growth numbers. In the past thirty days, thanks in large measure to the excitement of the Democratic primaries, the site’s “unique visitors”—that is, individual computers that clicked on one of its pages––jumped to more than eleven million, according to the company. And, according to estimates from Nielsen NetRatings and comScore, the Huffington Post is more popular than all but eight newspaper sites, rising from sixteenth place in December.

Arthur Miller once described a good newspaper as “a nation talking to itself.” If only in this respect, the Huffington Post is a great newspaper. It is not unusual for a short blog post to inspire a thousand posts from readers—posts that go off in their own directions and lead to arguments and conversations unrelated to the topic that inspired them. Occasionally, these comments present original perspectives and arguments, but many resemble the graffiti on a bathroom wall.

The notion that the Huffington Post is somehow going to compete with, much less displace, the best traditional newspapers is arguable on other grounds as well. The site’s original-reporting resources are minuscule. The site has no regular sports or book coverage, and its entertainment section is a trashy grab bag of unverified Internet gossip. And, while the Huffington Post has successfully positioned itself as the place where progressive politicians and Hollywood liberal luminaries post their anti-Bush Administration sentiments, many of the original blog posts that it publishes do not merit the effort of even a mouse click.

Additional oddities abound. Whereas a newspaper tends to stand by its story on the basis of an editorial process in which professional reporters and editors attempt to vet their sources and check their accuracy before publishing, the blogosphere relies on its readership—its community—for quality control. At the Huffington Post, Jonah Peretti explains, the editors “stand behind our front page” and do their best to insure that only trusted bloggers and reliable news sources are posted there. Most posts inside the site, however, go up before an editor sees them. Only if a post is deemed by a reader to be false, defamatory, or offensive does an editor get involved.

The Huffington Post’s editorial processes are based on what Peretti has named the “mullet strategy.” (“Business up front, party in the back” is how his trend-spotting site BuzzFeed glosses it.) “User-generated content is all the rage, but most of it totally sucks,” Peretti says. The mullet strategy invites users to “argue and vent on the secondary pages, but professional editors keep the front page looking sharp. The mullet strategy is here to stay, because the best way for Web companies to increase traffic is to let users have control, but the best way to sell advertising is a slick, pretty front page where corporate sponsors can admire their brands.”

This policy is hardly without its pitfalls. During the Hurricane Katrina crisis, the activist Randall Robinson referred, in a post, to reports from New Orleans that some people there were “eating corpses to survive.” When Arianna Huffington heard about the post, she got in touch with Robinson and found that he could not support his musings; she asked Robinson to post a retraction. The alacrity with which the correction took place was admirable, but it was not fast enough to prevent the false information from being repeated elsewhere.

The tensions between the leaders of the mainstream media and the challengers from the Web were presaged by one of the most instructive and heated intellectual debates of the American twentieth century.

Between 1920 and 1925, the young Walter Lippmann published three books investigating the theoretical relationship between democracy and the press, including “Public Opinion” (1922), which is credited with inspiring both the public-relations profession and the academic field of media studies. Lippmann identified a fundamental gap between what we naturally expect from democracy and what we know to be true about people. Democratic theory demands that citizens be knowledgeable about issues and familiar with the individuals put forward to lead them. And, while these assumptions may have been reasonable for the white, male, property-owning classes of James Franklin’s Colonial Boston, contemporary capitalist society had, in Lippmann’s view, grown too big and complex for crucial events to be mastered by the average citizen.

Journalism works well, Lippmann wrote, when “it can report the score of a game or a transatlantic flight, or the death of a monarch.” But where the situation is more complicated, “as for example, in the matter of the success of a policy, or the social conditions among a foreign people—that is to say, where the real answer is neither yes nor no, but subtle, and a matter of balanced evidence,” journalism “causes no end of derangement, misunderstanding, and even misrepresentation.”

Lippmann likened the average American—or “outsider,” as he tellingly named him—to a “deaf spectator in the back row” at a sporting event: “He does not know what is happening, why it is happening, what ought to happen,” and “he lives in a world which he cannot see, does not understand and is unable to direct.” In a description that may strike a familiar chord with anyone who watches cable news or listens to talk radio today, Lippmann assumed a public that “is slow to be aroused and quickly diverted . . . and is interested only when events have been melodramatized as a conflict.” A committed élitist, Lippmann did not see why anyone should find these conclusions shocking. Average citizens are hardly expected to master particle physics or post-structuralism. Why should we expect them to understand the politics of Congress, much less that of the Middle East?

Lippmann’s preferred solution was, in essence, to junk democracy entirely. He justified this by arguing that the results were what mattered. Even “if there were a prospect” that people could become sufficiently well-informed to govern themselves wisely, he wrote, “it is extremely doubtful whether many of us would wish to be bothered.” In his first attempt to consider the issue, in “Liberty and the News” (1920), Lippmann suggested addressing the problem by raising the status of journalism to that of more respected professions. Two years later, in “Public Opinion,” he concluded that journalism could never solve the problem merely by “acting upon everybody for thirty minutes in twenty-four hours.” Instead, in one of the oddest formulations of his long career, Lippmann proposed the creation of “intelligence bureaus,” which would be given access to all the information they needed to judge the government’s actions without concerning themselves much with democratic preferences or public debate. Just what, if any, role the public would play in this process Lippmann never explained.

John Dewey termed “Public Opinion” “perhaps the most effective indictment of democracy as currently conceived ever penned,” and he spent much of the next five years countering it. The result, published in 1927, was an extremely tendentious, dense, yet important book, titled “The Public and Its Problems.” Dewey did not dispute Lippmann’s contention regarding journalism’s flaws or the public’s vulnerability to manipulation. But Dewey thought that Lippmann’s cure was worse than the disease. While Lippmann viewed public opinion as little more than the sum of the views of each individual, much like a poll, Dewey saw it more like a focus group. The foundation of democracy to Dewey was less information than conversation. Members of a democratic society needed to cultivate what the journalism scholar James W. Carey, in describing the debate, called “certain vital habits” of democracy—the ability to discuss, deliberate on, and debate various perspectives in a manner that would move it toward consensus.

Dewey also criticized Lippmann’s trust in knowledge-based élites. “A class of experts is inevitably so removed from common interests as to become a class with private interests and private knowledge,” he argued. “The man who wears the shoe knows best that it pinches and where it pinches, even if the expert shoemaker is the best judge of how the trouble is to be remedied.”

Lippmann and Dewey devoted much of the rest of their lives to addressing the problems they had diagnosed, Lippmann as the archetypal insider pundit and Dewey as the prophet of democratic education. To the degree that posterity can be said to have declared a winner in this argument, the future turned out much closer to Lippmann’s ideal. Dewey’s confidence in democracy rested in significant measure on his “faith in the capacity of human beings for intelligent judgment and action if proper conditions are furnished.” But nothing in his voluminous writings gives the impression that he believed these conditions—which he defined expansively to include democratic schools, factories, voluntary associations, and, particularly, newspapers—were ever met in his lifetime. (Dewey died in 1952, at the age of ninety-two.)

The history of the American press demonstrates a tendency toward exactly the kind of professionalization for which Lippmann initially argued. When Lippmann was writing, many newspapers remained committed to the partisan model of the eighteenth- and nineteenth-century American press, in which editors and publishers viewed themselves as appendages of one or another political power or patronage machine and slanted their news offerings accordingly. (Think of Thomas Jefferson and Alexander Hamilton battling each other through their competing newspapers while serving in George Washington’s Cabinet.) The twentieth-century model, in which newspapers strive for political independence and attempt to act as referees between competing parties on behalf of what they perceive to be the public interest, was, in Lippmann’s time, in its infancy.

As the profession grew more sophisticated and respected, in part owing to Lippmann’s example, top reporters, anchors, and editors naturally rose in status to the point where some came to be considered the social equals of the senators, Cabinet secretaries, and C.E.O.s they reported on. Just as naturally, these same reporters and editors sometimes came to identify with their subjects, rather than with their readers, as Dewey had predicted. Aside from biennial elections featuring smaller and smaller portions of the electorate, politics increasingly became a business for professionals and a spectator sport for the great unwashed—much as Lippmann had hoped and Dewey had feared. Beyond the publication of the occasional letter to the editor, the role of the reader was defined as purely passive.

The Lippmann model received its initial challenge from the political right. Many conservatives regarded the major networks, newspapers, and newsweeklies—the mainstream media—as liberal arbiters, incapable of covering without bias the civil-rights movement in the South or Barry Goldwater’s Presidential campaign. They responded by building think tanks and media outlets designed both to challenge and to bypass the mainstream media. The Reagan revolution, which brought conservatives to power in Washington, had its roots not only in the candidate’s personal appeal as a “great communicator” but in a decades-long campaign of ideological spadework undertaken in magazines such as William F. Buckley, Jr.,’s National Review and Norman Podhoretz’s Commentary and in the pugnacious editorial pages of the Wall Street Journal, edited for three decades by Robert Bartley. The rise of what has come to be known as the conservative “counter-establishment” and, later, of media phenomena such as Rush Limbaugh, on talk radio, and Bill O’Reilly, on cable television, can be viewed in terms of a Deweyan community attempting to seize the reins of democratic authority and information from a Lippmann-like élite.

A liberal version of the Deweyan community took longer to form, in part because it took liberals longer to find fault with the media. Until the late nineteen-seventies, many in the mainstream media did, in fact, exhibit the “liberal bias” with which conservatives continue to charge them, regarding their unquestioned belief both in a strong, activist government and in its moral responsibility to insure the expansion of rights to women and to ethnic and racial minorities. But a concerted effort to recruit pundits from the new conservative counter-establishment, coupled with investment by wealthy right-wing activists and businessmen in an interlocking web of counter-establishment think tanks, pressure groups, periodicals, radio stations, and television networks, operated as a kind of rightward gravitational pull on the mainstream’s reporting and helped to create a far more sympathetic context for conservative candidates than Goldwater supporters could have imagined.

Duncan Black, a former economics professor who writes a popular progressive blog under the name Atrios, explains that he, too, believed in what he calls “the myth of the liberal media.” He goes on, “But watching the press’s collective behavior during the Clinton impeachment saga, the Gore campaign, the post-9/11 era, the run-up to the Iraq war, and the Bush Administration’s absurd and dangerous claims of executive power rendered such a belief absurd. Sixty-five per cent of the American public disapproves of the Bush Administration, but that perspective, even now, has very little representation anywhere in the mainstream media.”

The birth of the liberal blogosphere, with its ability to bypass the big media institutions and conduct conversations within a like-minded community, represents a revival of the Deweyan challenge to our Lippmann-like understanding of what constitutes “news” and, in doing so, might seem to revive the philosopher’s notion of a genuinely democratic discourse. The Web provides a powerful platform that enables the creation of communities; distribution is frictionless, swift, and cheap. The old democratic model was a nation of New England towns filled with well-meaning, well-informed yeoman farmers. Thanks to the Web, we can all join in a Deweyan debate on Presidents, policies, and proposals. All that’s necessary is a decent Internet connection.

What put the Huffington Post on the map was a series of pieces during the summer and autumn of 2005, in which Arianna Huffington relentlessly attacked the military and foreign-affairs reporting of the Times’ Judith Miller. Huffington was fed by a steady stream of leaks and suggestions from Times editors and reporters, even though much of the newspaper world considered her journalistic credentials highly questionable.

The Huffington Post was hardly the first Web site to stumble on the technique of leveraging the knowledge of its readers to challenge the mainstream media narrative. For example, conservative bloggers at sites like Little Green Footballs took pleasure in helping to bring down Dan Rather after he broadcast dubious documents allegedly showing that George W. Bush had received special treatment during his service in the Texas Air National Guard.

Long before the conservatives forced out Dan Rather, a liberal freelance journalist named Joshua Micah Marshall had begun a site, called Talking Points Memo, intended to take stories well beyond where mainstream newspapers had taken them, often by relying on the voluntary research and well-timed leaks of an avid readership. His site, begun during the 2000 Florida-recount controversy, ultimately spawned several related sites, which are collectively known as TPM Media, and which are financed through a combination of reader donations and advertising. In the admiring judgment of the Columbia Journalism Review, Talking Points Memo “was almost single-handedly responsible for bringing the story of the fired U.S. Attorneys to a boil,” a scandal that ultimately ended with the resignation of Attorney General Alberto Gonzales and a George Polk Award for Marshall, the first ever for a blogger. Talking Points Memo also played a lead role in defeating the Bush Social Security plan and in highlighting Trent Lott’s praise for Strom Thurmond’s 1948 segregationist Presidential campaign. Lott was eventually forced to step down as Senate Majority Leader.

According to Marshall, “the collaborative aspect” of his site “came about entirely by accident.” His original intention was merely to offer his readers “transparency,” so that his “strong viewpoint” would be distinguishable from the facts that he presented. Over time, however, he found that the enormous response that his work engendered offered access to “a huge amount of valuable information”––information that was not always available to mainstream reporters, who tended to deal largely with what Marshall terms “professional sources.” During the Katrina crisis, for example, Marshall discovered that some of his readers worked in the federal government’s climate-and-weather-tracking infrastructure. They provided him and the site with reliable reporting available nowhere else.

Marshall’s undeniable achievement notwithstanding, traditional newspaper men and women tend to be unimpressed by the style of journalism practiced at the political Web sites. Operating on the basis of a Lippmann-like reverence for inside knowledge and contempt for those who lack it, many view these sites the way serious fiction authors might view the “novels” tapped out by Japanese commuters on their cell phones. Real reporting, especially the investigative kind, is expensive, they remind us. Aggregation and opinion are cheap.

And it is true: no Web site spends anything remotely like what the best newspapers do on reporting. Even after the latest round of new cutbacks and buyouts is carried out, the Times will retain a core of more than twelve hundred newsroom employees, or approximately fifty times as many as the Huffington Post. The Washington Post and the Los Angeles Times maintain between eight hundred and nine hundred editorial employees each. The Times’ Baghdad bureau alone costs around three million dollars a year to maintain. And while the Huffington Post shares the benefit of these investments, it shoulders none of the costs.

Despite the many failures at newspapers, the vast majority of reporters and editors have devoted years, even decades, to understanding the subjects of their stories. It is hard to name any bloggers who can match the professional expertise, and the reporting, of, for example, the Post’s Barton Gellman and Dana Priest, or the Times’ Dexter Filkins and Alissa Rubin.

In October, 2005, at an advertisers’ conference in Phoenix, Bill Keller complained that bloggers merely “recycle and chew on the news,” contrasting that with the Times’ emphasis on what he called “a ‘journalism of verification,’ ” rather than mere “assertion.”

“Bloggers are not chewing on the news. They are spitting it out,” Arianna Huffington protested in a Huffington Post blog. Like most liberal bloggers, she takes exception to the assumption by so many traditional journalists that their work is superior to that of bloggers when it comes to ferreting out the truth. The ability of bloggers to find the flaws in the mainstream media’s reporting of the Iraq war “highlighted the absurdity of the knee jerk comparison of the relative credibility of the so-called MSM and the blogosphere,” she said, and went on, “In the run-up to the Iraq war, many in the mainstream media, including the New York Times, lost their veneer of unassailable trustworthiness for many readers and viewers, and it became clear that new media sources could be trusted—and indeed are often much quicker at correcting mistakes than old media sources.”

But Huffington fails to address the parasitical relationship that virtually all Internet news sites and blog commentators enjoy with newspapers. The Huffington Post made a gesture in the direction of original reporting and professionalism last year when it hired Thomas Edsall, a forty-year veteran of the Washington Post and other papers, as its political editor. At the time he was approached by the Huffington Post, Edsall said, he felt that the Post had become “increasingly driven by fear—the fear of declining readership, the fear of losing advertisers, the fear of diminishing revenues, the fear of being swamped by the Internet, the fear of irrelevance. Fear drove the paper, from top to bottom, to corrupt the entire news operation.” Joining the Huffington Post, Edsall said, was akin to “getting out of jail,” and he has written, ever since, with a sense of liberation. But such examples are rare.

And so even if one agrees with all of Huffington’s jabs at the Times, and Edsall’s critique of the Washington Post, it is impossible not to wonder what will become of not just news but democracy itself, in a world in which we can no longer depend on newspapers to invest their unmatched resources and professional pride in helping the rest of us to learn, however imperfectly, what we need to know.

In a recent episode of “The Simpsons,” a cartoon version of Dan Rather introduced a debate panel featuring “Ron Lehar, a print journalist from the Washington Post.” This inspired Bart’s nemesis Nelson to shout, “Haw haw! Your medium is dying!”

“Nelson!” Principal Skinner admonished the boy.

“But it is!” was the young man’s reply.

Nelson is right. Newspapers are dying; the evidence of diminishment in economic vitality, editorial quality, depth, personnel, and the over-all number of papers is everywhere. What this portends for the future is complicated. Three years ago, Rupert Murdoch warned newspaper editors, “Many of us have been remarkably, unaccountably complacent . . . quietly hoping that this thing called the digital revolution would just limp along.” Today, almost all serious newspapers are scrambling to adapt themselves to the technological and community-building opportunities offered by digital news delivery, including individual blogs, video reports, and “chat” opportunities for readers. Some, like the Times and the Post, will likely survive this moment of technological transformation in different form, cutting staff while increasing their depth and presence online. Others will seek to focus themselves locally. Newspaper editors now say that they “get it.” Yet traditional journalists are blinkered by their emotional investment in their Lippmann-like status as insiders. They tend to dismiss not only most blogosphere-based criticisms but also the messy democratic ferment from which these criticisms emanate. The Chicago Tribune recently felt compelled to shut down comment boards on its Web site for all political news stories. Its public editor, Timothy J. McNulty, complained, not without reason, that “the boards were beginning to read like a community of foul-mouthed bigots.”

Arianna Huffington, for her part, believes that the online and the print newspaper model are beginning to converge: “As advertising dollars continue to move online—as they slowly but certainly are—HuffPost will be adding more and more reporting and the Times and Post model will continue with the kinds of reporting they do, but they’ll do more of it originally online.” She predicts “more vigorous reporting in the future that will include distributed journalism—wisdom-of-the-crowd reporting of the kind that was responsible for the exposing of the Attorneys General firing scandal.” As for what may be lost in this transition, she is untroubled: “A lot of reporting now is just piling on the conventional wisdom—with important stories dying on the front page of the New York Times.”

The survivors among the big newspapers will not be without support from the nonprofit sector. ProPublica, funded by the liberal billionaires Herb and Marion Sandler and headed by the former Wall Street Journal managing editor Paul Steiger, hopes to provide the mainstream media with the investigative reporting that so many have chosen to forgo. The Center for Independent Media, headed by David Bennahum, a former writer at Wired, recently hired Jefferson Morley, from the Washington Post, and Allison Silver, a former editor at both the Los Angeles Times and the New York Times, to oversee a Web site called the Washington Independent. It’s one of a family of news-blogging sites meant to pick up some of the slack left by declining staffs in local and Washington reporting, with the hope of expanding everywhere. But to imagine that philanthropy can fill all the gaps arising from journalistic cutbacks is wishful thinking.

And so we are about to enter a fractured, chaotic world of news, characterized by superior community conversation but a decidedly diminished level of first-rate journalism. The transformation of newspapers from enterprises devoted to objective reporting to a cluster of communities, each engaged in its own kind of “news”—and each with its own set of “truths” upon which to base debate and discussion—will mean the loss of a single national narrative and agreed-upon set of “facts” by which to conduct our politics. News will become increasingly “red” or “blue.” This is not utterly new. Before Adolph Ochs took over the Times, in 1896, and issued his famous “without fear or favor” declaration, the American scene was dominated by brazenly partisan newspapers. And the news cultures of many European nations long ago embraced the notion of competing narratives for different political communities, with individual newspapers reflecting the views of each faction. It may not be entirely coincidental that these nations enjoy a level of political engagement that dwarfs that of the United States.

The transformation will also engender serious losses. By providing what Bill Keller, of the Times, calls the “serendipitous encounters that are hard to replicate in the quicker, reader-driven format of a Web site”—a difference that he compares to that “between a clock and a calendar”—newspapers have helped to define the meaning of America to its citizens. To choose one date at random, on the morning of Monday, February 11th, I picked up the paper-and-ink New York Times on my doorstep, and, in addition to the stories one could have found anywhere—Obama defeating Clinton again and the Bush Administration’s decision to seek the death penalty for six Guantánamo detainees—the front page featured a unique combination of articles, stories that might disappear from our collective consciousness were there no longer any institution to generate and publish them. These included a report from Nairobi, by Jeffrey Gettleman, on the effect of Kenya’s ethnic violence on the country’s middle class; a dispatch from Doha, by Tamar Lewin, on the growth of American university campuses in Qatar; and, in a scoop that was featured on the Huffington Post’s politics page and excited much of the blogosphere that day, a story, by Michael R. Gordon, about the existence of a study by the RAND Corporation which offered a harsh critique of the Bush Administration’s performance in Iraq. The juxtaposition of these disparate topics forms both a baseline of knowledge for the paper’s readers and a picture of the world they inhabit.

In “Imagined Communities” (1983), an influential book on the origins of nationalism, the political scientist Benedict Anderson recalls Hegel’s comparison of the ritual of the morning paper to that of morning prayer: “Each communicant is well aware that the ceremony he performs is being replicated simultaneously by thousands (or millions) of others of whose existence he is confident, yet of whose identity he has not the slightest notion.” It is at least partially through the “imagined community” of the daily newspaper, Anderson writes, that nations are forged.

Finally, we need to consider what will become of those people, both at home and abroad, who depend on such journalistic enterprises to keep them safe from various forms of torture, oppression, and injustice. “People do awful things to each other,” the veteran war photographer George Guthrie says in “Night and Day,” Tom Stoppard’s 1978 play about foreign correspondents. “But it’s worse in places where everybody is kept in the dark.” Ever since James Franklin’s New England Courant started coming off the presses, the daily newspaper, more than any other medium, has provided the information that the nation needed if it was to be kept out of “the dark.” Just how an Internet-based news culture can spread the kind of “light” that is necessary to prevent terrible things, without the armies of reporters and photographers that newspapers have traditionally employed, is a question that even the most ardent democrat in John Dewey’s tradition may not wish to see answered.