Saturday, April 4, 2026

Is It Wrong to Write a Book with A.I.? (New Yorker 4/3/2026)

 Full disclosure. Can I do a 180? Haven't read it all yet, but will consider the ideas when my head clears. Appreciate the electronic music idea so far.

 

Is It Wrong to Write a Book with A.I.?

The nature of authorship isn’t as straightforward as it seems.

By Joshua Rothman

April 3, 2026

[Illustration by Josie Norton: a human hand holding a pencil writes on a sheet of paper as a robot hand places words on it.]


You’re reading Open Questions, Joshua Rothman’s weekly column exploring what it means to be human.


The Roland TR-808, a dictionary-size drum machine released in 1980, weighed eleven pounds, cost twelve hundred dollars, and was technologically unprecedented. Although drum machines had existed for decades, they’d typically used preset sounds (snare, kick, hi-hat) to play preset rhythms (foxtrot, waltz, bossa nova). The 808, by contrast, had an onboard computer, allowing musicians to program their own sounds and percussion patterns. These could be arranged into longer, songlike sequences that played automatically.
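To make the 808's innovation concrete, here is a minimal sketch (in Python, and not a representation of Roland's actual hardware or firmware) of what "programming your own percussion patterns" amounts to: each instrument gets a row of sixteen on/off steps, and patterns can be chained into a longer, song-like sequence that plays in order. The pattern names and functions below are invented for illustration.

```python
# Illustrative sketch only: a 16-step drum pattern in the spirit of
# the 808's pattern programming. 1 = hit, 0 = rest.
PATTERN_A = {
    "kick":  [1,0,0,0, 1,0,0,0, 1,0,0,0, 1,0,0,0],
    "snare": [0,0,0,0, 1,0,0,0, 0,0,0,0, 1,0,0,0],
    "hat":   [1,0,1,0, 1,0,1,0, 1,0,1,0, 1,0,1,0],
}

def render(pattern):
    """Render a pattern as a text grid: 'x' for a hit, '.' for a rest."""
    return "\n".join(
        f"{name:5} " + "".join("x" if step else "." for step in steps)
        for name, steps in pattern.items()
    )

def chain(*patterns):
    """Concatenate patterns, instrument by instrument, into one longer
    sequence -- the 808's 'song mode' in miniature."""
    song = {name: [] for name in patterns[0]}
    for pattern in patterns:
        for name, steps in pattern.items():
            song[name].extend(steps)
    return song

song = chain(PATTERN_A, PATTERN_A)  # a two-bar "song" from one pattern
print(render(PATTERN_A))
```

The point of the sketch is the structural shift the article describes: rhythm stops being a fixed preset (foxtrot, bossa nova) and becomes editable data on a grid, which performers can rearrange at will.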


When the 808 was released, pretty much no one knew what to do with it. It was quickly discontinued. But then secondhand prices fell by some ninety per cent. Musicians started buying used 808s and experimenting with them. The machines soon helped create countless hit songs (Marvin Gaye’s “Sexual Healing,” Whitney Houston’s “I Wanna Dance with Somebody”), and began to exert an influence on popular music comparable to that of the electric guitar. Musicians discovered that they could employ 808s to build tracks by themselves, pursuing their own idiosyncratic visions without collaboration or compromise. They used the machine’s proudly synthetic sounds and pattern-based logic to create distinctive new genres—electro, dance, hip-hop.


Today, the sound of the 808 is everywhere, and instantly recognizable. What might be less obvious is that the compositional structures of the 808 and its descendants are pervasive, too. Many songs are now written on computers, using sequencers, patterns, and loops, with notes laid out in perfect synchrony on a rhythmic, 4/4 grid. Sound design—the particular timbre of a bass drum or a synth sweep—often defines the identity of a track. More generally, musicians are no longer hemmed in by limits. To make fantastic songs, they don’t need to know music theory or even own instruments; using synths and samples, they can twist knobs to transpose chords, or select a symphony orchestra from a drop-down menu. In fact, they can compose sounds that have never been created by any instrument on Earth, and listeners will gladly follow them into new sonic realms. Traditional instruments haven’t been replaced—we still listen to acoustic guitar—but they exist within a much larger synthesized landscape.


Suppose that the story of “artistic” artificial intelligence follows roughly the same shape as the story of the drum machine. Where are we in that story? We might be in 1983—the year the 808’s successor, the 909, was introduced. (To hear the difference, compare Gaye’s “Sexual Healing” to Madonna’s “Vogue.”) At that time, electronic music was still new, and people had all sorts of objections to it. They said that computerized instruments required neither musicianship nor talent, and that their sound was intrinsically clunky and inexpressive. (To some musicians, that was the point.) They argued that music made with machines lacked the soul and spontaneity of sweaty, striving human beings jamming in a room. (For some, this impersonality was desirable.) They worried that electronic music would put musicians out of work. (The American Federation of Musicians, which had opposed recorded music in the nineteen-thirties, also picketed Don Lewis, the engineer behind the 808, when he performed.) Prestigious producers maintained that, while drum machines might be useful during the songwriting process, they shouldn’t appear on completed recordings. Many listeners felt that, on some fundamental level, electronic music was simply wrong—that it was a form of deception and cheating, that it was destroying music itself.


All of these concerns were valid. But they paled before the forces of democratization, creativity, and taste. It turned out that electronic instruments got lots of people into music-making; that those people used them in clever, fun, and resonant ways; and that listeners liked what they heard. There was simply no arguing with “It’s Tricky” and “Planet Rock,” with “Born Slippy” and “Nothing Compares 2 U.” And so, if the parallel holds, it will soon be unpersuasive to say that art made with A.I. is automatically fake or bad. The tools themselves won’t determine what counts as art; that will depend on how they’re used.


Parallel lines can always diverge; we might conclude that A.I. is no ordinary tool, and that it has no place in our artistic lives. This seems to be the lesson of “Shy Girl,” the horror novel that was recently unmasked as being generated at least partly through artificial intelligence. The book, which follows a young woman who’s been imprisoned as a “pet” by her sugar daddy, was originally self-published; it found a substantial audience online and was acquired by the publishing house Hachette. A few months ago, readers started pointing out that its prose seemed synthesized. The writing, they said, was weighed down by endless adjectives and metaphors, and had a chatbot’s unvarying cadence and tone. Pangram, an A.I.-detection firm, analyzed “Shy Girl” and declared that it was seventy-eight per cent A.I.-generated. Mia Ballard, the book’s author, suggested that a freelance editor to whom she’d given her manuscript might have run the book through A.I. without her consent. Eventually, Hachette cancelled its publication.


Reading all this, you might imagine a detective story in which the synthetic nature of “Shy Girl” is slowly unearthed by tech-savvy online sleuths. But, actually, the novel reads like A.I. from its opening lines:


I wear a pink dress, the kind that promises softness and delivers none. Its tulle is brittle and sharp, brushing against my fur like a thousand tiny teeth, a cruel lover that bites with every move. Every scratch keeps me in place, a reminder of what I am: a pet, a thing shaped for looking, for praise, for command.


If you have the sounds and rhythms of A.I. in your ear, then you can recognize them here almost instantly. To you, “Shy Girl” might feel automated. It’s as though its author failed to program her own patterns, leaving us to listen to the machine’s preset samba and cha-cha. For many people, this is the most obvious argument against using A.I. to write fiction: it simply doesn’t sound good.


And yet the value of a novel isn’t only in its prose. On Amazon, “Shy Girl” has a rating of four out of five stars, based on input from hundreds of reviewers. Many of them praise its premise and ideas—features of the book that it seems reasonable to think were shaped by human decision-making. (One reviewer describes knowing about “the controversy” surrounding the novel, but liking it anyway: “The premise sucked me in.”) The big-picture reality is that many novels are poorly written. They can still succeed with readers because fiction, like music, is a forgiving art form. Just as a good song can have a groovy beat but a predictable melody, so a piece of fiction can work on some levels but not others. Partial success can be enough, as long as readers find something that moves them—suspense, beauty, realism, fantasy, even just a sympathetic protagonist in whom they can recognize themselves.


If the creation of fiction is a layered endeavor—if premise, plot, style, and so on are to some extent separable—then must all the layers be made by the same individual? This question has already been answered by practicing writers in a variety of disciplines, who often work in groups and teams. James Patterson, who produces one out of every seventeen hardcover novels sold in the United States, does so by providing collaborators with detailed outlines and treatments, effectively running what’s been described as a “novel factory.” (He might oversee thirty projects simultaneously, publishing fifteen books a year.) This practice exiles him completely from the realm of literary fiction; some might even question whether Patterson is really a writer. But our expectations vary by context, with implicit understandings that we rarely make explicit. When reading a Booker Prize-winning novel, we expect every word to have been written by the author, but when reading journalism we assume that both writers and editors played a role. We frequently praise the showrunners of prestige television, who rely on groups of writers to produce scripts. When a screenwriter wins an Oscar for Best Original Screenplay, the word “original” means only that the script isn’t an adaptation; a lot of people, credited or not, may have contributed to the final product. Perhaps we’re more open to writerly collaboration when it’s part of a larger project, such as a film, which is itself inherently collaborative. But what if the larger project is the continuation of Patterson’s “Alex Cross” series, which has been running since 1993? No individual could write that many books. A factory is simply required.


It seems inevitable that writers will use A.I. to start their own factories. In February, in the Times, Alexandra Alter interviewed Coral Hart, a pseudonymous romance novelist who has used A.I. tools to speed-write hundreds of novels, which she’s self-published on Amazon under dozens of names. After Hart prompts her system into motion, it can produce, in forty-five minutes, a draft ready for human revision (about, for example, “a rancher who falls for a city girl running away from her past”). Although none of Hart’s novels have been best-sellers, she makes “six figures” through her method, Alter reports, and also offers online classes for aspiring A.I.-assisted romance novelists. The future implied by the story is one of depersonalized, industrial-scale fiction production, where authors become showrunners, supervising A.I. writers’ rooms. One risk, of course, is that readers of such fiction won’t necessarily know who, or what, was involved in producing what they read, undermining the implicit understandings on which they depend. (Amazon asks Hart to disclose her use of A.I.; she sometimes doesn’t.)


But is high-volume production the only option A.I. offers? A lot depends on your goals and perspective. I’m an extremely amateur musician, and I’ve certainly found that technology has increased my productivity. Sitting at my computer, armed only with a two-octave MIDI keyboard, I can hurtle through the steps of composition; I could spam Spotify with two new tracks a day, writing an album a week. But that’s not what I’m doing. Instead, I’m using musical technology to help me get to where I want to go. I couldn’t possibly perform my songs for an audience—I can barely play a dozen bars of piano without making a mistake—but that’s not my aim. I just want to listen out loud to what I hear in my head. To put it in grandiose terms, I want to realize a vision.


Presumably, many aspiring writers want to do the same. They have ideas and want to realize them, but can’t; they need nudges, aids, templates, rough drafts. The further artists move out of amateur hour and into the professional realm, of course, the more we expect their work to reflect their “real” capabilities. But what is real? Through the audio-software company Spitfire, the award-winning Icelandic composer Ólafur Arnalds offers a tool called Cells, which listens to what you play, conjuring around it a shimmering orchestral cloud, characterized by “harmonic movement that follows the composer’s tonality.” Many similar, behind-the-scenes tools assist professional musicians, allowing them to move from improvised ideas to finished compositions. The evolving strings created by Cells sound amazing; you’ve likely heard something like them in a movie theatre. This isn’t A.I.—but it almost is. Has it destroyed the essence of music?


Writing is different—we might reasonably hold this view. I’m sympathetic to it. I write for a living, which means that I’m in love both with the idea of writing and with doing it myself, without A.I. I have an exalted sense of what good writing is. When people ask me what I do for work, I tell them that I’m a journalist; I don’t like to use the word “writer” because I don’t feel worthy of it. I’ve worked my whole life to acquire what skills I have, and I consider myself adept at addressing the essentially technical problems that often bedevil less experienced writers. But the higher, more inspired levels of writing—the imaginative, artistic aspects—feel a bit out of reach.


My respect for the craft is part of a broader outlook. In my twenties, I went to graduate school in English. My professors there were gifted close readers. Some, such as the poetry critic Helen Vendler, had come up during the rise of the New Criticism—the school of thought, dominant around the nineteen-forties, which held that the best way to read was to inspect every word and punctuation mark, asking what it accomplished. The professor who influenced me the most, Philip Fisher, taught us the “classic” novels—“Pride and Prejudice,” “The Brothers Karamazov”—and knew how to close-read their structural features. He could explain how individual scenes were contrapuntally related, or how strands of plot and registers of language entwined and diverged to make meaning.


It wasn’t all about ideas. The university had a rare-books library. While teaching “Ulysses,” I showed my students Joyce’s marked-up galleys: they could see where, writing in the margin, he’d added a second “yes” to the end of Molly Bloom’s soliloquy. In another class, we looked at a collection of miniature handmade books in which, as children, Charlotte Brontë and her brother Branwell had written their poems and stories. These artifacts affirmed that writing was both a life-spanning enterprise and a way of life. It was one of the highest uses for a mind.


It was easy, especially in that atmosphere, to take a certain conception of “writing” for granted. I knew that, whatever the activity of writing was—difficult, mysterious, important, even, in various postmodern ways, “unstable”—it was also direct and unambiguous. In other art forms, the role of the creator could be more complex, and the status of the art work more fluid. In music, there was Brian Eno; in painting, there was Andy Warhol; in sculpture, there was Andy Goldsworthy; in film, there was Werner Herzog. Marina Abramović depended on audience participation for her performance art. Jeff Koons was a C.E.O.-artist, running an assembly line. In all sorts of ways, artists were using technology to extend themselves, and to change their relationship to their art. But the written word, I thought, was largely exempt from all this. There was experimental literature in which the role of the author could be tweaked; in popular fiction, authorship could be a flexible concept. Yet “real” writing—the literary kind—remained simple.


Now that artificial intelligence is breaching the wall around writing, that simplicity is no longer something we’ll be able to assume. But, by the same token, the virtues of the traditional approach, which once hardly needed to be articulated, now stand in greater relief. In an autotuned or A.I.-synthesized world, perfection and imperfection carry new meanings: the human faults that technology irons out become perfect in their own way, and the smooth surfaces created through technology risk feeling blank and featureless. “Our relationship with technology is very ambivalent in the fact that it’s a very strong love-hate relationship,” Thomas Bangalter, of Daft Punk, told the art magazine Whitewall, in 2009. “There is no limit anymore with technology.” But, he went on, “any kind of human behavior has to be put against some kind of frustration.” The same technologies that expand the creative process also threaten to short-circuit it. To be an artist, Bangalter concluded, “What you have to learn is restraint—put your own limits.” 


