AI Brought Anthony Bourdain's Voice Back To Life. Should It Have?
"We can have a documentary-ethics panel about it later," joked Morgan Neville, director of the new Anthony Bourdain documentary Roadrunner, as he revealed to The New Yorker that three lines in his movie — which sounded like they were being delivered by the late chef-turned-media personality — were actually generated by AI.
Well, later has arrived.
"When I wrote my review I was not aware that the filmmakers had used an A.I. to deepfake Bourdain's voice for portions of the narration. I feel like this tells you all you need to know about the ethics of the people behind this project." — Sean Burns (@SeanMBurns) July 15, 2021
The film uses a variety of clips from Bourdain's wide back catalog of TV shows, radio and podcast appearances, and audiobook recordings. By design, Neville wanted the AI-generated voiceovers to blend in with those recordings, so audience members would never know the difference. Critics, like Sean M. Burns, found the technique duplicitous, tweeting "I feel like this tells you all you need to know about the ethics of the people behind this project."
The writer of the original New Yorker piece, Helen Rosner, had a more gracious read of the situation, calling the use of expansive storytelling techniques "entirely consistent with how Bourdain worked."
Writer and critic Jason Sheehan, who reviewed Roadrunner for NPR before its use of AI became public, says he isn't entirely sure how to feel. "I mean, is it all that different than Ken Burns having Sam Waterston read Abraham Lincoln's letters in his Civil War documentary? Neville claims that he used Bourdain's own words — things that he'd written or said that just didn't exist on tape — and that matters," Sheehan says. "If Burns had asked Waterston to make Lincoln say how much he loved the new Subaru Outback, then sure. That's a problem. But this isn't that. This is the (admittedly queasy) choice to bring back to life the voice of a dead guy, and make that voice speak words that already existed in another form. Is it creepy, knowing about it now? Absolutely. Was it wrong? I don't think so. But these things are decided in public. It'll get hashed out on social media and in spaces like this. And then we'll move on, all of us having been forced to briefly consider the possibility of an endless zombie future where nothing we've ever said or written ever really goes away."
Responding to the criticism, Neville told Variety: "There were a few sentences that Tony wrote that he never spoke aloud. With the blessing of his estate and literary agent we used AI technology. It was a modern storytelling technique that I used in a few places where I thought it was important to make Tony's words come alive."
Bourdain's ex-wife Ottavia Bourdain, however, later tweeted, "I certainly was NOT the one who said Tony would have been cool with that."
Is this all ethically squirrelly, or is it an interesting use of voice and technology? Does this information feel unsettling because someone did something bad, or have we simply not yet accustomed ourselves to the new reality of deepfakes? Is it even Neville's duty to adhere to the strict truth of things? Are people particularly invested in this instance because of their parasocial relationship with Bourdain as a media identity, rather than the flawed, idiosyncratic figure the movie paints him as? All are worthwhile questions — bring them to the panel.
Copyright 2021 NPR. To see more, visit https://www.npr.org.