This article was produced in collaboration with Columbia Journalism Review.
Ask a working journalist about AI, and you’re likely to hear a string of expletives, an indictment of tech companies, and—especially if the person is over fifty—a searing lament for times past.
The use of AI in journalism has recently created controversy everywhere from the Cleveland Plain Dealer to the Associated Press, as many reporters have passionately disagreed with managers who have insisted on its value in reporting and drafting stories. According to an exclusive by Semafor’s Max Tani, a Slack conversation hosted by AP turned ugly when Aimee Rinehart, senior product manager for AI, cited the virtues of AI use and wrote that “resistance is futile.”
One AP reporter responded on the multi-newsroom chat that the “dismissiveness and disdain some of you have shown for human writing are insulting and abhorrent. Strong reporting and clear writing are the lifeblood of journalism, not AI-written slop,” according to Semafor. Another found it “hard not to escape the feeling that the people hyping/guiding the decisions around these powerful tools exist in a totally different reality than the people who wake up every day and do the work of reporting.” (In a statement to Semafor that further explained its AI use, the AP said, “This discussion among staffers from different departments doesn’t reflect the overall position of the AP regarding the use of AI.”)
The reporters’ sentiments are totally understandable. Nonetheless, in the past, you might have heard much the same outrage about the calamitous effects of radio, TV, 24-hour cable, the computer, electronic editing, the internet, email, streaming services, and social media. Without diminishing legitimate concerns that jobs may be lost and the profession degraded, AI is a juggernaut, and journalists will have to get a little more clear-eyed and comfortable with it.
These days, in my role at New York University’s Ethics and Journalism Initiative, I’m finding it helpful to demystify AI chatbots by separating their use into three categories known to just about every journalist: source, colleague, and assistant. Within each category, it becomes easier to see where the risks, as well as the opportunities, may lie. It may also make it easier for newsrooms to articulate clear guidelines for their use. (OpenAI, the company behind ChatGPT, provided seed money to the NYU Ethics and Journalism Initiative in 2023. The Initiative operates independently of any of its financial backers.)
AI as a source…
Given the mass of information an AI bot can synthesize almost instantaneously, I can’t imagine ignoring it as a source, despite its obvious flaws. I liken its abilities to those of an erratic human source, someone I’ve nonetheless found to be useful, if occasionally frustrating, in providing background information, suggesting story angles, and dishing about people and events that may turn out to be newsworthy. Such sources offer a lot to check out and confirm or debunk, even though you know they sometimes make things up – and quite often don’t even know they’re doing so.
Just as with most human interviewees, you can’t rely on a chatbot as a single source. You have to factor out its biases, verify all the purported facts it assembles, and inform audiences, as far as possible, how you, and they, got the information. Of course, a chatbot can’t be held accountable for its choices and its errors. It’s important to remember that it’s not human. But, on the plus side, unlike human sources, chatbots are always available when you need them.
AI as a colleague…
Reuters’ Mo Tamman changed my way of thinking about chatbots when, in late 2023, he described to an NYU journalism audience how he employs a bot as a perpetually handy colleague, chatting away with it on one screen – vetting story ideas, considering approaches, reviewing what others have said and written on a topic – while he works on his story on a second screen. He said he uses generative AI throughout his workday: “I don’t expect it to be right half the time. When I’m asking a question, I’m having a conversation with it.” After hearing from Mo, I started doing this too, as have other journalists I know, and it often produces good results when well prompted. (I’m happy to say that Mo, a terrific former colleague of mine, also continues to talk to humans.)
AI as an assistant…
AI’s role as an assistant, or junior colleague, might include outlining or writing first drafts, analyzing and visualizing data, providing transcriptions and translations, creating story summaries, and helping critique and edit one’s work.
In fact, I used Google Gemini and ChatGPT to critique earlier drafts of this article, which, to be clear, I did write myself. I found the process helpful. With some caveats, I’m pretty comfortable when AI is truly assisting the work rather than taking it over.
When it comes to writing, I certainly believe it is essential for journalism students to learn how to draft and organize their own stories, mostly because the writing and thinking processes are intertwined. But if a story is really straightforward, like a sports results recap or a corporate earnings report, AI can probably draft it faster and about as well as I can from the audience’s perspective. It writes, I check.
Same with story summaries, which many news organizations place atop their articles. To be responsible, though, a news organization must have people checking the summaries it allows AI to create, given how inaccurate these can be. Bloomberg learned this last year when it had to correct several dozen AI-generated summaries.
I would never rely on an assistant to draft more complex stories, such as features, analytical pieces, investigative reports, or New Yorker- and Atlantic-style narratives. These require deeper thinking, more mature judgment, and more attention to nuance, elegance, and human emotion – none of which are AI’s strong suits, at least for now. As journalism shades into art, I still want my Jane Mayer or my George Packer.
My choice was to use the two chatbots to critique, but not to write or edit, earlier drafts of this piece. Thus, I controlled the process. For the most part, Gemini and ChatGPT produced results that were similarly appropriate and detailed, though ChatGPT had considerable trouble with current affairs. In one draft, I had quoted Pope Leo XIV’s admonition to priests not to use AI in writing homilies. ChatGPT responded a bit snippily: “As of now, there has not been a Pope Leo XIV. The current pope is Pope Francis.” When challenged, it doubled down, responding that readers might “wonder whether you intentionally inserted a false pope to test AI.”
The two chatbots proved especially valuable and reliable on form and flow. I fixed a few punctuation errors and repetitions that the bots flagged and tightened the piece in several places.
In the “AI as a source” section, I agreed with several of ChatGPT’s suggestions and, as a result, stressed that chatbots do make mistakes, added a clause on providing transparency about AI’s sourcing, and added a sentence emphasizing that, unlike humans, bots can’t be held responsible for their actions. In ChatGPT’s words, “A human source can be held accountable; a chatbot cannot.”
My own use of generative AI in researching and honing this article underlined my view that it shouldn’t be ignored, despite the fears of many that it will destroy journalism jobs, introduce errors, propagate bias, render elegant writing and deep thinking obsolete, and, not incidentally, drain vital environmental resources.
I remember when we moved to Microsoft Windows at The Wall Street Journal in the ’90s, we had to endure several days of dense classroom instruction. Many of us wanted to stick with our clunky old software, XyWrite, but Windows was much better, and we got used to it – and then forgot that it had ever been a choice. I also recall that, when I was editor-in-chief of BusinessWeek in the late aughts, some of the most senior reporters refused to write for the online edition, even though it was obviously where readers were migrating. Over time, the earlier opposition appeared positively quaint – if you thought about it at all. The new technology had become, at the very least, familiar.
Yes, use it carefully, but to disregard it would be like ignoring the internet at the turn of this century – or electricity in the previous one. Reflecting on the growing chasm between those with AI fluency and those who look away, Gina Chua, executive director of CUNY’s Tow-Knight Center for Journalism Futures, writes that falling dangerously behind is “a fate I fear for newsrooms that don’t take AI seriously. And not just as a threat, although they should do that, but as a real opportunity.”
NOTE TO READERS:
OpenAI, which developed and released ChatGPT in 2022, provided seed money to the NYU Ethics and Journalism Initiative in 2023. The Initiative operates independently of its financial backers, which include the Knight Foundation, Craig Newmark Philanthropies, Nathan S. Collier, and others.
The Associated Press supports the Ethics and Journalism Initiative through an in-kind contribution. In a statement on AP’s use of artificial intelligence, issued in response to a query for this story, Patrick Maks, AP’s director of media relations & corporate communications, said:
“We’ve been an industry leader in setting AI standards that safeguard the vital role of journalists, while also allowing for AI use for things like language translation, summarizations, transcriptions and content tagging. …Our journalists are as important as ever.”
Stephen J. Adler is the director of the Ethics and Journalism Initiative at the NYU Arthur L. Carter Journalism Institute and a member of the SPJ Ethics Committee, which is currently revising the SPJ Code of Ethics.