

Large language and image models are already here and impacting all areas of human life, including journalism.

By Sara Goudarzi
Associate Editor, Disruptive Technologies
Bulletin of the Atomic Scientists
In November 2022, the tech media website CNET began publishing pieces generated entirely by artificial intelligence tools. By January, when Futurism reported on the practice, CNET had already published more than 70 articles under the "CNET Money Staff" byline. The move by CNET, and the errors later discovered in 41 of those stories, prompted Wired's global editorial director Gideon Lichfield to issue policies on how his magazine will and will not use AI tools.
"I decided we needed guidelines, both to give our own writers and editors clarity on what was an allowable use of AI, as well as for transparency so our readers would know what they were getting from us," Lichfield says.
The guidelines, for example, explicitly state that the magazine will not publish articles written or edited by AI tools. However, the editorial staff could use language or image generators, such as DALL-E 2 and Midjourney, for brainstorming.
"I was impressed by Wired magazine; those [guidelines] are the best in class that I've seen so far," says David Karpf, an associate professor in the School of Media and Public Affairs at the George Washington University. "They set a strong line that they are not going to publish anything written by generative AI (they would treat that as akin to plagiarism), but that what they will use it for is idea creation."
Read the entire article at the Bulletin of the Atomic Scientists.


