After an embarrassing foray into using AI to produce articles, and the rounds of corrections that followed, CNET has clarified its AI policy.
CNET began experimenting with ChatGPT around the turn of the year, using it to produce a number of articles without notifying readers. The experiment did not go well: the articles were plagued by factual errors that forced multiple corrections, and The Washington Post declared the whole fiasco a “journalistic disaster.”
Concerns over AI were among the major reasons CNET’s creative staff voted to unionize, with staff specifically calling out “a lack of transparency and accountability” regarding its use.
CNET now appears to be doing damage control, releasing an official AI policy built on two ethical tenets:
> Our ethical standard includes two tenets:
>
> - Every piece of content we publish is factual and original, whether it’s created by a human alone or assisted by our in-house AI engine, which we call RAMP. (It stands for Responsible AI Machine Partner.) If and when we use generative AI to create content, that content will be sourced from our own data, our own previously published work, or carefully fact-checked by a CNET editor to ensure accuracy and appropriately cited sources.
> - Creators are always credited for their work. The use of our AI engine will include training on processes that prioritize accurate sourcing and include standards of citation.
The company also promised that no content would be written entirely by AI, at least for the time being.
> Writing full stories: None of the stories on CNET have been or will be completely written by an AI. If that changes, as technology and our processes evolve, we will disclose it here. For now, articles may contain portions of text that were generated by AI and then edited and fact-checked by our editors.
The policy should help assuage writers’ concerns, at least for now.