
CNET editor unapologetic over use of error-prone robot

A robot with the CNET logo superimposed on the chest. (Original photo by Pavel Danilyuk, composite image by The Desk)

The top editor of Red Ventures’ technology website CNET said recent revelations that the brand used artificial intelligence software to write stories riddled with errors and plagiarized sentences have pushed the publication to change some of the ways it uses the software and discloses its use to readers.

In a lengthy note to readers published on Wednesday, CNET Editor-in-Chief Connie Guglielmo used the controversy as an opportunity to position the brand as an innovator in next-generation journalism, one where robots do a lot of the heavy lifting so reporters and editors can spend their time reviewing products and writing original journalism (at least until they’re laid off).

“We stand by the integrity and quality of the information we provide our readers, and we believe you can create a better future when you embrace new ideas,” Guglielmo wrote.

The affirmation came several weeks after the website Futurism uncovered several examples of CNET using artificial intelligence to publish explanatory guides tied to affiliate programs that earned CNET and Red Ventures a commission when readers purchased products or services through links within the content. CNET initially hid the fact that the articles — which contained numerous basic errors and were found to have plagiarized content from other websites — were written by software; instead, the publication used a generic “CNET Money Staff” byline and required readers to hover over the byline to see the disclosure.

Guglielmo said fewer than 80 short stories were written using the software, which accounted for just 1 percent of CNET’s overall content output during the month of November. That was the month the CNET Money team decided to use the software to create content, which Guglielmo said wasn’t automatically published and required a rigorous set of checks by human editors before it appeared online.

The statement appeared to contradict reports from other news outlets, which said CNET and Red Ventures had used artificial intelligence software to create content for well over a year before the first set of Futurism stories was published this month. Some editorial employees knew CNET and Red Ventures used the software, but didn’t know to what extent. Other staffers apparently didn’t know about the software at all until Futurism and The Verge published their stories.

Guglielmo said CNET would incorporate input from its editorial team and readers about the software. She also affirmed the brand would continue using artificial intelligence software because it wants to be on the leading edge of new journalism.

“The process may not always be easy or pretty, but we’re going to continue embracing it, and any new tech that we believe makes life better,” she wrote.

The use of the software certainly makes things easier and better for CNET’s owners, particularly after CNET’s former parent company, Paramount Global, laid off 100 workers in order to satisfy a condition of the $500 million deal with Red Ventures.

Reporters noted that the use of artificial intelligence software appeared to be part of a strategy to create as much content as quickly as possible in order to draw traffic from Google search results and social media platforms. Done correctly, that web traffic has the potential to earn Red Ventures a significant amount in ad revenue and affiliate commissions from readers; according to the New York Times, the company pulls in about $2 billion in revenue annually.

The tone of Guglielmo’s note to readers on Wednesday was more contrite than the comments she made on an all-staff conference call earlier in the week, in which she defended CNET’s use of artificial intelligence software and downplayed the controversy surrounding the errors.

During the call, Guglielmo said CNET never used the software “in secret,” but rather decided to use it “quietly,” and didn’t mean to mislead readers into thinking the content was created by human writers and editors.

“Some writers, who I won’t call ‘reporters,’ have conflated these two things and had caused confusion and have somehow said that using a tool to insert numbers into interest rate or stock price stories is somehow part of some — I don’t know — devious enterprise,” she said sarcastically, noting that the Wall Street Journal and Forbes used automation software to display stock prices inside stories. (Forbes was one of the news outlets whose content was stolen by the CNET robot.)

Lindsey Turrentine, CNET’s executive vice president of content, said the criticism over the website’s use of artificial intelligence would eventually blow over, but she declined to answer questions from employees who had concerns about erroneous data used by artificial intelligence software and possible plagiarism in some robot-written stories.

“This will pass,” Turrentine said. “We will get through it, and the news cycle will move on.”



About the Author:

Matthew Keys

Matthew Keys is a nationally recognized, award-winning journalist who has covered the business of media, technology, radio and television for more than 11 years. He is the publisher of The Desk and contributes to Know Techie, Digital Content Next and StreamTV Insider. He previously worked for Thomson Reuters, the Walt Disney Company, McNaughton Newspapers and Tribune Broadcasting.