
The Wikimedia Foundation has paused an experiment that displayed summaries of Wikipedia articles created through artificial intelligence.
The test followed a similar rollout of AI-generated summaries by Google several weeks ago. Wikipedia is one of the largest independent data sources Google uses to determine the authority and ranking of brands and websites.
Wikipedia relies on a large group of volunteer editors, who regularly moderate its millions of articles. Editors have input into some of Wikipedia's policies and overall direction — and many expressed concern over the rollout of AI-generated summaries.
“Just because Google has rolled out its AI summaries doesn’t mean we need to one-up them, I sincerely beg you not to test this, on mobile or anywhere else,” one editor named Cremastra wrote in a Wikipedia forum.
Another said the summaries should be offered to users as an “opt-in” feature, rather than displayed automatically to everyone regardless of which article was being viewed.
Most of Wikipedia’s volunteers — though not all of them — follow a strict set of guidelines for creating and amending articles. The guidelines encourage edits to be written from a neutral point of view and with adequate sourcing from reliable publishers, among other things.
One editor said the summaries could be problematic because they concentrate power in a single “editor” whose contribution would comprise the entire summary, which would “reinforce the idea that Wikipedia cannot be relied on, destroying a decade of policy work.”
“I don’t think I would feel comfortable contributing to an encyclopedia like this,” the editor wrote. “No other community has mastered collaboration to such a wondrous extent, and this would throw that away.”
A spokesperson for the Wikimedia Foundation said the experiment was “focused on making complex Wikipedia articles more accessible to people with different reading levels” and was meant to “gauge interest in a feature like this, and to help us think about the right kind of community moderation systems to ensure humans remain central to deciding what information is shown on Wikipedia.”
“It is common to receive a variety of feedback from volunteers, and we incorporate it in our decisions, and sometimes change course,” the spokesperson continued.
The Wikimedia Foundation admitted its rollout of the AI summaries was poorly planned and that “we could have done a better job introducing the idea and opening up the conversation” before it launched more broadly.
“As internet usage changes over time, we are trying to discover new ways to help new generations learn from Wikipedia to sustain our movement into the future,” a project manager said. “In consequence, we need to figure out how we can experiment in safe ways that are appropriate for readers and the Wikimedia community.”
The matter highlights a broader problem within the Wikipedia community, one where the business objectives of the Wikimedia Foundation often clash with its desire to democratize a portion of the Internet by giving voice and power to a consortium of volunteers who otherwise have no vested interest in the platform beyond their free work.
Few editors expressed concerns over whether the AI-generated summaries would be beneficial or unhelpful to readers. The majority of comments published in various talk forums focused largely on whether their volunteer contributions would be diminished or entirely replaced, indicating that most volunteer editors are motivated by ego and power rather than by a desire to contribute to a broader knowledge base.
As is typical when controversial topics are discussed on Wikipedia, dissenting editors shouted down the moderate voices who tried to find a balance between maintaining the website’s reliability and leaning into new forms of reading and consuming Wikipedia’s knowledge.
The loudest voices won. On Wikipedia, they typically do, even when they may be wrong.