Comcast’s global news channel Sky News says it has started testing the abilities of artificial intelligence to pitch and create news stories.
The trial was part of an experiment to see whether machine-learning tools could one day replace real-world journalists, a Sky News spokesperson said this week.
For the experiment, Sky News asked science and technology editor Tom Clarke to work with several others, including producer Hanna Schnitzer, to develop and deploy a robot that could pitch and write stories.
Clarke had Schnitzer record herself speaking in front of a chroma key wall — commonly referred to as a “green screen” — so the tool could capture the sound of her voice, her image and the likeness of her personality.
The footage was sent to a company called HeyGen, which used it to develop what Clarke called the “brain” of the robot. Clarke then reached out to software developer Kris Fagerlie, who drew on ChatGPT and other publicly available software.
Fagerlie then built a program that pulled in various news stories from the websites of Sky News, YouTube and Google News, which helped the AI-powered journalist get “an idea of what’s going on in the world,” Clarke reported.
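The article doesn’t describe how Fagerlie’s aggregator actually worked. Purely as an illustration, assuming headlines were collected from public RSS feeds (Sky News and Google News both publish them), the news-gathering step might look roughly like this sketch; the function names and sample feed are invented for the example:

```python
import xml.etree.ElementTree as ET

def headlines_from_rss(xml_text, limit=5):
    """Extract up to `limit` headline titles from an RSS 2.0 feed document."""
    root = ET.fromstring(xml_text)
    titles = root.findall(".//item/title")
    return [t.text.strip() for t in titles[:limit]]

def build_briefing(headlines):
    """Join headlines into a prompt that gives a language model 'an idea of
    what's going on in the world', as Clarke put it."""
    bullets = "\n".join(f"- {h}" for h in headlines)
    return f"Today's headlines:\n{bullets}\n\nPitch a news story based on these."

# Minimal inline feed for demonstration; a real aggregator would fetch the
# feed body over HTTP and pass the response text in instead.
sample = """<rss version="2.0"><channel>
  <item><title>Heatwave warnings issued across the UK</title></item>
  <item><title>Milk tanker overturns on motorway</title></item>
</channel></rss>"""

print(build_briefing(headlines_from_rss(sample)))
```

The briefing string would then be fed to the language model as context ahead of the pitching prompt.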
Eventually, Sky News developed both an AI reporter and an AI editor who were expected to work together on creating content. The experiment required the AI reporter to pitch eight stories to the AI editor, with both sides settling on one that the robot reporter would research, then create.
The robot ultimately pitched a variety of stories — from a feature article on the discovery of a Roman-era sex toy in the border county of Northumberland, to an examination of how online protests could influence modern politics.
The AI-powered editor called the pitches “compelling,” though Clarke noted that one was “factually incorrect.” That pitch involved a suggestion that pouring milk on local roadways could make them safer. Clarke said it appeared to be influenced by an actual news story in which a dairy truck tipped over on a highway one day earlier, which caused more than 200,000 gallons of milk to spill onto the road.
Clarke said the pitched headline, which included the word “laughing,” might rub people the wrong way, including the families of two people who were injured in the crash. He also questioned the claim that researchers had found “proteins in milk reacted with the gravel and the dirt, creating a sticky residue that enhanced road traction.”
The claim appeared to come from a fictitious quote attributed to an apparent climate scientist — “Milk is biodegradable and when it decomposes, it nourishes local ecosystems by providing essential nutrients,” the dubious quote said — and Clarke said his own research turned up nothing substantial to back it.
The robots ultimately didn’t go with that pitch. Instead, they decided to pursue a story about how heat waves affect public health, which Clarke said was topical.
After getting approval from the robot editor, the AI-powered journalist did some research and created a script. It also suggested B-roll footage for the report, all of it stock material. (Clarke noted that one limitation of an AI-powered robot is that it can’t chase down people for interviews — at least, not yet.)
Clarke said the script, as delivered, was “clear and concise,” and ended on a strong, if alarming, note: “As climate change remains unabated, being prepared is our best line of defense against the blistering onslaught of a hotter future.”
That said, Clarke noted that the robot reporter didn’t quite capture Schnitzer’s voice. “Hanna’s accent could be all over the place: Sometimes authentic, sometimes a bit Irish, sometimes American,” he wrote, though he acknowledged that the technology showed potential.
The video package was only one part of the experiment. The robo-reporter was also tasked with writing a feature-length news story for the Sky News website, and a different AI tool was asked to create an image for that story.
Clarke said the featured image looked “a bit like something you might see adorning the walls of a soul-less AirBNB,” which wasn’t a compliment.
The written piece fared a bit better, with subheadings that broke it into several sections and quotes from sources suggested by the robot.
Overall, the robot did what it was supposed to do, albeit with a few quirks. But Clarke was not impressed. “I can only imagine the withering reaction from my real-life editor if I filed this back to them at Sky News HQ,” he wrote.
Clarke seemed concerned that the robot didn’t understand that getting the facts right was only part of the equation. Whether online or on-air, Sky News viewers expect stories that are informative, enlightening and easy to digest. The AI-powered journalist managed the first element, but where was the shine, the pizazz, the imagination that makes for a TV story people actually want to watch?
The conclusion Clarke and others reached was that an AI-powered journalist built with off-the-shelf tools over the course of a few weeks did perform some tasks well. If those tasks were delegated to tools like the robot reporting and editing team, it might free up Sky News journalists to do other things with their time, such as interviews and research on in-depth packages.
But the tools aren’t good enough to work on their own, without oversight or human intervention. Their redeeming qualities are offset by a lack of social awareness, nuance and ethics. The spilled milk pitch seemed to rattle Clarke quite a bit, as did the robot’s inability to get Schnitzer’s voice down correctly.
“For now, I can breathe a sigh of relief…and know that my job is currently secure,” he said.