Is Artificial Intelligence Putting People Out of Work? The Story of the Science Journal Cosmos


A couple of weeks ago, the Australian science magazine Cosmos gained sudden notoriety, not for publishing a groundbreaking paper, but for using generative artificial intelligence (AI) to produce science articles. The experiment drew criticism and outrage not only from the scientific community and readers, but also from the publication’s former writers, editors, and its two founders. What happened?


Context of the AI experiment

The experiment began after the magazine received a grant from the Meta News Fund in 2023; the Walkley Foundation was responsible for distributing and monitoring the funds. The project involved building a “customized AI service” that would generate articles based on the archive of 15,000 pieces the magazine had accumulated over the years. The funds were allocated to study the opportunities and risks of AI tools in journalism.

A specially trained AI service was used to create six articles on complex scientific topics. They were then published on the Cosmos website in July 2024 with a note stating that the content was generated by AI. Jackson Ryan, president of the Australian Science Journalists Association, noted that at least one of the published articles contained inaccuracies. “The texts were created using OpenAI’s GPT-4 and then verified using the Cosmos article archive,” he said.
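Cosmos has not published technical details of its pipeline, but the workflow Ryan describes (generate a draft with GPT-4, then check it against the article archive) can be sketched in miniature. Everything below is invented for illustration: the function name, the crude word-overlap heuristic standing in for real verification, and the sample texts. A production system would call an LLM API for generation and use far more robust fact-checking.

```python
# Hypothetical sketch of a "generate, then verify against an archive" step.
# The generation stage is omitted; we only illustrate the verification idea.

def verify_against_archive(draft_sentences, archive, threshold=0.5):
    """Flag draft sentences that share too few words with any archive text.

    A sentence is considered unsupported if its best word-overlap ratio
    against every archive document falls below `threshold`.
    """
    flagged = []
    for sentence in draft_sentences:
        words = set(sentence.lower().split())
        best = max(
            len(words & set(doc.lower().split())) / max(len(words), 1)
            for doc in archive
        )
        if best < threshold:
            flagged.append(sentence)
    return flagged

# Invented sample archive and AI-generated draft.
archive = [
    "The probe measured methane levels in the upper atmosphere.",
    "Researchers sequenced the genome of the Tasmanian devil.",
]
draft = [
    "The probe measured methane in the upper atmosphere.",
    "Scientists confirmed life on Venus last year.",
]
print(verify_against_archive(draft, archive))
```

The first draft sentence overlaps heavily with the archive and passes; the second has no support and is flagged for human review. Real verification is much harder than word overlap, which is one reason inaccuracies still slipped into the published articles.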

Six months earlier, in February 2024, five of Cosmos’s eight freelance writers had been let go without warning: they received emails informing them that their services were no longer needed and that new articles would no longer be accepted. The magazine said the decision was driven by financial concerns; whether that was the real reason is unknown. Either way, the timing casts a shadow over the magazine’s record on work ethics, transparency, and trust in science journalism.


The situation was made worse when it was revealed that no one outside the company’s management knew about the experiment. Ian Connellan, the former editor of Cosmos, said that none of the editorial staff were aware either, even though they had been responsible for editorial decisions before the magazine was transferred to CSIRO. Cosmos founders Kylie Ahern and Wilson da Silva also expressed their outrage and disappointment: in their view, replacing journalists with an AI service was definitely not the path they had chosen for the magazine.

The project has caused ethical and legal problems for Cosmos and its governing body, CSIRO. Cosmos’ interim editor Gavin Stone forwarded all complaints to CSIRO Publishing. Only then did a company representative confirm that the AI service had not been trained on articles by Cosmos authors. Whether this was actually the case is still being investigated.

Trust in science and journalism


Former journalists say the AI service devalues their work and threatens the importance of the journalism profession as a whole. The Walkley Foundation, which funded the project, has also come under fire for its grant. Many journalists believe that the organisation set up to support their profession in Australia should not fund projects like this.

“The use of AI to present facts backed by scientific research raises serious concerns and could have disastrous consequences. At a time when trust in science and the media is in steep decline, particularly in the media, conducting AI experiments without due transparency is ignorant at best and dangerous at worst,” warns Jackson Ryan, president of the Australian Science Journalists Association.

He also pointed to a precedent at the American technology site CNET.

“This has happened before. In late 2022, the respected US technology site CNET, where I worked as a science editor until August 2023, published dozens of articles created by a customized AI. This ‘robot writer’ produced 77 posts, and after an investigation by competitors, more than half of them were found to contain inaccuracies,” Ryan said. The case was called a “journalistic disaster,” and similar concerns have been raised about Cosmos.

Today, the Cosmos pilot project is on hold pending a detailed investigation and resolution of the issues that have arisen. CSIRO, for its part, says it remains “committed to the ethical and responsible use of AI technologies.”

Prospects for the use of AI in media


Despite the risks, AI has potential in journalism, especially for smaller publications that struggle to produce enough content; it can, for example, automate routine tasks. Successful integration into the media industry, however, requires transparency about how AI is used and genuine attention to the audience’s views.

The goal of science is to reduce uncertainty, but it cannot eliminate it entirely. Effective science journalism helps audiences understand this uncertainty and, as research shows, builds trust in the scientific process. Generative AI, however, remains a predictive text tool that can undermine this process, producing content that sounds persuasive but is sometimes nonsense.

This doesn’t mean generative AI should be banned. With proper oversight, AI can be an important tool for smaller publishers like Cosmos, helping them maintain a steady stream of content without losing the trust of their audience.
