Global group of media organizations releases principles for AI development

A global group of 25 organizations including news and publishing companies is calling on developers, operators and deployers of artificial intelligence systems to respect intellectual property rights.

TORONTO - Twenty-five global organizations, including news and publishing companies, have banded together to urge developers, operators and deployers of artificial intelligence (AI) systems to respect intellectual property rights.

The group, which represents thousands of creative professionals and includes News Media Canada, made the request Wednesday in a document it released laying out a series of global AI principles it would like to see the world abide by.

The principles cover areas including intellectual property (IP), transparency, accountability, fairness and safety, and were positioned as a response to rapid advances in AI in recent months.

"The proliferation of AI systems, especially generative artificial intelligence, present a sea change in how we interact with and deploy technology and creative content," the groups wrote in their principles.

"While AI technologies will provide substantial benefits to the public, content creators, businesses, and society at large, they also pose risks for the sustainability of the creative industries, the public’s trust in knowledge, journalism, and science, and the health of our democracies."

The principles ask that those developing AI systems provide transparency to allow publishers to enforce their rights where their content is included in training data sets.

They assert that publishers are entitled to negotiate for and receive adequate remuneration for use of their IP.

"AI system developers, operators, and deployers should not be crawling, ingesting, or using our proprietary creative content without express authorization," the principles say.

They also say developers should clearly attribute content to the original publishers, recognize publishers' role in generating high-quality content for training and not create or risk creating unfair market dominance.

Rapid advancements in generative AI convinced News Media Canada, the national association of the Canadian news media industry with print and digital members in every province and territory, to join the group.

"Real journalism costs real money and publishers are going to protect our rights through fair licensing agreements so we can continue to invest in high quality, original, fact-based, fact-checked content," Paul Deegan, News Media Canada's president and chief executive, said in an email.

Similar groups from Colombia, Finland, Japan, Brazil, Hungary and Korea were among the organizations that endorsed the principles.

Nearly all large language models, the technology at the heart of generative AI, are trained on publisher data from these organizations, said Courtney Radsch, director of the Center for Journalism and Liberty at the Open Markets Institute in Washington, D.C.

News media is especially valuable to the models because it is high-quality information that has been fact-checked and features clean syntax and attributed quotes. In some cases, Radsch said, work from publishers makes up 10 per cent of the data the models are trained on.

But its easy accessibility across the internet also makes it vulnerable to misuse.

"One of the most dangerous things that is happening right now is the unconstrained hoovering up of everyone's information and content without compensating rights holders," Radsch said.

Some companies, including The Associated Press, are seeking redress and have secured remuneration through licensing deals with AI giants, while others, such as Danish media groups, are in talks with policymakers about protecting their work.

"The next challenge is figuring out what does fair compensation look like," said Radsch.

Such quandaries are arising as governments and society in general grapple with how to deal with the rapid development of AI systems and the technology's constant evolution.

Much of the current evolution was triggered by the arrival of ChatGPT, a generative AI chatbot capable of humanlike conversations and tasks that was developed by San Francisco-based OpenAI. Its launch last November kick-started an AI race, with top tech names such as Google, which released the rival chatbot Bard, and a slew of startups innovating in the space.

However, many observers are ringing alarm bells about the technology.

The so-called "godfather of artificial intelligence," Geoffrey Hinton, has repeatedly warned of the threats the technology poses.

In June, he told attendees of the Collision tech conference in Toronto that he worries AI could lead to bias and discrimination, joblessness, echo chambers, fake news, battle robots and existential risk.

Others have similar worries as evidenced by a March letter from more than 1,000 technology experts, including engineers from Amazon, Google, Meta and Microsoft, as well as Apple co-founder Steve Wozniak. They called for a six-month pause on training of AI systems more powerful than GPT-4, the large language model behind ChatGPT.

This report by The Canadian Press was first published Sept. 6, 2023.

The Canadian Press. All rights reserved.