Regulators are dusting off rulebooks to tackle generative AI like ChatGPT

  • Watchdogs race to keep up with massive AI rollout
  • While waiting for new laws, regulators adapt existing ones
  • Generative tools face privacy, copyright and other challenges

LONDON/STOCKHOLM, May 22 (Reuters) – As the race to develop more powerful artificial intelligence services such as ChatGPT heats up, some regulators are relying on old laws to control a technology that could upend the way societies and businesses operate.

The European Union is at the forefront of drafting new AI rules that could set the global benchmark for addressing privacy and security issues that have arisen with the rapid advances in generative AI technology behind OpenAI’s ChatGPT.

But it will take several years for the legislation to be enforced.

“In the absence of regulation, the only thing governments can do is enforce existing rules,” said Massimiliano Cimnaghi, a European data governance expert at consultancy BIP.

“If it’s about protecting personal data, they apply data protection laws; if it’s a threat to the safety of people, there are regulations that have not been specifically defined for AI, but they are still applicable.”

In April, European national privacy authorities set up a task force to resolve issues with ChatGPT after Italian regulator Garante took the service offline, accusing OpenAI of violating the EU’s GDPR, a broad privacy regime enacted in 2018.

ChatGPT was reinstated after the US company agreed to install age verification features and let European users block their information from being used to train the AI model.

The agency will begin looking more broadly at other generative AI tools, a source close to Garante told Reuters. Data protection authorities in France and Spain also launched investigations into OpenAI’s compliance with privacy laws in April.

CALL ON THE EXPERTS

Generative AI models have become notorious for making mistakes or “hallucinating”, spewing misinformation with startling certainty.

Such errors could have serious consequences. If a bank or government department used AI to speed up decision-making, individuals could be unfairly rejected for loans or benefit payments. Big tech companies including Alphabet’s Google (GOOGL.O) and Microsoft Corp (MSFT.O) have stopped using AI products deemed ethically risky, such as certain financial tools.

According to six regulators and experts in the United States and Europe, regulators aim to apply existing rules covering everything from copyright to data privacy to two key issues: the data fed into models and the content they produce.

Agencies in both regions are encouraged to “interpret and reinterpret their mandates,” said Suresh Venkatasubramanian, a former White House technology adviser. He cited the US Federal Trade Commission’s (FTC) investigation into algorithms for discriminatory practices under existing regulatory powers.

In the EU, the bloc’s proposed AI laws will force companies like OpenAI to disclose any copyrighted material – such as books or photographs – used to train their models, leaving them vulnerable to legal challenges.

Proving copyright infringement will not be straightforward, however, according to Sergey Lagodinsky, one of several politicians involved in drafting the EU proposals.

“It’s like reading hundreds of novels before writing your own,” he said. “If you’re actually copying something and posting it, that’s one thing. But if you’re not directly plagiarizing someone else’s material, it doesn’t matter what you trained on.”

“THINK CREATIVELY”

France’s data regulator, the CNIL, has started to “think creatively” about how existing laws might apply to AI, according to Bertrand Pailhes, its chief technology officer.

For example, complaints of discrimination in France are generally handled by the Défenseur des Droits (Defender of Rights). However, its lack of expertise in AI bias has prompted the CNIL to take the lead on the issue, he said.

“We are looking at the full range of effects, although we remain focused on data protection and privacy,” he told Reuters.

The organization is considering using a GDPR provision that protects individuals from automated decision-making.

“At this stage, I can’t say if it’s enough, legally,” Pailhes said. “It will take some time to form an opinion, and there is a risk that different regulators will take different views.”

In Britain, the Financial Conduct Authority is one of several state regulators tasked with developing new guidelines around AI. It is consulting the Alan Turing Institute in London, alongside other legal and academic institutions, to improve its understanding of the technology, a spokesperson told Reuters.

As regulators adjust to the pace of technological advancements, some industry insiders have called for greater engagement with business leaders.

Harry Borovick, general counsel at Luminance, a startup that uses AI to process legal documents, told Reuters dialogue between regulators and companies had been “limited” so far.

“It doesn’t bode well for the future,” he said. “Regulators seem either slow or reluctant to implement the approaches that would strike the right balance between consumer protection and business growth.”

(This story has been refiled to correct the spelling of Massimiliano, not Massimilano, in paragraph 4)

Reporting by Martin Coulter in London, Supantha Mukherjee in Stockholm, Kantaro Komiya in Tokyo and Elvira Pollina in Milan; edited by Kenneth Li, Matt Scuffham and Emelia Sithole-Matarise
