(CNN) The White House announced a series of measures on Thursday to address the challenges of artificial intelligence, prompted by the sudden popularity of tools such as ChatGPT and amid growing concerns about the technology's potential for discrimination, misinformation and privacy violations.
The US government plans to introduce policies that will shape how federal agencies purchase and use AI systems, the White House said. This step could significantly influence the market for AI products and shape how Americans interact with AI on government websites, at security checkpoints and in other settings.
The National Science Foundation will also spend $140 million to promote AI research and development, the White House added. The funds will be used to create research centers that seek to apply AI to issues such as climate change, agriculture and public health, according to the administration.
The plan comes the same day that Vice President Kamala Harris and other administration officials are expected to meet with the CEOs of Google, Microsoft, OpenAI, the creator of ChatGPT, and Anthropic to stress the importance of ethical and responsible AI development. It also coincides with a UK government inquiry launched on Thursday into the risks and benefits of AI.
“Tech companies have a fundamental responsibility to ensure that their products are safe and secure and that they protect people’s rights before they are deployed or made public,” a senior Biden administration official told reporters on a conference call.
Officials cited a series of risks the public faces from the widespread adoption of AI tools, including the possible use of AI-created deepfakes and misinformation that could undermine the democratic process. Job losses from increasing automation, biased algorithmic decision-making, physical dangers stemming from self-driving vehicles and the threat of malicious AI-powered hackers are also on the White House’s list of concerns.
This is just the latest example of the federal government acknowledging concerns about the rapid development and deployment of new AI tools, and trying to find ways to address some of the risks.
Testifying before Congress, members of the Federal Trade Commission argued that AI could “turbocharge” fraud and scams. The agency’s chair, Lina Khan, wrote this week in a New York Times op-ed that the US government has sufficient legal authority to regulate AI under its consumer protection and competition mandate.
Last year, the Biden administration unveiled a proposed AI bill of rights calling on developers to respect privacy, security and equal rights principles when creating new AI tools.
Earlier this year, the Commerce Department released voluntary AI risk management guidelines that it says could help organizations and businesses “govern, map, measure and manage” potential hazards at every stage of the development cycle. In April, the department also said it was seeking public input on the best AI regulatory policies, including audits and industry self-regulation.
The US government is not alone in seeking to shape the development of AI. European officials plan to finalize AI legislation this year that could have major implications for AI companies around the world.