Four top senators unveiled a proposed roadmap for artificial intelligence regulation on Wednesday, calling for at least $32 billion to be spent annually on non-defense AI innovation.
The members of the AI Working Group — Senate Majority Leader Chuck Schumer (D-NY), Mike Rounds (R-SD), Martin Heinrich (D-NM), and Todd Young (R-IN) — released the long-awaited proposal after months of hosting AI Insight Forums to inform their colleagues about the technology. The events brought in AI experts, including executives like OpenAI CEO Sam Altman and Google CEO Sundar Pichai, as well as academics and leaders from labor and civil rights groups.
Here’s what the roadmap is not: specific legislation that could pass quickly. In the 20-page report, the working group lays out the key areas where relevant Senate committees should focus their AI efforts.
Those areas include AI workforce training; addressing AI-generated content in specific contexts, including child sexual abuse material (CSAM) and election content; safeguarding private information and copyrighted content from AI systems; and mitigating the energy costs of AI. The working group says the report is not an exhaustive list of options.
Schumer said the roadmap is meant to guide Senate committees as they take the lead in crafting regulation; it was never intended to produce one big, sweeping law encompassing all of AI.
Some lawmakers didn’t wait for the roadmap to introduce their own AI-related proposals.
The Senate Rules Committee, for example, advanced a series of election-related AI bills on Wednesday. But with AI touching so many different areas, and with so many competing views on the appropriate level and kind of regulation, it’s not yet clear how quickly such proposals will advance into law, especially in an election year.
The working group is encouraging other lawmakers to work with the Senate Appropriations Committee to bring AI funding up to the levels proposed by the National Security Commission on Artificial Intelligence (NSCAI). The senators say the money should fund AI and semiconductor research and development across the government, as well as testing infrastructure at the National Institute of Standards and Technology (NIST).
The roadmap does not specifically call for all future AI systems to undergo safety evaluations before they are sold to the public; instead, it asks lawmakers to develop a framework for determining when such an evaluation is required.
This is a departure from some proposed bills that would immediately require safety evaluations for all current and future AI models. The senators also stopped short of calling for an overhaul of existing copyright rules, a battle AI companies and copyright holders are currently fighting in the courts. Instead, the roadmap asks policymakers to consider whether new legislation around transparency, content provenance, likeness protection, and copyright is needed.
Adobe general counsel and chief trust officer Dana Rao, who attended the AI Insight Forums, said in a statement that the policy roadmap is an encouraging start as it will be “important for governments to provide protections across the wider creative ecosystem, including for visual artists and their concerns about style.”
Other groups, however, were more critical of Schumer’s roadmap, with several objecting to the proposal’s multibillion-dollar taxpayer price tag.
Amba Kak, co-executive director of AI Now, a policy research group backed by the Open Society Foundations, Omidyar Network, and Mozilla, said in a statement following the report’s release that its “long list of proposals are no substitute for enforceable law.” Kak also took issue with the proposal’s big taxpayer price tag, saying it “risks further consolidating power back in AI infrastructure providers and replicating industry incentives — we’ll be looking for assurances to prevent this from taking place.”
Rashad Robinson, president of the civil rights group Color of Change, said in a statement that the report “shows very clearly that Schumer is not taking AI seriously, which is disappointing given his previous capacity for honesty, problem-solving and leadership on the issue.” He added that the report “is setting a dangerous precedent for the future of technological advancement. It’s imperative that the legislature not only establishes stronger guardrails for AI in order to ensure it isn’t used to manipulate, harm, and disenfranchise Black communities, but that they recognize and quickly respond to the risky, unchecked proliferation of bias AI poses.”
Divyansh Kaushik, vice president at the national security advisory firm Beacon Global Strategies, said in a statement that “critical for the success of any legislative efforts” will be ensuring the proposed funds can actually be doled out to the agencies and initiatives that need them. “[T]his can’t be another CHIPS [and Science Act] where we authorize a lot of money without appropriations,” Kaushik said.