ChatGPT chief calls for AI rules

Global agency with power to strip licenses proposed

Sen. Richard Blumenthal, D-Conn., left, chair of the Senate Judiciary Subcommittee on Privacy, Technology and the Law, greets OpenAI CEO Sam Altman before a hearing on artificial intelligence, Tuesday, May 16, 2023, on Capitol Hill in Washington. (AP Photo/Patrick Semansky)


The head of the artificial intelligence company that makes ChatGPT told Congress on Tuesday that government intervention will be critical to mitigating the risks of increasingly powerful AI systems.

"As this technology advances, we understand that people are anxious about how it could change the way we live. We are, too," OpenAI CEO Sam Altman said at a Senate hearing.

Altman proposed the formation of a U.S. or global agency that would license the most powerful AI systems and have the authority to "take that license away and ensure compliance with safety standards."

His San Francisco-based startup rocketed to public attention after it released ChatGPT late last year. The free chatbot tool answers questions with convincingly human-like responses.


What started out as a panic among educators about ChatGPT's use to cheat on homework assignments has expanded to broader concerns about the ability of the latest crop of "generative AI" tools to mislead people, spread falsehoods, violate copyright protections and upend some jobs.

And while there's no immediate sign that Congress will craft sweeping new AI rules, as European lawmakers are doing, the societal concerns brought Altman and other tech CEOs to the White House earlier this month and have led U.S. agencies to promise to crack down on harmful AI products that break existing civil rights and consumer protection laws.

Sen. Richard Blumenthal, who chairs the Senate Judiciary Committee's subcommittee on privacy, technology and the law, opened the hearing with a recorded speech that sounded like the senator but was actually a voice clone trained on Blumenthal's floor speeches and reciting ChatGPT-written opening remarks.

The result was impressive, said Blumenthal, D-Conn., but he added, "What if I had asked it, and what if it had provided an endorsement of Ukraine surrendering or [Russian President] Vladimir Putin's leadership?"

Blumenthal said AI companies ought to be required to test their systems and disclose known risks before releasing them, and he expressed particular concern about how future AI systems could destabilize the job market. Altman was largely in agreement, though he had a more optimistic take on the future of work.

Asked about his worst fear about AI, Altman mostly avoided specifics.

"I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that," he said. "We want to work with the government to prevent that from happening."

But he later proposed that a new regulatory agency should impose safeguards that would block AI models that could "self-replicate and self-exfiltrate into the wild" -- hinting at futuristic concerns about advanced AI systems that could manipulate humans into ceding control.

That focus on a far-off "science-fiction trope" of super-powerful AI could make it harder to take action against already existing harms that require regulators to dig deep on data transparency, discriminatory behavior and potential for trickery and disinformation, said a former Biden administration official who co-authored its plan for an AI bill of rights.

"It's the fear of these (super-powerful) systems and our lack of understanding of them that is making everyone have a collective freak-out," said Suresh Venkatasubramanian, a Brown University computer scientist who was assistant director for science and justice at the White House Office of Science and Technology Policy. "This fear, which is very unfounded, is a distraction from all the concerns we're dealing with right now."


MUSK BACKED IDEA

OpenAI has expressed those existential concerns since its inception. Co-founded by Altman in 2015 with backing from tech billionaire Elon Musk, the startup has evolved from a nonprofit research lab with a safety-focused mission into a business.

Its other popular AI products include the image-maker DALL-E. Microsoft has invested billions of dollars into the startup and has integrated its technology into its own products, including its search engine Bing.

Altman is also planning to embark on a worldwide tour this month to national capitals and major cities across six continents to talk about the technology with policymakers and the public. On the eve of his Senate testimony, he dined with dozens of U.S. lawmakers, several of whom told CNBC they were impressed by his comments.

Also testifying were IBM's chief privacy and trust officer, Christina Montgomery, and Gary Marcus, a professor emeritus at New York University who joined a group of AI experts in calling on OpenAI and other tech firms to pause development of more powerful AI models for six months to give society more time to consider the risks. That open letter was a response to the March release of OpenAI's latest model, GPT-4, described as more powerful than ChatGPT.

The panel's ranking Republican, Sen. Josh Hawley of Missouri, said the technology has big implications for elections, jobs and national security. He said Tuesday's hearing marked "a critical first step toward understanding what Congress should do."

Altman said his company's technology may destroy some jobs but also create new ones, and that it will be important for "government to figure out how we want to mitigate that." He proposed the creation of an agency that issues licenses for the creation of large-scale AI models, safety regulations and tests that AI models must pass before being released to the public.

"We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work," Altman said.

OVERSIGHT DEBATE

A number of tech industry leaders have said they welcome some form of AI oversight but have cautioned against what they see as overly heavy-handed rules.

Altman and Marcus called for an AI-focused regulator, preferably an international one, with Altman citing the precedent of the U.N.'s nuclear agency and Marcus comparing it to the U.S. Food and Drug Administration. But Montgomery instead asked Congress to take a "precision regulation" approach.

"We think that AI should be regulated at the point of risk, essentially," Montgomery said, by establishing rules that govern the deployment of specific uses of AI rather than the technology itself.

Dozens of privacy, speech and safety bills have failed over the past decade because of partisan bickering and fierce opposition by tech giants.

The United States has trailed much of the world on regulations covering privacy, speech and protections for children, and it is likewise behind on AI regulation. Lawmakers in the European Union are set to introduce rules for the technology later this year, and China has enacted AI rules aligned with its censorship regime.

Some of the toughest questions and comments toward Altman came from Marcus, who noted that OpenAI hasn't been transparent about the data it uses to develop its systems. He expressed doubt about Altman's prediction that new jobs will replace those killed off by AI.

"We have unprecedented opportunities here but we are also facing a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation and inherent unreliability," Marcus said.

Tech companies have argued that Congress should be careful with any broad rules that lump different kinds of AI together. In Tuesday's hearing, Montgomery called for an AI law similar to Europe's proposed regulations, which outline various levels of risk. She called for rules that focus on specific uses, not regulating the technology itself.

"At its core, AI is just a tool, and tools can serve different purposes," she said, adding that Congress should take a "precision regulation approach to AI."

Information for this article was contributed by Matt O'Brien of The Associated Press and by Cecilia Kang of The New York Times.
