
President Biden said on Tuesday that artificial intelligence has “enormous promise” but also carries risks, such as fueling misinformation and job losses — dangers his administration wants to address.
Biden, meeting in San Francisco with AI experts, researchers and advocates, said the technology is already driving “change in every part of American life, often in ways we don’t realize.” AI helps people search the internet, find directions – and it has the potential to disrupt how people teach and learn.
“As we seize this moment, we need to manage the risks to our society, our economy and our national security,” Biden told reporters ahead of his closed-door meeting with AI experts at the Fairmont Hotel.
Pointing to the rise of social media, Biden said people have already seen the damage that powerful technology can do without the proper safeguards. Still, he acknowledged that he has a lot to learn about AI.
The meeting came as Biden is stepping up efforts to raise money for his 2024 re-election bid, including from tech billionaires. While visiting Silicon Valley on Monday, he took part in two fundraisers, including one co-hosted by entrepreneur Reid Hoffman, who has numerous ties to AI companies.
The venture capitalist was an early investor in OpenAI, which created the popular ChatGPT app, and sits on the boards of technology companies, including Microsoft, that are investing heavily in AI.
The experts Biden met with on Tuesday included some of Big Tech’s biggest critics. The list includes children’s advocate Jim Steyer, who founded and leads Common Sense Media; Tristan Harris, executive director and co-founder of the Center for Humane Technology; Joy Buolamwini, founder of the Algorithmic Justice League; and Fei-Fei Li, co-director of the Human-Centered AI Institute at Stanford. California Governor Gavin Newsom also joined Biden at the AI event.
Steyer said the president was deeply engaged during the conversation and spoke about the potential impact of AI on democracy.
“Some people refer to this as a sort of lunar moment,” Steyer said. “You can’t let a small handful of big companies who may or may not be well-meaning drive the future of AI.”
He said he told the president that young people could be the biggest winners or losers from AI, noting that the technology can amplify mental health issues.
Some of the experts have experience working at large tech companies. Before coming to Stanford, Li led the AI and machine learning efforts at Google Cloud and also served on the board of directors at Twitter. Li said an important issue for Biden to consider is who is developing AI.
“Our message to the president is to invest in the public sector because that will ensure a healthy ecosystem,” she said, highlighting the positive impacts of technology on health, education and the environment.
Biden’s meetings with AI researchers and tech executives highlight how the president is engaging both sides as his campaign tries to attract wealthy donors while his administration examines the risks of rapidly growing technology. While Biden has been critical of the tech giants, executives and workers at companies including Apple, Microsoft, Google and Facebook parent Meta contributed millions of dollars to his 2020 presidential campaign.
The Biden administration has focused on the potential risks of AI. Last year, the government released a “Blueprint for an AI Bill of Rights,” describing five principles developers should keep in mind before releasing new AI-powered tools. The administration has also met with tech executives, announced steps the federal government has taken to address AI risks, and promoted other efforts to “promote responsible American innovation.”
Tech giants use AI in various products to recommend videos, empower virtual assistants and transcribe audio.
While artificial intelligence has been around for decades, the popularity of an AI chatbot known as ChatGPT has intensified a race between big tech players like Microsoft, Google and Meta. Released in 2022 by OpenAI, ChatGPT can answer questions, generate text and complete a variety of tasks.
The rush to advance AI technology has left tech workers, researchers, policymakers and regulators uneasy about whether new products are being released before they are safe. In March, Tesla, SpaceX and Twitter chief executive Elon Musk, Apple co-founder Steve Wozniak and other tech leaders urged AI labs to pause the training of advanced AI systems and called on developers to work with policymakers. AI pioneer Geoffrey Hinton, 75, quit his job at Google so he could speak more openly about the risks of AI.
As the technology rapidly advances, lawmakers and regulators have struggled to keep up. In California, Newsom has signaled that he wants to tread carefully with state-level AI regulation. He said at a Los Angeles conference in May that “the biggest mistake” politicians can make is to assert themselves “without first trying to understand.”
California lawmakers have come up with a number of ideas, including legislation that would combat algorithmic discrimination, establish an office of artificial intelligence and create a working group to provide a report on AI to the legislature.
Writers and artists are concerned that companies could use AI to replace workers. Using AI to generate text and artwork raises ethical issues, including concerns about plagiarism and copyright infringement. The Writers Guild of America, which is still on strike, proposed rules in March on how Hollywood studios can use AI. Any text generated by AI chatbots, for example, “cannot be considered in determining writing credits” under the proposed rules.
The potential abuse of AI to spread political propaganda and conspiracy theories, an issue that has plagued social media, is another big concern among disinformation researchers. They fear that AI tools capable of spitting out text and images will make it easier and cheaper for bad actors to spread misleading information.
AI is already being deployed in some mainstream political ads. The Republican National Committee posted an AI-generated video ad depicting a dystopian future that supposedly would come true if Biden were re-elected.
AI tools were also used to create fake audio clips of politicians and celebrities making comments they didn’t actually say. Republican presidential candidate and Florida governor Ron DeSantis’ campaign shared a video of what appeared to be AI-generated images of former President Trump hugging Dr. Anthony Fauci – a target of COVID-19 conspiracy theorists.
Tech companies are not opposed to putting guardrails around AI. They say they welcome regulation, but they also want to help shape it. In May, Microsoft released a 42-page report on governing AI, noting that no company is above the law. The report includes a “blueprint for public AI governance” that outlines five points, including creating “safety brakes” for AI systems that control the power grid, water systems and other critical infrastructure.
In the same month, OpenAI CEO Sam Altman testified before Congress and called for AI regulation.
“My biggest fear is that we, the tech industry, will cause significant harm to the world,” he told lawmakers. “If this technology goes wrong, it could go very wrong.”
Altman, who has met with world leaders in Europe, Asia, Africa, the Middle East and beyond, also joined scientists and other leaders in signing a one-sentence letter in May warning that AI poses a “risk of extinction” for humanity.
Times staff writer Seema Mehta in Los Angeles contributed to this report.