Transcript
A MARTÍNEZ, HOST:
California, where the tech industry is paying close attention to newly proposed AI legislation. The bill says large companies that develop artificial intelligence must take reasonable steps to ensure their AI models can't cause catastrophic events. So think biological warfare or mass casualties. Now, some AI developers say the bill is an overreach that could stifle innovation. Others in the field say the potential risks of AI need to be taken seriously. The bill's author is California State Senator Scott Wiener. And I asked him what his bill is meant to do.
SCOTT WIENER: This is a really light touch, basic safety approach, and it only applies to very large, powerful AI models that cost more than $100 million to train. So we're talking about the large labs like Google or Meta or OpenAI. This is not about startups. It doesn't cover startups. And what it requires is that if you're going to train and release one of these massive models that don't even exist today but will very soon, that you should perform basic safety testing on that model to determine if it creates a significant risk of a huge, catastrophic harm. And to be clear, the large labs like OpenAI and Anthropic and Meta and Google have already committed to doing this testing. So we're simply asking them to do the testing they've agreed to do.
MARTÍNEZ: So if someone has technology that someone else uses in a malicious way or a harmful way or a deadly way, why should that company be held responsible?
WIENER: Well, first of all, the person who uses the technology in a malicious way should be held accountable, and that's the case now, and that'll remain the case. But when we're talking about incredibly powerful models that we have never seen as a human society before, shouldn't we also try to reduce the risk that that model can enable a bad actor to do something malicious?
MARTÍNEZ: Now, I think a lot of people, Senator, right now, are maybe more concerned about the smaller-scale risks that AI could pose, like automation that might mean that they lose their jobs. I mean, your bill talks about AI maybe enabling nuclear war. I mean, has the technology really advanced to that point?
WIENER: The safety issues that we're talking about in this bill are not the only challenge that we have. Of course, there's algorithmic discrimination, deepfakes and other harms. And, of course, that should be and is being addressed. But, you know, as human beings, sometimes we ignore risk until something terrible happens. That happened with social media. We allowed it to go for years and years without doing anything to address the real harms that social media was causing. Why don't we get ahead of the risk for once?
MARTÍNEZ: Did you introduce this bill maybe because Congress has yet to pass its own AI legislation and isn't moving quickly enough?
WIENER: Congress has a horrible track record of not doing anything to address risks caused by technology. California, as the epicenter of tech innovation and AI innovation, we are well positioned and have a responsibility to step up to foster innovation, 'cause I want to be clear, AI promises so many benefits to humanity, and I'm a huge fan of AI, but we also are well positioned to address some of the safety risks and to try to reduce those risks in a way that fosters innovation. And so California should step up because I am not optimistic that Congress is going to do anything here.
MARTÍNEZ: That is California State Senator Scott Wiener. Thank you very much.
WIENER: Thanks for having me.

Transcript provided by NPR, Copyright NPR.