Shared Chat
Should we regulate AI? Say what Chamath and David Sacks say on the topic.

Both Chamath Palihapitiya and David Sacks have discussed the topic of regulating AI in a podcast episode.

David Sacks believes that it is too early to regulate AI because we do not yet know what the standards for regulation should be. He argues that creating a new regulatory body would slow down innovation and potentially favor politically connected projects. Sacks also mentions that there are trust and safety teams within companies that work on preventing nefarious use of AI.

On the other hand, Chamath Palihapitiya suggests that eventually there will be a need for an oversight body for AI, similar to the FDA for drugs. He mentions the importance of reviewing and approving AI models to understand their behavior and potential impact. Palihapitiya proposes a middle ground between immediate regulation and unregulated development, emphasizing the need for self-regulation and thoughtful use of AI tools.

Both Palihapitiya and Sacks mention the potential positive uses of AI, such as law enforcement tools and innovative business models. They caution against rushing into regulation without a clear understanding of the technology and its potential consequences.

It is important to note that the opinions expressed in the podcast episode are those of Chamath Palihapitiya and David Sacks and may not represent the consensus among all experts and stakeholders in the AI field.

1
David Sacks: The pace of evolution is very fast. We are on a bullet train to something, and we don't know exactly what it is, and that's disconcerting. However, let me tell you what would happen if we create a new regulatory body like the FDA to regulate this. They would have no idea how to arbitrate whether a technology should be approved or not. Development will basically slow to a crawl, just like drug development. There is no double-blind standard. I agree. What self-regulation can we do? There is no double-blind standard in AI that everyone can agree on right now to know whether something should be approved. And what's going to happen is, the thing that's made software development so magical and allowed all this innovation over the last 25 years is permissionless innovation. Any developer, any dropout from a university, can go create their own project, which turns into a company. That is what has driven all the innovation and progress in our economy over the last 25 years. You're going to replace permissionless innovation with going to Washington to go through some approval process. And it will be the politically connected, it'll be the big donors, who get their projects approved. And the next Mark Zuckerberg who's trying to do his little project in a dorm room somewhere will not know how to do that, will not know how to compete in that highly political process. I think you're mixing a bunch of things together. So first of all,
2
Chamath Palihapitiya: And then when people build these tools, they have to submit them to the government to approve them. That's what you're saying here. And you want that to start now. Here's the alternative. The alternative is going to be the debacle that we know as Section 230. So if you try to write a brittle piece of legislation, or try to use old legislation to deal with something new, it's not going to do a good job, because technology advances way too quickly. And so if you look at the Section 230 example, where have we left ourselves? The politicians have a complete inability to pass a new framework to deal with social media, to deal with misinformation. And so now we're all kind of guessing what a bunch of 70- and 80-year-old Supreme Court justices will do in trying to rewrite technology law when they have to apply it on Section 230. So the point of that tweet was to lay out the alternatives. There is no world in which this will be unregulated. And so I think the question to ask ourselves is, do we want a chance for a new body? So the FDA is a perfect example why: even though the FDA commissioner is appointed by the president, this is a quasi-governmental organization, it's still arm's length away. It has subject matter experts that they hire, and they have many pathways to approval. Some pathways take days, some pathways are months and years, some pathways are for breakthrough innovation, some pathways are for devices. So they have a broad spectrum of ways of arbitrating
3
David Friedberg: coordination around financial systems, to get coordination around climate change, to get coordination around anything on a global basis. To try and get coordination around the software models that are being developed, I think, is pretty naive. You don't want to have a global organization. I think you need to have a domestic organization that protects us. And I think Europe will have their own thing, again, FDA versus EMA.
David Sacks: Canada has its own, Japan has its own, China has its own, and they have a lot of overlap and a lot of commonality in the guardrails they use. And I think that's what's going to happen here. This will be beneficial only for political insiders, who will basically be able to get their projects and their apps approved, with a huge deadweight loss for the system, because innovation will completely slow down. But let me build on Friedberg's point, which is that we have to remember that AI won't just be used by nefarious actors. It'll be used by positive actors. There will be new tools that law enforcement will be able to use. If somebody's creating phishing sites at scale, they're going to be pretty easy for law enforcement AIs to detect. Let's not forget that there'll be co-pilots written for our law enforcement authorities. They'll be able to use that to detect and fight crime. A really good example of this is in the crypto space. We saw this article over the past week that Chainalysis has figured out how to track illicit Bitcoin transactions. There's now a huge number of prosecutions that are happening of illegal use of Bitcoin. If you go back to when Bitcoin first took off,
4
David Sacks: As soon as the FDA got involved, it slowed down massively. Now it takes years, many years, to get a drug approved. Appropriately so. Yes, but at least with a drug, we know what the gold standard is: you run a double-blind study to see whether it causes harm or whether it's beneficial. We don't know what that standard is for AI yet. We have no idea. You can study in AI what? No, we don't have somebody review the code. You have two instances in a sandbox use the code to do what? Oh, Sacks. My point, AutoGPT. It's benign. I mean, my friend used it to book a wine tasting. So who's going to review that code and then speculate and say, oh, well, in 99.9% of cases it's perfectly benevolent and fine and innocuous, but I can fantasize about some cases someone might do? How are you supposed to resolve that?
Jason Calacanis: Very simple. There are two types of regulation that occur in any industry. You can do what the movie industry did, which is they self-regulate, and they came up with their own rating system. Or you can do what happens with the FDA, and what happens with cars, which is an external government-based body. I think now is the time for self-regulation, so that we avoid the massive heavy hand of government having to come in here. But these tools can be used today to create massive harm. They're moving at a pace, we just said in the first half of the show, that none of us have ever seen; every 48 hours something drops that is mind-blowing. That's never happened before. And you can
5
Jason Calacanis: Video games are nearly half a trillion a year. Yeah, I mean, this is not insignificant. But let's pull up Chamath's tweet. Of course, the dictator wants to dictate here. All this incredible innovation is being made, and a new hero has been born: Chamath Palihapitiya, with a tweet that went viral, over 1.2 million views already. I'll read your tweet for the audience. "If you invent a novel drug, you need the government to vet and approve it (FDA) before you can commercialize it. If you invent a new mode of air travel, you need the government to vet and approve it (FAA)." I'm just going to edit this down a little bit. "If you create a new security, you need the government to vet it and approve it (SEC). More generally, when you create things with broad societal impact, positive and negative, the government creates a layer to review and approve it. AI will need such an oversight body. The FDA approval process seems the most credible and adaptable into a framework to understand how a model behaves and its counterfactual. Our political leaders need to get in front of this sooner rather than later and create some oversight before the eventual big avoidable mistakes happen and genies are let out of the bottle." Chamath, you really want the government to come in and
Chamath Palihapitiya: And then when people build these tools, they have to submit them to the government to approve them. That's what you're saying here. And you want that to start now. Here's the alternative. The alternative is going to be the debacle that we know as Section 230. So if you try to write a brittle piece of legislation, or try to use old legislation to deal with something new, it's not going to do a good job, because technology advances way too quickly.
6
Chamath Palihapitiya: both racially, gender-based, age-based, highly stratified, all around the world, or at a minimum all around the country. You're not talking about that here, David. I think that you could have a much simpler and cheaper way, where you have a version of the internet that's running in a huge sandbox someplace that's closed off from the rest of the internet, and another version of the internet that's closed off from everything else as well. And you can run them on a parallel path with this agent. And you can easily, in my opinion, actually figure out whether this agent is good or bad. And you can probably do it in weeks. So I actually think the approvals are actually not that complicated. And the reason to do it here is because, I get it, it may cause a little bit more friction for some of these mom-and-pops. But if you think about what the societal consequences are of letting the worst-case outcomes happen, the AGI-type outcomes, I think those are so bad they're worth slowing some folks down. And I think, like, just because you want to, you know, buy groceries for $100, you should be able to do it, I get it. But if people don't realize and connect the dots between that and bringing airplanes down, then that's because they don't understand what this is capable of.
David Sacks: I'm not saying we're never going to need regulation. What I'm saying is, it's way too early. We don't even know what we're regulating. We don't know what the standard would be. And what we will do by racing to create a new FDA is destroy American innovation in the sector, and other countries will not slow down. They will beat us to the punch here. Got it. I think there's a middle ground here.
7
Chamath Palihapitiya: So the FDA is a perfect example why: even though the FDA commissioner is appointed by the president, this is a quasi-governmental organization, it's still arm's length away. It has subject matter experts that they hire, and they have many pathways to approval. Some pathways take days, some pathways are months and years, some pathways are for breakthrough innovation, some pathways are for devices. So they have a broad spectrum of ways of arbitrating what can be commercialized and what cannot. Otherwise, my prediction is we will have a very brittle law that will not work. It'll be like the Commerce Department and the FTC trying to gerrymander some old piece of legislation. And then what will happen is it'll get escalated to the Supreme Court, and I think they are the last group of people who should be deciding on this incredibly important topic for society. So what I have been advocating to our leaders, and I will continue to do so, is: don't try to ram this into an existing body. It is so important, it is worth creating a new organization like the FDA and having a framework that allows you to look at a model, look at the counterfactual, judge how good, how important, how disruptive it is, and then release it in the wild appropriately. Otherwise, I think you'll have these ChaosGPT things scale infinitely. Because again, as Friedberg said, and as Sacks said, you're talking about one person that can create this chaos.
8
David Sacks: And it will be the politically connected, it'll be the big donors, who get their projects approved. And the next Mark Zuckerberg who's trying to do his little project in a dorm room somewhere will not know how to do that, will not know how to compete in that highly political process. I think you're mixing a bunch of things together. So first of all,
Chamath Palihapitiya: permissionless innovation happens today in biotech as well. It's just that, as Jason said, when you want to put it on the rails of society and make it available to everybody, you actually have to go and do something substantive. In the negotiation of these drug approvals, it's not some standardized thing; you actually sit with the FDA, and you have to decide: what are our endpoints? What is the mechanism of action? And how will we measure the efficacy of this thing? The idea that you can't do this today in AI is laughable. Yes, you can. And I think that smart people, so for example, if you pit DeepMind's team against OpenAI's team to both agree that a model is good and correct, I bet you they would find a systematic way to test that it's fine.
David Sacks: I just want to point out, okay, so basically, in order to do what you're saying, okay, this entrepreneur who just dropped out of college to do their project, they're gonna have to learn how to go sit with regulators, have a conversation with them, go through some complicated approval process. And you're trying to say that that won't turn into a game of political connections? Of course it will. Of course it will. Which is self-regulation.
9
Chamath Palihapitiya: And so there's like a whole monitoring framework, there's a know-your-customer kind of framework. These are not unfettered things that can just live openly in the wild. So I think it's interesting to acknowledge that Sam, who's deep in the bowels of one of the most important companies, sees both its potential and its danger enough to say that this is how we should think about it, like nuclear weapons. I think that is a very important thing to acknowledge.
Jason Calacanis: And the White House pledged to release draft guidelines for AI safeguards. The National Science Foundation plans to spend $140 million on AI-focused research centers. FTC chair Lina Khan wrote a guest essay in the New York Times calling for AI regulation, citing risks including monopoly consolidation, fraud, extortion, and bias. Any thoughts there, Sacks, about adding regulation to the mix right now? Are we jumping the gun here and going to smother this thing before it even gets correct answers?
David Sacks: serious risk. And the White House also announced that Kamala Harris would be the AI czar for this issue, which I don't think inspires anyone with confidence that they're gonna, you know, get this right. Look, my concern here is, I think we should have conversations about the risks of AI. We should be thinking about that. I think people in the industry need to be thinking about what guardrails we can put on it.
10
David Sacks: Number two, the platforms that are commercializing these tools do have trust and safety teams. In the past, trust and safety has been a euphemism for censorship, which it shouldn't be. But OpenAI has a safety team, and they try to detect when people are using their tech in a nefarious way, and they try to prevent it. Well, no, not on censorship. But I think that there are probably a million people using ChatGPT; they're policing it. Are you willing to abdicate our societal responsibility to OpenAI to do the trust and safety? What I'm saying is, I'd like to see how far we get in terms of the system. Yeah. So you want to see the mistakes, you want to see where the mistakes are, and how bad the mistakes are. I'm saying it's still very early to be imposing regulation. We don't even know what to regulate. So I think we have to keep tracking this to develop some understanding of how it might be misused, how the industry is going to develop safety guardrails. And then you can talk about regulation. Look, you create some new FDA right now, first of all, we know what would happen. Look at the drug process. As soon as the FDA got involved, it slowed down massively. Now it takes years, many years, to get a drug approved. Appropriately so. Yes, but at least with a drug, we know what the gold standard is: you run a double-blind study to see whether it causes harm or whether it's beneficial. We don't know what that standard is for AI yet. We have no idea. You can study in AI what
11
David Sacks: I'm not saying we're never going to need regulation. What I'm saying is, it's way too early. We don't even know what we're regulating. We don't know what the standard would be. And what we will do by racing to create a new FDA is destroy American innovation in the sector, and other countries will not slow down. They will beat us to the punch here. Got it. I think there's a middle ground here.
Jason Calacanis: of self-regulation and thoughtfulness on the part of the people who are providing these tools at scale. To give just one example here, and this tweet is from five minutes ago, so to look at the pace of this: five minutes ago, this tweet came out. A developer who is an AI developer says, AI agents continue to amaze; my GPT-4 coding assistant learned how to build apps with authenticated users. It can build and design a web app, create a back end, handle auth logins, upload code to GitHub, and deploy. He literally, while we were talking, is deploying websites. Now, if this website was a phishing app, or the one that Chamath is talking about, he could make a gazillion different versions of Bank of America, Wells Fargo, et cetera, then find everybody on the internet's email, then start sending different spoofing emails, determine which spoofing emails work, iterate on those, and create a global financial collapse. Now, this sounds insane, but it's happening right now. People get hacked every day at one, two, three percent.
12
David Friedberg: is that, you know, tooling that's, you know, creating entirely new, disruptive systems and models for business and economics and opportunity for so many needs to be regulated away to minimize, you know, what we claim to be some potential downside, when we already have laws that protect us on the other side. So, you know, I just kind of want to also consider that this set of tools creates extraordinary opportunity. We gave one sort of simple example about the opportunity for creators, but we talked about how new business models, new businesses can be started with one or two people. You know, entirely new tools can be built with a handful of people, entirely new businesses. This is an incredible economic opportunity. And again, if the US tries to regulate it, or the US tries to come in and stop the application of models in general, or regulate models in general, you're certainly going to see those models continue to evolve and continue to be utilized in very powerful ways that are going to be advantageous to places outside the US. There are over 180 countries on earth; they're not all going to regulate together. It's been hard enough to get any sort of coordination around financial systems, to get coordination around climate change, to get coordination around anything on a global basis. To try and get coordination around the software models that are being developed, I think, is pretty naive. You don't want to have a global organization. I think you need to have a domestic organization that protects us. And I think Europe will have their own thing, again, FDA versus EMA.
13
Jason Calacanis: Google Bard and, of course, ChatGPT, OpenAI, and Bing; all this stuff's available, and everyone has access to it. Do you need to have some regulation of who has access to those at-scale, powerful tools? Should there be some FDA or NHTSA?
David Sacks: I don't think we know how to regulate it yet. I think it's too early. And I think with the harms that we're speculating about, we're making the AI more powerful than it is. And I believe it will be that powerful. But I think that it's premature to be talking about regulating something that doesn't really exist yet. Take the ChaosGPT scenario. The way that would play out would be, you've got some future incarnation of AutoGPT, and somebody says, okay, AutoGPT, I want you to be, you know, WMD AI, and figure out how to cause, like, a mass destruction event, you know, and then it creates, like, a planning checklist and that kind of stuff. So that's basically the type of scenario we're talking about. We're not anywhere close to that yet. I mean, the ChaosGPT is kind of a joke; it doesn't produce a checklist. I can give an example that would actually be completely plausible. One of the first things on the ChaosGPT checklist was to stay within the boundaries of the law, because it didn't want to get prosecuted.
14
Chamath Palihapitiya: I think in this narrow vertical, I 100% agree with you. Look, I think that AI is gonna eliminate unit testing. It has already done so. It's gonna eliminate most forms of coding. The engineers that you have, all of them will now become 10X engineers. So with fewer of them, or with the same number, you'll be able to do as much or more than you could have before. That's a wonderful thing. And all I'm saying on that specific narrow vertical is you'll see it first rear its head in companies like Accenture and TCS and Cognizant, because they have an immediate incentive to use this tooling to drive efficiency and profitability that's rewarded by shareholders. It'll be less visible in other companies. But what I am saying, though, is that you have to think about the impact on the end markets for a second. And I think that AI does something that other technology layers have never done before, which is supplant human judgment in a closed-loop manner. And I just think it's worth appreciating that there are many systems and many jobs that rely on human judgment, where we deal with error bars and an error rate that a computer will just destroy and blow out of the water. And we will have to ask ourselves, should this class of job exist with its inherent error rate?