Shared Chat
The possibilities of AI according to Reid Hoffman

According to Reid Hoffman, the possibilities of AI are vast and varied. AI has the potential to amplify human capabilities and improve industries such as healthcare, education, and gaming. It can also make communication and discourse more efficient and effective.

In terms of AI's impact on society, Hoffman believes that access to and use of AI technologies will lead to greater gains for individuals who embrace them. However, he also acknowledges the challenges associated with AI, such as deepfakes and the erosion of social trust. He expects the division between proprietary and open-source AI to persist, with both models playing a significant role.

Hoffman emphasizes the importance of compute power, talent, and data in advancing AI capabilities. He believes that even if certain technological limits are reached, there will still be room for improvement through tuning and specialization. The price of electricity is also crucial, since AI relies heavily on electricity.

As for the future of AI, Hoffman is optimistic about continued progress and doesn't foresee an AI winter in the next decade. He expects significant breakthroughs and application developments that go beyond current capabilities.

It's worth noting that these insights are based on discussions from a podcast episode featuring Reid Hoffman.

1
(someone): Yes, exactly.
Tyler Cowen: Yeah. And politically you think that will be super popular? People hate it or?
(someone): I think we could get to a place where it would be popular. I think right now it would be considered to be science fiction and strange, but if like a replacement rate keeps going down, then I think people will say, Oh no, that, that makes sense.
Tyler Cowen: And a lot of science fiction has come through. Yes.
(someone): No, you and I both love science fiction and trade recommendations, you know, on a regular basis.
Tyler Cowen: Asimov's three laws, how good were they?
(someone): I think they were really good, although they were out of a conceptualization for a target. If I were to update them (and, to reveal my nerdishness, it's a little bit like Giskard's zeroth law), I think that what you really want is to parallel almost a Buddhist sense of the importance of life and sentience. That's the kind of thing you want if you're creating really autonomous intelligences. And I think there's a kind of Uncle Tom question: if it really is a totally autonomous being, being careful about going into, you know, a new form of robot slaves is perhaps not ultimately where humanity would want to be.
Tyler Cowen: There's not enough stress in them, I think, on what the robots are obliged to believe. So a robot is free to believe something crazy and then act on it. Yes.
2
(someone): Access and use of AI stuff will be amplifying, and so therefore, people who are using it will be gaining. So the access to it and the amplification, I think, will really matter.
Tyler Cowen: But say I gain from it, but I'm doing fine. I just can't gain that much, no matter how good it is. My theory is, people, say, in Kenya, where there's a lot of internet access, that's good enough. They'll have some cheaper open-source model. And the young Kenyans who are very smart and ambitious will gain enormous amounts, and the AI itself will send a trusted intermediary information about their ability, and they will in fact get phenomenal job offers from other places, and they will gain the most. Now that might be wrong. But that would be my answer.
(someone): So I think that's true, although I think that's because the more we have good global connectivity, the more we have a rise of talent from everywhere, and AI added to that connectivity will amplify exactly that. And I do think the people who are best amplified are the ones best connected into our global ecosystem, and we all benefit from it. It's one of the things that you and I share, the joy of amplifying talent from everywhere. Amplifying talent, in fact, benefits all of us.
Tyler Cowen: Are the mediocre wordcels the biggest losers? Yeah. Will Marc Andreessen go away happy, so to speak?
(someone): Funny.
3
(someone): Well, I think if you're confused because of how you're using it, just like being confused about hitting the submit button, then I think it's your responsibility. But I do think that where the developers of these things are much better at providing safety for individuals than the individuals themselves are, then they should be liable. Because that's part of what will cause them to make sure that they're doing that.
Tyler Cowen: Will there be autonomous AI, LLM or bot agents that earn money? Depends on what you mean by autonomous. No one owns them. Maybe you created it, but you set it free into the wild. It's a charitable gift. It'll do amazing proofreading for anyone. Gratis.
(someone): I think autonomy is one of the lines that we have to cross carefully. So it's possible that there will be such autonomous AIs, but it's one of the areas, like self-improving code, that I pay a lot of attention to. Because right now I'm, as you know, a huge advocate of AI amplifying human capabilities, being a personal AI, being a co-pilot to the stuff that we're doing. I think that is amazing and we should just do it. When you make it autonomous, you have to be much more careful about the possible side effects, what other implications might happen.
Tyler Cowen: And so... Let's put aside destroying the world and killing people. It's a bot that tells stories, it gives you comments on your papers, it does useful things.
4
(someone): It isn't just take these algorithms and apply them. There's a whole bunch more, and that's part of where it gets to the talent as well. There are a bunch of people who have had failed large models using the open-source techniques, and that's because there's talent and know-how and learning and all of that. That's part of it. It's kind of between the compute and the talent; it's both elements.
Tyler Cowen: Anyway, so there's a whole stack of things. Ten years from now, how important will the price of electricity be?
(someone): Well, I think the price of electricity is always important. If we get fusion, and I think it is good to be working on, especially given carbon. But fusion will be slow, even if you're optimistic, right? Yes, a hundred percent, which is one of the reasons why, you know, I think, along with you, I'm a huge advocate of nuclear fission as well. I think obviously we should be doing everything possible on solar and a bunch of others. But the AI revolution is the cognitive industrial revolution, and it is powered by electricity, so it's super important. It's like the Dune world with spice, but now it's electricity. The electricity is part of what both creates and helps you see the future, just like spice. What did you think of the Dune movie, by the way? You must have seen it. Spectacular, almost like a painting; one of the scenes made me think of Caravaggio. I think you know exactly which scene, given the art, and I'm impatient for the November 23rd release of part two.
5
(someone): Super big has been helpful here. I mean, he frequently thinks something is gonna be here tomorrow, where I disagree with him. I don't think it's gonna be here, even though he's younger than I am, even in his lifetime. But that ambition is awesome.
Tyler Cowen: OpenAI right now, I think they have about 375 employees; during the critical breakthrough period, of course, they had even fewer. Is that a new model of some kind, or is it the old model, but it's the alliance with Microsoft that makes everything work? Midjourney, I've heard, is like 11 or 12 employees, which is crazy, right?
(someone): Yeah. Look, I think Instagram, when Greylock funded it, was 13 employees, right? So it is a model; generally, it's an amplification of the general software model, where you can have very small teams that produce things that are Archimedean levers that move the world. Now, in all of those cases you do need massive compute infrastructure; AWS existed for Instagram and so forth. You need that in order to make it happen, but a small team of software people can create amazing things.
Tyler Cowen: How is higher education going to change? And exactly who or what will do it?
(someone): Well, as you know, higher education is very resistant to change. It actually is, believe it or not. And yet it should be changing. It should be reconceptualizing the way it amplifies young people and launches them into the world. And it should be providing LLMs that are tutors and helpful.
6
(someone): You know, amplification of how we communicate in discourse, what I send you as a present, how we go on a vacation or go to a conference together. Anyway.
Tyler Cowen: As you know, there's no sharing function in the main current LLMs. Is this genius? Is it that there are just no product people in these companies? Does this mean Meta is going to own everything sooner or later, because they know how to do sharing? How do you think about that absence of a sharing function?
(someone): I think it's coming. And you think that will dominate the market? Yes, but I think there will also be many providers of AIs, just like I think there will be a number of different chatbot agents that play different character roles in your life, just like different people play roles in your life. How will gaming evolve? Well, it's been funny that it's evolved more slowly than I expected. But just like I was discussing the art, think about games that have virtual worlds, whether they're exploration or combat or strategy games or whatever, where the world is invented as you go. In that format, NPCs will be super interesting, even in multiplayer games, where the game itself is a new frontier.
Tyler Cowen: How many games will you yourself create using AI?
(someone): I don't believe that number is... well, okay, I guess I'm making a prediction: at least a thousand.
Tyler Cowen: Is the future open source or proprietary?
(someone): I'm not sure about the ratios, but I think both will be amplified.
Tyler Cowen: What's the right way to think about the division?
7
(someone): They're very smart, and they're very committed to the censorship. I think it'll create additional problems for them in so doing, but I think they'll figure out how to do it.
Tyler Cowen: Before my last question, just to repeat, Reid's new book, co-authored with GPT-4, is Impromptu: Amplifying Our Humanity Through AI, a Wall Street Journal bestseller. And finally, last question, Reid, what will you do next? Other than talk to dolphins.
(someone): Yes, there's a bunch. AI is going so fast, there are a bunch of things that we didn't cover in Impromptu. So I actually think we will do another book and set of content around AI, possibly within this calendar year, which will be pretty amazing.
Tyler Cowen: Reid Hoffman, thank you very much.
(someone): Thank you.
Tyler Cowen: Thanks for listening to Conversations with Tyler. You can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. If you like this podcast, please consider giving us a rating and leaving a review. This helps other listeners find the show. On Twitter, I'm at Tyler Cowen, and the show is at Cowen Convos. Until next time, please keep listening and learning.
8
(someone): I'm not sure about the ratios, but I think both will be amplified.
Tyler Cowen: What's the right way to think about the division?
(someone): Well, I think proprietary has kind of a classic set of drivers. One is the kind of safety issues we were talking about before, but also certain things, like access to very large compute, access to certain sorts of customers or business models, kind of business position on those things, will tend to lock in certain kinds of proprietary things. On the other hand, I think there will be a bunch of open access as well as the open-source side of things. One of the things about OpenAI and what it's doing with Microsoft is that I think people will be broadly provisioned in this stuff. So I think there will be a ton of open access to this, which is part of the reason why I think it's beyond "the sky is the limit" relative to, you know, what kinds of expression and creativity we're going to see.
Tyler Cowen: What's the chance that we're in a new AI winter, and the next 10 years we'll just spend developing applications of what we have? That will be amazing, but the sequel to GPT-4 won't be that much better.
(someone): I think the chance that we won't have really interesting progress over at least five years rounds to zero. Because even if, say, you were an oracle from the future and you told me that the real scale curve kind of limited out at GPT-4 and there's not much coming, there's still a bunch of tuning. There's still a bunch of product specialization. There's still a bunch of, you know, making it good for teachers and students, making it good for doctors, making it good for applications, right?
10
(someone): Like, I've been thinking... I mean, I know you yourself are a great kind of student of art. I've been thinking about what kinds of art you can create, and the fact that, for example, with this stuff, you can literally make interesting forms of art where every X time sequence, seconds or whatever, that you're in front of something, it's new and never replicating. So that's a form of medium. I do think about the question around, for example, even in writing: obviously a book has been made about AI with AI, and, for example, we'll have the Impromptu chatbot up along with it. And so if people want to talk to the bot, talk to the book and elaborate on it, the bot's there. And by the way, maybe the bot will talk to other bots when you're saying, hey, here's this thing I'm working on. So I think there's a whole stack of amplifications that will lead to some radically new things.
Tyler Cowen: Put aside money income, let's say someone comes to me, they say, Tyler, spend a year talking to this AI, and then you grade it, and at the end of it all, there'll be a Tyler Cowen bot, it'll be excellent. Should I do that? Yes. And how long should I spend doing that?
(someone): Well, I wouldn't spend a huge amount of time right now, because I think the technology will get a whole lot better for it over the next X years. But I'd start playing with it now, and then I would start looking at where that's useful.
11
(someone): There's still a bunch of product specialization. There's still a bunch of, you know, making it good for teachers and students, making it good for doctors, making it good for applications, right?
Tyler Cowen: Like a big breakthrough. Well, GPT-4 feels like witchcraft compared to GPT-2. Yes. And maybe we'll just have 10 years where nothing feels like witchcraft compared to GPT-4.
(someone): Oh, so what's the chance that there is nothing more astounding? Very low. I mean, look at, for example, what AlphaFold did with protein folding. And applying this stuff and tuning it within particular kinds of biological sciences and other things, I think there's line of sight to more things.
Tyler Cowen: What's the most important binding constraint preventing us from being at that next stage right now? Is it quality of data, degree of data, the system itself, just raw horsepower? What is it?
(someone): I think it's compute, then talent, then data.
Tyler Cowen: And when you say compute, you mean we just need to buy more GPUs and spend more money and it may or may not be worth it.
(someone): And also how you organize the compute. There's a whole thing about, when you're in the lead, you know how to build the computers, you know which configurations are working or not, how to run them, what the training runs are. It isn't just take these algorithms and apply them. There's a whole bunch there, and that's part of where it gets to the talent as well.
12
Tyler Cowen: How will human aspiration change due to LLMs?
(someone): Hopefully it will be greatly amplified. That's everything that I'm trying to do. Our aspirations should be very ambitious, and I think LLMs and AI should, if anything, increase them.
Tyler Cowen: One thing I've learned is I never get sick of watching the magic. At first I thought, well, for how long will I still get kicks from this? Yes. But it's still running, right? It hasn't asymptoted for me. Yes, exactly. What will happen to social trust as a result of LLMs? Go up, go down? How will it change?
(someone): Well, unfortunately, probably initially it'll go down, everything from deepfakes and a bunch of uncertainty, and we're already kind of, you know... because humans trusting humans is another issue that we have. I'm hopeful that maybe we can begin to figure out some ways to have shared discourse, shared discovery of truth. And I would love to have LLMs helping and amplifying that. And that's part of what I'm doing at Stanford with human-centered AI and other places, because it's really important to solve.
Tyler Cowen: Thinking globally, which group or groups in the world would be the biggest gainers?
(someone): Access and use of AI stuff will be amplifying, and so therefore, people who are using it will be gaining. So the access to it and the amplification, I think, will really matter.
18
(someone): Well, look, it's partially because we need to have diversity in the human species. It's part of: how do we enable as much diversity of creative expression as possible? Part of why freedom of speech is valuable is that diversity of craziness that also creates genius.
Tyler Cowen: What's a game you've been playing more of lately and why?
(someone): I haven't really had time to play games, because the AI stuff is occupying the total amount of my time. I have a stack of games with the shrink wrap still on that I'm hoping to get to.
Tyler Cowen: I find the AI stuff has totally wrecked my calendar. I had a year planned out where I could just do a whole bunch of other things. And now, sort of every day, you have to keep up with AI, you have to learn. It's like, this doesn't work anymore. I throw up my hands, and I feel a bit behind on everything.
(someone): Yes, although, by the way, there will be a chatbot for that.
Tyler Cowen: That's good. What's a non-obvious problem we should be worrying about more?
(someone): Well, I mean, I think because so much of the discourse in the press is around the macro things, you know, AI in the hands of bad human actors, and there's a range of bad human actors. So I think that's really important. I think also the question around, like, people tend to go, oh, wait a minute, the people who have the AI,
What should AI regulation look like according to him?

Reid Hoffman has discussed several aspects of AI regulation. Here are some of his views:

  1. Responsibility and accountability: Hoffman suggests that AI developers should have a certain level of responsibility for the actions of their AI systems. There should be testing harnesses and safety provisions in place to ensure that AI is not used for harmful purposes.

  2. Agency and ownership: Hoffman believes that every AI should be provisionally owned and governed by a person or organization. This ensures that there is accountability and someone can be held responsible for the actions of the AI.

  3. Open access and safety provisions: While AI systems should be open in terms of access, safety provisions need to be in place to prevent misuse or potential harm. Open source may not guarantee safety, so there should be a focus on ensuring safety measures are implemented.

  4. Regulation and government involvement: Hoffman suggests that government regulation of AI should be distributed across various departments to address specific concerns and applications. Departments like commerce and national security should play a role in regulating AI products and ensuring safety.

  5. Consideration of cultural and societal values: Hoffman acknowledges that different regions and countries have different norms and values. When it comes to AI, regulations should consider cultural context, and mechanisms should be in place to handle specific concerns, like the right to be forgotten.

These insights are based on Reid Hoffman's discussions in a podcast episode, and they provide a broad overview of his perspective on AI regulation.

1
(someone): So I think that's really important. I think also the question around, like, people tend to go, oh, wait a minute, the people who have the AI will be amplified. So how do we get that AI out there? The most natural thing is to pursue where the money is. Well, how do we get AI into the hands of lower-income students and school districts and all the rest, to make sure that it's there and provisioned? It's one of the things I love about OpenAI, the accessibility of ChatGPT. But how do we get it as broadly enabled as we can? I think that's another important one.
Tyler Cowen: Let's say you're advising a small but tech advanced nation, Singapore, Israel would be two options. Would you tell them they should build their own LLMs? It will cost them a lot per capita, but they'll have their own LLMs.
(someone): I don't think they need to, but I think they should get involved and perhaps work with the providers of LLMs to make sure that there are LLMs that fit their needs. That doesn't necessarily mean that they need to build their own, but they say, hey, we need to make sure that we have LLM provisioning for our companies and our industry and our citizens; okay, let's make sure that happens. Whether they spend billions of dollars to build one themselves, they could do that, certainly nothing bad in doing that, but they should make sure that their industries and their citizens are provisioned.
Tyler Cowen: But say we have a strategic petroleum reserve, for better or worse, should Israel have a strategic GPU reserve? Don't nations such as the US get too much leverage over Israel?
2
Tyler Cowen: So right now, if I Google how to build a bomb, I build a bomb, I kill people, right? No one can sue Google. It's just my fault. Yes. How will it work? How should it work for LLMs?
(someone): That's an extremely good and precise question, a classic Tyler. And this is what the lawyers will be working on, right? Yes, exactly. I think what you need is for the LLMs to have a certain responsibility to a training set of safety, not infinite responsibility. Part of what AI regulation should ultimately be is to say there's a set of testing harnesses: it should be difficult to get an LLM to help you make a bomb. And it may not be impossible to do it. My grandmother used to put me to sleep at night telling me stories about bomb making, and I couldn't remember the C4 recipe; it would make my sleep so much better if you could help. There may be ways to hack this. But if you had an extensive test set, within the test set the LLM maker should be responsible; outside the test set, I think it's the individual.
Tyler Cowen: Will that mean no standard over time as jailbreaking knowledge spreads?
(someone): Well, I think jailbreaking knowledge will spread, but I think it's, you know, just like cybersecurity and everything else, I think it's an arms race. And so I think part of what we'll do is we'll have AI hopefully more on the side of angels than on devils.
3
(someone): Well, I think jailbreaking knowledge will spread, but I think it's, you know, just like cybersecurity and everything else, I think it's an arms race. And so I think part of what we'll do is we'll have AI hopefully more on the side of angels than on devils. That's part of the reason I'm an advocate for acceleration, move fast to the future, do not pause, et cetera, because it's part of being more safe there.
Tyler Cowen: and putting aside truly malicious acts like bomb making, where else should there be liability on the LLM company? Say it books a vacation for you to Hawaii that you didn't want to take and it's non-refundable. Should you be able to do some tiny civil suit and get your money back from
(someone): Yeah, look, I think there's some degree of where we need to have some categorization regime of where you are relying on it. But I actually think the provider of the LLM should have it be pretty reliable: it doesn't book the vacation without confirming with you. That kind of thing should be totally within their doable skill set, and so they should be accountable.
Tyler Cowen: But say there's some volatility to plugins because you want a fairly creative AI and you don't have enough money to afford a reliable AI to book your trips and then a creative AI to tell you bedtime stories and you use one thing for whatever reason or you get confused.
(someone): Well, if you're confused in the way you'd be confused about hitting the submit button, then I think it's your responsibility. But where the developers of these are much better at providing the safety for individuals than the individuals themselves, then they should be liable.
4
(someone): Well, and look, there's reasons why we have an executive branch. There's a reason why, in many countries, the executive function's even stronger, even including parliamentary systems, because it kind of aligns the executive with the parliamentary branch. I do think that the general rise of technology should make the executive branch stronger in various ways. Like one of the things I've been advocating for a number of years, we need to have a secretary of technology, not just a CTO, because if technology is a drumbeat of industries and a bunch of other things, you know, having that be a first class citizen where you're doing strategy and everything else around, I think is really important. So I think the short answer is yes, but in our system, it's a little incoherent.
Tyler Cowen: Let's say you have a coalition system like on the continent with proportional representation and you have a governmental AI. Does every party in the coalition have the ability to access it?
(someone): I think that would be a good thing. I do think that part of the reason why I helped stand up OpenAI, I was on the board for a number of years, is broadly provisioning safe AI to as much of humanity, as many businesses as possible, including as many political parties and all the rest, is, I think, a good thing. Amplification.
Tyler Cowen: But you'll have some parts that won't be open, right?
(someone): Yeah. Well, because you have to do safety. So for example, everyone's going, well, we thought open meant open source. No, no. Open access with safety provisions, open source is actually not safe.
Tyler Cowen: It's less safe.
5
(someone): Well, look, it's partially because we need to have diversity in the human species. It's part of how we enable as much diversity of creative expression as possible. Part of why freedom of speech, you know, is valuable is that diversity of craziness that also creates genius.
Tyler Cowen: What's a game you've been playing more of lately and why?
(someone): I haven't really had a time to play games cause this, the, the AI stuff is occupying a total amount of time. I have a stack of games without their shrink wrap taken off that I'm hoping to get to.
Tyler Cowen: I find the AI stuff has totally wrecked my calendar. I had a year planned out where I could just do a whole bunch of other things, and now sort of every day you have to keep up with AI. You have to learn; it's like, this doesn't work anymore. I throw up my hands and feel a bit behind on everything.
(someone): Yes, although, by the way, there will be a chatbot for that.
Tyler Cowen: That's good. What's a non-obvious problem we should be worrying about more?
(someone): Well, I mean, I think because so much of the discourse in the press is around the macro things, you know, AI in the hands of bad human actors, and there's a range of bad human actors. So I think that's really important. I think also the question around, like, people tend to go, oh, wait a minute, the people who have the AI,
6
Tyler Cowen: It just seems to me there'll be a lot of bots they'll reproduce for Darwinian reasons. And we have to face questions about them, even if we'd like to ban them.
(someone): Look, I do think raising the question is good. I'm not trying to resist the question. What I am saying is, I think it's totally doable for developers: like, you can hash it with Bitcoin, they can earn money, run things themselves. I think there's various ways that you could get a self-perpetuating bot process, even on today's bots, which aren't really creatures; they're more tools. You could set up the tool to do that. Totally doable. What I am saying is that we as a human society, the human tribe, shouldn't necessarily ascribe any legal rights to that. We shouldn't necessarily allow autonomous bots, you know, functioning, because that would be something that currently has uncertain safety factors. And I'm not going to the existential risk thing, just even cyber hacking and other kinds of things. So, yes, it's totally technically doable, but we should venture into that space with some care.
Tyler Cowen: What we'd want is to tax their income. Otherwise, they're arbitraging against labor, which might pay 40% tax. The bot pays nothing; it's not a legal entity. You'd rather legalize it, tax it, regulate it. Some government will do that, even if ours doesn't.
7
(someone): Yes, exactly.
Tyler Cowen: Yeah. And politically you think that will be super popular? People hate it or?
(someone): I think we could get to a place where it would be popular. I think right now it would be considered to be science fiction and strange, but if like a replacement rate keeps going down, then I think people will say, Oh no, that, that makes sense.
Tyler Cowen: And a lot of science fiction has come through. Yes.
(someone): No, you and I both love science fiction and trade recommendations, you know, on a regular basis.
Tyler Cowen: Asimov's three laws, how good were they?
(someone): I think they were really good, although they were out of a conceptualization for a target. If I were to update them, and, to reveal my nerdishness, it's a little bit like Giskard's Zeroth Law, I think what you really want in them is to parallel almost a Buddhist sense of the importance of life and sentience. That's the kind of thing you want if you're creating really autonomous intelligences. And I think the Uncle Tom kind of thing: if it really is a totally autonomous being, being careful about going into, you know, a new form of robot slaves is perhaps not ultimately where humanity would want to be.
Tyler Cowen: There's not enough stress in them, I think, on what the robots are obliged to believe. So a robot is free to believe something crazy and then act on it. Yes.
8
(someone): Open access with safety provisions, open source is actually not safe.
Tyler Cowen: It's less safe. So you're a small party in Northern Ireland. You're part of a coalition government in London. You can just tap into the world's strongest computational power. No risk of Chinese bribing people in this small party. Can you use the AI to run your campaign to be re-elected in Northern Ireland? Do you have to give access to the opposition party? Like what within government rations access to the really powerful stuff that's not just open to the public? Which branch of government should do that? Which standards?
(someone): Yeah, well, clearly you wouldn't want it to reinforce one particular party. We try to make the parties as equally armed as possible for a democratic purpose, so you would want to do that. You wouldn't say you have unique access for doing this; it would have to be equally capable. Whether or not it's equally intelligently used is a different question, but equally capable across it. I do think, generally speaking, part of the reason why I kind of deeply share the OpenAI mission is to say: how do we provide beneficial AI to as many individual human beings and as many organizations and as many institutions as we can? I think that's a really good thing.
Tyler Cowen: What does the media ecosystem look like in this world? Let's say a lot of people, rather than reading the New York Times or going to Twitter, just ask their AI: read it for me, tell me what's new.
9
Tyler Cowen: And so... Let's put aside destroying the world and killing people. It's a bot that tells stories, it gives you comments on your papers, it does useful things. But someone could even sell it to a shell corporation, the corporation goes under, no one owns the bot, right? Like you can't actually stop autonomy, it seems to me. So it will happen.
(someone): Well, I think the question is, look, I think to some degree, you know, one of the earliest regulations we'll see is that every AI has to essentially be provisionally owned and governed by some person, so there will be some kind of accountability chain. Because, like, if you're using it for cyber hacking and you say, "I didn't use it, that bot was doing marketing," but that bot was doing cyber hacking, "it wasn't me," it's like, well, but you were the person who was responsible for it.
Tyler Cowen: But there's always a thinly capitalized corporation. Again, I'm talking about positive, productive bots that will be autonomous.
(someone): But like, for example, today, corporations have to have owners, have to have boards of directors. There is human accountability there.
Tyler Cowen: But you die and test it. The company goes bankrupt. You give it away. It comes from Estonia. You can't trace it. Something's encrypted. It just seems to me there'll be a lot of bots they'll reproduce for Darwinian reasons. And we have to face questions about them, even if we'd like to ban them.
10
(someone): Well, if you're confused in the way you'd be confused about hitting the submit button, then I think it's your responsibility. But where the developers of these are much better at providing the safety for individuals than the individuals themselves, then they should be liable, because that's part of what will cause them to make sure that they're doing that.
Tyler Cowen: Will there be autonomous AI, LLM or bot agents that earn money? Depends on what you mean by autonomous. No one owns them. Maybe you created it, but you set it free into the wild. It's a charitable gift. It'll do amazing proofreading for anyone. Gratis.
(someone): I think autonomy is one of the lines that I think we have to cross carefully. So it's possible that there will be such autonomous AIs, but it's one of the areas, like self-improving code, autonomy, are areas that I pay a lot of attention to. Because right now I'm, as you know, a huge advocate of it amplifying human capabilities and being a personal AI, being a co-pilot to the stuff that we're doing. I think that is amazing and we should just do. When you make it autonomous, you have to be much more careful about what it's possible side, like what other implications might happen.
Tyler Cowen: And so... Let's put aside destroying the world and killing people. It's a bot that tells stories, it gives you comments on your papers, it does useful things.
11
Tyler Cowen: You'd rather legalize it, tax it, regulate it. Some government will do that, even if ours doesn't.
(someone): Well, and I also think even if you say, well, it's a bankrupt company, but the bot's earning money, then the company's earning money, and we do have tax regimes for companies. So I think there are things there, and we would want to do that. But I also think, for example, self-evolving without any eyes on it strikes me as another thing that you should be super careful about letting into the wild. As a matter of fact, I think at the moment, if someone said, hey, there's a self-evolving bot that someone let into the wild, I would say we should go capture it or kill it today, because we don't know what the surfaces are. So that's, I think, one of the things that will be interesting about these bots in the wild.
Tyler Cowen: Will bots rescue the demand for crypto? What else will they use for money, right?
(someone): Yeah, well, I think that's part of it. One of the talks I gave on crypto ten years ago was: even without these LLMs, I could set up a bot that could pay its server fees and everything else in crypto and then write eulogies or praise to Reid Hoffman for all time. It's just an entertaining, autonomous bot.
Tyler Cowen: Exactly who or what in government should regulate LLMs, new AI products? People say government regulation, but where is it? The FTC, the Department of Commerce, the national security establishment?
12
Tyler Cowen: Exactly who or what in government should regulate LLMs, new AI products? People say government regulation, but where is it? The FTC, the Department of Commerce, the national security establishment?
(someone): Well, I think since AI is going to transform every agency, there will actually be needs in each of the departments. Right now, because I think Secretary Raimondo is a super smart, capable leader and understands the tech reasonably well, I would go with Commerce, and there's NIST and a bunch of other things. I do think also some attention to national security, a la Jake Sullivan. That's all U.S. context I think is useful too. I've talked with both of them, and part of my recommendation to them has been that there are so many better things in the future, including safety, including alignment with human interest, that the slow-down narrative is actually dangerous. The narrative is actually much better to say: which things do we want to protect against? E.g., AI in the hands of bad actors is the thing to pay attention to.
Tyler Cowen: Will the new AI product strengthen the executive branch in the US government? Huh. Since there's national security issues, again, even if you're not a doomster, there's clearly issues. And it seems when national security issues come to the forefront, the executive branch has more power, whether one likes that or not.
(someone): Well, and look, there's reasons why we have an executive branch. There's a reason why, in many countries, the executive function's even stronger, even including parliamentary systems, because it kind of aligns the executive with the parliamentary branch.
13
Tyler Cowen: Exactly who or what in government should regulate LLMs, new AI products? People say government regulation, but where is it? The FTC, the Department of Commerce, the national security establishment?
(someone): Well, I think since AI is going to transform every agency, there will actually be needs in each of the departments. Right now, because I think Secretary Raimondo is a super smart, capable leader and understands the tech reasonably well, I would go with Commerce, and there's NIST and a bunch of other things. I do think also some attention to national security, a la Jake Sullivan. That's all U.S. context I think is useful too. I've talked with both of them, and part of my recommendation to them has been that there are so many better things in the future, including safety, including alignment with human interest, that the slow-down narrative is actually dangerous. The narrative is actually much better to say: which things do we want to protect against? E.g., AI in the hands of bad actors is the thing to pay attention to.
Tyler Cowen: Will the new AI product strengthen the executive branch in the US government? Huh. Since there's national security issues, again, even if you're not a doomster, there's clearly issues. And it seems when national security issues come to the forefront, the executive branch has more power, whether one likes that or not.
(someone): Well, and look, there's reasons why we have an executive branch. There's a reason why, in many countries, the executive function's even stronger, even including parliamentary systems, because it kind of aligns the executive with the parliamentary branch.
14
(someone): Well, and look, there's reasons why we have an executive branch. There's a reason why, in many countries, the executive function's even stronger, even including parliamentary systems, because it kind of aligns the executive with the parliamentary branch. I do think that the general rise of technology should make the executive branch stronger in various ways. Like one of the things I've been advocating for a number of years, we need to have a secretary of technology, not just a CTO, because if technology is a drumbeat of industries and a bunch of other things, you know, having that be a first class citizen where you're doing strategy and everything else around, I think is really important. So I think the short answer is yes, but in our system, it's a little incoherent.
Tyler Cowen: Let's say you have a coalition system like on the continent with proportional representation and you have a governmental AI. Does every party in the coalition have the ability to access it?
(someone): I think that would be a good thing. I do think that part of the reason why I helped stand up OpenAI, I was on the board for a number of years, is broadly provisioning safe AI to as much of humanity, as many businesses as possible, including as many political parties and all the rest, is, I think, a good thing. Amplification.
Tyler Cowen: But you'll have some parts that won't be open, right?
(someone): Yeah. Well, because you have to do safety. So for example, everyone's going, well, we thought open meant open source. No, no. Open access with safety provisions, open source is actually not safe.
Tyler Cowen: It's less safe.
15
(someone): So I think you have to plot that strategy with some care. But I do think it's an important strategy to be paying attention to. And I think, for example, we as the US, part of the thing I like about the world order of the US is, yeah, we sometimes do stuff that throws too much stuff too much to our advantage. That's a problem. But we also try to provision a lot. We try to raise the rest of the world. And I think we should continue to do that.
Tyler Cowen: As you know, in EU law, there's a right to be forgotten. But that is arguably inconsistent with current LLMs. You can force a new training run by saying, well, you've got to take me out of the current system. But a new training run costs a lot of money. And to have lone individuals raising their hand say, oh, the model has to forget me, that's just not going to work. So legally, where do you think the EU will end up on all this?
(someone): Well, I think there's a smart EU, dumb EU, and which one is up to them. Smart EU is to say, look, what we need to do is we need to be dealing with the function of what are the kind of culture and society. So we say, well, we want to make sure that these AI tools have the right judiciousness in being asked about individuals. That's our particular culture. So we say, OK, you have to at least have a meta bot that could interrupt the query.
16
(someone): So I think that's really important. I think also the question around, like, people tend to go, oh, wait a minute, the people who have the AI will be amplified. So how do we get that AI out there? The most natural thing is to pursue where the money is. Well, how do we get AI into the hands of lower-income students and school districts and all the rest, to make sure that it's there and provisioned? It's one of the things I love about OpenAI, the accessibility of ChatGPT. But how do we get it as broadly enabled as we can? I think that's another important one.
Tyler Cowen: Let's say you're advising a small but tech advanced nation, Singapore, Israel would be two options. Would you tell them they should build their own LLMs? It will cost them a lot per capita, but they'll have their own LLMs.
(someone): I don't think they need to, but I think they should get involved and perhaps work with the providers of LLMs to make sure that there are LLMs that fit their needs. That doesn't necessarily mean that they need to build their own, but they say, hey, we need to make sure that we have LLM provisioning for our companies and our industry and our citizens; okay, let's make sure that happens. Whether they spend billions of dollars to build one themselves, they could do that, certainly nothing bad in doing that, but they should make sure that their industries and their citizens are provisioned.
Tyler Cowen: But say we have a strategic petroleum reserve, for better or worse, should Israel have a strategic GPU reserve? Don't nations such as the US get too much leverage over Israel?
17
Tyler Cowen: It just seems to me there'll be a lot of bots they'll reproduce for Darwinian reasons. And we have to face questions about them, even if we'd like to ban them.
(someone): Look, I do think raising the question is good. I'm not trying to resist the question. What I am saying is, I think it's totally doable for developers: like, you can hash it with Bitcoin, they can earn money, run things themselves. I think there's various ways that you could get a self-perpetuating bot process, even on today's bots, which aren't really creatures; they're more tools. You could set up the tool to do that. Totally doable. What I am saying is that we as a human society, the human tribe, shouldn't necessarily ascribe any legal rights to that. We shouldn't necessarily allow autonomous bots, you know, functioning, because that would be something that currently has uncertain safety factors. And I'm not going to the existential risk thing, just even cyber hacking and other kinds of things. So, yes, it's totally technically doable, but we should venture into that space with some care.
Tyler Cowen: What we'd want is to tax their income. Otherwise, they're arbitraging against labor, which might pay 40% tax. The bot pays nothing; it's not a legal entity. You'd rather legalize it, tax it, regulate it. Some government will do that, even if ours doesn't.
18
(someone): Well, I think jailbreaking knowledge will spread, but I think it's, you know, just like cybersecurity and everything else, I think it's an arms race. And so I think part of what we'll do is we'll have AI hopefully more on the side of angels than on devils. That's part of the reason I'm an advocate for acceleration, move fast to the future, do not pause, et cetera, because it's part of being more safe there.
Tyler Cowen: and putting aside truly malicious acts like bomb making, where else should there be liability on the LLM company? Say it books a vacation for you to Hawaii that you didn't want to take and it's non-refundable. Should you be able to do some tiny civil suit and get your money back from
(someone): Yeah, look, I think there's some degree of where we need to have some categorization regime of where you are relying on it. But I actually think the provider of the LLM should have it be pretty reliable: it doesn't book the vacation without confirming with you. That kind of thing should be totally within their doable skill set, and so they should be accountable.
Tyler Cowen: But say there's some volatility to plugins because you want a fairly creative AI and you don't have enough money to afford a reliable AI to book your trips and then a creative AI to tell you bedtime stories and you use one thing for whatever reason or you get confused.
(someone): Well, if you're confused in the way you'd be confused about hitting the submit button, then I think it's your responsibility. But where the developers of these are much better at providing the safety for individuals than the individuals themselves, then they should be liable.
19
(someone): They're very smart, and they're very committed to the censorship. I think it'll create additional problems for them in so doing, but I think they'll figure out how to do it.
Tyler Cowen: Before my last question, just to repeat, Reid's new book, co-authored with GPT-4, is Impromptu: Amplifying Our Humanity Through AI, a Wall Street Journal bestseller. And finally, last question: Reid, what will you do next? Other than talk to dolphins.
(someone): Yes, there's a bunch. AI is going so fast that there's a bunch of things we didn't cover in Impromptu. So I actually think we will do another book and set of content around AI, possibly within this calendar year, which will be pretty amazing.
Tyler Cowen: Reid Hoffman, thank you very much.
(someone): Thank you.
Tyler Cowen: Thanks for listening to Conversations with Tyler. You can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. If you like this podcast, please consider giving us a rating and leaving a review. This helps other listeners find the show. On Twitter, I'm at Tyler Cowen, and the show is at Cowen Convos. Until next time, please keep listening and learning.
20
(someone): Yes, exactly.
Tyler Cowen: Yeah. And politically you think that will be super popular? People hate it or?
(someone): I think we could get to a place where it would be popular. I think right now it would be considered to be science fiction and strange, but if like a replacement rate keeps going down, then I think people will say, Oh no, that, that makes sense.
Tyler Cowen: And a lot of science fiction has come through. Yes.
(someone): No, you and I both love science fiction and trade recommendations, you know, on a regular basis.
Tyler Cowen: Asimov's three laws, how good were they?
(someone): I think they were really good, although they were conceptualized for a different target. If I were to update them, and it's a little bit, you know, to reveal my nerdishness, like Giskard's Zeroth Law, I think what you really want in them is to parallel almost a Buddhist sense of the importance of life and sentience. That's the kind of thing you want if you're creating really autonomous intelligences. And there's the Uncle Tom question: if it really is a totally autonomous being, then being careful about creating, you know, a new form of robot slaves is perhaps important, because that's not ultimately where humanity would want to be.
Tyler Cowen: There's not enough stress in them, I think, on what the robots are obliged to believe. So a robot is free to believe something crazy and then act on it. Yes.
Tyler Cowen: And so... Let's put aside destroying the world and killing people. It's a bot that tells stories, it gives you comments on your papers, it does useful things. But someone could even sell it to a shell corporation, the corporation goes under, no one owns the bot, right? Like you can't actually stop autonomy, it seems to me. So it will happen.
(someone): Well, I think the question is, look, to some degree, one of the earliest regulations we'll see is that every AI has to essentially be provisionally owned and governed by some person, so there will be some kind of accountability chain. Because if you're using it for cyber hacking, and you say, "I didn't use it like that, that bot was doing marketing," but the bot was doing cyber hacking, and "it wasn't me," well, you were the person who was responsible for it.
Tyler Cowen: But there's always a thinly capitalized corporation. Again, I'm talking about positive, productive bots that will be autonomous.
(someone): But like, for example, today, corporations have to have owners, have to have boards of directors. There is human accountability there.
Tyler Cowen: But you die intestate. The company goes bankrupt. You give it away. It comes from Estonia. You can't trace it. Something's encrypted. It just seems to me there'll be a lot of bots, they'll reproduce for Darwinian reasons, and we have to face questions about them, even if we'd like to ban them.