Shared Chat
How to define consciousness?

Consciousness is a complex and debated topic. There are various theories and perspectives on how to define it.

One perspective is panpsychism, which suggests that consciousness is a fundamental and ubiquitous feature of the physical world. According to panpsychism, even simple entities like particles may have some degree of consciousness.

Integrated Information Theory (IIT) is another theory, which suggests that consciousness arises in systems with a high level of integrated information. According to IIT, the brain's large amount of integrated information at the macro level is the basis for consciousness.

There is ongoing research and debate about the relationship between consciousness and intelligence. While consciousness is often considered a prerequisite for certain capacities, such as experiencing pleasure or suffering, the exact connection between consciousness and intelligence is not yet fully understood.

The definition of consciousness varies among individuals and philosophical perspectives. Some define it in terms of being a subject of experiences, while others see it as a fundamental property of the physical world.

Overall, the nature of consciousness and its exact definition continue to be areas of active exploration and debate in philosophical and scientific communities.

(someone): It is interesting that some neuroscientists are beginning to take panpsychism seriously. You find consciousness even in very simple systems. So, for example, the integrated information theory of consciousness, a lot of neuroscientists are taking seriously. Actually, this new book by Christof Koch just came in, The Feeling of Life Itself: Why Consciousness Is Widespread but Can't Be Computed. He basically endorses a panpsychist view where you get consciousness with the degree of integrated information processing in a system, and even very, very simple systems, like a couple of particles, will have some degree of this. So he ends up with some degree of consciousness in all matter. And the claim is that this theory can actually explain a bunch of stuff about the connection between the brain and consciousness. Now, that's very controversial. I think it's very, very early days in the science of consciousness. It's interesting that it's not just philosophy that might lead you in this direction; there are ways of thinking quasi-scientifically that lead you there too.
Lex Fridman: But maybe it's different than panpsychism. What do you think?
(someone): We don't really find out about the intrinsic nature of things. Physics tells us about how a particle relates to other particles and interacts. It doesn't tell us what the particle is in itself. That was Kant's thing-in-itself. So here's a view: the nature in itself of a particle is something mental. A particle is actually a little conscious subject, with properties of its consciousness that correspond to its physical properties. The laws of physics are actually ultimately relating these properties of conscious subjects. So on this view, a Newtonian world would actually be a vast collection of little conscious subjects at the bottom level, way, way simpler than we are, without free will or rationality or anything like that. But that's what the universe would be like. Of course, that's a vastly speculative view, and there's no particular reason to think it's correct. Furthermore, with non-Newtonian physics, say a quantum mechanical wave function, suddenly it starts to look different. It's not a vast collection of conscious subjects. Maybe there's ultimately one big wave function for the whole universe, and corresponding to that might be something more like a single conscious mind whose structure corresponds to the structure of the wave function. People sometimes call this cosmopsychism, and now of course we're in the realm of extremely speculative philosophy. There's no direct evidence for this, but if you want a picture of what that universe would be like, think: a giant cosmic mind with enough richness and structure in it to replicate all the structure of physics.
Lex Fridman: I think therefore I am, at the level of particles, and with quantum mechanics, at the level of the wave function, in a panpsychist view. It's kind of an exciting, beautiful possibility, of course, way out of reach of physics currently.
(someone): It's a difficult question. I do see dealing with consciousness as an interdisciplinary task, between something more experimental, which has to do with the ongoing project of trying to work out what people call the neural correlates of consciousness: what kinds of physical brain activity correspond to conscious experience. That's one part of it, but I think there's essentially also a more theoretical question, the why question: why do those kinds of brain activity go along with certain kinds of conscious experience? Because consciousness is not publicly observable, I don't think you can answer that why question with an experiment. But the two have to go hand in hand. One of the theories I'm attracted to is the integrated information theory, according to which we find consciousness at the level at which there is the most integrated information, and they try to give a mathematically precise definition of that. So on that view, probably this cup of tea isn't conscious, because there's probably more integrated information in the molecules making up the tea than there is in the liquid as a whole. But what is distinctive about the brain is that there's more integrated information in the system than there is in individual neurons. So that's why they claim that that's the basis of consciousness at the macro level. Now, I like some features of this theory, but they do talk about degrees of consciousness. They want to say there are gradations, and I'm not sure I can conceptually make sense of that.
(someone): But in the standard model of elementary particles, we have a quantum field theory that seems totally adequate for all terrestrial phenomena. The only things it doesn't explain are, first of all, the details of gravity if you were to probe it at extremes of curvature, at incredibly small distances; it doesn't explain dark matter; it doesn't explain black hole singularities. But these are all very exotic things, very far removed from our life on earth. So for Penrose to be right, he needs these phenomena to somehow affect the brain. He needs the brain to contain antennae that are sensitive to this as-yet-unknown physics. And then he needs a modification of quantum mechanics; he needs quantum mechanics to actually be wrong. What he wants is what he calls an objective reduction mechanism, or an objective collapse: the idea that once quantum states get large enough, they somehow spontaneously collapse. This is an idea that lots of people have explored; there's something called the GRW proposal that tries to say something along those lines. And these are theories that actually make testable predictions, which is a nice feature. The very fact that they're testable means that in the coming decades we may well be able to test these theories and show that they're wrong. We may be able to test some of Penrose's ideas: if not his ideas about consciousness, then at least his ideas about an objective collapse of quantum states. And people, like Dirk Bouwmeester, have actually been working to try to do these experiments.
(someone): So he focuses on the core theory, which is the name for the standard model of particle physics plus the weak-field limit of general relativity. We can't totally bring quantum mechanics and relativity together, but the circumstances in which we can't are situations of very high gravity, for example, when you're about to go into a black hole. In terrestrial circumstances, we can bring them together in the core theory. And then Sean wants to say, well, we can be very confident that the core theory is correct, and so if there were libertarian free will in the brain, the core theory would be wrong. This is something I'm not sure about, and I'm still thinking about, and I'm learning from my discussion with Sean. But I'm still not totally clear what it could be. Suppose we did discover strong emergence in the brain, whether it's free will or something else. Perhaps what we would say is not that the core theory is wrong, but that the core theory is correct in its own terms, namely in capturing the causal capacities of particles and fields. But then it's a further assumption that those are the only things running the show. Maybe there are also fundamental causal capacities associated with systems. And then, if we discover this strong emergence, when we work out what happens in the brain, we have to look to the core theory, the causal capacities of particles and fields, and we have to look to what we know about the strongly emergent causal capacities of systems, and maybe they co-determine what happens in the system.
(someone): Consciousness is a fundamental and ubiquitous feature of the physical world.
Lex Fridman: like a law of physics, what should we be imagining? What do you think the different flavors of how that actually takes shape in the context of what we know about physics and science and the universe?
(someone): So in the simplest form of it, the fundamental building blocks of reality, perhaps electrons and quarks, have incredibly simple forms of experience, and the very complex experience of the human or animal brain is somehow rooted in or derived from these very simple forms of experience at the level of basic physics. But I mean, maybe the crucial bit about the kind of panpsychism I defend, what it does is it takes the standard approach to the problem of consciousness and turns it on its head. So the standard approach is to think we start with matter and we think, how do we get consciousness out of matter? So I don't think that problem can be solved for reasons I've kind of hinted at. We could maybe go into more detail. But the panpsychist does it the other way around. They start with consciousness and try to get matter out of consciousness. So the idea is basically at the fundamental level of reality, there are just networks of very simple conscious entities. But these conscious entities, because they have very simple kinds of experience, they behave in predictable ways. Through their interactions, they realize certain mathematical structures. And then the idea is those mathematical structures just are the structures identified by physics. So when we think about these simple conscious entities in terms of the mathematical structures they realize, we call them particles, we call them fields, we call their properties mass, spin, and charge.
(someone): There's, first of all, the claim that there is some quantum gravity theory that would involve this kind of uncomputability. Most people who study quantum gravity would not agree with that. They would say that what little we know about quantum gravity, from the AdS/CFT correspondence, for example, has been very much consistent with the broad idea of nature being computable. But supposing he's right about that, what most physicists would say is that whatever new phenomena there are in quantum gravity, they might be relevant at the singularities of black holes, they might be relevant at the big bang, but they are plainly not relevant to something like the brain, which operates at ordinary temperatures with ordinary chemistry. They would say that the fundamental physics underlying the brain we've pretty much completely known for generations now, because quantum field theory lets us sort of parameterize our ignorance. Sean Carroll has made this case in great detail: whatever new effects are coming from quantum gravity, they are sort of screened off by quantum field theory. And this brings us to the whole idea of effective theories.
(someone): and that the very complex experience of the human or animal brain is somehow rooted in or derived from this much simpler consciousness at the level of fundamental physics. That's a theory I would justify on the grounds that it can account for this datum of consciousness that we are immediately aware of in our experience, in a way that I don't think other theories can. If you asked me to contrast that with Daniel Dennett, I think he would just say there is no such datum. Dennett says the data for a science of consciousness are what he calls heterophenomenology, which is specifically defined as what we can access from the third-person perspective, including what people say. But crucially, we're not relying on their testimony as evidence for some unobservable realm of feelings and experiences. We're just treating what they say as a datum of public observation that we can account for in terms of underlying mechanisms.
Lex Fridman: But I feel like there's a deeper view of what consciousness is. You have a very clear, almost physics-like view of consciousness, and we'll talk quite a bit about panpsychism. He, I think, has a view that consciousness is almost a side effect of this massively parallel computation going on in our brain: the brain has a model of the world, it's taking in perceptions, and it's constantly weaving multiple stories about that world, integrating the new perceptions. And the multiple stories are somehow, it's like a Google Doc, collaborative editing, and that collaborative editing is the actual experience of what we think of as consciousness.
Lex Fridman: panpsychist theories have truth to them. So to me, it is physics. You said physics kind of emerges, by which you mean the basic laws of physics as we currently know them, the standard model, quantum mechanics, general relativity, emerge from the base consciousness layer. That's what you mean.
(someone): Yeah, so maybe the way I phrased it made it sound like these things are more separate than they are. What I was trying to address was a common misunderstanding of panpsychism: that it's a sort of dualistic theory, the idea being that particles have their physical properties, like mass, spin, and charge, and also these other funny consciousness properties. The physicist Sabine Hossenfelder had a blog post critiquing panpsychism, maybe a couple of years ago now, that got a fair bit of traction, and she was interpreting panpsychism in this way. And then her thought was: well, look, if particles had these funny consciousness properties, it would show up in our physics; the standard model of particle physics would make false predictions, because its predictions are based wholly on the physical properties. If there were also these consciousness properties, we'd get different predictions. But that's a misunderstanding of the view. The view is not that there are two kinds of properties; it's that mass, spin, and charge are themselves forms of consciousness. How do we make sense of that? Because actually, when you look at what physics tells us, it's really just telling us about behavior, about what stuff does.
(someone): OK, so we don't really have very good definitions. For intelligence, we don't have a rigorous definition. We know somewhat how to measure it; it's called IQ, or the g factor, right? And we're beginning to build it in a narrow sense, like AlphaGo for Go, and Watson, and Google cars, and Uber cars, and all of that. That's still narrow AI. And some people are thinking about artificial general intelligence. But roughly, as we said before, it's something to do with the ability to learn and to adapt to new environments. But that, as I also said, is radically different from experience. And it's very unclear: if you build a machine that has AGI, it's not at all a priori clear that this machine will have consciousness. It may or may not.
(someone): So let's ask it the other way. If you were to try to build an artificial general intelligence system, do you think figuring out how to build artificial consciousness would help you get to AGI? Or, put another way, do you think intelligence requires consciousness?
(someone): In humans, it goes hand-in-hand. In humans, or I think in biology generally, consciousness and intelligence go hand-in-hand. That is because the brain evolved to be highly complex, and complexity, via integrated information theory, is ultimately what is closely tied to consciousness. Ultimately, it's causal power upon itself.
(someone): And if you can't explain one thing in terms of the previously fundamental things, you have to expand. Maybe something like this happened with Maxwell: he ended up with fundamental principles of electromagnetism and took charge as fundamental, because it turned out that was the best way to explain it. So I at least take seriously the possibility that something like that could happen with consciousness: take it as a fundamental property, like space, time, and mass, and instead of trying to explain consciousness wholly in terms of the evolution of space, time, mass, and so on, take it as a primitive and then connect it to everything else by some fundamental laws. There's this basic problem that the physics we have now looks great for solving the easy problems of consciousness, which are all about behavior. It gives us a complicated structure and dynamics that tell us how things are going to behave, what kind of observable behavior they'll produce, which is great for the problems of explaining how we walk and how we talk and so on. Those are the easy problems of consciousness. But the hard problem, this problem about subjective experience, just doesn't look like that kind of problem about structure, dynamics, how things behave. So it's hard to see how existing physics is going to give you a full explanation of that.
Lex Fridman: Certainly, trying to get a physics view of consciousness, yes. There has to be a connecting point, and it could be at the very axiomatic, very beginning level. But first of all, there's a crazy idea that sort of everything has properties of consciousness. At that point, the word consciousness is already far beyond the reach of our current understanding, because, at least for me, maybe you can correct me, it's far from the experiences that I have as a human being.
(someone): And so we see brains of different complexity, and we think, well, they have different levels of consciousness; they're capable of different experiences. But now we confront a world where we're beginning to engineer intelligence, and it's radically unclear whether the intelligence we're engineering has anything to do with consciousness and whether it can experience anything. Because, fundamentally, what's the difference? Intelligence is about function. Intelligence, no matter exactly how you define it, is adaptation to new environments, being able to learn and quickly understand the setup of this, and what's going on, and who the actors are, and what's going to happen next. That's all about function. Consciousness is not about function. Consciousness is about being; it's in some sense much more fundamental. You can see this in several cases. You can see it, for instance, in the clinic, when you're dealing with patients who, let's say, had a stroke or were in a traffic accident, and they're pretty much immobile. Terri Schiavo, you may have heard of her; she was a person in the 90s in Florida. Her heart stopped, she was reanimated, and then for the next 14 years she was in what's called a vegetative state. There are thousands of people in a vegetative state. So they're like this. Occasionally, they open their eyes for two, three, four, five, six, eight hours, and then close their eyes. They have sleep-wake cycles. Occasionally, they have behaviors.
(someone): Yeah.
Lex Fridman: but it feels like consciousness is doing quite a lot of work in interacting with the environment. It feels like if you remove consciousness, it's not just that you remove the experience of things; it feels like you're also going to remove a lot of the progress of human civilization and society. It just feels like consciousness has a lot to do with how we develop our society. From everything you said, with suffering, with morality, with motivation, with love and fear and all of those kinds of things, it seems like consciousness, in all different flavors and ways, is part of all of that. And so without it, you may not have a human civilization at all. So it's doing a lot of work, causality-wise, in every kind of way. Of course, when you go to the physics level, you start to ask: how much? Maybe the work consciousness is doing is greater at some levels of reality than at others. Maybe a lot of the work it's doing is most apparent at the human level, at the complex-organism level. Maybe it's quite boring lower down; maybe physics is more important at the formation of stars and all that kind of stuff, and consciousness only starts being important when you have greater complexities of organism.
(someone): Yeah, my consciousness is fairly complicated, and as a result it does complicated things. The consciousness of a particle is very simple, and hence it behaves in predictable ways. But the idea is
Lex Fridman: We don't know at which point it happens, or where it is, but we do see the birth of consciousness.
(someone): Yeah, I mean, there's the question, of course, of whether babies are conscious when they're born. It seems at least some people used to think they weren't, which is why they didn't give anesthetics to newborn babies when they circumcised them. Now people think, oh, that's incredibly cruel; of course babies feel pain, and the dominant view is that babies can feel pain. Actually, my partner, Claudia, works on this whole issue of whether there's consciousness in babies and of what kind, and she certainly thinks that newborn babies come into the world with some degree of consciousness. Of course, you can then extend the question backwards to fetuses, and suddenly you're into politically controversial territory. But the question also arises in the animal kingdom. Where does consciousness start or stop? Is there a line in the animal kingdom where the first conscious organisms appear? It's interesting that, over time, people are becoming more and more liberal about ascribing consciousness to animals. People used to think maybe only mammals could be conscious. Now most people seem to think, sure, fish are conscious, they can feel pain, and now we're arguing over insects. You'll find people out there who say plants have some degree of consciousness. So, you know, who knows where it's going to end. The far end of this chain is the view that every physical system has some degree of consciousness. Philosophers call that panpsychism.
(someone): Consciousness is a prerequisite for suffering. You can't suffer if you're not conscious. But is it true that every being that is conscious will suffer, or has to be capable of suffering? I suppose you could imagine a kind of consciousness, especially if we can construct it artificially, that's capable of experiencing pleasure but just automatically cuts out the consciousness when suffering would occur, sort of like instant anesthesia as soon as something is going to cause you suffering. So that's possible, but as far as we know it doesn't exist on this planet yet. You asked what consciousness is. Philosophers often talk about it as there being a subject of experiences. So you and I and everybody listening to this is a subject of experience: there is a conscious subject who is taking things in, responding to them in various ways, feeling good about them, feeling bad about them. And that's different from the kinds of artificial intelligence we have now. I take out my phone, I ask Google for directions to where I'm going, Google gives me the directions, and I choose to take a different way. Google doesn't care. It's not like I'm offending Google or anything like that. There is no subject of experiences there. And I think that's the indication that the Google AI we have now is not conscious, or at least that level of AI is not conscious. And that's the way to think about it. Now, it may be difficult to tell, of course, whether a certain AI is or isn't conscious.
(someone): When you, with IIT or in general, try to think about consciousness, try to think about these ideas, we kind of naturally think in human time scales, and of entities that are sized close to humans. Do you think of things that are much larger and much smaller as containing consciousness? And do you think of things that take, you know, ages, eons, to operate in their conscious cause-effect?
(someone): It's a very good question. So, yeah, I think a lot about small creatures, because experimentally a lot of people work on flies and bees. And most people just think they're automata; they're just bugs, for heaven's sake, right? But look at their behavior. Bees can recognize individual humans, and they have this very complicated way to communicate. If you've ever been involved, or your parents, when they bought a house, you know what sort of agonizing decision that is, and bees have to do that once a year, when they swarm in the spring. And they have this very elaborate way of deciding: they have 300 scouts, the scouts go to the individual sites, they come back, and they have this dance, literally, where they dance for several days and try to recruit other bees. It's a very complicated way of making a decision. When they finally want to decide, the scouts warm up the entire swarm, and then it goes to one location. They don't go to 50 locations; they go to the one location that the scouts have agreed upon among themselves. That's awesome. If we look at the circuit complexity, it's ten times denser than anything we have in our brain.
(someone): Now, they only have a million neurons, but the neurons are amazingly complex: complex behavior, very complicated circuitry. So there's no question they experience something. Their life is very different. They're tiny, and workers only live for maybe two months. So I think, and IIT tells you this, in principle the substrate of consciousness is the substrate that maximizes the cause-effect power over all possible spatiotemporal grains. So when I think about, for example, do you know the science fiction story The Black Cloud? It's a classic by Fred Hoyle, the astronomer. He has this cloud intervening between the earth and the sun and leading to some sort of global cooling. This was written in the 50s. It turns out, using the radio dish, they communicate with it; it's actually an intelligent entity, and they convince it to move away. So here you have a radically different entity. And in principle, IIT says, well, you can measure the integrated information, in principle at least. And yes, if the maximum of that occurs at a time scale of months, then they would experience life where each moment is a month, or a microsecond, rather than a fraction of a second as in the human case. And so there may be forms of consciousness that we simply don't recognize for what they are, because they are so radically different from anything you and I are used to.
(someone): And then if you can't, as at least right now it looks like, then you've got to add something. It doesn't follow that you have to add consciousness. Here's another interesting possibility: we'll add something else, call it proto-consciousness, or X. And then it turns out space, time, mass, plus X will somehow collectively give you the possibility of consciousness. I don't rule out that view; I call it pan-proto-psychism, because maybe there's some other property, proto-consciousness, at the bottom level. And if you can't imagine there's actually genuine consciousness at the bottom level, I think we should be open to the idea that there's this other thing, X, which maybe we can't imagine, that somehow gives you consciousness. But if we are playing along with the idea that there really is genuine consciousness at the bottom level, of course this is going to be way out and speculative, but at least in, say, classical physics, you'd end up saying: every little atom, with a bunch of particles in space-time, each of these particles has some kind of consciousness whose structure mirrors its physical properties, like its mass, charge, its velocity, and so on. The structure of its consciousness would roughly correspond to that, and likewise to the physical interactions between particles. I mean, there's this old worry about physics; I mentioned this before, in this issue about the manifest image. We don't really find out about the intrinsic nature of things. Physics tells us about how a particle relates to other particles and interacts.
(someone): So as long as that structure is real, I would say that's enough for the physical world to be real. Yeah, the physical world may turn out to be somewhat more intangible than we had thought, and to have a surprising nature, but we've already gotten very used to that from modern science.
Lex Fridman: See, you've kind of alluded that you don't have to have consciousness for high levels of intelligence. But to create truly general intelligence systems, AGI systems, human-level intelligence and perhaps superhuman-level intelligence, you've said you feel that kind of thing might be very far away. Nevertheless, when we reach that point, do you think consciousness, from an engineering perspective, is needed, or at least highly beneficial, for creating an AGI system?
(someone): Yeah, no one knows what consciousness is for, functionally, so right now there's no specific thing we can point to and say, you need consciousness for that. Still, my inclination is to believe that, in principle, AGI is possible. At the very least, I don't see why someone couldn't simulate a brain and ultimately have a computational system that produces all of our behavior. And if that's possible, I'm sure vastly many other computational systems of equal or greater sophistication are possible, with all of our cognitive functions and more. My inclination is to think that once you've got all these cognitive functions, perception, attention, reasoning, introspection, language, emotion, and so on, it's very likely you'll have consciousness as well. At least it's very hard for me to see how you'd have a system that had all those things while bypassing consciousness.
Lex Fridman: somehow conscious. So just naturally, it's integrated quite naturally.
(someone): And that idea then made it into Christendom, and then the rational soul is the one that lives forever. He was very unclear about this; different readings of Aristotle differ on whether he believed the rational soul was immortal or not. I think he probably didn't. But then, of course, that made it through Plato into Christianity, and then the soul became immortal and became the connection to the hereafter, to God. Now, you ask me, essentially, about our modern conception of these three things Aristotle would have called different forms. Life, we think we know something about, at least life on this planet, although we don't understand how it originated, and it's been difficult to rigorously pin down. You see this in modern definitions of death. In fact, right now there's a conference ongoing that, again, tries to define legally and medically what death is. It used to be very simple: death is when you stop breathing and your heart stops beating. You're dead. Totally uncontroversial. If you're unsure, you wait another ten minutes; if the patient doesn't breathe, he's dead. Well, now we have ventilators, we have heart pacemakers, so it's much more difficult to define what death is. Typically, death is defined as the end of life, and life is defined as what comes before death. OK, so we don't really have very good definitions. Intelligence, we don't have a rigorous definition of either.
Lex Fridman: Do bacteria have consciousness? Or maybe not bacteria themselves, but do the interesting, complicated organisms that are within us have consciousness?
(someone): I think it's proper to argue, and I have, that a paramecium or a bacterium has a kind of agency and even a kind of intelligence, a kind of sense-making ability, but I do not think that we can attribute consciousness, at least what we mean by consciousness, this kind of self-awareness, this ability to introspect, et cetera. Now, the reason why distributed cognition doesn't have consciousness is, I think, a little bit more tricky. I think there's no reason in principle why there couldn't be a consciousness for distributed cognition, for collective intelligence. In fact, many philosophers would agree with me on that point. I think it's more an issue of certain empirical facts: bandwidth, density of connections, speed of information transfer, et cetera. It's conceivable that if we got some horrible Frankensteinian neural link and we linked our brains, and we had the right density and dynamics and bandwidth and speed, that a group consciousness could take shape. I don't have any argument in principle against that. I'm just saying those contingent facts do not yet exist, and therefore it is implausible that consciousness exists at the level of collective intelligence.
Lex Fridman: So you talk about consciousness quite a bit, so let's step back and try to sneak up to a definition. What is consciousness?
(someone): For me, there are two aspects to answering that question. One is, what's the nature of consciousness?
(someone): So if you find meaning and fulfillment and value in, say, intellectual work, like understanding, then that's a very significant part of the meaning of life for you. If you find it in social connections, or in raising a family, then that's the meaning of life for you. The meaning kind of comes from what you value as a conscious creature. So on this view, there's no universal solution, no universal answer to the question, what is the meaning of life? The meaning of life is where you find it as a conscious creature, but it's consciousness that somehow makes value possible. Experiencing things as good or bad or meaningful is something that comes from within consciousness.
Lex Fridman: So you think consciousness is a crucial component, ingredient of assigning value to things?
(someone): I mean, it's kind of a fairly strong intuition that without consciousness, there wouldn't really be any value. If we just had a universe of purely unconscious creatures, would anything be better or worse than anything else? Certainly when it comes to ethical dilemmas, you know about the old trolley problem: do you kill one person, or do you switch to the other track and kill five? I've got a variant on this, the zombie trolley problem, where there's one conscious being on one track and five humanoid zombies, let's make them robots, who are not conscious, on the other track. Given that choice, do you kill the one conscious being or the five unconscious robots? Most people have a fairly clear intuition here: kill the unconscious beings, because they basically don't have a meaningful life. They're not really persons, not conscious beings at all.