• On Illusionism, what is an illusion exactly?
    Well it's really just a tangential point, I will rephrase the question so we get back on track: why do you believe that pain has a qualitative component? As you know I view pain only as functional, what is the problem with this?goremand

    Phantom pains exist. Those aren't functional. Also, it's easy to distinguish the functional part of the system from the experience of pain. We do it all the time for organisms we doubt are conscious. And we can do it for machines. You can have a program behave like it's in pain without there being any reason to suspect it feels pain. You could build a robot to do so as well.

    It's also possible to imagine a painful enough scenario to feel discomfort. And there's emotional pain as well.
  • On Illusionism, what is an illusion exactly?
    You still have the appearance of colors, pains, etc that need explaining. Claiming they don't have phenomenal properties doesn't explain away their appearance. What Chalmers argues is that if the hard problem is an illusion (that we have phenomenal experiences), then this illusion needs to be explained. How does the brain produce such an illusion?

    Because otherwise, you haven't dissolved the hard problem. You've merely claimed that it's an illusion without showing how.
  • On Illusionism, what is an illusion exactly?
    "Illusionism claims that introspection involves something analogous to ordinary sensory illusions; just as our perceptual systems can yield states that radically misrepresent the nature of the outer world, so too, introspection yields representations that substantially misrepresent the actual nature of our inner experience."goremand

    But the very fact of having an inner experience is evidence in favor of the hard problem. If color and sound are illusions, those experiences still need to be explained in terms of how the brain produces them in a way that avoids the hard problem. Calling them interpretive illusions doesn't dissolve the matter; it just shifts it over to explaining how the brain accomplishes these illusions.

    It's what Chalmers has called the meta-problem of consciousness.
  • Are sensations mind dependent?
    a mindless sensation is a blue sky before anybody sees it and a thunder clap with nobody around to hear it.lorenzo sleakes

    I don't think mindless sensations are coherent. The sky isn't blue when nobody sees it. It's not any color. What it is are scattering photons, some of which are perceived as blue on a sunny day for creatures with eyes and nervous systems like our own. One thing that makes the blue sky impossible as a mindless sensation is that we only see a small fraction of the electromagnetic radiation when looking at the sky. It would not look blue if we could see the microwaves or radio waves.
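
    To put the same point in terms of the physics rather than color talk: what's out there is Rayleigh scattering, whose intensity goes as 1/wavelength^4, so shorter wavelengths dominate what eyes like ours pick up. A quick sketch (the wavelength values are approximate textbook figures, just for illustration):

```python
# Rayleigh scattering intensity scales as 1/wavelength^4, so shorter
# (bluer) wavelengths scatter far more strongly. The wavelengths below
# are approximate textbook values for blue and red visible light.

def rayleigh_ratio(short_nm, long_nm):
    """How much more strongly the shorter wavelength scatters."""
    return (long_nm / short_nm) ** 4

blue, red = 450.0, 700.0  # nanometers, approximate
print(round(rayleigh_ratio(blue, red), 2))  # roughly 5.86x in favor of blue
```

    The factor is about scattering, not about anything intrinsically "blue" in the sky itself; the blueness only shows up once a visual system samples that narrow band of the spectrum.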
  • Chomsky on ChatGPT
    I don't think it's a stochastic parrot, but I may be anthropomorphizing it.RogueAI

    I've fed ChatGPT a fictional story from a show that didn't exist at the September 2021 cutoff date for its training data, and the AI is pretty good at summarizing the story, drawing inferences about the characters and their motivations, and asking questions not answered by the show so far. I'd say it was about on par with your average online comment.

    I've also asked it to take characters it knows about from older stories and have them interact in a new scenario. You can have it show the characters' thoughts, and it's a decent storyteller. You can have them play a hand of poker. I invented a simple game to play with it, and it mostly got the rules correct. When it didn't, I could tell it what it got wrong, and it would correct itself.

    I would say stochastic parrot is too narrow. It seems clear there are emergent behaviors in the more complex models like 3.5 and 4, where it's building internal models to output the correct text. It doesn't understand like we do, but it does seem to have an increasingly emergent understanding of the semantic meanings embedded in our languages.
  • Neuroscience is of no relevance to the problem of consciousness
    I watched a Quinn's Ideas YT video about blindsight a few months ago. Sounded interesting.

  • Neuroscience is of no relevance to the problem of consciousness
    But I don't know how to justify someone else having "narrow" content since everything observable seems to be "wide" content, when you take others' self reports as a form of behaviour anyway. Like p-zombies can say "I see the traffic light has red, green and yellow lights" or "Ouch" without, allegedly, the qualia. A p-zombie can behave as a qualia-haver in any way, AFAIK that's part of the point.fdrake

    You can prompt LLMs like ChatGPT to do that. Some people have been playing with hooking the OpenAI API to robots and prompting ChatGPT to control them. So you could have a robot with sensors claiming it sees colors, feels the cold, is hungry for more power and what not.

    Wouldn't convince me it was conscious, though. Not in the phenomenal sense. LLMs are linguistic p-zombies. We can ask what it would mean or look like for a model to have phenomenal consciousness. I'm guessing we couldn't tell from the weights or architecture. We'd be in the same position as we are with neurons, except that we built the models.

    But if neurons are carrying out something like gradient descent, then what makes that different? Or for any functions the brain might be said to perform?
  • Neuroscience is of no relevance to the problem of consciousness
    In my experience, p-zombies are just more pointless, unrealistic thought experiments like the trolley problem. They seem to be made up by people with too much time on their hands.T Clark

    It clarifies the conceptual problem for physicalism. If you think we have phenomenal consciousness, then how do you square that with physicalism? If you don't, then you need to explain why we think we have phenomenal consciousness, and admit we live in Chalmers' p-zombie universe. Frankish calls it a magic trick of the brain, and Dennett endorses that as a plausible solution for why we're deluded, although he may prefer a Wittgensteinian language-on-holiday kind of answer.

    And if you think you can make physicalism work with phenomenal consciousness, then good luck with that. Personally, I think Nagel's argument about the view from nowhere gets to the heart of the objectivity/subjectivity split, and we don't even need to talk about Mary or p-zombies.
  • Neuroscience is of no relevance to the problem of consciousness
    A good reason to imagine p-zombies is that they illustrate differences between philosophical theories of consciousness very well, and are an intuitive way to think about the issue. Whether p-zombies exist is a sexy way to phrase the issue of whether functional/physical properties are vital for an account of phenomenal consciousness. They don't have to exist to be useful.fdrake

    Yeah, it's a more evocative way of specifying what the debate over the hard problem is about. But I'd rephrase it as whether functional/physical properties are all there is to account for consciousness. And if phenomenal consciousness doesn't fit, as Dennett and Frankish will admit, then we are the p-zombies, deluded into thinking phenomenal consciousness is real.

    I want to be clear about that. As far as I can tell, Daniel Dennett and Keith Frankish do not think phenomenal consciousness exists. And they do recognize that it would pose a serious philosophical problem for physicalism/functionalism/objectivity if it did. I'm pretty sure the Churchlands also fall into this category. There are physicalists who do think phenomenal consciousness can either be reductively explained by the functional/physical properties, or strongly emerges from the functional/physical. But not Dennett or Frankish. For them, we are conscious only in a functional and behavioral sense.

    Since I don't think we are deluded p-zombies, then I think physicalism has a conceptual problem. And why not? It's an abstraction, and it's a metaphysical proposition.
  • Neuroscience is of no relevance to the problem of consciousness
    I know my "box" contains something, and I assume that you know yours does, even though I cannot know that for sure. So, there is private experience, and we all know that, because we can entertain thoughts and feelings that others cannot know about.Janus

    This is such an obvious thing that it's weird when people dispute it. What would it possibly mean to be wrong? We all do have private experiences that nobody else knows about, short of some speculative, science fiction mind-reading technology which doesn't exist.

    If it's somehow wrong, then that just means I'm the only one with private experiences. I assume the rest of you have them as well, for obvious reasons. Why would I be special as a member of the same species? But that would be the implication of any argument denying it. Which is why some have accused Witty of having solipsistic tendencies.
  • Neuroscience is of no relevance to the problem of consciousness
    Now ,we can rule out panpsychism or consciousness in structures without similar biological gear, because such structures lack sensory systems(no input) or a central processing units capable to process drives and urges (which are non existent),emotions, capability to store info (memory), to recognize pattern, to use symbolic language, to reason, etc etc.Nickolasgaspar

    We can't if Boltzmann brains and brain simulations are a possibility. Or just systems performing the functions that are correlated with consciousness. Doesn't have to be human either. Could be an alien kind.
  • Neuroscience is of no relevance to the problem of consciousness
    A fundamental nature of reality will never change our descriptions and narratives on how reality interacts with us and vice versa.Nickolasgaspar

    Well if nature is fundamentally physical, then subjective experience doesn't conceptually fit. The biological level is still function and structure.
  • Neuroscience is of no relevance to the problem of consciousness
    t turns out it is helpful for organisms who don't acquire nutrients, protection and mates through root in the ground, thorns/toxic substances and airborne pollen......to be able to be aware of their needs and environment and to be conscious of which action and behavior in order to will allow them to acquire food, shelter, avoid preditors and find mates.Nickolasgaspar

    And subjective experience is necessary for that? How do organisms without nervous systems survive? Are all living nervous systems conscious?
  • Neuroscience is of no relevance to the problem of consciousness
    think there are historical reasons that lead us to conclude that consciousness is a property of matter. But it also depends on what you think matter (or more broadly "the physical) encompasses.Manuel

    We don't even really know what 'matter' is. Could be quantum fields or vibrating 10 dimensional strings. Or maybe everything in physics is a kind of analogy, limited by human cognition and technology. Maybe we can't get at what reality fundamentally is.
  • Neuroscience is of no relevance to the problem of consciousness
    "Why do we have consciousness?" - ...

    ... what's the kind of answer that goes there?
    Isaac

    Depends on whether we are cognitively capable of providing an answer. Can we answer all questions? Some philosophical questions have remained unanswered for millennia, despite much debate and scientific progress. Why does anything exist?

    But of course we can speculate on an answer. Maybe when the right sort of material arrangement happens, consciousness also occurs. It's just the way nature is. Or maybe physicalism is wrong, because it's an abstraction from intersubjective experience. We can't really say what nature is other than something that gives rise to both the material and mental.

    We can just invoke Kant at that point. The mind makes the world appear material to us.
  • Neuroscience is of no relevance to the problem of consciousness
    Actually we do know enough about the phenomenon to be pretty sure (beyond any reasonable doubt) that the conscious awareness of experience is limited to biological brains.Nickolasgaspar

    If that's the case, then we can rule out machine consciousness, and consciousness arising in other non-biological systems, like meteor showers that just happen to be instantiating a simulation of conscious brain function.
  • Neuroscience is of no relevance to the problem of consciousness
    And Marc Solms through his new Theory on Consciousness will add "because it has evolutionary advantages to feel uncomfortable when your biology is exposed to a situation that has the potential to undermine your well being and your "being".Nickolasgaspar

    That's a just-so story. How did evolution produce conscious experiences?

    The debate is over.....and philosophers didn't get the memo
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8121175
    Nickolasgaspar

    It's not over because you declare it over. We're having the debate right now in this thread. Seems like you've failed to convince people for the first 8 pages. Maybe over the next 8?
  • Neuroscience is of no relevance to the problem of consciousness
    It depends from the definition. If we
    AI "consciousness" is based on the algorithmic process of data feeding prioritizing those which are beneficial or detrimental for the predefined goals of the program.
    Nickolasgaspar

    Why should we accept that definition for machine consciousness? It's not the same thing as qualia. You just created an arbitrary definition and assigned it to 'consciousness'. It doesn't answer the question of whether a machine can have qualia.
  • Will the lack of AI Alignment will be the end of humanity?
    An example of misalignment with information technology is social networks. At first they were hailed as a major benefit to humanity, bringing us closer together online. But then we found out the downsides after social networks had become very popular and widespread. Turns out the algorithms maximize engagement to sell ads, which often ends up being whatever makes people angry, among other things.

    What makes alignment a hard problem for AI models? Because they are based on gradient descent using giant matrices with varying weights. The emergent behaviors are not well understood. There isn't a science to explain the weights in a way that we can just modify them manually to achieve desired results. They have to be trained.
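
    To make that concrete, here's a toy sketch of gradient descent: a single weight instead of billions, with made-up data and learning rate, just to show that weights get their values from iterative training rather than from anyone assigning them interpretable meanings directly.

```python
# A toy, single-weight sketch of gradient descent (nothing like a real LLM).
# Real models tune billions of weights this way, which is why you can't
# just read off or hand-edit individual weights to change behavior.

def train(xs, ys, lr=0.01, steps=500):
    w = 0.0  # start from an arbitrary weight
    for _ in range(steps):
        # gradient of the mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # step downhill on the loss surface
    return w

# The data encodes the rule y = 2x; training discovers w close to 2.0.
w = train([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
print(round(w, 3))  # 2.0
```

    Even in this one-parameter case, the final value falls out of the training loop; scale that up to billions of interacting weights and there's no manual knob corresponding to "be harmless."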

    So it's a difficult task for OpenAI to add policies to ChatGPT which can always prevent it from saying anything that might be considered harmful. There are always ways to hack the chat by prompting it to get around those policies.

    This also raises the discussion of who gets to decide what is harmful and when the general public should be protected from a language model generating said content. If future advances do lead to a kind of Super AI, which organization(s) will be the ones aligning it?
  • The role of observers in MWI
    That would be an anti-realist interpretation. Sean Carroll is a realist about the wave function, so he thinks there literally is a multiverse, at least from after inflation until the heat death of the universe.
  • The role of observers in MWI
    Reasonable as in making sense within the MWI interpretation. MWI needs to be self-consistent and not have to introduce anything from outside the wave function to make things work. So as long as observers and observation can be understood as parts of the universal wave function, it's reasonable. I still have questions, though.
  • The role of observers in MWI
    But when inflation ends, the universe reheats into a hot plasma of matter and radiation. That actually does lead to decoherence and branching.Squelching Boltzmann Brains (And Maybe Eternal Inflation) - Sean Carroll

    That's informative and interesting. So once inflation ends, the multiverse begins, until De Sitter space, when there's nothing left to decohere and make observations. Then all is just superposition.

    That sounds mostly reasonable, but the branching part based on something making observations still bothers me a bit. What is the branching mechanism? Perhaps I should have started with that question instead.
  • The role of observers in MWI
    So, coming back to this thread after many days away, Sean Carroll has stated a solution to the Boltzmann Brain problem is that there won't be any observers in De Sitter space to cause decoherence under the MWI. Boltzmann Brains are thought to be the result of quantum fluctuations over an infinite amount of time after the heat death of the universe, but the wave function is deterministic, so as long as there are no decoherent branches, there's no sense of fluctuation.

    It's still weird to me that the observer is a necessary component of making sense of MWI, since decohered branches are still in universal superposition, which is what infinite De Sitter space will become, except without the decohered observers.
  • Why is the Hard Problem of Consciousness so hard?
    Keith Frankish's illusionism argument. That the brain is performing the equivalent of a magic show, tricking us into thinking there's something about consciousness that turns it into the hard problem. I can't be sure exactly what his argument amounts to. He seems to be denying the phenomenal aspects of consciousness, since those are what lead to the hard problem. So I guess he's arguing for a functional account with the added twist that our brains trick us into saying things like the "redness of red", or that there's something it's like to be a bat, which we can't discover with neuroscience. It only seems like we have qualia.

    Chalmers has said that if there is a dissolution of the hard problem, the meta-problem of explaining why we think there's a hard problem has to first be addressed. Frankish attempts to do that. I just don't know whether its seeming like I'm phenomenally conscious is different from actually being conscious in the hard sense.
  • The role of observers in MWI
    He has a very interesting idea on how to put MWI and wave-function collapse interpretations to the test. Assuming we can build a conscious AGI quantum computer.

    The rest of what he says sounds similar to Sean Carroll's arguments for thinking MWI is likely correct. That it explains the interference patterns seen in experiments when a measurement isn't made, that there's no clear dividing line between the classical and the quantum, and entanglement means all the particles making up classical stuff should be quantum. And that any other interpretation would have to be at least as complex as MWI, and probably more so.

    That being said, my understanding is that the probabilities we use to calculate the likelihood of what to expect when a measurement is made still need to be derived within the Schrödinger equation in a self-consistent manner, without adding them in post hoc, since the wave function is supposed to describe the universe we live in, if MWI is true. So deriving the Born rule within MWI is an ongoing project.
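
    For reference, the Born rule in question is the postulate linking amplitudes to measurement probabilities:

```latex
% Born rule: the probability of obtaining outcome i when measuring a
% system in state |psi> is the squared magnitude of the amplitude.
P(i) = \left| \langle i \mid \psi \rangle \right|^{2}
```

    The open project is to recover this squared-amplitude rule from the deterministic evolution of the universal wave function itself, rather than adding it as a separate postulate.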
  • The role of observers in MWI
    As an aside, I was listening to Sean Carroll being interviewed, and he made it sound like the very low probability events didn't happen, at least not over the time period since the Big Bang (not nearly enough time has passed). However, they should be happening in the sense of being expressed as superpositions by the universal wave function in MWI.

    So there's some states where the particles of the rock are located in other parts of the universe, under the understanding that a particle's position ranges over the entire universe, with most of the positions within the place we'd expect to measure them. But there still would be a few spread out everywhere else.

    There should even be some human-like observers seeing a rock teleport some distance, or just vanish into being spread out all over the place, and all sorts of scenarios in between, even if it's a vanishingly small subset of observers.
  • Why is the Hard Problem of Consciousness so hard?
    I never said I wasn't crazy. Or didn't make typos, whoops.
  • Why is the Hard Problem of Consciousness so hard?
    I don't know if that would be a logical error. I'm guessing the strong bias towards believing that we're all the same has to do with communication.frank

    It's obviously not the case if you're aware of savants or various neurological abnormalities, which you would hope educated people like philosophers and scientists would be aware of when making claims about the mind.
  • Why is the Hard Problem of Consciousness so hard?
    Yes, my experience is the same as yours. I read other posts from people with aphantasia and they make the same mistake. They think we are walking around with HD movies in our heads. some people do, but I guess they are at least as rare as people with aphantasia.hypericin

    What about when you dream? I would put it more in terms of a VR headset kind of experience, particularly for lucid dreaming.

    Some people are really good visualizers. Others can compose music in their heads. I have a regular stream of inner dialog. I wonder what you make of Temple Grandin's claim that for autists like herself, their imagination is like the Star Trek Holodeck.

    Of course in all this I'm reminded of certain scientific and philosophical skeptics who mistake their lack of visualization or lucid dreaming for those abilities not existing in other people. That's a kind of logical error whose name escapes me.
  • The role of observers in MWI
    Our classical appearance needs to be part of a valid solution to the universal wave function, and nothing says it is not.noAxioms

    Sabine Hossenfelder says it's not:

  • Why is the Hard Problem of Consciousness so hard?
    I would then expect us to find out that bats aren't that different from humans, that all animal mental worlds are variations on a common theme, just like all animals use the same genetic code, and tend to share vast amounts of DNA. Your hemoglobin is quite similar to bats'. We're all cousins.Olivier5

    But that doesn't mean bats or other animals have the exact same set of sensations. We know that can't be true because many birds can see more than three primary colors, and presumably bats have a sonar sensation. Maybe it's a kind of color or sound, but it could be something altogether different as well. And what would it be like as an octopus, where the nervous system is as much distributed in the tentacles, which act semi-independently, as it is in the head?
  • Why is the Hard Problem of Consciousness so hard?
    Even if so, that doesn't mean consciousness is understood functionally, as in we can provide a function which makes a system conscious. If we could, then we would know how to do the same with computer programs and robots. Chalmers' criticism is that no amount of structure and function results in an explanation of consciousness. Which is similar to Locke's distinction between primary and secondary qualities: number, shape, extension, composition don't give you the sensations of color, taste, etc. Nagel used that to show the fundamental objective/subjective split in our descriptions. We can't say what bat sonar sensation is no matter how good our science is.

    It should be noted, though, that Chalmers has proposed a property dualism based on information systems, so he's fine with functionalism as long as there's something additional that connects it to consciousness.
  • The role of observers in MWI
    Everett doesn't control how the interpretation develops after him. Sean Carroll, a current proponent of MWI, talks of universes splitting. There's a Universe Splitter app: https://cheapuniverses.com/universesplitter/

    Yes, a classical rock takes measurements. If that makes it an observer, then fine. It doesn't need to know about Schrodinger's equation in order to measure a classical world. If you don't count that as an observation, then I completely disagree with your statement above.noAxioms

    There aren't classical rocks or observations in MWI. And yet we make a classical observation, which some call the wave function collapse in other interpretations, every time a measurement is made. You can say the detector or a rock also makes the same observation. But the universal wave function doesn't make such a distinction. Every quantum state is still in superposition.

    The point about human observers is we're the ones interpreting the mathematical formalism as meaning reality is this or that. Some physicists, mathematicians and philosophers say the wave function describes the universe. If it does, then the classical appearance of our world needs to be derivable from that equation.
  • Is the blue pill the rational choice?
    Agent Smith never did really understand what the Oracle was up to. Neither did the Architect until the end. There are several good YT videos that do a deep dive on the trilogy. Some even argue Agent Smith is actually The One.

    Anyway, the Oracle, as an intuitive program, recognized that the fight between the machines and humans was just going to continue in the same cycle, so she wanted to find a way forward where they could both coexist in a less combative state. A way for humans and machines to evolve their relationship. To do this, she had to risk everything to force both sides into making peace. The Architect and the machines lose control over Agent Smith, which forces them to accept Neo's deal: he gives the machine he's plugged into the ability to identify the Smith virus and eliminate it. The Oracle shows Neo the way by letting Smith turn her into another Smith. Neo must concede the fight so Smith will take him over, allowing the antivirus to take out Smith, and the Architect will then honor the peace agreement, as you see when he meets the Oracle in the park.

    As for Agent Smith possibly being the actual One, you could argue the Oracle lied to everyone, including Neo, so that she could use Smith to force the peaceful resolution. Neo was necessary because of his special status (somehow both connected to the machine world and humanity), and that needed to be transferred to Smith so he could become a virus.
  • Why is the Hard Problem of Consciousness so hard?
    I updated my comment and added a comment on Kant and the hard problem (that he would likely find it pointless).
  • Why is the Hard Problem of Consciousness so hard?
    This is also how Kant used the term. The noumenon for Kant is an object of intellectual intuition (non-sensible representation of reality).

    The difference is that Kant argued that such intuition is a faculty we do not have.
    Jamal

    Is Kant saying we reason that the real world responsible for our senses is beyond our perceptions and reason? There is a real world responsible for our reasoning and perceiving, but it's unknowable and we can't say anything meaningful about it, only about the world of appearances our minds shape from the sensory manifold?

    I wonder what Kant would make of the modern consciousness debate. I suspect he would think it's beside the point with both sides making a fundamental error of mistaking the phenomenal physical for the noumenal. There's no point in arguing whether there's a hard problem if it's all phenomenal anyway.
  • The role of observers in MWI
    Yes.noAxioms

    Problem is you have to square this with actual observations, which have classical results when a measurement is performed.

    Observers as such play no role. Think systems in a state, such as a classic rock at time T. Anything that rock has measured (a subset of what's in its past light cone) is part of the entangled state of that system.noAxioms

    I'll refer you back to what Bohr had to say regarding experiments. Experiments have to be described in terms of the language of performing the experiment, not the mathematical formalism used to model what happens during the experiment. Rocks didn't come up with the Schrodinger equation or the Born rule. Physicists did after observing or learning about experimental results.

    No, a world is not a relation with an observer. Not sure where you get this. If you like, you can assign a world in relation to an event-state, but calling the system an observer seems to suggest a very different interpretation.noAxioms

    If there's no observation, there's no world, since as we both agree, a world is a system that appears to be classical. Without an observer, you just have superpositions. Decoherence only matters in this context for explaining why observers don't notice the superpositions.
  • Why is the Hard Problem of Consciousness so hard?
    Yes, and my point is that with physicalism, the question of whether x is conscious will always be open-ended. That suggests the physicalism framework is a dead-end.RogueAI

    We certainly have problems drawing the line on which life forms are conscious. And we can't say what sort of sensations animals with different sensory abilities from us would have. In the far future, there could be Boltzmann brains fluctuating into existence with bizarre mental states that we can't even imagine.
  • Why is the Hard Problem of Consciousness so hard?
    Well, I'm a physicist so I'm going to be biased toward the physicalist/materialist PoVs. I tend to think that property dualism explains things reasonably well, though.tom111

    Chalmers espoused a property dualism in one of his books where any informationally rich system would be conscious. He's more predisposed to finding a universal law connecting consciousness to the physical than just identifying it with certain biological creatures.
  • Why is the Hard Problem of Consciousness so hard?
    Ned Block wrote a paper on the Harder Problem of Consciousness using the android Data from Star Trek to illustrate the problem that we can't tell whether consciousness is tied to our particular biology or functionalism. As such, we have no criteria for deciding whether Data is conscious.

    Some people have suggested that the recent machine learning models exhibit conscious behavior. I have serious doubts, and most researchers would probably disagree. But at some point it's likely we will create a machine that's convincing enough that we can't tell. The movies Ex Machina and Her would be good examples of this.