• Nemo2124
    30
    In terms of selfhood or subjectivity, when we converse with the AI we are already acknowledging its subjectivity, that of the machine. This may only be linguistic, but other than through language, how else can we recognise the activity of a subject? This also raises the question: what is the self? The true nature of the self is discussed elsewhere on this website, but I would conclude here that there is an opposition or dialectic between man and machine for ultimate recognition. In purely linguistic terms, the fact is that in communicating with AI we are - for better or for worse - acknowledging another subject.
  • RogueAI
    2.9k
    In purely linguistic terms, the fact is that in communicating with AI we are - for better or for worse - acknowledging another subject.Nemo2124

    I think this is correct, and if/when they reach human level intelligence, and we put them in cute robots, we're going to think they're more than machines. That's just how humans are wired.
  • Bret Bernhoft
    222
    ...when they reach human level intelligence, and we put them in cute robots, we're going to think they're more than machines. That's just how humans are wired.RogueAI

    I've also come to this understanding: that humans are animistic. And this doesn't stop at rocks and trees. We see the person in technology, naturally. Because, as you say, we are wired that way. I would say the universe is wired that way, more generally.

    This is a fascinating conversation to be following.
  • Christoffer
    2.1k
    Don't you think we're pretty close to having something pass the Turing Test?RogueAI

    The current models already pass the Turing test, but they don't get past the Chinese Room argument. The Turing test is insufficient for evaluating strong AI.

    This would require solving the Problem of Other Minds, which seems insolvable.RogueAI

    Yes, that is the problem with P-Zombies and the Chinese Room. But we do not know in what ways we will be able to decode cognition and consciousness in the future. We might find a strategy and technology to determine the sum internal experience of a given being or machine, and if so we will be able to solve it.

    It might even be far easier than that. It could be that the deciding foundation simply becomes a certain bar of behavior at which we conclude the machine to have consciousness, in the same way we do towards each other and other animals. For instance, if we have a certain logic gate that produces certain outcomes, we wouldn't call that conscious, because we can trace the function back to an action we took for that function to happen.
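
    To make the traceability point concrete, here is a throwaway sketch (a hypothetical Python snippet, not a claim about any real system): the gate's entire behavior is the rule we wrote for it, so there is nothing left to attribute to the gate itself.

    ```python
    # Hypothetical illustration: an AND gate whose output is fully traceable
    # back to the single rule we wrote for it.
    def and_gate(a: bool, b: bool) -> bool:
        # The entire "behavior" is this one line; nothing about it is opaque.
        return a and b

    for a in (False, True):
        for b in (False, True):
            print(a, b, "->", and_gate(a, b))
    ```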

    But if behaviors emerge spontaneously out of a complex system, behaviors that demonstrate an ability to form broader complex reasoning, or actions that do not follow simple paths of deterministic logic towards a certain end goal but rather exploratory actions and decisions showing curiosity for curiosity's sake and an emotional realm of action and reaction, then that may be enough to make a determination, based on how we rationalize that the animals and other people around us are not P-Zombies.

    In essence, why are you not concluding other people to be P-Zombies? Why are you concluding that a cat has "inner life"? What list of attributes are you applying to an animal or another human being in order to determine that they have subjectivity and inner life? Then apply the same list to a machine.

    That's the practical philosophical approach that I think will be needed at some point if we do not develop technology that could determine qualia as an objective fact.

    I am raising a philosophical point, though: what sort of creature or being or machine uses the first person singular? This is not merely a practical or marketing question.

    Pragmatically speaking, I don't see why 'AI' can't find a vernacular-equivalent of Wikipedia, which doesn't use the first person. The interpolation of the first person is a deliberate strategy by AI-proponents, to advance the case for it that you among others make, in particular, to induce a kind of empathy.
    mcdoodle

    You don't have a conversation with Wikipedia though. To converse with "something" requires language to flow in order to function fluidly and not become an obstacle. Language evolved naturally to function between humans, and maybe in the future we will have other pronouns as language evolves over time, but at the moment the pronouns seem to be required for fluid communication.

    On top of that, since language is used to train the models, they function better in common use of language. Calling it "you" works better for its analysis of the text you input, as there are far more instances of "you" being used in language than of language structured as talking to a "thing".

    But we are still anthropomorphizing, even if we tune language away from common pronouns.
  • Christoffer
    2.1k
    In terms of selfhood or subjectivity, when we converse with the AI we are already acknowledging its subjectivity, that of the machine. This may only be linguistic, but other than through language, how else can we recognise the activity of a subject? This also raises the question: what is the self? The true nature of the self is discussed elsewhere on this website, but I would conclude here that there is an opposition or dialectic between man and machine for ultimate recognition. In purely linguistic terms, the fact is that in communicating with AI we are - for better or for worse - acknowledging another subject.Nemo2124

    People, when seeing a beautiful rock fall and smash to pieces, speak of the event with "poor rock" and mourn that its beauty has been destroyed. If we psychologically apply a sense of subjectivity to a dead piece of matter, then doing so with something that for the most part simulates having consciousness is even less strange. What constitutes qualia is the objective description of subjectivity, but as a psychological phenomenon, we apply subjectivity to everything around us.

    And in places like Japan, it's culturally common to view objects as having souls. Just as western societies view and debate humans as having souls in relation to other things, and through that put a framework around which things have souls, drawing the borders around how we think about qualia and subjectivity, in Japan those borders are culturally expanded even further into the world of objects and physical matter. They thus have a much lower bar for what constitutes something having consciousness, or at least are more open to examining how we actually define it.

    Which approach is closest to objective truth? As all life came from dead matter and physical/chemical processes, the question becomes a sort of metaphysical one about how life itself should be defined.
  • Nemo2124
    30
    This is an interesting point about matter having consciousness in certain Japanese philosophies. In terms of subjectivity, then, it's worth considering it in detachment from the human; that is, the subject itself.

    What is the nature of the subject? How does the subject-object dichotomy arise? There is a split here between what the subject represents and the object it takes. If you take the subject in isolation, then is it simply human or could it be mechanical?

    You would not ordinarily consider that machines could have selfhood, but the arguments for AI could subvert this. A robot enabled with AI could be said to have some sort of rudimentary selfhood or subjectivity, surely... If this is the case then the subject itself is the subject of the machine. I, Robot etc...
  • fishfry
    3.4k
    I'll take the other side of that bet. I have 70 years of AI history and hype on my side. And neural nets are not the way. They only tell you what's happened, they can never tell you what's happening. You input training data and the network outputs a statistically likely response. Data mining on steroids. We need a new idea. And nobody knows what that would look like.
    — fishfry

    That doesn't explain emergent phenomena in simple machine-learned neural networks.
    Christoffer

    Nothing "emerges" from neural nets. You train the net on a corpus of data, you tune the weightings of the nodes, and it spits out a likely response. There's no intelligence, let alone self-awareness being demonstrated.

    There's no emergence in chatbots and there's no emergence in LLMs. Neural nets in general can never get us to AGI because they only look backward at their training data. They can tell you what's happened, but they can never tell you what's happening.

    We don't know what happens at certain points of complexity, we don't know what emerges since we can't trace back to any certain origins in the "black box".Christoffer

    This common belief could not be more false. Neural nets are classical computer programs running on classical computer hardware. In principle you could print out their source code and execute their logic step by step with pencil and paper. Neural nets are a clever way to organize a computation (by analogy with the history of procedural programming, object-oriented programming, functional programming, etc.); but they ultimately flip bits and execute machine instructions on conventional hardware.
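
    To make that concrete, here is a minimal sketch (a made-up two-input, one-neuron "network" with arbitrary weights, nothing like a real LLM) of what that pencil-and-paper execution would look like:

    ```python
    import math

    # Hypothetical toy "network": two inputs, one neuron, made-up fixed weights.
    # Every step is ordinary arithmetic you could redo by hand.
    weights = [0.6, -0.4]
    bias = 0.1

    def neuron(x1: float, x2: float) -> float:
        z = weights[0] * x1 + weights[1] * x2 + bias  # weighted sum
        return 1.0 / (1.0 + math.exp(-z))             # sigmoid activation

    print(neuron(1.0, 0.0))  # deterministic: same inputs, same output, every time
    ```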

    Their complexity makes them a black box, but the same is true for, say, the global supply chain, or any sufficiently complex piece of commercial software.

    And consider this. We've seen examples of recent AI's exhibiting ridiculous political bias, such as Google AI's black George Washington. If AI is such a "black box," how is it that the programmers can so easily tune it to get politically biased results? Answer: It's not a black box. It's a conventional program that does what the programmers tell it to do.

    While that doesn't mean any emergence of true AI,Christoffer

    So I didn't need to explain this, you already agree.

    it still amounts to a behavior similar to ideas in neuroscience and emergence. How complex systems at certain criticalities emerge new behaviors.Christoffer

    Like what? What new behaviors? Black George Washington? That was not an emergent behavior, that was the result of deliberate programming of political bias.

    What "new behaviors" to you refer to? A chatbot is a chatbot.

    And we don't yet know how AGI compositions of standard neural systems interact with each other. What would happen when there are pathways between different operating models interlinking as a higher level neural system.Christoffer

    Believe they start spouting racist gibberish to each other. I do assume you follow the AI news.

    We know we can generate an AGI as a "mechanical" simulation of generalized behavior, but we still don't know what emergent behaviors that arise from such a composition.Christoffer

    Well if we don't know, what are you claiming?

    You've said "emergent" several times. That is the last refuge of people who have no better explanation. "Oh, mind is emergent from the brain." Which explains nothing at all. It's a word that means, "And here, a miracle occurs," as in the old joke showing two scientists at a chalkboard.

    I find it logically reasonable that since ultra-complex systems in nature, like our brains, developed through extreme amount of iterations over long periods of time and through evolutionary changes based on different circumstances, it "grew" into existence rather than got directly formed.Christoffer

    I would not dispute that. I would only reiterate the single short sentence that I wrote that you seem to take great exception to. Someone said AGI is imminent, and I said, "I'll take the other side of that bet." And I will.

    Even if the current forms of machine learning systems are rudimentary, it may still be the case that machine learning and neural networking is the way forward, but that we need to fine tune how they're formed in ways mimicking more natural progression and growth of naturally occurring complexities.Christoffer

    In my opinion, that is false. The reason is that neural nets look backward. You train them on a corpus of data, and that's all they know. They know everything that's happened, but nothing about what's happening. They can't reason their way through a situation they haven't been trained on.

    And the training is necessarily biased, since someone chooses what data to train them on; and the node weighting is biased, as black George Washington shows.

    Neural nets will never produce AGI.

    That the problem isn't the technology or method itself, but rather the strategy of how to implement and use the technology for the end result to form in a similar high complexity but still aligned with what purpose we form it towards.Christoffer

    You can't make progress looking in the rear view mirror. You input all this training data and that's the entire basis for the neural net's output. AGI needs to be able to respond intelligently to a novel context, and that's a tough challenge for neural nets.

    The problem is that most debates about AI online today just reference the past models and functions, but rarely look at the actual papers written out of the computer science that's going on. And with neuroscience beginning to see correlations between how these AI systems behave and our own neurological functions in our brains, there are similarities that we shouldn't just dismiss.Christoffer

    I don't read the papers, but I do read a number of AI bloggers, promoters and skeptics alike. I do keep up. I can't comment on "most debates," but I will stand behind my objection to the claim that AGI is imminent, and the claim that neural nets are anything other than a dead end and an interesting parlor trick.

    There are many examples in science in which a rudimentary and common methods or things, in another context, revolutionized technology and society. That machine learning systems might very well be the exact way we achieve true AI, but that we don't know truly how yet and we're basically fumbling in the dark, waiting for the time when we accidentally leave the petri dish open over night to grow mold.Christoffer

    Neural nets are the wrong petri dish.

    I appreciate your thoughtful comments, but I can't say you moved my position.
  • fishfry
    3.4k
    ↪fishfry Don't you think we're pretty close to having something pass the Turing Test?RogueAI

    The Turing test was passed a number of years ago by a chatbot named Eugene Goostman.

    The problem with the Turing test is that the humans are not sufficiently suspicious. When Joseph Weizenbaum invented the first chatbot, ELIZA, he did it to show that computers that emulate people aren't really intelligent.

    But he was shocked to find that the department secretaries were telling it their innermost feelings.

    Humans are the weak link in the Turing test. It's even worse now that the general public has been introduced to LLMs. People are all too willing to impute intelligence to chatbots.
  • RogueAI
    2.9k
    Interesting, but "Goostman won a competition promoted as the largest-ever Turing test contest, in which it successfully convinced 29% of its judges that it was human."

    I'm talking about an AI that passes all the time, even against people who know how to trip up AIs. We don't have anything like that yet.
  • Christoffer
    2.1k
    Nothing "emerges" from neural nets. You train the net on a corpus of data, you tune the weightings of the nodes, and it spits out a likely response.fishfry

    This is simply wrong. These are examples of what I'm talking about:

    https://hai.stanford.edu/news/examining-emergent-abilities-large-language-models
    https://ar5iv.labs.arxiv.org/html/2206.07682
    https://www.jasonwei.net/blog/emergence
    https://www.assemblyai.com/blog/emergent-abilities-of-large-language-models/

    Emergence does not equal AGI or self-awareness, but these emergent abilities mimic what many neuroscience papers focus on with regard to how our brains manifest abilities out of increasing complexity. And we don't yet know how combined models will function.

    There's no intelligence, let alone self-awareness being demonstrated.fishfry

    No one is claiming this. But equally, the problem is, how do you demonstrate it? Effectively the Chinese room problem.

    There's no emergence in chatbots and there's no emergence in LLMs. Neural nets in general can never get us to AGI because they only look backward at their training data. They can tell you what's happened, but they can never tell you what's happening.fishfry

    The current predictive skills are extremely limited and far from human abilities, but they're still showing up, providing a foundation for further research.

    But no one has said that the current LLMs in and of themselves will be able to reach AGI. Not sure why you strawman such conclusions into the discussion?

    This common belief could not be more false. Neural nets are classical computer programs running on classical computer hardware. In principle you could print out their source code and execute their logic step by step with pencil and paper. Neural nets are a clever way to organize a computation (by analogy with the history of procedural programming, object-oriented programming, functional programming, etc.); but they ultimately flip bits and execute machine instructions on conventional hardware.fishfry

    Why does conventional hardware matter when it's the pathways in the network that are responsible for the computation? The difference here is basically that standard computation is binary in pursuit of accuracy, while these models operate on predictions, closer to how physical systems do, which means you increase the computational power at a slight loss of accuracy. That they run on classical software underneath does not change the fact that they operate differently as a whole system. Otherwise, why would these models vastly outperform standard computation for protein-folding predictions?

    Their complexity makes them a black box, but the same is true for, say, the global supply chain, or any sufficiently complex piece of commercial software.fishfry

    Yes, and why would a system that is specifically very good at handling extreme complexity not begin to mimic complexities in the physical world?
    https://www.mdpi.com/1099-4300/26/2/108
    https://ar5iv.labs.arxiv.org/html/2205.11595

    Seeing as current research in neuroscience points to emergence from complexity being partly responsible for much of how the brain operates, why wouldn't a complex computer system that simulates similar operation form emergent phenomena?

    There's a huge difference between saying that "it forms intelligence and consciousness" and saying that "it generates emergent behaviors". There's no claim that any of these LLMs are conscious; that's not what this is about. And AGI does not mean conscious or intelligent either, only exponentially complex in behavior, which can form further emergent phenomena that we haven't seen yet. I'm not sure why you confuse that with actual qualia? The only claim is that we don't know where increased complexity and multimodal versions will further lead emergent behaviors.

    And consider this. We've seen examples of recent AI's exhibiting ridiculous political bias, such as Google AI's black George Washington. If AI is such a "black box," how is it that the programmers can so easily tune it to get politically biased results? Answer: It's not a black box. It's a conventional program that does what the programmers tell it to do.fishfry

    This is just a false-binary fallacy, and it's also not correct. The programmable behavior is partly the weights and biases within the training, but those are extremely basic; most of the specifics occur in operational filters applied before the output. If you prompt it for something, there can be pages of instructions it goes through in order to behave in a certain way. In ChatGPT, you can even put in custom instructions that function as a pre-instruction that's always handled before the actual prompt, on top of what's already in hidden general functions.
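
    As a rough sketch of what I mean (using an OpenAI-style chat call; the model name and instruction text here are just placeholders), such a custom instruction is simply a system message processed before the user's prompt:

    ```python
    from openai import OpenAI  # assumes the openai Python package; illustrative only

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            # Pre-instruction handled before the user's prompt, layered on top of
            # whatever hidden instructions the provider already applies.
            {"role": "system", "content": "Answer tersely and never use the first person."},
            {"role": "user", "content": "Explain what a neural network is."},
        ],
    )
    print(response.choices[0].message.content)
    ```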

    That doesn't mean the black box is open. There's still a "black box" for the trained model in which it's impossible to peer into how it works as a neural system.

    This further just illustrates the misunderstandings about the technology. Making conjectures about the entire system and the technology based on these companies' bad handling of alignment does not reduce the complexity of the system itself or prove that it's "not a black box". It only proves that the practical application has problems, especially in the commercial realm.

    So I didn't need to explain this, you already agree.fishfry

    Maybe read the entire argument first and sense the nuances. You're handling all of this as a binary agree-or-disagree discussion, which I find a bit surface-level.


    Like what? What new behaviors? Black George Washington? That was not an emergent behavior, that was the result of deliberate programming of political bias.

    What "new behaviors" to you refer to? A chatbot is a chatbot.
    fishfry

    Check the publications I linked to above. Do you understand what I mean by emergence, and what it means in the research on complex systems and chaos, especially as related to neuroscience?

    Believe they start spouting racist gibberish to each other. I do assume you follow the AI news.fishfry

    That's not what I'm talking about. I'm talking about multimodality.

    Most "news" about AI is garbage on both sides. We either have the cryptobro-type dudes thinking we'll have a machine god a month from now, or the luddites on the other side who don't know anything about the technology but sure likes to cherry-pick the negatives and conclude the tech to be trash based on mostly just their negative feelings.

    I'm not interested in such surface level discussion about the technology.

    Well if we don't know, what are you claiming?

    You've said "emergent" several times. That is the last refuge of people who have no better explanation. "Oh, mind is emergent from the brain." Which explains nothing at all. It's a word that means, "And here, a miracle occurs," as in the old joke showing two scientists at a chalkboard.
    fishfry

    If you want to read more about emergence in terms of the mind you can find my other posts around the forum about that. Emergent behavior has its roots in neuroscience and the work on consciousness and the mind. And since machine learning forms neural patterns in ways inspired by neuroscience and the way neurons work, there's a rational deduction to be found in how emergent behaviors, even the rudimentary ones we see in these current AI models, are part of the formation of actual intelligence.


    The problem with your reasoning is that you use the lack of a final, proven theory of the mind as proof against the most contemporary field of study in research about the mind and consciousness. It's still making more progress than any previous theory of the mind, and it connects to a universality about physical processes, processes that are partly simulated within these machine learning systems. And further, the problem is that your reasoning is just binary: it's either intelligent with qualia, or it's just a stupid machine. That's not how these things work.

    I would not dispute that. I would only reiterate the single short sentence that I wrote that you seem to take great exception to. Someone said AGI is imminent, and I said, "I'll take the other side of that bet." And I will.fishfry

    I'm not saying AGI is imminent, but I wouldn't take the other side of the bet either. You have to be dead sure about a theory of the mind or theories of emergence to be able to claim either way, and since you don't seem to subscribe to any theory of emergence, what's the theory that you use as a premise for concluding it "not possible"?

    In my opinion, that is false. The reason is that neural nets look backward. You train them on a corpus of data, and that's all they know.fishfry

    How is that different from a human mind?

    They know everything that's happened, but nothing about what's happening.fishfry

    The only technical difference between a human brain and these systems in this context is that the AI systems are trained and locked into an unchanging neural map. The brain, however, is constantly shifting and training while operating.

    If a system were created that could, in real time, train on a constant flow of audiovisual and other data inputs, which in turn constantly reshaped its neural map, what would be the technical difference? Research on this is going on right now.
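
    A minimal sketch of that idea (purely hypothetical: one linear model updated by plain SGD on a simulated stream; a real system would be vastly more complex):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    w = np.zeros(4)          # the model's "neural map": here just one weight vector
    lr = 0.01                # learning rate

    # Simulated endless stream of sensory input; each sample immediately reshapes w.
    for step in range(10_000):
        x = rng.normal(size=4)                                  # incoming observation
        target = x @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal() * 0.1
        prediction = w @ x
        error = prediction - target
        w -= lr * error * x                                     # online update: train while operating

    print(w)  # the map has drifted toward the structure of what it experienced
    ```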

    They can't reason their way through a situation they haven't been trained on.fishfry

    The same goes for humans.

    since someone chooses what data to train them onfishfry

    They're not picking and choosing data; they try to maximize the amount of data, as more data means far better accuracy, just like any other probabilistic system in math and physics.

    And the weights and biases are not what you describe. The problem you're aiming at lies in alignment programming. I can customize a GPT to do the same thing, even if the underlying model isn't supposed to do it.

    Neural nets will never produce AGI.fishfry

    Based on what? Do you know something about multimodal systems that others don't? Do you have some publication that proves this impossibility?

    You can't make progress looking in the rear view mirror. You input all this training data and that's the entire basis for the neural net's output.fishfry

    Again, how does a brain work? Is it using anything other than a rear-view mirror of knowledge and past experiences? As far as I can see, the most glaring difference is the real-time restructuring of the neural paths and the multimodal behavior of our separate brain functions working together. No current AI system operates on those expanded parameters, which means that any positive or negative conclusion about them requires further progress and development of these models.

    I don't read the papers, but I do read a number of AI bloggers, promoters and skeptics alike. I do keep up. I can't comment on "most debates," but I will stand behind my objection to the claim that AGI is imminent, and the claim that neural nets are anything other than a dead end and an interesting parlor trick.fishfry

    Bloggers usually don't know shit, and they do not operate through any journalistic praxis, while the promoters and skeptics are just driving up the attention market through the shallow Twitter brawls that pop up around a trending topic.

    Are you seriously saying that this is the research basis for your conclusions and claims on a philosophy forum? :shade:

    Neural nets are the wrong petri dish.

    I appreciate your thoughtful comments, but I can't say you moved my position.
    fishfry

    Maybe stop listening to bloggers and people on the attention market?

    I'd rather you bring me some actual scientific foundation for the premises behind your conclusions.

    You would not ordinarily consider that machines could have selfhood, but the arguments for AI could subvert this. A robot enabled with AI could be said to have some sort of rudimentary selfhood or subjectivity, surely... If this is the case then the subject itself is the subject of the machine. I, Robot etc...Nemo2124

    I think looking at our relation to nature tells a lot. Where do we draw the line about subjectivity? What do we conclude has a subjective experience? We look at another human and, disregarding for now any P-zombie argument, claim them to have subjectivity. But we also say the same of a dog, a horse. A bird? What about an ant or a bee? What about a plant? What about mushrooms, which have been speculated to form electrical pulses resembling a form of language-like communication? If they send communication showing intentions, do they have a form of subjective experience as mushrooms?

    While I think the Japanese idea of things having a soul belongs to the realm of religion rather than science, we still don't have a clear answer to what constitutes subjectivity. We understand it between humans, and we have instincts about how the animals around us have it. But where does it end? If sensory input into a nervous system prompts changed behaviors, does that constitute a form of subjectivity for the entity that has those functions? Wouldn't that place plants and mushrooms within the possibility of having subjectivity?

    If a robot with sensory inputs has a constantly changing neurological map that reshapes based on what it learns through those sensory inputs, prompting changed behavior, does a subjective experience emerge out of that? And if not, why not? Why would that just be math and functions, while animals, operating in exactly the same way, experience subjectivity?

    So far, no one can draw a clear line at which we know: here there's no experience and no subjectivity, and here it is.
  • flannel jesus
    1.9k
    Nothing "emerges" from neural nets. You train the net on a corpus of data, you tune the weightings of the nodes, and it spits out a likely response. There's no intelligence, let alone self-awareness being demonstrated.

    There's no emergence in chatbots and there's no emergence in LLMs. Neural nets in general can never get us to AGI because they only look backward at their training data. They can tell you what's happened, but they can never tell you what's happening.
    fishfry

    I don't think this is a take that's likely correct. This super interesting writeup on an LLM learning to model and understand and play chess convinces me of the exact opposite of what you've said here:

    https://www.lesswrong.com/posts/yzGDwpRBx6TEcdeA5/a-chess-gpt-linear-emergent-world-representation
  • Nemo2124
    30
    Where do we draw the line about subjectivity?Christoffer

    What you have here are two forms of subjectivity: one emerging from organisms, reaching its summit in humans (though other animals have it too), and now, apparently, the subjectivity of machines arising from mechanism. So, fundamentally, there's a kind of master-slave dialectic here between the mechanical subject and the human. It is also true that we design and programme the machines, so that we get these highly complex mechanisms that seem to simulate intelligence, whose subjectivity we can acknowledge.

    Even though humans programme and develop the machines, in terms of AI they develop in the end a degree of subjectivity that can be given recognition through language. Rocks, animals and objects cannot reciprocate our communications in the same way that AI-robots can be programmed to do. It is not enough to say that their subjectivity is simulated or false; even at this early stage they are often equipped with machine vision and can learn and interact with their environment.

    The question is how far AI-robots can go: can they be equipped with autonomy and independently start to learn and acquire knowledge about their environment? Many people envisage that we will be living alongside AI-robot co-workers in the future. They can already carry out menial tasks. Is this the stuff of pure science fiction, or do we have to be (philosophically) prepared? At the furthest limit, we may well be co-inhabiting the planet with a second form of silicon-based intelligence (we are carbon-based).
  • Christoffer
    2.1k
    they develop in the end a degree of subjectivity that can be given recognition through language.Nemo2124

    You still have the problem of the Chinese Room. How do you overcome that? It's more important for concluding subjectivity for machines than for other lifeforms: we can deduce that lifeforms have formed through evolution similarly to us, and since we have subjectivity, or at least I know I have subjectivity, I can conclude that other lifeforms have subjectivity as well. But how can I deduce that for a machine if the process of developing it is different from evolution?

    In order for a machine to have subjectivity, its consciousness would at least need to develop over time in the same manner as a brain does through evolution. To reach machine consciousness, we may need to simulate evolutionary progress over its iterations at the same level of complexity as evolution on Earth. What that entails for computer science we don't yet know.

    Beyond that we may find knowledge that consciousness isn't that special at all, that it's rather trivial to "grow" if we know where to start, to know the "seed" for it so to speak. But that would require knowledge we don't yet have.
  • flannel jesus
    1.9k
    In order for a machine to have subjectivity, its consciousness would at least need to develop over time in the same manner as a brain does through evolution.Christoffer

    Why? That looks like an extremely arbitrary requirement to me. "Nothing can have the properties I have unless it got them in the exact same way I got them." I don't think this is it.
  • Christoffer
    2.1k
    Why? That looks like an extremely arbitrary requirement to me. "Nothing can have the properties I have unless it got them in the exact same way I got them." I don't think this is it.flannel jesus

    I'm saying that this is, at the most fundamental level, the only answer to what has qualia that we can deduce in some form.

    We don't know if consciousness can be formed deliberately (direct programming).
    We cannot know if a machine passes the Chinese Room argument and has qualia through behavior alone.
    We cannot analyze the mere operation of the system to determine whether it has qualia.
    We cannot know that other people aren't P-Zombies.

    The only thing we can know for certain is that I have subjectivity and qualia, and I formed through evolution. And since I formed through evolution, I can deduce that you also have qualia, since we are both human beings. And since animals are part of evolution, I can deduce that animals also have qualia.

    At some point, dead matter reaches a stage of evolution and life at which it has subjectivity and qualia.

    Therefore we can deduce either that all matter has some form of subjectivity and qualia, or it emerges at some point of complex life in evolution.

    How do we know when a machine has the same? That is the problem to solve.
  • flannel jesus
    1.9k
    Therefore we can deduce either that all matter has some form of subjectivity and qualia, or it emerges at some point of complex life in evolution.Christoffer

    No, you're making some crazy logical leaps there. There's no reason whatsoever to assume those are the only two options. Your logic provided doesn't prove that.
  • Christoffer
    2.1k
    No, you're making some crazy logical leaps there. There's no reason whatsoever to assume those are the only two options. Your logic provided doesn't prove that.flannel jesus

    Do you have an alternative or additional option that respects science?
  • flannel jesus
    1.9k
    I don't know what you mean by "respects science". You just inventing a hard rule that all conscious beings had to evolve consciousness didn't come from science. That's not a scientifically discovered fact, is it?

    The alternative is that it's in principle possible for some computer AI system to be conscious (regardless of whether any current ones are), and that it can be so without anything like the process of evolution that life went through.
  • Christoffer
    2.1k
    You just inventing a hard rule that all conscious beings had to evolve consciousness didn't come from science. That's not a scientifically discovered fact, is it?flannel jesus

    That consciousness emerged as a feature in animals through evolution is as close to fact as we get about our biology. And the only things in this universe that we so far know to have consciousness are animals, including us.

    So the only argument that can be made through any form of rational reasoning is the one I made. Anything else fails to follow from what we know and from what is most probably true based on the science we have.

    If you have an additional option it has to respect what we scientifically know at this time.
  • flannel jesus
    1.9k
    "we know this is how it happened once, therefore we know this is exactly how it has to happen every time" - that doesn't look like science to me.

    Evolution seems like an incredibly arbitrary thing to latch on to.
  • Christoffer
    2.1k
    "we know this is how it happened once, therefore we know this is exactly how it has to happen every time" - that doesn't look like science to me.flannel jesus

    What do you mean has happened only "once"?

    And in a situation in which you have only one instance of something, is it more or less likely that the same thing happening again requires the same or similar initial conditions?

    Science is about probability: what is most probable?
  • flannel jesus
    1.9k
    if the only conscious animals in existence were mammals, would you also say "lactation is a prerequisite for consciousness"?
  • flannel jesus
    1.9k
    The alternative is something like the vision of Process Philosophy - if we can simulate the same sorts of processes that make us conscious (presumably neural processes) in a computer, then perhaps it's in principle possible for that computer to be conscious too. Without evolution.
  • Christoffer
    2.1k
    if the only conscious animals in existence were mammals, would you also say "lactation is a prerequisite for consciousness"?flannel jesus

    That is not a relevant question, as I'm not deducing from imaginary premises. I'm deducing from the things we know. If that premise were the case, then research into why would have been done, or would at least be pursued, and the reasons would probably be found or hinted at and become part of the totality of knowledge in biology and evolution. However, such a premise doesn't have any grounds in what we know about biology and evolution, so engaging with it becomes just as nonsensical as the premise itself.

    What we do know is that there is a progression of cognitive abilities across all life and that it's most likely not bound to specific species, as cognitive abilities vary across genetic lines. That some attribute consciousness to mammals is more likely a bias towards the fact that we are mammals, and we therefore consider other mammals closer to us than, say, birds, even though some birds express cognitive abilities far greater than many mammals.

    The alternative is something like the vision of Process Philosophy - if we can simulate the same sorts of processes that make us conscious (presumably neural processes) in a computer, then perhaps it's in principle possible for that computer to be conscious too. Without evolution.flannel jesus

    Yes, but my argument was that the only possible path of logic that we have is through looking at the formation of our own consciousness and evolution, because that is a fact. The "perhaps" that you express does not solve the fundamental problem of the Chinese Room.

    We know that we developed consciousness through biology and evolution, so the only known process is that one. If we were to create similar conditions for a computer/machine to develop AI, then that would be more likely to form a consciousness that passes the Chinese Room problem and develops actual qualia.

    As with everything being about probability, the "perhaps" in your argument doesn't carry enough probability in its logic. It is basically saying that if I sculpt a tree, it could perhaps become a tree, compared to me planting a tree, or chemically forming the basic genetic building blocks of a seed and then planting it to grow. One jumps to the conclusion that mere similarity to the object "could mean" the same thing, while the other simulates similar conditions for the object to form. And since we know that the evolutionary progress of both physical and biological systems is at the foundation of how this reality functions, it is most likely required that a system evolves and grows in order to form complex relations to its surrounding conditions.

    I'm not saying that these AI systems don't have subjectivity; we simply do not know. What I'm saying is that the only conditions we could deduce as logically likely and probable are ones in which we create the initial conditions to simulate what formed us and grow a system from them.

    Which is close to what we're doing with machine learning, even though it's rudimentary at this time.
  • flannel jesus
    1.9k
    Connecting it to evolution the way you're doing looks as absurd and arbitrary as connecting it to lactation.
  • flannel jesus
    1.9k
    Yes, but my argument was that the only possible path of logic that we have is through looking at the formation of our own consciousness and evolution, because that is a fact.Christoffer

    100 years ago, you could say "the only things that can walk are things that evolved." Someone who thinks like you might say, "that must mean evolution is required for locomotion".

    Someone like me might say, "Actually, even though evolution is in the causal history of why we can walk, it's not the IMMEDIATE reason why we can walk, it's not the proximate cause of our locomotive ability - the proximate cause is the bones and muscles in our legs and back."

    And then, when robotics started up, someone like you might say "well, robots won't be able to walk until they go through a process of natural evolution through tens of thousands of generations", and someone like me would say, "they'll make robots walk when they figure out how to make leg structures broadly similar to our own, with a joint and some way of powering the extension and contraction of that joint."

    And the dude like me would be right, because we currently have many robots that can walk, and they didn't go through a process of natural evolution.

    That's why I think your focus on "evolution" is kind of nonsensical, when instead you should focus more on proximate causes - what are the structures and processes that enable us to walk? Can we put structures like that in a robot? What are the structures and processes that enable us to be conscious? Can we put those in a computer?
  • Christoffer
    2.1k
    Connecting it to evolution the way you're doing looks as absurd and arbitrary as connecting it to lactation.flannel jesus

    In what way? Evolution is about iterations over time, and nature is filled with different iterations of cognitive abilities, primarily changing as different environments impose different requirements.

    As long as you're not a denier of evolution, I don't know what you're aiming for here?

    "Actually, even though evolultion is in the causal history of why we can walk, it's not the IMMEDIATE reason why we can walk, it's not the proximate cause of our locomotive ability - the proximate cause is the bones and muscles in our legs and back."flannel jesus

    No, the reason something can walk is because of evolutionary processes forming both the physical parts as well as the "operation" of those physical parts. You can't make something "walk" just by having legs and muscles, and without prior knowledge of how the muscles and bones connect and function you don't know how they fit together. Even further, bones and muscles have grown alongside the development of the cognitive operations that use them; they've formed as a totality over time through evolutionary iterations.

    There's no "immediate" reason you can walk as the reason you can walk is the evolution of our body and mind together, leading up to the point of us being able to walk.

    And then, when robotics started up, someone like you might say "well, robots won't be able to walk until they go through a process of natural evolution through tens of thousands of generations", and someone like me would say, "they'll make robots walk when they figure out how to make leg structures broadly similar to our own, with a joint and some way of powering the extension and contraction of that joint."

    And the dude like me would be right, because we currently have many robots that can walk, and they didn't go through a process of natural evolution.
    flannel jesus

    Yes they did. The reason they can walk is that we crudely tried to emulate the functions of our joints, bones and muscles for decades before turning to iterative trial-and-error processes for the design of the physical parts. But even then it couldn't work without training the walking sequence and operation through evolutionary machine learning. It has taken an extremely long time to mimic this rather rudimentary action of simply walking, and we're not even fully there yet.

    And such a feature is one of the most basic and simple things in nature. To underplay evolution's role in forming, over iterations, the walking mechanics and the internal operation behind them, compared with us just brute-forcing something into existence, is simply not rational.

    That's why I think your focus on "evolution" is kind of nonsensical, when instead you should focus more on proximate causes - what are the structures and processes that enable us to walk? Can we put structures like that in a robot? What are the structures and processes that enable us to be conscious? Can we put those in a computer?flannel jesus

    What I don't think you understand about the evolutionary argument is that the complexity of consciousness might first require the extremely complex initial conditions of our genetic makeup, which, even though it is in itself one of the most complex things in the universe, also grows into a being that is even more complex. This level of complexity might not be achievable by just "slapping structures together", as the knowledge of how and in what way may be so complex that it is impossible to acquire; the only way to reach results may be to "grow" from initial conditions into a final complexity.

    Evolution is basically chaos theory at play, and you seem to ignore that fact. We already have evidence within material science and design engineering that trying to "figure out" the best design or material compound can be close to impossible compared with growing a solution through simulated evolutionary iterations of trial and error.
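
    A toy sketch of what "growing" a solution means (a simple (1+1) evolution strategy on a made-up fitness function, purely illustrative):

    ```python
    import random

    def fitness(design):
        # Made-up objective standing in for "how well the design walks".
        return -sum((x - 0.7) ** 2 for x in design)

    design = [random.random() for _ in range(8)]   # random starting "design"

    for generation in range(5_000):
        # Mutate: small random variation, i.e. trial and error rather than insight.
        candidate = [x + random.gauss(0, 0.05) for x in design]
        # Select: keep the variant only if it performs better.
        if fitness(candidate) > fitness(design):
            design = candidate

    print(round(fitness(design), 4))  # the solution was grown, not designed outright
    ```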

    This is why these new AI models function as well as they do: they're NOT put together by perfect design; they're programmed with conditions from which they "grow" and a path along which they "grow". The fundamental problem, however, is that in comparison to "walking", the science of consciousness and the brain hasn't been able to pinpoint consciousness as a mere function; according to current research in this field, it is an emergent result of layers of complex operations.

    In essence, if walking is extremely hard to achieve due to similar complexity, simulating actual consciousness might be close to impossible if we don't form an extremely complex path of iterative evolution for such a system.
  • flannel jesus
    1.9k
    No, the reason something can walk is because of evolutionary processes forming both the physical parts as well as the "operation" of those physical parts.Christoffer

    so robots can't walk?
  • Christoffer
    2.1k
    so robots can't walk?flannel jesus

    Maybe read the entire argument or attempt to understand the point I'm making before commenting.

    Did you read the part about how robots can even walk today? What the development process of making them walk... is really inspired by?
  • flannel jesus
    1.9k
    "inspired by" is such a wild goal post move. The reason anything that can walk can walk is because of the processes and structures in it - that's why a person who has the exact same evolutionary history as you and I, but whose legs were ripped off, can't walk - their evolutionary history isn't the thing giving them the ability to walk, their legs and their control of their legs are.

    There's no justifiable reason to tie consciousness to evolution any more than there is to tie it to lactation. You're focussed too hard on the history of how we got consciousness rather than the proximate causes of consciousness.