• MoK
    1.8k

    I used 'specific' when it comes to the function; I don't mean that the function is one-to-one. Think of a cup of tea. There is a function that describes the shape of the cup in terms of the positions of its parts. If the positions of the parts are the only property that the parts have, then there is only a single/specific function that describes the property of the system in terms of the properties of the parts. If the parts have other properties, then there are more functions that describe the properties of the system in terms of the properties of the parts. I call the set of all functions that describe the properties of the system in terms of the properties of the parts S. This set is complete; you cannot add another function to it.
  • flannel jesus
    2.9k
    you haven't shown that anything is complete though. You say "exhausted", it seems like you just want me to take your word for it. You're not making a case for it.
  • MoK
    1.8k
    How do you know this? There are those that disagree and say that consciousness is not a function of the properties of the parts. They also often claim to 'know' this.noAxioms
    That is true given the definition of weak emergence.

    This seems very inconsistent. Why is one a function of the parts and the other is not a function of parts with nearly identical relevant properties?
    noAxioms
    What do you mean by one and the other?

    Obviously some physical change (a deliberate one) would have to lack a physical cause.noAxioms
    Such as?

    The laws describing the states of matter would necessarily be incomplete.noAxioms
    What do you mean by incomplete?
  • MoK
    1.8k
    you haven't shown that anything is complete though. You say "exhausted", it seems like you just want me to take your word for it. You're not making a case for it.flannel jesus
    I said enough, otherwise tell me the number of functions in a system that describe the properties of the system in terms of the properties of the parts, if the locations of parts are the only property that parts have?
  • flannel jesus
    2.9k
    otherwise tell me the number of functionsMoK

    This is a completely inappropriate question for you to ask me. This is the question YOU have to answer. I never said anything about how the functions or properties are exhausted, YOU did. For you to know that, you have to somehow have a full list of all those properties and all those functions, and a proof that there can't be any more functions. My position has no such constraint.

    Your approach here has been really weird. You're saying now that the functions don't have to be one to one, but you said before that the available properties have been exhausted. That statement only makes sense if you think a property can "be exhausted" by being used in one function and then not being able to be used again, with other properties in some other function.

    Your entire approach here is completely bizarre. I don't think you have any idea what you're talking about. Weak emergence, strong emergence, and properties being functions of other properties - you talk about all three of those things seemingly unaware of how drastically differently everyone else uses those terms. You're in way over your head here.
  • MoK
    1.8k

    Why don't you answer my question?
  • flannel jesus
    2.9k
    because the question doesn't even make sense. It's like a Christian asking an atheist, "oh yeah, well how many angels are there?" What the fuck do you mean how many angels are there? I'm a fucking atheist. YOU tell ME how many angels there are.
  • MoK
    1.8k
    because the question doesn't even make sense. It's like a Christian asking an atheist, "oh yeah, well how many angels are there?" What the fuck do you mean how many angels are there? I'm a fucking atheist. YOU tell ME how many angels there are.flannel jesus
    There is only one function that describes the shape of a system in terms of the locations of the parts.
  • flannel jesus
    2.9k
    I don't know why you think that or where you got that from. You think properties can be exhausted, you think functions can be exhausted, I think you've invented this whole idea of exhausting properties, I don't think it comes from anybody who knows what they're talking about when it comes to emergence.

    Think about a Turing complete system. Technically, you can write any program in any Turing complete system - the size and capabilities of that program are limited by the number of units in the system, but if you increase the units (like individual logic gates and storage capacity), you increase the number of things it can do.

    Even though a particular logic gate may have remarkably few properties, when you combine many logic gates, the number of new possible programs - with new possible system level properties - increases rapidly. "Exponentially" is probably an understatement. More rapidly than that.

    So the number of possible system level properties isn't just limited by the number of properties of the components, but also increases exponentially with the number of those components as well. You seem to think that if you count the properties of the components, you can somehow figure out a specific number of properties any system made of those components can have, without taking into account this fact about turing complete systems. There's genuinely no hard limit from just the properties of the components - add more components in the right ways, and the higher level systems can have more and more properties. There's genuinely no limit once you have turing completeness.
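    The faster-than-exponential growth being gestured at here can be made concrete with a back-of-the-envelope count (a minimal sketch, not anyone's formal argument in this thread): a circuit with n binary inputs can, in principle, realize any of 2^(2^n) distinct boolean functions, since each of the 2^n possible input rows can independently map to 0 or 1.

```python
# Each of the 2**n possible input rows of an n-input circuit can map to
# 0 or 1 independently, so the number of distinct boolean functions of
# n inputs is 2**(2**n) - doubly exponential in n.
def num_boolean_functions(n: int) -> int:
    return 2 ** (2 ** n)

for n in range(1, 5):
    print(n, num_boolean_functions(n))
# 1 4
# 2 16
# 3 256
# 4 65536
```

Adding a single input squares the count, which outpaces any fixed exponential in n.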

    So where are you getting these ideas about exhausting properties from? Did you just make it up?
  • MoK
    1.8k

    I am not talking about the number of shapes you can make by changing the location of parts. I am talking about the function that can describe all shapes in terms of the locations of the parts.
  • flannel jesus
    2.9k
    then you're not talking about properties

    When someone says consciousness is a property of certain complex systems, they're not saying "consciousness is a specific shape that the system can take". Properties are not shapes.
  • MoK
    1.8k
    then you're not talking about propertiesflannel jesus
    I am talking about the properties of parts and how they are related to the quality of experience, so-called Qualia. I am saying that Qualia are a function of the properties of the parts of the brain.

    Properties are not shapes.flannel jesus
    I didn't say that properties are shapes. Shapes are the result of the properties of the parts having different values.
  • flannel jesus
    2.9k
    okay, you've obviously developed your entire unique language for talking about this, that uses the same terms other people use but with entirely new meanings unique to you. I don't think I can wade through it.
  • MoK
    1.8k
    okay, you've obviously developed your entire unique language for talking about this, that uses the same terms other people use but with entirely new meanings unique to you. I don't think I can wade through it.flannel jesus
    As you wish!
  • noAxioms
    1.7k
    If consciousness is fundamental, then we can't measure it in the ways we measure everything else.Patterner
    Sure you can. You can measure its effect on everything else.



    How do you know this? There are those that disagree and say that consciousness is not a function of the properties of the parts. They also often claim to 'know' this. — noAxioms

    That is true given the definition of weak emergence.
    MoK
    It does not logically follow from a mere definition that any specific case meets that definition. So no, it is not true given the definition. For it to be true, it must be the case that consciousness is a function of human parts that have certain relevant properties, and in complete contradiction, not a function of non-human parts that have the same relevant properties.


    This seems very inconsistent. Why is one a function of the parts and the other is not a function of parts with nearly identical relevant properties? — noAxioms
    What do you mean by one and the other?
    Well you deleted all the context.
    One: human parts (your assertion), and the other: non-human parts with the same relevant properties, as described by @RogueAI


    Obviously some physical change (a deliberate one) would have to lack a physical cause. — noAxioms

    Such as?
    Such as any choice involving what is typically defined as free will.
  • Patterner
    1.6k
    If consciousness is fundamental, then we can't measure it in the ways we measure everything else.
    — Patterner
    Sure you can. You can measure its effect on everything else.
    noAxioms
    Can you elaborate? How do you measure the effect consciousness has on everything else? What's the method, or procedure? Which sense, or what tool, is used?
  • sime
    1.1k
    The unity of a proposition in language is one thing; the unity of experience is something else entirely. When I imagine a red triangle, I don’t just have “red” and “triangle” floating around in my head in some grammatical alignment. I have a coherent perceptual experience with vivid qualitative content. The parts of the brain firing don’t have that quality. There’s nothing red in the neurons, just as there’s nothing red in a sentence that uses the word “red.”

    So no, I don’t buy that this is a problem of grammatical form. Experience isn’t grammar. You can’t dissolve the hard problem by shifting the conversation to the philosophy of language. You just move the goalposts and pretend the mystery went away.
    RogueAI

    A proposition is meant to describe and thereby predict the world.

    So then what of the unity of the proposition?

    Consider the sentence "The cat sat on a mat" that syntactically consists of a cleanly separated subject, predicate and object. Is this syntactical partition an aspect of the semantic content of the sentence? This is related to the question as to the extent to which subject-object-predicate structure has predictive value.

    Compare to token embeddings in LLMs. A standard tokenizer encodes text corpora discretely and atomically at the individual word level, preserving the subject-predicate-object structure. These tokens are fed into a neural network encoder that learns and outputs a holistic language in which chains of words are encoded as atomically indivisible new words, such that the subject-predicate-object structure collapses.

    In philosophical parlance it might be said that the objective of an LLM encoder is to maximize the unity of the propositions of the input language, by compressing them into holistically encoded "mentalese" that is a closer representation of reality by virtue of each of the encoded sentences representing their entire corpus, hence having higher predictive value than the original sentences.

    Is it possible to represent the meaning of "strong" emergence in holistic mentalese? I think not as easily, if at all. Certainly it is very easy to express the problem of strong emergence in formal syntax (however one interprets emergence), by merely pointing out that the relation Sitting(Cat,Mat) and the list [Cat,Sitting,Mat] will both coincide with the same syntactical sentence, and by arguing that attempts to fix this problem through syntactical enrichment will lead to the semantic problem of Bradley's infinite regress. But in mentalese, words aren't explicitly defined in terms of a priori syntactical structure, but implicitly in terms of an entire open-ended and evolving body of text corpora, plus data from other modalities.

    Strong emergence concerns the semantic discrepancy between logically atomic semantics (as in Russell and Wittgenstein's logical atomism, as exemplified by tokenizers) and the infinite continuity of experience. But the semantic discrepancy between mentalese and experience is much narrower, due to mentalese being semantically continuous and having higher predictive value. A semantic gap still remains, since mentalese is not perfectly predictive, so I think the philosophical lesson of LLM encoders is that the unity of the proposition problem can be recast as the indeterminacy of inferential semantics.
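    The collapse of explicit subject-predicate-object structure under holistic encoding can be illustrated with a toy sketch. This is an illustration only, not how real LLM encoders work (they use order-aware attention); the vocabulary, vector size, and mean-pooling used here are invented for the example:

```python
import numpy as np

# Toy "word embeddings": a random vector for each vocabulary word.
rng = np.random.default_rng(0)
vocab = {w: rng.standard_normal(8)
         for w in ["the", "cat", "sat", "on", "a", "mat"]}

def tokenize(sentence):
    """Word-level tokenization: order (and hence S-P-O structure) is kept."""
    return sentence.lower().split()

def holistic_encode(sentence):
    """Mean-pool the word vectors: one indivisible code, order discarded."""
    return np.mean([vocab[t] for t in tokenize(sentence)], axis=0)

s1 = "the cat sat on a mat"
s2 = "a mat sat on the cat"   # subject and object roles swapped

print(tokenize(s1) == tokenize(s2))                           # False
print(np.allclose(holistic_encode(s1), holistic_encode(s2)))  # True
```

At the token level the two sentences are distinct; after pooling, their encodings coincide, because mean-pooling sees only the multiset of words. Real encoders avoid this particular collapse, but the example shows the sense in which a holistic code need not preserve the syntactic partition of the input.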
  • flannel jesus
    2.9k
    Sure you can. You can measure its effect on everything else.noAxioms

    I'm also curious about this. Effects are measured in physical change. You measure a physical change, how do you determine that it was fundamental consciousness that caused that rather than something else? Some other physical cause?
  • Wayfarer
    25.2k
    If consciousness is fundamental, then we can't measure it in the ways we measure everything else.
    — Patterner

    Sure you can. You can measure its effect on everything else.
    noAxioms

    How?
  • MoK
    1.8k
    It does not logically follow from a mere definition that any specific case meets that definition. So no, it is not true given the definition. For it to be true, it must be the case that consciousness is a function of human parts that have certain relevant properties, and in complete contradiction, not a function of non-human parts that have the same relevant properties.noAxioms
    I invite you to read the OP again.
  • noAxioms
    1.7k
    If consciousness is fundamental, then we can't measure it in the ways we measure everything else.
    — Patterner
    Sure you can. You can measure its effect on everything else. — noAxioms
    Can you elaborate?
    Patterner
    How?Wayfarer
    I'm also curious about this.flannel jesus
    Slow reply, but primarily I am talking about mind interactionism here, which necessitates interaction between mind and physical (usually substances, but can be property dualism).
    Given this, the interaction point must be somewhere, and the tool used depends on where that point is.
    Even with panpsychism, all matter has mental properties that are not described by current naturalistic physics.

    So, anticipating pushback on this, I tried to investigate non-interacting forms that still deny mental processes supervening on the physical, aside from epiphenomenal views, which refute any talk about qualia and such. I could not find any, so my assertion above stands. A counterexample is required.


    You measure a physical change, how do you determine that it was fundamental consciousness that caused that rather than something else?flannel jesus
    You don't know of course, which is a good reason why physicalism is a valid position.


    I invite you to read the OP again.MoK
    I did and saw a long list of assumptions, most but not all of which I would accept. That's fine. What I'm pointing out is that the assumptions are not enough.

    Granting these assumptions means that there is a function that describes the property of the system.MoK
    This does not follow from the list of assumptions. It's an assertion. I'd not even disagree with the assertion except the part where you suggest that it follows from the list of assumptions.
    Sure, there's what you call a 'function' that describes a certain property of the system. That's just yet another assumption. It says nothing about if this property is emergent, strong or weak.

    The only available properties are the properties of parts though.
    That also does not follow from the list of assumptions you provided.
    Therefore, the property of such a system is a function of the properties of the parts.
    That arguably would follow from the above statement, which unfortunately doesn't follow from the assumptions.

    Hence my question, "How do you know this"? How have you falsified the view that human experience is not emergent from the physical parts?
  • Patterner
    1.6k
    I could not find any, so my assertion above stands. A counterexample is required.noAxioms
    Not sure what you mean. What example of yours would I be countering? Just curious. I'm not looking to counter you. I'm just wondering how you would measure such a thing.


    Effects are measured in physical change. You measure a physical change, how do you determine that it was fundamental consciousness that caused that rather than something else? Some other physical cause?flannel jesus
    I don't know about fundamental consciousness. I don't think we can be conscious of the things we are conscious of without some kind of fundamental consciousness. But I don't think the subjective experience of a particle is causing anything. I don't know what level of complexity I think an entity must attain before its subjective experience can be causal, any more than any physicalist can say at what point they think the physical complexity of the brain causes consciousness to emerge.

    But, eventually, consciousness is causal. I think what Hofstadter says about the comet approaching and hitting Jupiter at the beginning of I Am a Strange Loop makes a good case for this. What he talks about can't be explained by physical causes.
  • flannel jesus
    2.9k
    I don't know what level of complexity I think an entity must attain before its subjective experience can be causalPatterner

    That's a pretty big problem. Everything else fundamental is also fundamentally causal. It's not fundamental now, causal later - it's causal at a fundamental level. If consciousness isn't causal at a fundamental level, but it is causal at a macroscopic scale... then the whole idea, in my opinion, crumbles
  • Pierre-Normand
    2.7k
    I'm bringing this back here from my GPT-4 thread since the core issue raised by @MoK is the emergence of the ability to create ideas.

    The only mental event that comes to mind that is an example of strong emergence is the creation of ideas. They are, however, not the result of the properties of the matter, but solely created by the mind. The ideas are irreducible yet distinguishable. An AI is a mindless thing, so it does not have access to ideas. The thought process is defined as working on ideas with the aim of creating new ideas. So, an AI cannot think, given the definition of thinking and considering the fact that it is mindless. Therefore, an AI cannot create a new idea. What an AI can do is to produce meaningful sentences only given its database and infrastructure. The sentence refers to an idea, but only in the mind of a human interacting with an AI. The sentence does not even have a meaning for an AI since a meaning is the content of an idea!MoK

    What I would have thought were strongly emergent phenomena displayed by rational animals like us are such things as cognitive and rational agentive abilities. Ideas are what we come to be able to trade in discourse when those abilities are suitably developed. They're the abstract shareable contents of our intentional (that is, representational) attitudes such as our intentions, beliefs and speech acts. You claim that the creation of ideas constitutes a case of strong emergence because ideas are created by the mind rather than, I suppose, the body. But what about the mind itself? Do you view the mind to be something distinct from the collection of cognitive abilities that animals or human beings manifest? I'm also trying to understand whether what you now describe as a case of strong emergence—the creation of ideas by the mind—is understood by you to be something quite separate from the material embodied life of a human being such that it cannot be "a function" of its parts in accordance with your proposed definition of weak emergence.
  • MoK
    1.8k
    You claim that the creation of ideas constitutes a case of strong emergence because ideas are created by the mind rather than, I suppose, the body. But what about the mind itself?Pierre-Normand
    We are dealing with strong emergence in the case of ideas since they are irreducible, yet they have a single content that can be experienced. Ideas are irreducible mental events since they can be experienced. There are other mental events, like experiencing a cup. To me, experiencing is an activity of the mind. I have a thread on substance dualism that you can find here.

    Do you view the mind to be something distinct from the collection of cognitive abilities that animals or human beings manifest?Pierre-Normand
    Yes, the mind to me is a substance with the ability to experience, freely decide, and cause.

    Do you view the mind to be something distinct from the collection of cognitive abilities that animals or human beings manifest?Pierre-Normand
    Yes. The mind is a separate substance. Matter cannot even be the cause of its own change (I have another thread on this topic that you can find here). So the Mind is needed to keep the order of matter. Once the order is in place, you could even have life.

    I'm also trying to understand whether what you now describe as a case of strong emergence—the creation of ideas by the mind—is understood by you to be something quite separate from the material embodied life of a human being such that it cannot be "a function" of its parts in accordance with your proposed definition of weak emergence.Pierre-Normand
    Experiencing a cup is a weak emergence considering all the complexities between experiencing the cup and the cup. We, however, have the ability to experience ideas as well, which is a strong emergence.
  • Wayfarer
    25.2k
    Yes, the mind to me is a substance with the ability to experience, freely decide, and cause… The mind is a separate substance.MoK

    If you wouldn’t mind, I’d like to hear what you believe ‘substance’ means.
  • noAxioms
    1.7k
    Not sure what you mean. What example of yours would I be countering?Patterner
    P1) Human consciousness does not supervene on physical processes.
    P2) Qualia are part of human consciousness
    C1) Human consciousness is a 2nd kind of property/substance that is not part of the physical processes described by physics
    P3) A human talks/writes about qualia, a physical action
    C2) Consciousness causes physical effects

    C1 encompasses varying kinds of dualism. I don't support this, so the example needs to come from those that do.
    C2 is a statement of interactionism. Descartes put the interaction in the pineal gland, supposedly because "I cannot find any part of the brain, except this, which is not double", whatever that means.
    I suspect that it was selected due to its inaccessibility (at the time) in a living subject.

    Chalmers seems to deny interactionism (I may have this wrong), but I could not find any explanation (except obfuscation) of how he gets around it. He has no counter-story. Either my logic above is not valid, or mental substances/properties have non-mental physical effects that are open to being measured.


    MoK has not replied to my identification of logic flaws in his OP, so I presume agreement.

    We are dealing with strong emergence in the case of ideas since they are irreducible, yet they have a single content that can be experienced. Ideas are irreducible mental events since they can be experienced.MoK
    This is not consistent with your definition of strong emergence in the OP.
    I mean, your OP implies consciousness to not be strong emergence, and it too can be experienced. Emergence (weak at least, per your OP) means it is a function of the parts, not whether it is experienced, nor whether it is reducible.

    To be honest, I cannot figure out your stance, since weak emergence seems to be a conclusion of a physicalist, and yet you seem to support substance dualism.

    Experiencing a cup is a sort of weak emergence considering all the complexities between experiencing the cup and the cup. We, however, have the ability to experience ideas as well, which is a strong emergence.
    Experience of one thing is arguably weak emergence, but experience of a different thing is strong emergence? Really? All without any demonstration of the difference, or why these things cannot be emergent from different (non-human) parts with the same relevant properties.
  • MoK
    1.8k
    If you wouldn’t mind, I’d like to hear what you believe ‘substance’ means.Wayfarer
    A substance is something that objectively exists. An experience is something that subjectively exists. I think that is all forms of existence.
  • MoK
    1.8k
    This is not consistent with your definition of strong emergence in the OP.noAxioms
    I have long struggled with whether to consider ideas a form of strong emergence. At first, I thought that they were a form of weak emergence, since we can only form an idea from a meaningful sentence in which the words are arranged in a certain way. So, it seems that an idea is a function of how words are arranged in a sentence. But then I recognized that a meaningful sentence is only a way that we communicate an idea. An idea does not have parts in the end, since it is irreducible, so we are dealing with something that has no parts, yet it is meaningful to us. So, when it comes to language, a sentence, whether meaningful or meaningless, is a form of weak emergence as long as we are not talking about the meaning of a sentence. The idea that is derived from reading a sentence is something more than the sentence, though, so we are dealing with a form of strong emergence when it comes to ideas.