Comments

  • On emergence and consciousness
    It's all about the molecules, atoms, and proteins and electrons.Patterner

    It's all about molecules, atoms, proteins and electrons, but it's not just about those things. As proper parts of a living organism, those constituents are caught up into functionally organized anatomical structures (such as cell membranes) and channeled through the finely tuned and regulated metabolic pathways that Brian Greene provides striking descriptions of. Those are indeed processes that arise in far-from-equilibrium thermodynamic conditions such that relatively low-entropy forms of energy (such as incident solar radiation or energy-dense molecules like glucose) get harnessed by the molecular machinery to produce work in such a way as to sustain and reproduce this machinery. What is being sustained and reproduced isn't the parts, but the form: that is, the specific functional structure of the organism. The parts, and the proximal interactions between them, don't explain why the organism is structured in the way it is, or why it behaves in the way it does. Rather, the high-level norms of functional organization of the organism, characterised in the higher-level terms of anatomy and physiology, explain why the individual atoms, electrons, protons, and organic molecules are being caught up and channeled in the specific way that they are to sustain processes that are geared towards maintaining the whole organism (at least for a while) away from complete decay and thermodynamic equilibrium.
  • On emergence and consciousness
    Juarrero’s Dynamics in Action begins with Aristotle.Wayfarer

    Neat! I'll start with watching this and then perusing her Dynamics in Action. I'm always interested in looking into detailed accounts of downward-causation by philosophers who have produced one.
  • On emergence and consciousness
    But in non-equilibrium thermodynamics and the study of complex systems, contextual factors are not just boundary conditions; they are essential to the system’s dynamics. I’m studying that through Alicia Juarrero.Wayfarer

    That's interesting. I didn't know Juarrero. I'll look her up. In general, I view essentially relational metaphysical and phenomenological stances as correctives for God's-eye-view stances. With Hilary Putnam, this takes the form of criticism of "metaphysical realism," in favor of a "realism with a human face" that draws inspiration, in part, from the American pragmatist tradition of Peirce, Dewey and James. With Michel Bitbol, it stems from bringing Kant's "Copernican revolution" to the interpretation of quantum mechanics while, with Carlo Rovelli, it stems from drawing similar insights from the analysis of thermodynamical concepts. So far, those all are reactions to the excesses of the modern mechanistic conception of the world. But I think a more perennial tradition that draws on Aristotle's hylomorphism and on Wittgenstein and Ryle's "philosophical behaviorism," as exemplified in the work of, for instance, Elizabeth Anscombe, David Wiggins, Anthony Kenny and Peter Hacker, simply sidesteps those excesses.
  • On emergence and consciousness
    Would you say they need to take context into account in a way that classical physics did not?Wayfarer

    Yes, one might say this, appealing to the pragmatic context of the theoretician, observer or experimentalist who is dealing with high-level phenomena. Out-of-equilibrium (and irreversible) processes are characterised by a drop in local entropy whereby the possible microphysical states get coarse-grained, as it were, into equivalence classes of macroscopically indistinguishable states. Carlo Rovelli has convincingly argued that this process of coarse-graining, and hence the local lowering of entropy, can only be defined in relation to an observer that, by means of interacting with the system, gathers memory traces of it (whereby the direction of the arrow of time gets defined).

    I think Rovelli's lesson can be generalized, and made intuitive, beyond the rather technical case of non-equilibrium thermodynamical processes. Whenever strongly emergent features of a material process can be identified, observers don't merely select, in accordance with their allegedly "parochial" interests, which high-level features of a material system they attend to. Weak emergentists would claim that it's merely due to epistemic limitations that the high-level explanations of the phenomena are being appealed to, while, as they argue, low-level material laws determine everything that happens. But our decisions to group low-level states into macroscopic equivalence classes, defined with the concepts that belong to economics, biology, psychology, etc., don't merely reflect our ignorance of the micro-physical details. Rather, they often are part of the process whereby we contribute to sustaining or creating the very high-level phenomena at issue, chief among them ensuring our survival and flourishing as the high-level entities that we ourselves are, and those of the other organisms that we share our ecosystems with.
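    Rovelli's point about observer-relative coarse-graining can be made concrete with a toy model (my own sketch, not Rovelli's; the spin/coin encoding and the choice of macrovariable are illustrative assumptions): the same set of microstates supports different coarse-grainings, and the Boltzmann entropy of a macrostate only gets defined once an observer has chosen which microstates count as equivalent.

```python
from collections import defaultdict
from itertools import product
from math import log

# Microstates: all 2^n configurations of n binary spins (or coin flips).
n = 8
microstates = list(product([0, 1], repeat=n))

# One observer-chosen coarse-graining: group microstates by total magnetization
# (the number of 1s). A different observer could instead group by, say, the
# value of the first spin alone, yielding different macrostates and entropies.
macro = defaultdict(list)
for s in microstates:
    macro[sum(s)].append(s)

# Boltzmann entropy of each macrostate: S = ln(Omega), where Omega is the
# number of microstates compatible with the macrostate.
for m in sorted(macro):
    omega = len(macro[m])
    print(f"magnetization={m}: omega={omega}, S={log(omega):.3f}")
```

    The many-to-one grouping, and hence the entropy assigned to each macrostate, is fixed by the chosen macrovariable rather than by the microphysics alone, which is the formal core of the relational point.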
  • On emergence and consciousness
    Could we agree on the definition of weak emergence, which occurs when the property of the system is a function of the properties of its parts? That, of course, requires that the system be reducible to its parts. Please let me know what you think, and we can go to the next step.MoK

    This definition would be more precise if we substituted "is deducible from" or "is grounded in" for "is a function of". That's because, as I've suggested, many proponents of strong emergence, whom we may call "compatibilists" (by close analogy with the corresponding stance in the philosophy of free will and determinism), grant both the causal closure of the micro-physical domain and the thesis of the supervenience of high-level phenomena, such as mental acts, on the physical domain. That is, once the physical state of a person is fully specified, then this person's mental properties also are fully specified (as a function of this physical state, if you wish). What is denied, however, from the strong emergentist stance, is that the mental properties of the person can be deduced or derived solely from those physical properties. And likewise for the higher-level (e.g. psychological) principles that govern those high-level phenomena. Rather, one must consider normative, functional and/or organisational principles that arise from the specific interactions of those parts and that can't be deduced from the low-level (i.e. physical) laws governing them.
  • On emergence and consciousness
    I'm asking if anyone has an example of strong emergence. For those who think consciousness is, I'm wondering if there are others.Patterner

    I view the objects and phenomena of pretty much all the special sciences (e.g. biology, ecology, psychology, economics, etc.) to be strongly emergent in relation to the objects and phenomena of the material sciences such as physics or chemistry. Some, like our @apokrisis, argue (and I would agree) that even within physics, especially when the thermodynamics of non-equilibrium processes is involved, many phenomena are strongly emergent in the sense that they aren't intelligible merely in light of, or deducible from, the laws that govern their smaller components.
  • On emergence and consciousness
    If the macro property is directly derivable from the properties and interactions of its parts - as in, it can analytically be confirmed to be a necessary consequence of the interactions of the parts - I would say that that IS what weak emergence is. It's not too weak to guarantee weak emergence, it's basically the definition of weak emergence.flannel jesus

    I agree, but here it's the idea of "derivability" that does the heavy lifting in guaranteeing weak emergence. However, in his OP, @MoK derived the conclusion that there ought to be such a "function" from the premise that there ought to be a "reason" why the system has the emergent property that it has. This inference isn't valid. When some mental state M of an organism supervenes on the micro-physical configuration P of the parts of this organism, the reason why it is M specifically that is being realized by P may be that M non-accidentally satisfies some high-level biological or psychological features that characterise organisms of this specific kind, and it need not be derivable from the physical features of P alone. Or, as ChatGPT o3 phrased it: "Supervenience already implies a function from micro-configurations to macro-properties: if two systems are identical in all micro respects, they must be identical in their macro-properties. But this function need not be definable in purely micro-level terms. The criteria that fix the mapping may depend on high-level structures or capacities that cannot themselves be specified without invoking macro-level concepts."
  • On emergence and consciousness
    * An example of weak emergence is like antiferromagnetism in which the system is reducible to atoms and there is a function that describes the property of the system, specific arrangement of the spins of atoms, in terms of the property of its parts, namely locations and the direction of spins of atoms.
    ** Strong emergence is defined as when a system is irreducible to its parts. This also means that there is no function that can describe the property of the system in terms of the properties of its parts as well. On the contrary, if there is a function that describes the property of the system, then the system must be reducible to something.
    MoK

    The condition that the macro-property, or holistic property, be a function of the properties of the parts of a system (including, presumably, relational properties) seems too weak to preclude strong (irreducible) emergence and also too weak to guarantee weak (reducible) emergence.

    It's too weak to preclude strong emergence since strongly emergent properties like (arguably) consciousness often are granted to supervene* on lower-level properties (such as physical states of individual neurons) despite not being reducible to them. Supervenience alone guarantees that there is a many-to-one function from the set of possible low-level configurations to the high-level properties that they realize, but it doesn't guarantee that this function can be given independently of the high-level, or formal, principles that govern the mapping. That is, what it is that determines that some given configuration, as expressed in low-level terms, instantiates the high-level property that it does may be high-level features of the system, such as its molar capabilities, that can't be described using the low-level concepts.

    (*Supervenience already implies a function from micro-configurations to macro-properties: if two systems are identical in all micro respects, they must be identical in their macro-properties.)

    It's too weak to guarantee weak emergence (i.e. guarantee reducibility) for the same reason. The satisfaction of this functional condition merely guarantees supervenience, but doesn't guarantee reducibility.
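    The point can be given a minimal illustration (a toy example of my own devising; the circuit names and encoding are assumptions for the sketch): the mapping from micro-configuration to macro-property exists and is many-to-one, but the equivalence classes are carved out by a behavioral, functional criterion rather than by any syntactic feature of the micro-descriptions.

```python
from itertools import product

# Micro-level descriptions: structurally different toy "circuits" over two bits.
circuits = {
    "nand_of_nands": lambda a, b: 1 - ((1 - a * b) * (1 - a * b)),  # computes AND
    "and_gate":      lambda a, b: a * b,                             # also AND
    "or_and_not":    lambda a, b: (a | b) & ~(a & b) & 1,            # computes XOR
}

# Macro-level classification: group circuits by the function they compute, a
# criterion stated at the level of input/output roles (what the circuit does),
# not at the level of its internal wiring.
def behavior(f):
    return tuple(f(a, b) for a, b in product([0, 1], repeat=2))

classes = {}
for name, f in circuits.items():
    classes.setdefault(behavior(f), []).append(name)
```

    Here "nand_of_nands" and "and_gate" land in the same macro-class despite their different micro-structures (multiple realizability), and the supervenience function from circuits to classes, though perfectly well-defined, is specified via the macro-level notion of computing the same function.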
  • The Question of Causation
    This kind of physicalist reasoning has been subjected to careful critique by philosophers and neuroscientists alike. A notable example is Philosophical Foundations of Neuroscience by Max Bennett (a neuroscientist) and P. M. S. Hacker (a philosopher of mind and Wittgenstein scholar - review). They argue that many claims made in the name of neuroscience rest on category errors—particularly the idea that “the brain thinks,” “the brain interprets,” or “the brain understands,” when in fact it is persons who do these things. The brain is a necessary condition, yes—but not the experiencing subject. Treating the brain as a standalone thinking agent or process is not a scientific theory but philosophical confusion.Wayfarer

    Reading your exchange with @Philosophim I was tempted to jump in and mention precisely this book and argument by Bennett and Hacker. But you did.

    Hacker's stance in the philosophy of mind is instructive. As it distinguishes what it is that brains do from what it is that persons (who have brains as parts) do, it stresses a sort of Aristotelian duality of form and matter, but not a dualism.

    The forms that are intelligibly disclosed when we widen our focus from the material processes that take place in brains to the role that those processes play in enabling normatively structured behavior in the wider context of the life of an animal, or human being, aren't a separate substance. They are indeed the forms that matter (e.g. brains, bodies, ecological niches) takes when it is caught up in normative patterns of self-differentiating living activity. Looking at this activity closely, material causes contribute to explaining how it is enabled. Formal causes (like mental states or principles of physiology or ecology) contribute to explaining why this rather than that higher-level kind of activity (the actualization of a capacity) gets intelligibly exercised in such and such circumstances, and not normally in others, even though both normal and abnormal biological processes, and both rational and irrational behaviors, happen consistently with physical laws.

    In short, material/physical causation can't fully explain living processes or mental life because this sort of causation fails to disclose any sort of distinction between what should and what shouldn't happen from the standpoint of the organism, or its internal norms/forms of flourishing.
  • Assertion
    My second thought is: Like just about everyone else who talks about AI, you're accepting the fiction that there is something called a chatbot, that it can be talked about with the same kind of entity-language we used for, e.g., humans. I maintain there is no such thing. What there is, is a computer program, a routine, a series of instructions, that as part of its routine can simulate a 1st-person point of view, giving credence to the idea that it "is ChatGPT." I think we should resist this way of thinking and talking. In Gertrude Stein's immortal words, "There's no there there."J

    I don't quite agree with this, or with the position claimed by Goddu, Noë and Thompson in the passage quoted by @Joshs (although I'm sympathetic to the embodied and enactive cognition stances of Noë and Thompson regarding human beings and animals). Those skeptical positions seem to me to rest on arguments that are overly reductionistic because they are insensitive to the distinction of levels between enabling mechanisms and molar behaviors, and, as a result, misconstrue what kinds of entities AI chatbots are (or what kinds of acts their "outputs" are). I don't want to argue for this in the present thread, though (but I could do so elsewhere), since it isn't tied closely enough to the OP topic of assertions. I had only wished to highlight the one specific respect—personhood—in which I do agree AI chatbots don't really make assertions with the same sort of significance human beings do. Since you raised the pertinent question of personhood as an instituted status, and of what some Supreme Court might or might not be able to rule, I may comment a bit more on it later on.
  • Assertion
    Having said that, I should also say that I'm not very familiar with how computer programmers talk about their work. Is "inner state" a common term? If so, do you know what they're meaning to designate? Could there be a distinction between inner and outer, speaking strictly about the program?J

    In discussions about LLMs, machine learning, and artificial neural networks, the phrase "inner state" is hardly ever used. However, when the phrase is used to characterize the mental states of human beings—such as thoughts, beliefs, and intentions—it often involves a philosophically contentious understanding of what is "inner" about them. Is it merely a matter of the person having privileged epistemic access to these states (i.e., without observation)? Or is it, more contentiously, a matter of this privileged first-person access being infallible and not needing publicly accessible (e.g., behavioral) criteria at all?

    I think a Rylean/Wittgensteinian understanding of embodied mental life leaves room for the idea of privileged epistemic access, or first-person authority, without making mental states hidden or literally "inner." Such a view amounts to a form of direct-realist, anti-representationalist conception of mind akin to Davidson's: what we refer to when we speak of people's mental states (including our own) is a matter of interpreting the moves that they (and we) are making in language games that take place in the public world (and this world isn't describable independently of our understanding of those games).

    Turning to LLM-based conversational assistants (i.e., current chatbots), although the exact phrase "inner state" is seldom used, the idea that they have literally "internal" representations is seldom questioned, and so a representationalist framework is often assumed. What seems to come closest to a literal "inner state" in an LLM is a contextual embedding. While these embeddings are often explained as "representing" the meaning of words (or tokens) in context, in the deeper layers of a neural network they come to "represent" the contextual meaning of phrases, sentences, paragraphs, or even abstract ideas like "what Kant likely meant in the passage Eric Watkins discussed at the end of his second chapter."

    For what it's worth, I think the idea that contextual embeddings—which are specific vector representations—correspond to or are identical with what an LLM-based assistant "internally" represents to itself is as problematic as the idea of "inner states" applied to human beings. The reason this is problematic is that what determines what LLMs mean by their words is, just as in our case, the sorts of moves they have been trained to make in our shared language games. The content of their contextual embeddings merely plays a role in enabling their capacity to make such moves, just as patterns of activation in our cortical areas (such as Broca's and Wernicke's areas) enable our own linguistic capacities.
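    To make the notion of a contextual embedding a bit more concrete, here is a stripped-down, single self-attention pass in plain Python (a toy sketch under simplifying assumptions: real transformers use learned projection matrices, multiple heads, positional encodings, and many stacked layers, and the word vectors below are random stand-ins for learned ones):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
# Static (context-independent) word vectors: random stand-ins for learned ones.
vocab = {w: rng.normal(size=d) for w in ["the", "river", "money", "bank"]}

def contextual_embeddings(sentence):
    """One self-attention pass: each output vector is a similarity-weighted
    mixture of all the input vectors in the sentence."""
    X = np.stack([vocab[w] for w in sentence])            # (tokens, d)
    scores = X @ X.T / np.sqrt(d)                         # pairwise similarities
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return weights @ X                                    # contextual vectors

# The same static vector for "bank" comes out differently in the two contexts.
e1 = contextual_embeddings(["the", "river", "bank"])[-1]
e2 = contextual_embeddings(["the", "money", "bank"])[-1]
print(np.allclose(e1, e2))  # False: the contextual vectors differ
```

    The point of the toy model is just that the "inner" vector for a token is a function of the whole context, which is what lets deeper layers come to encode phrase- and discourse-level content.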

    All of this leaves out what seems to me the most salient difference between human beings and chatbots. This difference, I think, isn't most perspicuously highlighted by ascribing only to us the ability to have inner states, form intentions, or make meaningful assertions. It rather stems from the fact that—in part because they are not embodied animals, and in part because they do not have instituted statuses like being citizens, business partners, or family members—chatbots aren't persons. Not having personal stakes in the game radically limits the kinds of roles they can play in our language games and the sorts of moves they can make. We can transact in meanings with them, since they do understand what our words mean, but their words do not have the same significance and do not literally convey assertions, since they aren't backed by a personal stake in our game of giving and asking for reasons (over and above their reinforced inclination to provide useful answers to whoever happens to be their current user).
  • Must Do Better
    Yeah, I agree with that, there should be an answer here.

    But if we take "I think..." as a formal unity of judgement, it's just taking the place of Frege's judgement stroke.

    And that would be at odds with Rödl, so far as I can see. The contrast with Rödl hinges on whether the “I think” (Kant) or the judgment stroke (Frege) is best understood as a mere formal marker within a shared, impersonal space of reasons, or as something more fundamentally self-involving, reflexive, or identity-constituting.

    The latter, not so much.
    Banno

    I was thinking that the "I think" marks oneself as being able to make moves in the game of giving and asking for reasons, with everything such an ability entails (the intersubjectivity of judgement, shared practices, common objects of perception, the ability to track one's own deontic scores in rational discourse, etc.). John Haugeland, for instance, in his essay Truth and Rule-Following, has highlighted the existential commitments to objects that motivate our insistence on the constitutive rules of the empirical domains within which those objects are constituted.

    So, as Opus suggested while bringing up McDowell's elegant solution, the "I think" is tied up with our standing in the game of giving and asking for reasons. But the resilient constitutive rules that we must therefore insist on (or agree to revise when circumstances rationally warrant doing so) in order to satisfy our existential commitments include the transcendental requirements for the unity of judgement.

    What makes the expression of those commitments warrant the use of the first-personal pronoun in "I think" just is the fact that we each are individually responsible for our own moves. But the "we" sometimes expressed in declarations like "We the people..." or "(We) the authors of this article..." or "We worshipers of the Great Spaghetti Monster" accomplishes the same integrative function for pragmatic/epistemic joint enterprises, where "we" share existential commitments (and we therefore wish to foreground areas of agreement rather than loci of rational dispute between members of the "we").
  • Must Do Better
    Notice the circularity - of course my representations must be accompanied by "I think..."

    What if we were to ask what we think?

    I can't help but regard this playing with private judgements with great suspicion.
    Banno

    So would I but the individualistic reading of Kant is tricky.

    This is an issue that is only tangentially related to the topic regarding the aim of the activity of philosophising, or the aim of seeking truth in judgement, but I do know that neo-Kantians like Sellars, McDowell or Rödl have well absorbed the situated/socially scaffolded Wittgensteinian ideas about mind and language, and therefore don't take the transcendental requirements of empirical judgement to incorporate a need for mental representations to be "internal" in the individualistic sense that Wittgensteinian considerations on private languages tell against. The "internalism" at issue in Kant's quoted passage can be more charitably interpreted as the need for acts of representation to be internal to the sphere of the conceptual, while public discourse also is internal to that sphere and must hence also be answerable to what it is that "we" think.

    I asked Claude 4 Opus whether it thought there was anything in Kant's own thinking (as opposed to more recent neo-Kantian reinterpretations) that might warrant your worry.

    Here is its response (which I know you'll take with a suitable grain of salt):
    Claude 4 Opus:

    I think you're right that neo-Kantians like Sellars, McDowell, and Rödl have indeed absorbed the Wittgensteinian lesson about the essentially public nature of conceptual capacities. The key move they make is to distinguish between two senses of "internal":

    1. **Problematic internalism**: Mental states as private, individually possessed items inaccessible to others (the Cartesian picture Wittgenstein attacks)

    2. **Conceptual internalism**: The requirement that empirical content be taken up into the space of reasons/concepts, which is inherently intersubjective

    Your charitable reading seems correct - when Kant speaks of representations belonging to "my" self-consciousness, this needn't mean they're private possessions. Rather, it means they must be capable of being taken up into the unity of judgment, which itself depends on shared conceptual capacities.

    However, Banno's worry does have some textual basis in Kant. Consider:

    - Kant's emphasis on the "I" rather than "we" throughout the first Critique
    - His focus on individual faculties of cognition
    - The methodological solipsism implicit in transcendental arguments starting from individual consciousness

    That said, I think Kant himself provides resources for a more social reading:

    1. The categories must be universal and necessary - not idiosyncratic to individuals
    2. The "I think" is the *same* in all consciousness (B132)
    3. Judgment inherently aims at objective validity, which implies intersubjective agreement

    The real philosophical question is whether the transcendental "I" can do its work without being understood as essentially capable of saying "we." McDowell's solution is elegant: the "I think" marks one's ability to take responsibility for judgments within a shared space of reasons. It's not about private mental acts but about one's standing in the game of giving and asking for reasons.

    So while Banno's worry points to a genuine tension in Kant's presentation, I think the neo-Kantian strategy of understanding the transcendental subject as always already situated in shared conceptual practices is philosophically sound, even if it requires some reconstruction beyond Kant's explicit formulations.
  • Must Do Better
    Just a quick check -- you mean the first one for the bolded phrase, yes?J

    Correct! Thanks, I'll correct my post.
  • Must Do Better
    What I should have gone on to say -- and this is what Rodl means -- is that what is being thought, in A, is something about a judgment, whereas what is being thought, in B, is something about a cat. You don't actually even need B to get where Rodl is going: "My thought of judging that things are so is a different act of the mind from my judging that they are so." This is apparent merely from the way A is formulated.J

    I haven't yet read Rödl's Self-Consciousness and Objectivity (2018). So, my understanding derives from my reading of his earlier book Self-Consciousness (2007). What I gathered from this reading is that the thought contents, or propositions, one might express with the sentences A "I think the cat is on the mat" (where one could also use "believe" or "judge" instead of "think") and B "The cat is on the mat" indeed are different. The former predicates a judgement of oneself while the latter predicates being-on-the-mat of the cat. The act of the intellect involved in making the second claim might be an act of receptivity (being governed by the thing in the world that one perceives) while the first one always is an act of spontaneity, whereby one takes oneself to be making (or, when the claim is being made out loud in a public setting, to be committing oneself to) a specific move in the space of reasons, as it were, and thereby drawing on determinate (and also, arguably, shared) conceptual abilities.

    However, and this is the important point, if I understand Rödl correctly, the specific act of spontaneity involved in making the explicit claim "I think P" always also is involved in the making of the claim "P". It is the Kantian "...I think [that] must be able to accompany all my representations..."

    Here is now the point that brings us back a little closer to the OP's topic (or rather the OP of the other thread this one branched out of) about the aim of philosophy. Human rational judgement, including, paradigmatically, empirical judgement, may have truth as its formal aim. This formal aim is being acknowledged in the explicit claim "I think P" whereby one locates one's act in the space of reasons (i.e. within the public game of giving and asking for reasons). It doesn't mean that all the specific truths (true judgements) that one might express on particular occasions, including true philosophical pronouncements, all aim at expressing a maximally general Truth that they all are specifications of. What it rather means is that they all fall under a general metaphysical form, whereby (to simplify) acts of receptivity (intuitions) and acts of spontaneity (concepts) always must be involved together in contentful acts of judgement. ("Thoughts without content are empty, intuitions without concepts are blind.")

    In order to stress how the "I think" always is implicitly involved in all acts of judgement, Rödl usefully notes that one's expressing what it is that one believes regarding any proposition P isn't a separate act from the one involved in making up one's mind regarding the truth of P. This may mesh well with the core intuition motivating disquotationalist or redundancy theories of truth (regardless of their possible shortcomings in other regards).
  • Mechanism versus teleology in a probabilistic universe
    Isn't what you are describing all about evolving the board to a state of balanced criticality – critical opalescence or the edge of chaos?

    So game starts in a neutral state where neither side can make big wins and just want to get their pieces out onto the board in a way that minimises the risk of big losses. The aim is to work everything towards a state of sweeping dynamism after it starts in a state of minimal strategic advantage.

    You build up a position to the point that it is extremely tense and one right move can send your opponent's position crumbling.
    apokrisis

    Interestingly enough, while beginners are encouraged to play safe moves, ensuring the security of their groups, stronger players and professionals understand that they, indeed, can't win without walking closer to the edge, and without carefully balancing safety with ambition. Like chess, Go is a game of errors. When neither player makes any gross error, the final scores tend to be very close (and many draws occur in chess, which is impossible in Go due to the half-point komi). When a player's position crumbles because, for instance, a big group died with no foreseen compensation being realized, the game ends with a resignation.

    I think one lesson that can be drawn from such structural features of the game of Go is that the teleological organization of the sets of strategic principles understood by strong players (and tacitly understood by AIs like AlphaGo) explains normal play until one player makes an error. The occurrence of those errors, and reflection on them in post-mortem analysis of the games, drives further progress and enables the players, as their skills improve, to indeed skirt ever closer to the edge of chaos. Maybe a sharp eye might catch a glimpse of the stones becoming opalescent in some historical professional games ;-)
  • On Intuition, Free Will, and the Impossibility of Fully Understanding Ourselves
    The problem with this statement is that, in modern biology and the philosophy of science, teleology is generally rejected as a fundamental explanatory principle. While evolutionary processes can produce structures that appear purpose-built (such as wings for flying), this appearance is understood as a result of natural selection, not as evidence of actual purpose. Since Darwin — and even more explicitly since Stephen Jay Gould — such apparent design is treated as an illusion rather than a literal reality.Jacques

    I like Gould very much. I read some of his collected essays in the French translation as a teen (The Panda's Thumb, Hen's Teeth and Horse's Toes, The Flamingo's Smile), his book The Mismeasure of Man, and followed his debate with Dawkins regarding the latter's genocentrism and reductionism. Other anti-reductionist biologists like Ernst Mayr have defended naturalized conceptions of teleology (that Mayr calls "teleonomy") that don't conflict with Gould's insistence on the lack of foresight of evolution through natural selection. The question regarding the present aims (forward-looking) of an organism's structure and behavior is distinct from the question regarding the origin of this structure (backward-looking).

    (I'll comment on your second paragraph later on.)
  • Mechanism versus teleology in a probabilistic universe
    There is no current certainty that the theory of quantum mechanics implies an indeterminate universe.RussellA

    Again, this is a bit off topic since the OP inquires about the validity of teleological explanations in the case where the laws of evolution of a system would be indeterministic.
  • Mechanism versus teleology in a probabilistic universe
    Norton's dome is the classic illustration of where determinism breaks down in the usual Newtonian notion of causal determinism. The question of what fluctuation nudged the ball down the slope becomes flipped to the other question of what fluctuation could not have knocked the ball off its precarious perch. The future outcome was always definite and foretold, the initiating event always as mysterious and uncertain as it could get.

    So in general, nature has a hierarchical causality. It is a confluence of bottom-up construction and top-down constraint. And the top-down really matters as it is what shapes up the parts making the whole. It is what makes the atoms that compose the system. Precisely as quantum field theory tells us as a story of topologically emergent order.
    apokrisis

    This is all very nicely put and I find it quite serendipitous that you would make the connection to the case of Norton's dome since, following our discussion in your stimulating OP of seven years ago, I had raised the issue again (a few days ago) with GPT-4o. The AI on its own brought up the relevance of "the idea of dynamical bifurcations in phase space."

    I also was thinking about this idea when a video about joseki patterns appeared in my YouTube feed. Josekis are common patterns of play, usually occurring in corners, in the Asian game of Go/Weiqi/Baduk. They're analogous to theoretically sound openings in the game of Chess: usual sequences of moves that are known, empirically and through analysis, not to confer a significant advantage to either player leading into the middle-game. This is of course relative to the state of play, and to the current theoretical knowledge and development of the strategic and tactical skills, of the strongest players at any specific time in chess history.

    What struck me about josekis is how the patterns develop in a fractal-like manner, obeying not just the global constraint that good moves should maximize the chances of winning the game (which can now be quantified fairly accurately by neural networks like AlphaGo) but also, at intermediate levels of analysis, the need to carefully balance, in contextually sensitive ways, the proximal goals of securing territory, creating thickness, gaining influence, maintaining access to the center, getting sente (that is, being the first player able to abandon the local fight and take a big point elsewhere on the board), etc.

    The evolving practice of playing Go, exemplified in the opening phase by the players' knowledge of josekis, evolves similarly to living organisms. And the replacement of those patterns by new ones, when flaws are discovered and exploited, in addition to the initial development of those patterns, is quite analogous to thermodynamically driven phase transitions. The emergence of the contextual embeddings in neural networks like AlphaGo, which effectively rediscover those patterns through self-play (and reinforcement), has also been characterised by computer scientists as the networks undergoing phase transitions as they learn to play autonomously and latch onto those stable patterns.
  • Mechanism versus teleology in a probabilistic universe
    I'm not completely convinced it's a dependency relation, but something in the neighborhood for sure, and I could be persuaded. Other than that, both you and Leontiskos are drawing the right conclusion from Darwinism, seems to me. Surely Darwin would agree?J

    Possibly! Asa Gray was an American botanist who wrote in an article in Nature: "[...] let us recognize Darwin's great service to Natural Science in bringing back to it Teleology; so that instead of Morphology versus Teleology, we shall have Morphology wedded to Teleology." In response to this, Darwin wrote to Gray: "What you say about Teleology pleases me especially and I do not think anyone else has ever noted that. I have always said you were the man to hit the nail on the head." (June 5, 1874)
  • Mechanism versus teleology in a probabilistic universe
    I would go further and say that natural selection is itself a teleological explanation. It is a teleological explanation that covers all species instead of just one (i.e. it is a generic final cause). I would even venture that if an ur-cause like natural selection were not teleological, then the subordinate causal accounts could not be teleological, [...]Leontiskos

    Yes, I agree. The dependency seems rather indirect since the telos being appealed to in the subordinate causal account (i.e. the organism's seeking to flourish/survive in this or that specific way) emerges from but isn't derived or determined by the ultimate one (i.e. the organism's aim at reproducing/perduring). But, as you seem to suggest, if the action of the sieve on the raw material provided by random mutations were indeed a non-teleological process, then we would have to conclude that the emergence of the (apparently) functional organisation of organisms was an accident, or a happy sequence of accidents. And, of course, it's not an accident. Ordinary evolutionary explanations of the emergence of complex traits, which supply sufficiently rich descriptions of the environmental/ecological context, make clear how those complex traits didn't arise accidentally.
  • Mechanism versus teleology in a probabilistic universe
    I would underline this as the key point in the discussion: If it's true, which I think it is, then it allows us to say that "birds gather twigs in order to build a nest" is explanatory. The role of natural selection arises at a different level of description, having to do with how such bird-intentions wind up being chosen and facilitated.J

    Indeed. We could also say that natural selection occurs at the level where various tendencies (and the material enabling conditions of those tendencies) progressively become interlocked in such a way as to promote the flourishing of the organisms, as a means to enhance their fitness. But this fitness always is relative to the particular niche that they construct and that co-evolves with their endogenous behavioral tendencies. This is what makes the process of natural selection teleological. It's not just a passive "physical" environment that exerts a selective pressure. It's the already structured part of this environment—the constructed niche—that pushes back against, or facilitates, the organisms' already active (and teleologically oriented) attempts to thrive (most often exercised unknowingly, as you noted).
  • Mechanism versus teleology in a probabilistic universe
    Is there any evidence that the universe is probabilistic?RussellA

    The OP is inquiring what happens to teleology if the universe is probabilistic. As for evidence that the laws of physics are probabilistic, this appears to be the case with quantum mechanics.
  • Mechanism versus teleology in a probabilistic universe
    But the above remark shouldn't be confused with the examples associated with Aristotelian teleology, which seems to concern circular causality rather than linear causality, as in examples like "the purpose of teeth is to help digest food". Such examples can be unpacked by unwinding the causal circle backwards through time (in this case the cycle of reproduction) so as to reduce a supposedly forward looking "teleological" example to a standard Darwinian explanation.sime

    That's an interesting way to characterise an attempted reduction of holistic (or organismic) teleological explanations of organisms and of their functions to a linear process of adaptation through random variation and selection. But I think that, rather than constituting a reduction of the teleological explanation, the proposed evolutionary explanation answers a different question. We can ask why the individual animal grew teeth. And the answer that the teeth help digest food locates the explanation within a holistic (or circular) network of inter-dependent functional features of the specific organism (i.e. of the species that it belongs to). The attempted reduction, through peeling off the circular onion over the course of its past evolutionary history, however, answers a different question: why has the organism come to be functionally organized, over a phylogenetic timescale, in precisely the way that it now is? This is a perfectly good answer to a different inquiry than the one the functional explanation was meant to address.

    On edit: one clear way to highlight the distinctness of the two questions is to attend to the fact that the functional explanation can be known to a fairly high degree of accuracy and certainty, by means of observations of the organism's present behavior in a wide range of circumstances, while the evolutionary explanation can remain greatly elusive.
  • Mechanism versus teleology in a probabilistic universe
    The OP raises an overlooked point; if the evolution of a system is invertible, which is presumably the case for a deterministic system, then there is no physical justification for singling out a causal direction, and therefore no reason to choose the first event over the last event as the initial cause, as is the case if the microphysical laws are symmetric.sime

    That is a good point. But the idea applies when the events under consideration are taken to be "caused" or determined by "events" in the past or future that encompass the full intersection of the determined event's light cone with a space-like surface at that past or future time. This is because, of course, a physical event is determined jointly by everything that can influence it through forces propagating no faster than the speed of light. Another assumption is that we identify those spatially extended "events" from a God's-eye-view perspective, considering them in all their glorious microphysical details, and without perturbing them through our means of observation.

    In physics, however, the arrow of time (from past to future) is generally taken to be linked to the direction of increasing entropy and depends on the contingent fact of the "past" having low entropy (by definition). Carlo Rovelli neatly explains how our time-asymmetrical concepts of time (our ideas of ordinary present, past and future events), and hence also our (correct) belief that our actions can causally influence the future but not the past, are linked to the time-asymmetrical thermodynamic process by means of which our memories must be formed, and how those memories can only encode information about past events and not future ones. He does this in this paper, although there also are YouTube videos where he explains those ideas informally.
  • Mechanism versus teleology in a probabilistic universe
    No. We are clearly not going to get any further with this discussion. Your understanding of teleology makes the whole thing trivial. Of course the heart has a function.

    I guess we should just leave it at that.
    T Clark

    Sure, you don't have to discuss it if you think it's trivial and not worth your while. But the very notion of function was precisely what some early critics of teleological explanations in science were targeting. Both Ernst Nagel and Carl Hempel sought to subsume functional explanations under the deductive-nomological model. The biologist Ernst Mayr, although critical of scientific reductionism, also was wary of teleological explanations and proposed the idea of teleonomy instead: a sort of as-if notion of goal-directed behavior meant to replace the abstract idea of function with the concrete notion of a physically instantiated program (such as, but not limited to, a DNA-instantiated genetic program). This is meant to deal with causality by referring to the physically instantiated program as the efficient cause, as it were. I don't think either of those reduction programmes was successful, but they were attempts to cope with the non-trivial problem (according to those thinkers) of securing the scientific probity of the notion of function in biology.

    (My first draft philosophy paper was titled Autonomy, Consequences and Teleology. It included a critique of such reductions as attempted by Nagel, Hempel and Mayr, although I wasn't acquainted with any of them specifically! I was rather drawing on ideas in the philosophy of biology from Michael Thompson and Anthony Chemero.)
  • Mechanism versus teleology in a probabilistic universe
    A function is not the same as a goal.T Clark

    Yes, you can make this distinction, but both (1) the functional explanations of the behaviors of artifacts and (2) the purposive explanations of intentional behaviors of humans (or of non-rational animals) are species of teleological explanation. They both appeal to the end state that the system or organism is non-accidentally structured to achieve rather than appealing to laws of nature. For sure, there also are explanations why those artifacts or animals came to be teleologically structured in the way that they are. Those explanations can indeed appeal to natural selection, cultural evolution or artificial design.

    Nevertheless, the proximal explanations of the behaviors of such systems often appeal to norms rather than laws. Norms and laws have opposite directions of fit. If some objects are seen not to follow the laws of nature that we took them to be obeying, then either we were wrong about some auxiliary hypotheses or we were wrong about the law. If an artifact or person fails to act in accordance with a norm of behavior (norms of rationality, in the case of human behavior) then there is something wrong with them (e.g. they may be sick, broken, irrational or misinformed) rather than with the norms themselves.

    I scanned the two articles in the SEP you cited, although I didn't read all of them. In both cases, there seemed to be confusion between cause and function. Yes, the function of the heart is to pump blood, but that's not why it developed. Again, it developed in accordance with the principles of evolution by natural selection. There are many examples of organs and tissues that evolved for one function but later evolved for other functions. A common example is the evolution of the bones of the inner ear from the jaw bones of fish.

    I think it can naturally be argued that fulfilling its function of pumping blood is indeed why the heart developed. The genomic variations that favored its effectively carrying out this function were selected for that reason: because fulfilling this function increased the fitness of the organism.

    The process of exaptation that you mention also is teleological. An organ that was fulfilling one function came progressively to be restructured (along with the other parts of the organism that function synergistically with it), through the process of natural selection, to fulfill its new function precisely because the selected genomic variations favored the fulfilling of the new fitness-enhancing function.

    Natural selection isn't a mechanism that renders teleological explanations otiose. It is rather a general mechanism that explains how the development of teleologically structured organisms is enabled by random mutations and selective pressures. The suggestion that Darwin's theory explains natural teleology rather than replacing it was made to Charles Darwin by one of his contemporaries, and Darwin agreed. I can dig up the reference if you wish.
  • Mechanism versus teleology in a probabilistic universe
    Can you specify a mechanism other than God that could establish a goal or purpose for the universe?T Clark

    If I may jump in... Individual things in the world, like plants, animals, persons and thermostats, can have goals and functions without there being an overarching goal for the whole universe.
  • Mechanism versus teleology in a probabilistic universe
    As for teleology, how does that fit into this at all? It seems like it is a complete non sequitur. Are you saying that something in the future reaches back and causes something in the past? As I see it, the only way to make teleology plausible is to assume there is a God.T Clark

    Proponents of naturalized teleology have something less contentious in mind. See for instance the two SEP entries about teleological notions in biology or in theories of mental content. @SophistiCat provided two neat examples. When one provides a teleological explanation of an event or phenomenon, it's not an event in the future that is claimed to be the cause. Not even Aristotle really was claiming this when he distinguished final causes from efficient, material and formal ones, since his notion of αἰτία that we now translate as "cause" was referring to the "why" or explanation of something. It's rather the identification of the aim to be realized in the future that is provided as the explanation of a natural phenomenon, behavior or rational action.

    Of course, some thinkers like Ernst Mayr have resisted the introduction of teleological notions in science. But the fault line seems to be between reductionism and strong (nomologically irreducible) emergentism rather than between naturalism and supernaturalism (or natural theology).
  • On Intuition, Free Will, and the Impossibility of Fully Understanding Ourselves
    In my opinion anyone who rejects physicalism and the associated reduction of conscious experiences to material processes must assume that these experiences are based on something else. But on what – an élan vital, magic, or what else?Jacques

    The rejection of materialistic (or physicalistic) reductionism need not entail the rejection of materialism broadly construed: the idea that everything that we see in the natural world is materially constituted of physical objects obeying the laws of physics. But material constitution just is, generally, one particular feature of a material entity. Many entities have, in addition to their material constitution, formal/functional/teleological features that arise from their history, their internal organisation, and the way they are embedded in larger systems. This is true of human beings but also of all living organisms and of most human artifacts.

    What must then be appealed to in order to explain such irreducible formal features need not be something supernatural or some non-material substance. What accounts for the forms can be the contingencies and necessities of evolutionary and cultural history, and the serendipitous inventions of people and cultures. Those all are explanatory factors that have little to do with physics or the other material sciences. Things like consciousness (and free will) are better construed as features or emergent abilities of embodied living (and rational) animals rather than as mysterious immaterial properties of them.
  • Mechanism versus teleology in a probabilistic universe
    When A causes B, and B causes C, where the kind of causation at issue is nomological, then there need not be a teleological explanation of the occurrence of B at all. It's only in the case of functionally organized systems (such as a functional artifact, like a computer, or a living organism) that we can say that a state B will occur with the purpose of realizing C (or realizing a final state C that instantiates the relevant aim of the system.) And in that case, on my view, it's not relevant whether the material realization basis of the system is governed by deterministic or indeterministic laws. The initial occurrence of A might explain the occurrence of B, specifically, owing to laws of nature that such sequences of events can be subsumed under. But what it is that explains that whatever physical state B the system happens to be caused to instantiate would be such as to subsequently lead to a state C that instantiates the relevant goal is the specific functional organization of the system, and such a teleological explanation is valid and informative regardless of the deterministic nature of the laws that govern the particular A -> B -> C sequence of events.

    So, although the teleological explanation doesn't guarantee that C rather than C' or C'' (etc.) will occur, it explains why whichever final state is realized will (likely) be such as to non-accidentally realize the general aim of the functionally organized system.
  • ChatGPT 4 Answers Philosophical Questions
    This mini-documentary from CNBC discusses, with many references, the apparent wall that AI is hitting with respect to the ability to reason. Many of the papers cited argue that LLM's, no matter how sophisticated, are really performing pattern-recognition, not rational inference as such. There are examples of typical tests used to assess reasoning ability - the systems perform well at basic formulations of the problem, but past a certain point will begin to utterly fail at them.Wayfarer

    I don't think the Apple paper (The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity) is very successful in making the points it purports to make. It's true that LLM-based conversational assistants become more error prone when the problems that they tackle become more complex, but those limitations have many known sources (many of which they actually share with humans) that are quite unrelated to an alleged inability to reason or make rational inferences. Those known limitations are exemplified (though unacknowledged) in the Apple paper. Some variants of the river crossing problem that were used to test the models were actually unsolvable, and some of the Tower of Hanoi challenges (with ten or more disks) resulted in the reasoning models declining to solve them due to the length of the solution. Instead, the models provided the algorithm to solve them rather than outputting an explicit sequence of moves as requested. This hardly demonstrates that the models can't reason.

    Quite generally, the suggestion that LLMs can't do X because what they really do when they appear to do X just is to match patterns from the training data is lazy. This suggestion neglects the possibility that pattern matching can be the means by which X can be done (by humans also) and that extrapolating beyond the training data can be a matter of creatively combining known patterns in new ways. Maybe LLMs aren't as good as humans are in doing this, but the Apple paper fails to demonstrate that they can't do it.

    Another reason why people make the claim that reasoning models don't really reason is that the explicit content of their reasoning episodes (their "thinking tokens"), or the reasons that they provide for their conclusions, sometimes fail to match the means that they really are employing to solve the problem. Anthropic has conducted interpretability research through probing the internal representations of the models to find how, for instance, they actually add two-digit numbers together, and discovered that the models had developed ways to perform those tasks that are quite distinct from the rationales that they offer. Here also, this mismatch fails to show that the models can't reason, and it is a mismatch that also frequently occurs in human beings, albeit in different ranges of circumstances (when our rationalizing explanations of our beliefs and intentions fail to mirror our true rationales). But it raises the philosophically interesting issue of the relationship between reasoning episodes, qua mental acts, and inner monologues, qua "explicit" verbal (and/or imagistic) reasoning. A few weeks ago, I had discussed these issues with Gemini 2.5.

    On edit: I had also discussed this issue in relation to the interpretability paper by Anthropic, in this discussion with GPT-4o, beginning with my question: "I'd like to move on to a new topic, if you don't mind, that is only tangentially related to your new recall abilities but that has made the news recently following the publication of a paper/report by Anthropic. You can find an abstract if you search the internet for "Tracing the thoughts of a large language model" and the paper itself if you search for "On the Biology of a Large Language Model""
  • ChatGPT 4 Answers Philosophical Questions
    I think it's philosophically interesting, quite aside from the technical and financial implications.Wayfarer

    I also am inclined to think it's quite wrong since it seems to misattribute the source of the limitations of LLMs, but there may be a grain of truth to it. (The misattribution often stems from focusing on low-level explanations of failures of a capacity while neglecting the fact that the models have this fallible capacity at all.) Thanks for the reference! I'll watch the video and browse the cited papers. I'll give it some thought before commenting.
  • Some questions about Naming and Necessity
    If you're using "private" the way Wittgenstein did, the answer depends on the extent to which meaning arises from rule following. If it's mostly rule following, then you couldn't establish rules by yourself.

    If you're just asking if you can keep some information to yourself, yes.
    frank

    I agree with the comments @Banno and @Srap Tasmaner made on the issue of intent regarding the way for one's own expressions (or thoughts) to refer, at least until this point in the thread (where you flagged me). The very content of this intent is something that ought to be negotiated within a broader embodied life/social context, including with oneself, and, because of that, it isn't a private act in Wittgenstein's sense. It can, and often must, be brought out in the public sphere. That doesn't make the speaker's intentions unauthoritative. But it makes them fallible. The stipulated "rules" for using a term, and hence securing its reference, aim at effective triangulation, as Srap suggested.

    Another issue is relevant here. Kripke's semantic externalism (he disclaimed offering a causal "theory"), like Putnam's, often is portrayed as an alternative to a descriptive theory that is itself construed as a gloss on Frege's conception of sense. But modern interpreters of Frege, following Gareth Evans, insist on the notion of singular senses, which aren't descriptive but rather are grounded in the subject's acquaintance with the referred-to object and can be expressed with a demonstrative expression. Kripke's so-called "causal theory" adumbrates the idea that, in the champagne case, for instance, while the speaker makes a presupposition in referring to the intended individual whom they see holding a glass, their act of reference also is perceptually grounded (or meant to be so) and expresses a singular sense rather than a descriptive one. When there is an unintended mismatch between the reference of this singular sense and the descriptive sense that the speaker expresses, the presupposition of identity is mistaken. Which of the two the speaker truly intended to have priority (i.e. the demonstrative singular sense or the descriptive one) for the purpose of fixing the true referent of their speech act (or of the thought that this speech act is meant to express) can be a matter of negotiation or further inquiry.
  • Measuring Qualia??
    Given that I don't think the very notion of qualia can be made coherent, I oddly find myself agreeing with you for completely different reasons.Banno

    I don't think it can be made coherent either while hanging on to the notion that they are essentially private mental states, which is their ordinary connotation in the philosophical literature, though not always part of the definition.
  • Measuring Qualia??
    Agree, but because of the fact we're similar kinds of subjects. We know what it is to be a subject, because we are both subjects.Wayfarer

    I can't quite agree with this. Arguably, a philosophical zombie isn't a "subject" in the relevant sense since, ex hypothesi, they lack subjective states. So, if our solution to the problem of other minds is to infer, inductively, that other people must experience the world (and themselves) in the same way that we do because they are the same kinds of subjects that we are, then the argument is either circular or, if we take "subject" to only designate an "objective" structural/material similarity (such as belonging to the same biological species with similar anatomy, behavior, etc.) then it is, in the words of Wittgenstein, an irresponsible inductive inference from one single case (our own!)
  • Measuring Qualia??
    What it doesn't do is offer a means to measure qualia themselves in any philosophically robust sense (which after all would require the quantification of qualitative states!) That would require somehow rendering the intrinsically first-person nature of experience into a third-person measurable variable—which remains the crux of the hard problem.Wayfarer

    You are quite right, and your comments are on point. I would suggest, though, that the issue of the third-person accessibility of qualia (and the so-called epistemological problem of other minds) can be clarified when we disentangle two theses that are often run together. The first one is the claim that qualia are essentially private. The second one is the claim that they can be accounted for in reductionistic scientific terms (such as those of neuroscientific functionalism) and thereby "objectified". It's quite possible to reject the second thesis and yet argue that subjective qualia (i.e. what one feels and perceives) can be expressed and communicated to other people by ordinary means.
  • Measuring Qualia??
    Sabine often spouts loads of nonsense whenever she strays outside of her own narrow domain of expertise, which is theoretical physics. In this case, it's so egregious that it's hard to even know where to begin. To be fair, most of the neuroscientists that she quotes in this video are nearly as confused about qualia as she is. But, at least, they're producing interesting neuroscientific results even though they misunderstand or over-hype the philosophical implications of finding neural correlates of subjective experiential states in human subjects.

    (With my apologies to the OP. This curt response is meant to be dismissive of Sabine's misinformed blathering; not of your fine question/topic.)
  • Some questions about Naming and Necessity
    But that's the only point that's made by insisting that I could become Obama, that the universe could work differently than the way we think it does. Do you agree with that?frank

    I agree, but in that case we're talking about epistemic possibilities, or epistemic humility.
  • Some questions about Naming and Necessity
    Right. Once I've picked out an object from the actual world, though many of its properties might be contingent, for my purposes they're essential to the object I'm talking about. Right?frank

    Saying that they're essential "to the object [you're] talking about" is ambiguous. It admits of two readings. You may mean to say (de re) of the object you are talking about that it necessarily had those properties. That isn't the case, ordinarily. The pillow you are talking about could possibly (counterfactually) have had its button ripped off. In that case, of course, you could have referred to it differently. But it's the same pillow you would have referred to.

    On its de dicto reading, your sentence is correct. But then the essentialness that you are talking about belongs to your speech act, not to the object talked about. Say you want to talk about the first pillow that you bought that had a red button, and you mean to refer to it by such a definite description. Then, necessarily, whatever object you are referring to by a speech act of that kind has a red button. But this essentialness doesn't transfer to the object itself. In other words, in all possible worlds where your speech act (of that kind) picks out a referent, this referent is a pillow that has a red button. But there also are possible worlds where the red-buttoned pillow that you have picked out in the actual world doesn't have a red button. Those are possible worlds where you don't refer to it with the same speech act. In yet other words (and more simply), you can say that if the particular red-buttoned pillow you are talking about (by description) hadn't had a red button, then it would not have been the pillow that you meant to refer to.

    Why couldn't rigidity come into play regarding a contingent feature of an object?

    Good question. I'm a bit busy. I'll come back to it!

Pierre-Normand
