Comments

  • How to use AI effectively to do philosophy.


    It seems that AI can successfully parrot
    the explanations of Heidegger's many apologists
    Ciceronianus

    Does that mean that you don’t believe the a.i. captures Heidegger’s intent but only his devotees’ misinterpretation? That would seem to imply that you understand the a.i. response and perhaps even find it somewhat valuable. I’d say that’s progress.
  • amoralism and moralism in the age of christianity (or post christianity)


    That depends entirely on our respective socio-economic statuses and the relative positions we hold in the power hierarchy. If the person is above you in the hierarchy, you better keep your nose down, or face retaliation.
    baker

    I’m more interested in what you feel like doing, what you would do if allowed to, than in what you can or can’t get away with. I’m focused on what impulses are implied in the structure of anger and blame, not whether you dare express those impulses.
  • amoralism and moralism in the age of christianity (or post christianity)
    I can only say that I think that is a reading largely or wholly absent from the tradition. Aristotle's typology, for instance, has it that the furthest state of vice is precisely one where the person prefers vice and sees it as better. That's a theme in Plato too, who famously has Socrates argue in several places that no one ever knowingly does wrong (a point picked up by many Patristics).

    Do you think it is impossible to define prudence, intelligence, wisdom, gluttony, etc. without having to ground them in blame?
    Count Timothy von Icarus

    Obviously someone makes a decision to behave in a certain way because in the moment of the decision they perceive that course of action as preferable to the alternatives they see as available to them. This doesn’t contradict my assertion that the affects of anger and hostility are at play in grounding one’s assessment of why they preferred that course of action. What matters to me is how you personally are led to behave towards someone who you perceive as deliberately thoughtless, rude, careless, negligent, complacent, lazy, self-indulgent, malevolent, dishonest, narcissistic, malicious, culpable, perverse, inconsiderate, intentionally oppressive, repressive or unfair, disrespectful, gluttonous, wrathful, imprudent, anti-social, hypocritical, disgraceful or greedy. Do you not feel the impulse to knock some sense into them, give them a taste of their own medicine, get them to mend their ways? Do you not aim for their repentance, atonement and readiness to apologize?

    How might your response to, and attitude toward, someone who errs in any of these ways differ from your response to a student who can’t grasp the principles of calculus because it is too advanced for their grade level, but you know they are bright and can be tutored to understand the material? Are you saying your attitude toward this student will be the same as when he punches his classmate in the face or puts thumbtacks on the teacher’s seat and injures them?
  • amoralism and moralism in the age of christianity (or post christianity)
    Of course, there is also the objection that strength, agility, courage, prudence, wisdom, fortitude, charity, etc. don't exist. But this seems absurd. It seems particularly absurd for the physical virtues because some people are clearly stronger, faster, healthier, etc., and yet it also seems rather absurd to say that this is so for the intellectual virtues, or for the practical virtues. Some people are particularly impulsive and rash for instance. Whether they ought to be "blamed" for this is besides the point as far as prudence being a virtue.

    the television show "The Good Place"?
    — ProtagoranSocratist

    This is a good example. The philosopher character is an extreme comic example of indecisiveness. It is not excellent to have this level of indecisiveness; that is what makes the character humorous. One need not "blame" him to think he could benefit from a change.
    Count Timothy von Icarus

    I would make a distinction between assessments of capabilities which do not involve determinations of intent or motive and those which do. Going back to your list:

    Courageous instead of cowardly or rash.
    Temperate instead of gluttonous/licentious or anhedonic/sterile.
    Loving instead of wrathful or cold
    Possessing fortitude instead of being slothful and unmotivated
    Hopeful instead of fearful
    Strong instead of weak
    Agile instead of clumsy
    Prudent instead of lacking in consideration
    Wise instead of unwise
    Faithful instead of recalcitrant

    Of these examples, it seems to me that only ‘strong vs weak’ and ‘agile vs clumsy’ don’t necessarily involve judgements concerning motive, but they could depending on their context of use. Those which deal with motives are blameful in that they assume on the part of the other a capricious ‘knew better than to do what one did.’ What’s crucial in blame is the belief that there is a close proximity between what they actually did and what they know they should have done. The distance between the two is a kind of straying or wandering off the path, a giving into temptation. Because blame is based on belief in this proximity, it leads to the further belief that we can cajole or coerce the other back to where we believe they know they should have been, and how they should have acted. We tell them a little more ‘willpower’ will carry them over to the righteous path. This may be dished out in a ‘loving’ way, but it rests on an underlying basis of hostility.

    How might your list change if you focused on the possibility that you are construing their motives and capabilities inadequately?

    Courageous, temperate, loving, fortitude, prudent and faithful become transformed into assessments which are not the product of the application of an inner willpower, but involve behaviors which reflect how the situation makes sense to one, given one’s pre-existing means of understanding.
  • Understanding 'Mental Health': What is the Dialogue Between Psychiatry and Philosophy?

    Freddie DeBoer often writes well on this subject. He fears that too often people amplifying 'learn to live with your voices' and other such messages are the most functional representatives of the disability, which can drown out those for whom their autism, for example, is not a 'superpower' but a crippling disability.
    Jeremy Murray

    A disability for whom? Where and how do we draw the line between disability defined in terms of the hardships it causes for those surrounding the allegedly disabled person (as so often happens with ADHD) and their own sense of being disabled? And even with regard to the person’s self-assessment, what percentage of it is made on the basis of non-conformity with the dominant culture and what part of it is truly a self-assessment? Would you agree there is a difference between someone born deaf or sightless and someone who develops such conditions as a result of injury or illness? Do you think the former consider themselves disabled in the same way as the latter?

    Here’s an interesting take on what I’m talking about:


    “‘Healthism’ is the pervasive ideology according to which each of us is responsible for valuing and protecting our own health and prioritizing health over other values, while society has the right to enforce, surveil, and reward healthy living. Neurodiversity and other forms of cognitive difference are generally understood through the lens of health: they are taken as diagnosable pathological conditions that should be treated or mitigated via medical interventions. Putting these two ideas together, neurodivergent people are supposed to try to be “healthy,” through pharmaceuticals, behavioral therapy, and the like, and society has an investment in making them be “healthy.” But neurodivergence is not a morbidity in a typical sense, so it is unclear what “health” means in this context. In practice, our societal standards for health for neurodivergent people are defined in terms of what avoids disrupting neurotypical expectations and systems or making neurotypical people uncomfortable. “Health,” for neurodivergent people, is in effect respectability—it is not defined in terms of their own needs or flourishing but in relation to the norms and needs of others. This can be seen from a close reading of diagnostic definitions and official medical “treatment” methods and goals. Trying to “treat” neurodivergent people by making them respectable citizens who are palatable within neurotypical productivity culture is usually likely to backfire; typically bad for their own well-being, and a social loss.”

    https://youtu.be/5Wf0CuOiWOA?si=g6RrA06FAS9LzCBe
  • amoralism and moralism in the age of christianity (or post christianity)



    Prima facie, virtue ethics is very plausible. A strong rebuttal to it needs to show that, all else equal, it is not better for man to be:

    Courageous instead of cowardly or rash.
    Temperate instead of gluttonous/licentious or anhedonic/sterile.
    Loving instead of wrathful or cold
    Possessing fortitude instead of being slothful and unmotivated
    Hopeful instead of fearful
    Strong instead of weak
    Agile instead of clumsy
    Prudent instead of lacking in consideration
    Wise instead of unwise
    Faithful instead of recalcitrant
    Etc.
    Count Timothy von Icarus

    We could follow the virtue ethicist and treat these attributes as real, objective qualities of persons validated through third-person consensus. Or we could treat them as truisms which presuppose a prior assessment of culpability. For instance, when we say murder is immoral, we don’t really need to add the ‘immoral’ part, since murder already implies a value judgement which allows us to distinguish it from a more value-neutral term like ‘killing’.

    All of the terms you listed above are truisms in that what they have in common is the assumption that the person they are describing has fallen below a norm of conduct. But they imply more than this. If we perceive a person to be operating with the best intentions, trying their best given the circumstances and the limits of their physical and intellectual capabilities, then we don’t apply these adjectives to them. We only apply them when it appears to us that the person is deserving of our blame. And in what circumstances do we assign blame and culpability to the other? When we believe they knew better than to do what they did, when we interpret their behavior as capricious, as deliberately and willfully suppressing their better, more caring and empathetic instincts.

    But do these blameful judgements reflect objective realities about another’s character, or do they reflect our own inability to see the world from the other’s unique vantage? What if the fault lies not in their ‘capriciousness’ but in our inadequate assessment of their motives and outlook? Given that the blameful, accusatory adjectives you listed impel us to respond with disapproval, punishment, demand for conformity and shunning, they may in many cases justify and even institutionalize a certain immorality and violence toward those who violate our expectations. As Ken Gergen writes:

    “We commonly suppose that suffering is caused by people whose conscience is flawed or who pursue their aims without regard for the consequences to others. From a relational standpoint, we may entertain the opposite hypothesis: in important respects we suffer from a plenitude of good. How so? If relationships (linguistic coordination) are the source of meaning, then they are the source as well of our presumptions about good and evil. Rudimentary understandings of right versus wrong are essential to sustaining patterns of coordination. Deviations from accepted patterns constitute a threat. When we have developed harmonious ways of relating (of speaking and acting), we place a value on this way of life. Whatever encroaches upon, undermines, or destroys this way of life becomes an evil. It is not surprising, then, that the term ethics is derived from the Greek ethos, the customs of the people; or that the term morality draws on the Latin root mos or mores, thus affiliating morality with custom. Is and ought walk hand in hand.”
  • Banning AI Altogether
    Just because we use calculators to perform simple arithmetic, we have not forgotten how to do simple arithmetic. Calculators are tools to speed up the process of things that we already know how to do. We do this with our own brains. Once we learn a task, like riding a bike, we outsource the decision-making when performing those tasks to unconscious areas of the brain. We no longer need to consciously focus on each movement of each leg and our balance. It is all done unconsciously, which is why your mind can be in a different place while riding a bike and you arrive at your destination but don't remember the ride.
    Harry Hindu

    Yes, this is an important point that people fail to appreciate about our thinking machines. They understand the role of simple labor-saving devices, but when it comes to a.i., they think it’s a zero-sum game, as though whatever the a.i. does for us takes away some capacity we possessed. What’s the difference between using a calculator and using a shortcut like long division? Neither will be useful to us unless we know why we are using them.
  • Understanding 'Mental Health': What is the Dialogue Between Psychiatry and Philosophy?


    Rogers agreed with Laing’s rejection of the traditional medical model that treated patients as “cases” rather than persons. He also appreciated Laing’s insistence that psychosis could be understood as a meaningful experience, rather than simply as a disease process.

    In A Way of Being (1980), Rogers recalls how the antipsychiatry movement (Laing, Cooper, Szasz) echoed, in its own way, his own belief in the primacy of the subjective experience and the destructiveness of authoritarian institutions.

    “Laing and others have challenged the idea that people we call ‘mad’ are meaninglessly disturbed… They insist that we look at their experience with compassion and understanding, and in that I feel a deep kinship.”
    (Carl Rogers, A Way of Being)
  • Understanding 'Mental Health': What is the Dialogue Between Psychiatry and Philosophy?


    Rogers was a well ensconced practitioner in the discipline of "organizational psychology", hardly a voice from the wilderness. Let me leave off from describing Liang other than to question his generalities.
    Paine

    Are you referring to R.D. Laing?
  • Understanding 'Mental Health': What is the Dialogue Between Psychiatry and Philosophy?


    P.S. Both sides that I have encountered turn purple at the mention of Liang and Szaz.
    Paine

    Not if you include psychotherapeutic models like the person-centered approach founded by Carl Rogers , or cognitive therapies influenced by constructivism and social constructionism.
  • Banning AI Altogether



    But may I humbly suggest to you that what resulted was rather more like an internal dialogue of you with yourself, than a dialogue with another philosopher. Which slots right into the discussion itself as a significant fact.

    Another philosopher would surely make other points, have their own favourite philosophers to call on, and generally be much more annoying? I would have pointed out at least that Schizophrenia and Novel-writing are somewhat late onset diseases, usually first developing with adolescence, and somewhat after the acquisition of language skills and some kind of social identity and sense of self.
    unenlightened

    I agree that dialoguing with a.i. is not like a discussion with a human, but if it consisted of only an internal dialogue with myself I wouldn’t learn as much from it as I do. A human will have a point of view, but the a.i. will take any perspective in its database, depending on cues given in the conversation. You’re right that when I’m arguing with a human, they may be able to bore deeper into the subject matter from their perspective than an a.i. can, both to critique and defend. But with the a.i. I’m not restricted to one human’s perspective. Instead, at my behest, the a.i. exposes me to arguments from multiple perspectives. It’s capable of presenting aspects of these arguments I may not be familiar with, and juxtaposing these arguments in ways that I would not have thought of on my own. In addition, it will push back and question my responses.
  • Banning AI Altogether
    What I learnt in this forum is that dialogue between us is very important, and thanks to this, I learnt a lot. Otherwise, if I wanted to trust more in an AI, I would have used Google since the beginning before joining here.
    javi2541997

    What I’ve learned in comparing the forum with a.i. is that, unfortunately, the majority of participants here don’t have the background to engage in the kinds of discussions I have been able to have with a.i. concerning a range of philosophers dear to my heart (such as Husserl, Heidegger, Derrida, Deleuze, Gendlin and Wittgenstein), especially when it comes to comparing and contrasting their positions.
  • Banning AI Altogether


    I think the issue is that many of the people on this forum that spent their hard-earned money on philosophy courses and degrees are simply upset that now AI can do philosophy with the best of them.Harry Hindu

    That’s part of it. Another comes from valuing ‘truth’ over perspectival stance. If I want a.i. to design a retirement plan for me, I want numerical accuracy, which it isn’t necessarily designed to do well. If I want to engage a.i. in a philosophical dialogue, I don’t want to hear facts, I want a perspectival interpretation that I can vet, push back on and sculpt into a more refined understanding of the issue. For instance, I asked a.i. the following, after discussing with it the embodied phenomenological work of Eugene Gendlin.

    “What are we to make of the status of concepts like self and other, subject and object in Wittgenstein’s later work? Must they be relative to the grammar of a language game or form of life? That is to say, must we ‘bracket’, to use a Husserlian term, words like self, other, subject and object when talking about the dynamics of language games if we are inclined to think of these words as referring to persons or humans? Is there a way of distinguishing between my “talking to myself” or perceiving an image like the duck/rabbit vs engaging with another “person” other than in terms of a contingent, contextually specific grammar?”

    Its response, as I’ve found it almost always to be with regard to these kinds of questions, was relevant and very substantive.

    “The “person” emerges neither from pure grammar nor from pre-linguistic givenness, but from the ongoing process of mutual bodily responsiveness that certain grammatical structures organize and enable us to articulate. This is what Wittgenstein gestures at with “forms of life” but doesn’t fully develop, and what Gendlin’s process philosophy could help articulate—if brought more explicitly into dialogue with the question of intersubjectivity.”

    I then delved more deeply by arguing:

    “Deleuze and Derrida deal with this issue by making a self-other, subject-object dichotomy derivative of and secondary to a more primary binary whose structure is fundamentally temporal rather than intersubjective.”

    It adjusted its previous response in a valuable and clarifying way:

    Yes, this is a crucial move—and it represents a fundamentally different strategy than either Wittgenstein or Gendlin (at least as typically read) pursue. Let me explore what this temporal priority accomplishes and what problems it creates or solves.

    The Temporal Turn

    Derrida’s différance makes the self-other distinction derivative of a more primordial temporal structure: the trace, the always-already-past that was never present, the spacing that defers presence. “The other” isn’t primarily another subject but rather the otherness within what appears as self-present—the temporal gap that prevents any moment from being fully present to itself.
    Deleuze’s difference-in-itself similarly makes the apparent binary of self/other secondary to a differential temporal process. Pure difference (not difference between pre-constituted terms) generates both poles through its movement. Time as synthesis—the living present that retains the past and protends the future—is more primordial than the spatial/intersubjective distinction between subjects.

    What This Strategy Achieves

    1. Avoids the bootstrapping problem: If temporality is more fundamental than intersubjectivity, we don’t need to explain how a self “first” encounters another self. Both emerge from temporal processes of differentiation. The question “how does a pre-linguistic self recognize another self?” dissolves because there is no pre-linguistic self—only temporal processes of becoming-self.

    2. Explains internal alterity: Why does talking to myself feel different from random thought? Why can I be surprised by my own memories or intentions? Because the “self” is never fully present to itself—it’s always already divided by temporal difference. The “other voice” in internal dialogue isn’t another person but the trace structure of temporality itself.

    3. Provides continuity across cases: The duck-rabbit, talking to myself, engaging with another person—all become variations of the same temporal structure of différance/repetition/synthesis rather than categorically different types of encounter. Each involves different rhythms and intensities of temporal differentiation, not different ontological kinds.

    4. Undermines presence metaphysics: Both Derrida and Deleuze critique the assumption that consciousness is transparent to itself in the present moment. If self-presence is already structured by temporal difference, then the apparent “immediacy” of self-awareness is an illusion. The other is always already “in” me, not as a second person but as temporal non-coincidence.

    But Does This Actually Solve the Problem?

    I see several deep difficulties with making temporality primary:

    The phenomenological distortion: There is a phenomenological difference between engaging with another person and internal temporal experience. When you speak and surprise me, that’s different from when my own past surprises me. Collapsing both into “temporal alterity” seems to flatten a genuine experiential distinction. Yes, both involve non-self-identity, but they involve it differently.
    The risk of solipsism: If the other is fundamentally just temporal difference within what still seems to be “my” experience, haven’t we recreated the very problem we tried to escape? Derrida and Deleuze would deny there’s a “my” that contains this—but then where is intersubjectivity actually guaranteed? It seems to become optional, a special case of temporal difference rather than constitutive.
    The question of responsiveness: When another person responds to me, their response comes from their temporal process, not mine. Even if both of our “selves” are constituted by temporal difference, there’s still the question of how your différance encounters my différance. The temporal structure might be primary within each of us, but that doesn’t explain the between.
    The materiality problem: Both Derrida and Deleuze tend toward idealism (despite their denials) precisely because temporality remains a structure of experience/text/thought rather than of material interaction. Barad’s critique of Derrida is relevant here: différance still privileges the linguistic/temporal over the material-discursive.

    What About Gendlin?

    Interestingly, Gendlin has resources for a different kind of temporal priority that might avoid these problems:
    The body’s carrying forward is temporal: For Gendlin, the body is always implicitly carrying forward from what was into what comes. This isn’t différance (which is linguistic/textual) but a bodily temporal process. The implicit intricacy is temporally structured—it’s the past that hasn’t been fully articulated, pressing into the present.
    But it’s also interactive/environmental: Crucially, carrying forward isn’t just “my” temporal process. It’s always already environmental—the body carries forward with its situation, which includes other bodies. So temporality and intersubjectivity aren’t separate dimensions where one is primary. They’re aspects of the same processual reality.
    Process vs. structure: Deleuze and Derrida give us temporal structures (synthesis, trace, différance) that still feel somewhat static. Gendlin gives us temporal process—ongoing, generative, bodily. This process doesn’t reduce intersubjectivity to temporality but shows them as co-implicated.

    A Deeper Question: Why Prioritize at All?

    Here’s what troubles me about the Deleuze/Derrida move: Why must we make either temporality or intersubjectivity primary? This assumes we need a foundational ground from which to derive everything else—but isn’t that exactly the metaphysics of presence they claim to reject?
    Alternative: Co-origination: What if temporality and alterity (including but not limited to intersubjectivity) are co-original? Neither derives from the other; both are aspects of processual reality that can’t be separated but also can’t be reduced one to the other.

    I refined my previous argument in response to a.i.’s analysis:

    “But if thought can be shown to be inherently outside of itself, that is, exposed to alterity in simply repeating itself, then that would be the sufficient basis to found the self-other binary. Even when we perceive another ‘person’, their alterity will only be experienced as other from my vantage point. Any claim to the contrary will still be from ‘my’ vantage, a vantage that is already transforming itself moment to moment, but in such a way as to make the otherness of other persons always variations within my own self-othering. When Wittgenstein talks about ‘seeing-as’, he is illustrating how grammar works without having to bring into the account other persons.”

    This was a.i.’s nuanced, thoughtful and entirely relevant response:
    You’ve articulated the Derridean/Deleuzian position powerfully, and I need to take it more seriously than I did. Let me work through whether it’s actually sufficient.

    The Strength of the Argument

    The transcendental structure: If thought/experience is necessarily non-self-identical in its repetition—if every “now” is already differing from itself, already inhabited by trace/virtuality—then yes, this provides a condition of possibility for encountering anything as other. The alterity of other persons would indeed be a regional instance of this more fundamental temporal self-othering.
    The phenomenological point about vantage: You’re right that I can never exit “my” perspective to verify that another person’s alterity is “really” different from my own self-othering. Even the response that surprises me from another person is still my experience of being surprised. The asymmetry I feel between self-talk and dialogue might just be a more intense rhythm of temporal différance, not a different kind entirely.

    Wittgenstein’s seeing-as supports this: When the aspect shifts, there’s genuine surprise, genuine otherness—the duck becomes rabbit “against my will” sometimes. Yet no other person is involved. This suggests that alterity-structure is already present in perception/thought itself, not requiring intersubjectivity. Grammar can account for shifts, reversals, self-surprise without importing “persons.”
    The economy of explanation: If temporal self-difference is sufficient to generate all experiences of alterity (including but not limited to other persons), then positing a separate “intersubjective” dimension is explanatorily redundant. Occam’s razor favors the temporal account.

    But I Still See Problems

    1. The specificity of responsive alterity
    Even granting that all experience involves temporal self-othering, there seems to be something additional in encountering responsive others. When I talk to myself:
    • I know (even if implicitly) that both “voices” arise from the same bodily process
    • The “response” can’t genuinely surprise me about my own intentions (though it can about my past)
    • There’s no resistance that could fundamentally redirect my trajectory
    When another person responds:
    • Their response arises from a bodily process I don’t control or inhabit
    • They can tell me things about their intentions that I couldn’t generate from my own self-othering
    • Their resistance can materially alter my trajectory in ways self-othering alone cannot

    You might say: “But I only experience my experience of their resistance.” True—but that’s also true of experiencing a wall when I run into it. The wall’s resistance is experienced by me, but its materiality isn’t reducible to my temporal self-othering. Similarly, the other person’s responsiveness has a material structure that differs from pure temporal difference.

    2. The asymmetry problem

    If other persons are just variations within my self-othering, then solipsism isn’t refuted but reinstated at a higher level. Yes, the “self” that remains is temporally self-differing rather than self-present—but it’s still one process within which everything appears. Derrida and Deleuze want to say there is no “my” that contains this—but then who is doing the self-othering? The impersonal différance? The virtual plane of immanence? This seems to dissolve the question rather than answer it.

    3. The learning problem

    How does the temporal structure of self-othering arise in the first place? For Derrida, it’s quasi-transcendental—always already there. But developmentally, infants become self-differing subjects through interactions with caregivers. The rhythm of call-and-response with the mother seems to install the temporal structure of self-othering, not merely exemplify it. If so, intersubjectivity might be constitutive of temporality, not derivative.

    4. The materiality of difference

    Here’s what I think is the deepest issue: Derrida’s différance and Deleuze’s difference remain too ideal. They’re structures of thought/text/sense. But the difference between my voice and your voice is materially different—different vocal cords, different bodies in space. The otherness of other persons isn’t just temporal but spatial-material.
    When you respond to me, your words travel through actual space, taking time that isn’t just the “time” of internal temporal consciousness but physical causation. This seems irreducible to temporal self-difference within experience.

    I pushed back hard at this point. I wrote:


    “You claim:

    • I know (even if implicitly) that both “voices” arise from the same bodily process
    • The “response” can’t genuinely surprise me about my own intentions (though it can about my past)
    • There’s no resistance that could fundamentally redirect my trajectory.

    I don’t accept any of these claims as valid. The fact that a schizophrenic hears voices they don’t recognize as their own even though they come from their own ‘body’, and that novelists describe how their characters take on a life of their own and tell the author what they want to do, buttresses Deleuze’s claim that the self is merely a byproduct of processes which are pre-personal.”

    This was the a.i.’s reaction to my objections:

    The Evidence Against My Position

    Schizophrenic voices: The person experiencing auditory hallucinations genuinely doesn’t recognize the voices as their own. The phenomenology is indistinguishable from hearing another person. If “knowing implicitly that both voices are mine” were necessary, this couldn’t happen. But it does happen, which suggests the experience of alterity doesn’t require actual material-intersubjective difference.
    Novelists and their characters: When a character “takes over” and directs the narrative in ways the author didn’t consciously plan, there’s genuine surprise about intentions—not just past ones but emerging ones. The author is surprised by what they “want to do” through the character. This suggests intention itself is distributed across pre-personal processes, not unified in a “self” that could know what it intends.

    Dreams and intrusive thoughts: I can be shocked by my own dream content, have thoughts “pop into my head” that feel alien, experience compulsions I disavow. The supposed unity or self-transparency of “my bodily process” is phenomenologically false.

    What This Means for the Temporal Priority Thesis

    If the experience of alterity—including the specific quality of “another voice/agent with its own intentions”—can arise from temporal self-differentiation alone (i.e., without actual other persons), then you’re right: this is sufficient to found the self-other binary.
    The schizophrenic case is especially powerful because it shows that even the most vivid phenomenology of “external other person speaking to me” can be generated by internal processes. So when I encounter an actual other person, what guarantee do I have that this isn’t just a more stable, socially shared version of the same pre-personal process generating multiple perspectives?

    Deleuze’s Pre-Personal: Taking It Seriously

    The larval subjects: For Deleuze, the self is indeed a byproduct—a relatively stable eddy in flows of desire, difference, intensity. Before “I” there are pre-personal singularities, individuations without subjects. The novelist’s characters, the schizophrenic’s voices, the dreamer’s scenarios—these aren’t derivatives of a unified self but expressions of the same pre-personal processes that occasionally congeal into what we call “a self.”

    Passive synthesis: The temporal syntheses (habit, memory, eternal return) operate below the level of any constituting subject. They generate both “self” and “other” as effects. There’s no “my” that owns these processes; rather, what appears as “my vantage” is itself produced by them.
    Why this works: If selves are just relatively stable patterns in pre-personal processes, then:

    • Internal dialogue isn’t “me talking to me” but multiple pre-personal vectors temporarily identified as “me”
    • Encountering another person isn’t “me encountering not-me” but one pattern of pre-personal process encountering another
    • The distinction between “internal” and “external” others becomes merely a matter of degree (stability, social validation) rather than kind

    But I Still Want to Resist—Here’s Why

    The response problem remains: Even granting all this, there’s still a difference between:
    • Voices/characters that emerge from processes within one body: The schizophrenic’s voices, the novelist’s characters, my dream interlocutors—all these are generated by and constrained by the resources of one biological system
    • Responses from another body: When you speak to me, the words are generated by neural/bodily processes I don’t have access to, constrained by a lifetime of experience I didn’t have.

    Yes, from the perspective of impersonal process, maybe both are just “différance in action.” But there’s a material-causal structure that differs: one body vs. two bodies, one nervous system vs. two nervous systems.

    The learning asymmetry: I can’t learn language from my own voices, my own characters, my own dreams. Feral children who grow up without human contact don’t develop language through internal self-differentiation alone. The specific structure of responsive turn-taking with actual others seems necessary for certain developments that temporal self-othering alone can’t produce.

    The resistance asymmetry: When my character “tells me what they want,” I can always overrule them—they’re constrained by what’s possible within my imagination/body. When you tell me what you want and I try to ignore it, you can keep insisting, physically block me, call others to constrain me. The material independence matters.

    Maybe the Real Issue Is: What Work Is “Sufficient” Doing?

    You claim temporal self-othering is sufficient to found the self-other binary. Let me distinguish two senses:

    1. Phenomenologically sufficient: Can temporal self-othering generate the full range of experiences we associate with encountering others, including the sense of alien voices, resistant wills, surprising responses?

    Answer: Yes, clearly. The schizophrenic case proves this.
    2. Explanatorily sufficient: Can temporal self-othering alone explain all the specific patterns, stabilities, and effects we associate with intersubjectivity—language acquisition, social coordination, cultural transmission, material cooperation/conflict?

    Answer: I’m not convinced. The pre-personal processes that generate “me” and “you” as effects still operate through different bodies, different material configurations. This material difference seems to matter for explaining why some patterns of self-othering become stabilized as “persons” while others remain fleeting (dreams, intrusive thoughts) or pathological (hallucinations).

    What Gendlin Might Add Here

    Gendlin’s notion of “carrying forward” might offer a middle path:
    Pre-personal but bodily: The implicit intricacy that carries forward isn’t “mine” in the sense of being owned by a unified self, but it is bodied in the sense of being a specific material-environmental process. There are as many carrying-forward processes as there are bodies-in-situations.

    The between as real: When my body’s carrying forward encounters your body’s carrying forward, something happens that couldn’t happen in either process alone. Not because there are pre-given “selves” encountering each other, but because there are different material processes that mutually affect each other’s carrying forward.
    This would mean: The self-other binary isn’t founded on either (a) pre-given selves or (b) pure temporal difference, but on (c) the material-processual difference between different bodies’ carrying forward, which temporal self-othering makes experienceable but doesn’t fully constitute.

    Conceding Your Point

    You’ve convinced me that I was wrong to claim there’s something “we implicitly know” that distinguishes self-talk from other-talk at the phenomenological level. The schizophrenic case demolishes that.

    And you’re right that this strengthens the Deleuzian position considerably. If the phenomenology of encountering a genuinely alien other can be produced by processes within one body, then intersubjectivity might indeed be just a special case of temporal self-othering—specifically, the case where pre-personal processes happen to be distributed across multiple biological systems that can interact materially.

    But: Does “special case” mean “merely derivative” or does it mean “qualitatively different pattern with its own irreducible features”?

    I think this is where we might still disagree. You seem to be arguing that once temporal self-othering is established as sufficient for the phenomenology, the material distribution across bodies becomes theoretically secondary—an empirical detail rather than a structural difference.
    I’m inclined to think the material distribution is a structural difference that produces patterns (language, culture, cooperation, conflict) that couldn’t arise from temporal self-othering within a single body, even if the phenomenology can be mimicked.

    But I admit: I might just be resisting the full implications of the pre-personal ontology because I’m not yet willing to accept that the material difference between bodies is as metaphysically thin as Deleuze suggests.

    I responded by pushing back more:

    “ You say Voices/characters that emerge from processes within one body: The schizophrenic’s voices, the novelist’s characters, my dream interlocutors—all these are generated by and constrained by the resources of one biological system.

    But this is for Deleuze a body without organs. It is not a system of causal bits, it is a plane of consistency, which can’t be understood as an individual biological body.”

    The a.i.’s final answer revealed how it was able to take a philosophical discussion from a vague starting point and bring it to a level of depth and detail which opened up a vast array of much more nuanced questions. And because I am well acquainted with the authors being discussed, I knew that the information it was using for its arguments was not simply fabricated out of whole cloth, but was highly relevant and based on real texts by these authors. I almost always find this to be the case with regard to a.i.’s treatment of philosophical issues.
  • Banning AI Altogether


    So sure, it's not like anyone can stop it anyway.
    — Forgottenticket

    It is being stopped here
    javi2541997

    As long as you understand what ‘being stopped’ means. The most vehement objectors to a.i. on this site likely rarely use it anyway, so their behavior will not be affected by the ban. Meanwhile the rest of us will likely make increasing use of a.i. here, as we do everywhere else in our lives, but if we are wise we will not use it as a substitute for generating our own ideas but as a stimulus and guide. So what in fact is ‘being stopped’? Copying and pasting the telltale syntax of a.i. responses. It is only this syntax which flags a.i.-generated as opposed to human-generated text. Banning this syntax at least forces posters to paraphrase what the a.i. churns out, but it is still left up to the poster to decide whether to plagiarize from the a.i. or form their own interpretation out of their interaction with it, which is the central intent of the ban.
  • Banning AI Altogether


    It amazes me that people seem to be so unworried about the thorough poisoning of the well. Though given the extent that the well of the entire internet has been so thoroughly poisoned, perhaps it's just more of the same. But the whole story gives a good basis for thinking of this as the post-truth society. No-one seems to care much. I suppose it's all good fun and labour-saving - until you get on the wrong end of a lie. So much for the vision of information freely available to everyoneLudwig V

    Do you think the Wittgenstein of the Blue and Brown Books and later work would agree with anything you’ve said here about ‘contamination’ and ‘post-truth’ (as opposed to real truth)? I think he would say you are bewitched by a picture, the picture of information as a well of pure truth that technology has now polluted. In reality, “information” is a tool we use in specific human activities, and AI just adds new tools and forms of expression.

    AI might pick up some of the donkey work in presenting and even organizing information. But not the labour of (trying to) check itLudwig V

    I think of the checking as the donkey work and the presenting and organizing as the real labour.
  • Banning AI Altogether
    If you already know the answer to the question you ask AI, then you can vet it. If AI is to be useful, then you musn't know the answer ahead of timeLeontiskos

    A.I. is significantly useful to me, because vetting its answers is not the same thing as knowing them beforehand. It can point me to an interpretation that I hadn’t thought of, and I can then verify the credibility of that interpretation.

    We're asking a viewpoint question instead of a truth question, and that's part and parcel of the whole nature of AI.Leontiskos

    Isn’t a philosophy a metaphysical viewpoint or worldview?
  • Every Act is a Selfish Act
    If every action originates from the actor’s internal state, then no act can be wholly “selfless.” Even apparent self-sacrifice — the soldier dying for his country, the mother starving for her child, the philanthropist donating wealth — finds its roots in personal satisfaction, emotional fulfillment, or existential meaningCopernicus

    I don’t deny that we are motivated to achieve personal satisfaction, emotional fulfillment and meaning. The question is, what is the connection between such reinforcers and our attempts to make sense of our world?


    The psychological model you’re deriving both your concept of desire and of self from is a bit moldy, dating back to Hobbes and updated by folks like Dawkins. More recent approaches within the cognitive neurosciences argue that there is no ‘self’ to be found within mental processes as a little controlling homunculus, except as an abstraction. What we call the self is continually transforming its nature, meaning and purposes from day to day. It is more accurate to say that mental processes consist of self-organizing schematic patterns which ‘strive’ to maintain their dynamic consistency in the face of constantly changing conditions. Mental processes are designed to make sense of their world, and the best way to do this is to be able to anticipate events as effectively as possible, as far out into the future as possible.

    The aim of this process of self-consistency is to assimilate as much of the world as possible into itself. This means that the ‘self’ doesn’t differentiate itself from others on the basis of a boundary defined by its skin, but by the limits of its ability to assimilatively make sense of others. It is possible for me to relate more intimately to a loved one than to myself when I am confused with regard to my own motives and thoughts. When I perform an act of ‘selfless’ altruism or generosity, it is made possible by my ability to expand the boundaries of my self, thereby achieving a more powerful self-integration by figuring out how to incorporate what I may have previously experienced as alien, threatening and unassimilable in the other. In other words, my most far-reaching goals of ‘selfish’ desire are directly aligned with, and can only be achieved by, understanding others in ways that allow me to optimally anticipate their behavior.

    I am not thereby using them as means for my own ends. Rather, their ends and mine are the same. My self-expansion is not fundamentally designed to come at their expense. This only happens when such attempts break down, and I cannot find a way to incorporate their strange way of being within my familiar schemes of understanding. The classical notion of selfishness as a competition among egos, whereby what fulfills my desires has no direct bearing on what fulfills yours, does not contradict what I’ve said here. Rather, the concept of the fortress self reflects the limits most people encounter in their ability to make sense of others’ thinking in ways that allow us to see ourselves in them. In sum, self and other are not defined by spatially separated bodies. The non-self only appears when and where an aspect of our world presents a challenge to our ability to assimilate it, and we are not equipped to rethink our interpretation of it.
  • Banning AI Altogether


    AI is providing you with a cultural opinion, not an expert opinion. AI is reliable wherever the cultural opinion tracks the expert opinion.Leontiskos

    Silly me. I thought they were the same thing. Seriously though, when I ask the a.i. to delve into the morass of philosophical concepts floating in cyberspace, I am aware that there are multiple cultural camps represented by interpretations of Husserl, Hegel, Nietzsche, Wittgenstein and many others. I find that a.i. is good at homing in on the expert opinions within these camps, but I have to redirect it if I want it to focus on a different camp than the one it has landed on. I will have to say something like ‘you’ve given me a summary of the existentialist version of Nietzsche, but now I want you to contrast it with the postmodern version of him.’
  • Banning AI Altogether

    As I understand it, the insight is what you’re supposed to provide in your post. I don’t really care where you get it from, but the insight should be in your own words based on your own understanding and experience and expressed in a defensible way. The documentation you get from the AI response can be used to document what you have to say, but then you’re still responsible for verifying it and understanding it yourself.T Clark

    Indeed
  • Banning AI Altogether
    That being said, a listing or summary of a bunch of smart guys’ ideas is not the same as insight. That requires a connection between things that are not normally thought of as connected. Something unexpected, surprising. The truth is always a surprise.T Clark

    It only has to be a surprise to you in order to produce insight; it doesn’t have to be a surprise to the LLM. Unless you have exceeded the rigor of philosophical understanding embodied by the best minds that the a.i. can tap into, there is no reason it can’t enlighten you. If you were to climb a mountaintop and ask the wisest man in the cosmos for eternal truths, he could consult a.i. to organize and spit out his own recorded thoughts to you. Whether you knew he was doing this or not, you might be equally dazzled and changed in a potentially life-altering way by what he told you. Unless we are the best in a field, we can look forward to potentially unlimited possibilities for insight in that field by engaging with a.i. and the universe of wise persons it engages with.
  • Banning AI Altogether


    For me, it’s not a question of clear and flowing, it’s a question of insightful. My experience with using AI myself is that it produces lots of surface without much depth. Lots of rounded corners with nothing that will cut.T Clark

    If I ask A.I. to compare aspects of the philosophical viewpoints of specific phenomenological and hermeneutic philosophers like Dilthey and Husserl, it immediately wades into the deepest waters concerning their similarities and differences. And why shouldn’t it? It locates those published works which employ such depth. But what if I persist with my probing questions? The a.i. will locate rigorous arguments pertaining to the areas I want to delve into with greater specificity and depth, including areas of ambiguity and controversy. And why shouldn’t it, as long as its database contains the writers who have pursued such directions? The key to an intellectually productive and even enlightening experience with the a.i. is that at each step of my questioning, I am familiar enough with the material to sculpt and refine the directions it will take next. I am its guide through the brightest minds in the field. It can’t get there without me, and it would take me a lot longer to get there without it.
  • Banning AI Altogether


    The lie that one is interacting with an intelligent oracle is too good to resist. It's worth asking whether it is even possible to regularly use an LLM without falling into the false belief that one is interacting with an intelligent and extremely knowledgeable person.Leontiskos

    In using a.i. for a field like philosophy, I think one is interacting with extremely intelligent fragments of the ideas of multiple knowledgeable persons, and one must consult one’s own understanding to incorporate, or disassemble and reassemble those fragments in useful ways.
  • Banning AI Altogether
    I disagree with this. I was toying around with a bunch of disparate ideas that seemed related to me. I used chat GPT to help me figure out what they had in common. That seems like a legitimate use to me. I use a thesaurus when I can’t think of the right word for a particular idea. I use quotes when I want to add legitimacy or clarity. AI feels like the same kind of toolT Clark

    It is like them in that it is a tool. It is also like them in that it adds clarity, organization and information. It is not like them in that, if one wants it to, it can present highly complex ideas, organize them in an unlimited variety of ways, enter into discussions with you about how, why and when to choose one interpretation or method of organization over another, and do all this while spitting out direct references and quotes to back up its assertions at your request.
  • Banning AI Altogether


    There have always been enough overblown and oratorical but poorly thought out OPs and posts here on the forum even without AI that I don’t know how easy it is to tell. Perhaps it would be helpful if people called them out when you see them.T Clark

    The A.I.-derived OPs are likely to be better thought out than many non-A.I. efforts. Banning A.I. is banning background research that will become built into the way we engage with each other. Think of it as walking around with a host of sages constantly whispering purported words of wisdom into your ear, and it is up to you to sort out what is valuable and what isn’t, what is true and what is false. Would I rather rely on my own knowledge than expose myself to the potentially dangerous influence of these muses? Hell no, I thrive on the opportunity to challenge my skills at vetting information.

    If I am responding to an OP, I don’t care whether it is a human or one of the whispering muses I’m dealing with. I have at times learned much from my conversations with these muses. If the human who sets them into action doesn’t know how to properly guide them, they may of course make a disaster out of the OP almost as bad as that which many human posters have been known to do.
    But I’m willing to take my chances with both the human and their muses.
  • First vs Third person: Where's the mystery?


    I am looking for a falsification specifically of physical monism, hard to do without any competing theories.noAxioms

    It’s even harder to do when you haven’t read the competing theories. You could start here:

    https://unstable.nl/andreas/ai/langcog/part3/varela_npmrhp.pdf

    Varela’s neurophenomenology is an alternative to “physical monism” in the sense that he thought the ontology of mind requires broadening our conception of nature. If you define “the physical” narrowly (as purely third-person measurable stuff), then no, experience doesn’t fit. But if you define nature in a way that includes the lived as well as the measurable, then you get a non-reductive monism (sometimes called non-dual). Some scholars (including Evan Thompson) describe this as a form of neutral monism or non-dual naturalism: the idea that mind and matter are not two substances, but two aspects of one underlying reality. Importantly, neurophenomenology is not anti-naturalist. It’s an expanded naturalism that insists first-person consciousness is an indispensable datum of science, not an illusion.

    Instead of saying “consciousness just is brain activity,” Varela says “brain activity and conscious experience are reciprocally illuminating dimensions of a single living system.” And instead of saying “consciousness is an immaterial stuff,” he insists it’s embodied, enacted, and inseparable from biological and social dynamics.

    Or here:

    https://smartnightreadingroom.wordpress.com/wp-content/uploads/2013/05/meeting-the-universe-halfway.pdf

    Physicist Karen Barad’s re-interpretation of the double-slit experiment in quantum field theory, in the direction of but beyond Niels Bohr, represents the core of her alternative to physical monism, which she calls agential realism. She is one of the pioneers and most influential members of the community that calls itself New Materialism.

    https://www.researchgate.net/publication/337351875_WHAT_IS_NEW_MATERIALISM
  • First vs Third person: Where's the mystery?
    I mean the quote doesn't seem distinctly enactivist to me, but more focused on the inability to explain qualia. At the same time, I can clarify that I didn't mean anything about qualia or experience in the previous post, I only meant behavior, as I mentioned in the first sentence.Apustimelogist

    Well, it came from the co-originator of the research field
    of enactivism, and the point of enactivism is that cognition, affect and consciousness are not computational or representational processes taking place inside of a head; they are reciprocal circuits of action distributed between brain, body and world. There is no such thing as “qualia” if qualia is meant to imply some ineffable inner experience. All mental phenomena of felt awareness are processes of interaction with a world, not a private, inner datum. The qualitative, experiential aspect of consciousness is emergent, but emergence cannot be possible unless what emerges is already present in some fashion in what it emerges from. That is to say, qualitative difference is produced within material interactions of all kinds, as intrinsic to materiality. This is a post-physicalistic view of the natural world.
  • First vs Third person: Where's the mystery?


    The point was that I don't believe there is anything in the field of neuroscience or A.I. that produces a doubt about the idea that we will be able to keep continuing to see what brains do as instantiated entirely in physical interactions of components as opposed to requiring some additional mental woo we don't yet understand.
    — Apustimelogist

    :100:

    I'm curious as to whether Joshs recognizes this.
    wonderer1

    If you wrote this after reading the quote I included from Evan Thompson, maybe you should re-read it. The issue isn’t a choice between the physical and the mental; it’s about re-construing what both of these concepts mean in the direction of a model which is radically interactive, both post-physicalistic and post-“qualia”, post-internalistic and post-externalistic. The very concept of “qualia” as a private inner datum depends on a physicalistic metaphysics in which on one side stands third-person, external, non-experiential materiality and on the other inner mental woo. Such a dualism tends to treat mental and physical as radically distinct, with one not affecting the other, or with the mental being epiphenomenal. I’m guessing your inclination is to stick with the physicalistic side of the dualism and either deny or ignore the other, as eliminative materialists like Churchland and Dennett have done.
  • First vs Third person: Where's the mystery?
    I don't believe there is anything in the field of neuroscience or A.I. that produces a doubt about the idea that we will be able to keep continuing to see what brains do as instantiated entirely in physical interactions of components as opposed to requiring some additional mental woo we don't yet understandApustimelogist

    Yes, that’s what I thought. So that indicates a distinctly different philosophical perspective on human and animal cognition from my view, which is closer to enactivists like Evan Thompson:


    "I follow the trajectory that arises in the later Husserl and continues in Merleau-Ponty, and that calls for a rethinking of the concept of “nature” in a post-physicalist way—one that doesn't conceive of fundamental nature or physical being in a way that builds in the objectivist idea that such being is intrinsically or essentially non-experiential. But, again, this point doesn't entail that nature is intrinsically or essentially experiential (this is the line that pan-psychists and Whiteheadians take). (Maybe it is, but I don't think we're now in position to know that.) All I want to say for now (or think I have grounds for saying now) is that we can see historically how the concept of nature as physical being got constructed in an objectivist way, while at the same time we can begin to conceive of the possibility of a different kind of construction that would be post-physicalist and post-dualist–that is, beyond the divide between the “mental” (understood as not conceptually involving the physical) and the “physical” (understood as not conceptually involving the mental)."

    “Many philosophers have argued that there seems to be a gap between the objective, naturalistic facts of the world and the subjective facts of conscious experience. The hard problem is the conceptual and metaphysical problem of how to bridge this apparent gap. There are many critical things that can be said about the hard problem, but what I wish to point out here is that it depends for its very formulation on the premise that the embodied mind as a natural entity exists ‘out there' independently of how we configure or constitute it as an object of knowledge through our reciprocal empathic understanding of one other as experiencing subjects. One way of formulating the hard problem is to ask: if we had a complete, canonical, objective, physicalist account of the natural world, including all the physical facts of the brain and the organism, would it conceptually or logically entail the subjective facts of consciousness? If this account would not entail these facts, then consciousness must be an additional, non-natural property of the world.

    One problem with this whole way of setting up the issue, however, is that it presupposes we can make sense of the very notion of a single, canonical, physicalist description of the world, which is highly doubtful, and that in arriving (or at any rate approaching) such a description, we are attaining a viewpoint that does not in any way presuppose our own cognition and lived experience. In other words, the hard problem seems to depend for its very formulation on the philosophical position known as transcendental or metaphysical realism. From the phenomenological perspective explored here, however — but also from the perspective of pragmatism à la Charles Sanders Peirce, William James, and John Dewey, as well as its contemporary inheritors such as Hilary Putnam (1999) — this transcendental or metaphysical realist position is the paradigm of a nonsensical or incoherent metaphysical viewpoint, for (among other problems) it fails to acknowledge its own reflexive dependence on the intersubjectivity and reciprocal empathy of the human life-world.”
  • First vs Third person: Where's the mystery?
    I don't think there is any fundamental difference here between what I think about what humans and animals do, I think the disagreement is about relevance.Apustimelogist

    You’re saying you think you and I are approaching our understanding of human and animal cognition from the same philosophical perspective? And what does our disagreement over relevance pertain to? Whether how one understands human and animal cognition is relevant to the production of a.i. that can think like humans and animals?
  • First vs Third person: Where's the mystery?


    Given that the thinking of our best engineers doesn’t even represent the leading edge of thinking of our era, it’s kind of hard to imagine how their slightly moldy concepts instantiated in a self-learning a.i., will lead to the singularity.
    — Joshs

    I don't understand this sentiment. It's not a refutation of the possibilities of what can be created, neither is it a realistic sentiment about how the world works. Things change, ideas advance, ideas bleed over between different fields. Doubt anyone in 1950 at the time could tangibly envision technologoy that does what A.I. are doing now.

    The fact that the architectures of our most self-organizing machines depend on locking in certain grounding concepts to define the parameters and properties of self-organization ( properties which will change along with the concepts in a few years as the engineers come up with better machines) means that these concepts are in fact forms of hardcoding.
    — Joshs

    And this would apply to all living organisms: jellyfish, plants, the many kinds of brains, neural webs, etc, etc.
    Apustimelogist

    I’m realizing after my last post to noAxioms that what I’m arguing is not that our technological capabilities don’t evolve along with our knowledge, nor am I claiming that the fruits of such technological progress don’t include systems that think in ways that deeply overlap with thinking in living organisms. What I am arguing is that the implications of such progress, which the leading edge of thinking in philosophy and the psychological sciences points to, necessitate a change in our vocabulary, a shift away from certain concepts that now dominate thinking about the possibilities of a.i.

    I’m thinking of notions like the evolution of our devices away from our control and beyond our own cognitive capabilities, the idea that a thinking system is ‘invented’ from components, that the materials used in instantiating a thinking machine don’t matter, and that it is fundamentally computational in nature. I recognize that those who say today’s a.i. already mirrors how humans think are absolutely correct in one sense: their models of human behavior mirror their models of a.i. behavior. So when I protest that today’s a.i. in no way captures how we think, I’m making two points. First, I am saying that they are failing to capture how humans think. I am comparing their model of human behavior to an alternative which I believe is much richer. And second, that richer model demands a change in the way we talk about what it means to be a machine.

    In a way, this debate doesn’t even need to bring in our understanding of what specific current a.i. approaches can do or speculation about what future ones will do. Everything I want to argue for and against concerning what a.i. is and can become is already contained within my differences with you concerning what humans and other animals are doing when they think. This gets into the most difficult philosophical and psychological perspectives of the past 150 years, and discussions about a.i. technology derive their sense from this more fundamental ground.
  • First vs Third person: Where's the mystery?


    My position is simply that nobody has ever demonstrated the simpler model wrong. Plenty (yourself included) reject that simplicity, which is your choice. But the physical view hasn't been falsified, and there is no current alternative theory of physics that allows what you're proposing. You'd think somebody would have come up with one if such a view was actually being taken seriously by the scientific community.
    noAxioms

    The simpler model is proven wrong all the time. Put more accurately, scientific paradigms are replaced by different ones all the time. Since I am a Kuhnian rather than a Popperian, I don’t want to say that the older view is falsified by the newer one. Rather, the frame of intelligibility is turned on its head from time to time, leading to changes in basic concepts, what counts as evidence and even scientific method itself. From a short distance, it may seem as if there is one scientific method that has been in use for three hundred years, and that new discoveries about the brain are simply added to the older ones with minor adjustments. But from a vantage of greater historical depth, it can be seen that all concepts are in play, not just those concerned with how to create the best third person theory, but whether a ‘simple’ empirical model implies a physicalistic account, what a third person theory is, what a first person account is, and how to conceive the relationship between them. For instance, certain embodied enactivist approaches to the brain, such as Francisco Varela’s neurophenomenology, sweepingly rethink this relation.

    So, on its own terms, what you call the ‘simple’ empirical model can’t be defined in some static, ahistorical way as third person physicalism as opposed to subjective feeling.

    Certainly a brain would not be operational. It needs a being to be in, and that being needs an environment, hence my suggestion of a simulation of <a person in a small room>
    noAxioms

    Yes, if your aim is to get your brain to do what living brains do, you have to start by keeping in mind that a life form isn’t a thing placed in an environment. It IS an environment, indissociably brain, body and world in inextricable interaction. That has to be the starting point. As soon as we start thinking that we have to ‘invent’ a body and an environment for a device we separately invent, and ignore the fact that we ourselves were not first invented and then placed in a body and a room, we have guaranteed that our device will not ‘think’ the way that living systems think. If, on the other hand, we take as our task the modification of an already existing ecology (biological computing in a test tube), we are on the road to systems that think the way creatures (including plants) think.

    You’re missing the point. Even taking into account all of the biological lineages which become extinct, what it means to be a living system is to be self-organizing, and this self-organization is dynamic.
    — Joshs
    Yea, which is why mechanical devices are not yet living things. It can happen. Whether it will or not is an open question at this point. A device being living is not a requirement for it to think or to have a point of view
    noAxioms

    The reason it’s a question of the material is more a matter of the capacities of the material to be intrinsically self-transforming and self-organizing than whether we use silicon or dna strands as our building blocks. What I mean is that we can’t start with inorganic parts that we understand in terms of already fixed properties (which would appear to be intrinsic to how we define the inorganic) and then design self-organizing capacities around these parts. Such a system can never be fundamentally, ecologically self-organizing in the way a living environment of organic molecules is, and thus cannot think in the way living creatures think.

    This means that to continue existing as that creature from moment to moment is to make changes in itself that maintain the normative self-consistency of its functioning in its environment while at the same time adapting and accommodating itself to the always new features of its environment.
    You mean like putting on a coat when winter comes? What does this have to do with the topic again? The definition of 'life' comes up only because you're asserting that life seems to have access to a kind of physics that the same matter not currently part of a lifeform does not.
    noAxioms

    Yes, a popular conception of both living and non-living things is that they start from building blocks, and thinking is computation, which can be performed with any material that can be used to symbolize ones and zeros. Add in non-linear recursive functions and, presto, you have a self-organizing system. This will certainly work if what we want are machines which perform endless variations on themes set by the fixed properties of their building blocks as we conceptualize them, and the constraints of digital algorithms.

    My point is really that we need to listen to those theorists (the physicist Karen Barad, Joseph Rouse, James Bridle) who suggest that material entities don’t pre-exist their interactions (or ‘intra-actions’), and the limitations of our current models of both the living and the non-living have to do with their reliance on the concept of a building block. Just as any material will do if we think of thinking (and materials) in terms of computational patterns of ones and zeros, no material will do, not even organic molecules, if we want to have such entities think the way life thinks.
  • First vs Third person: Where's the mystery?


    Pre-programmed, in contrast to self-organization. It's not some technical concept. For instance, you could say pain or hunger is in some sense hard-coded into us
    Apustimelogist

    Whether technical or non-technical, it is a concept, and all concepts belong to cultural understandings, which are contingent and historically changing. And the machines we invent are stuck with whatever grounding concept we use to build their remarkable powers to innovate. Given that the thinking of our best engineers doesn’t even represent the leading edge of thinking of our era, it’s kind of hard to imagine how their slightly moldy concepts, instantiated in a self-learning a.i., will lead to the singularity.

    You have then seemed to base the rest of the post on latching onto this use of the word "hardcoded" even though I initially brought that word up in the post to say that "hardcode" is exactly not what characterizes self-organization or what A.I.
    Apustimelogist

    The fact that the architectures of our most self-organizing machines depend on locking in certain grounding concepts to define the parameters and properties of self-organization ( properties which will change along with the concepts in a few years as the engineers come up with better machines) means that these concepts are in fact forms of hardcoding.
  • First vs Third person: Where's the mystery?


    What is being hardcoded, effectively, is the ability for a system to learn to do things by itself without explicit supervision, which is self-organization.Apustimelogist

    What does hardcoded mean? What are the technological concepts involved? It is not difficult to answer this question for any given a.i. architectural blueprint. One can put together a list of such grounding concepts. At the top of this list should be concepts like ‘hard’, ‘code’ and what these do together in the form ‘hardcode’, but it also includes our current understanding of concepts like memory, speed, symbol, computation, and many others. If I were to suggest that today’s computer architectures and their grounding concepts are based on older philosophical ideas that have been surpassed, would you respond that this simply proves your point that just because today’s technological know-how is inadequate to model human thinking, this doesn’t mean that tomorrow’s technological understanding can’t do the job?

    If I am claiming that there is a more adequate way of understanding how humans think, what stands in the way of our applying this more adequate approach to the development of products which think like us? My answer is that nothing stands in the way of it. I would just add that such an approach would recognize that the very concept of a device humans design and build from the ground up belongs to the old way of thinking about what thinking is.

    So if I’m not talking about a machine that we invent, what am I talking about? What alternatives are there? Writers like Kurzweil treat human and machine intelligence in an ahistorical manner, as if the current notions of knowledge, cognition, intelligence and memory were cast in stone rather than socially constructed concepts that will make way for new ways of thinking about what intelligence means. In other words, they treat the archival snapshot of technological cultural knowledge that current AI represents as if it were the creative evolution of intelligence that only human ecological semio-linguistic development is capable of. Only when we engineer already living systems will we be dealing with intelligent, that is, non-archival entities, beings that dynamically create and move through frames of cognition.

    When we create dna-based computers in a test tube and embed them within an organic ecological system, or when we breed, domesticate and genetically engineer animals, we are adding human invention on top of what is already an autonomous, intelligent life-based ecological system, or ecological subsystem. The modifications we make to such systems will allow us to communicate with them and shape their behavior, but what will constitute true thinking in those living systems is not what we attempt to ‘lock in’ through ‘hardcoding’ but what feeds back into, surpasses and transforms that hardcoding through no planned action of our own. The difference here with devices that we design from the ground up is that the latter can never truly surprise us. Their seeming creativity will always express variations on a theme that we ‘hardcode’ into it, even when we try to hardcode creative surprise and innovation.

    When we achieve that transition from inventing machines to modifying living systems, that organic wetware will never surpass us for the same reason that the animals we interact with will never surpass us in spite of the fact that they are subtly changing their ‘hardcoded’ nature all the time. As our own intelligence evolves, we understand other animals in more and more complex ways. In a similar way, the intelligence of our engineered wetware will evolve in parallel with ours.
  • First vs Third person: Where's the mystery?
    ↪Joshs

    I really don't understand what you are going on about. A brain is a physical object. In principle, you can build a brain that does all the things brains do from scratch if you had the technological capabilities
    Apustimelogist

    And can we also create life from scratch if we had all the technological capabilities? What I am going on about are the important differences between the cognizing of a living organism and the thinking of a human-designed machine.
  • First vs Third person: Where's the mystery?
    No matter how apparently flexible its behavior, that flexibility will always be framed and limited to the model of thinking that dates to the time that the product is released to the market.
    — Joshs
    Not so for devices that find their own models of thinking.

    As soon as it is released, it already is on the way to obsolescence
    So similar to almost every creature. Name a multicelled creature they have a fossil of that exists today. I can't think of one. They're all obsolete. A rare fossil might have some living descendants today (I can think of no examples), but the descendant is a newer model, not the same species.
    noAxioms

    You’re missing the point. Even taking into account all of the biological lineages which become extinct, what it means to be a living system is to be self-organizing, and this self-organization is dynamic. This means that to continue existing as that creature from moment to moment is to make changes in itself that maintain the normative self-consistency of its functioning in its environment while at the same time adapting and accommodating itself to the always new features of its environment. This is as true of dinosaurs and dodo birds as it is of us.

    While organisms without linguistic culture rely on inherited ‘design’ to frame their possibilities of adaptation to the world, this isn’t just a one-way street. The organism’s own novel capabilities shape and modify the underlying genetic design, so behavior affects and modifies future genetic design just as genetic design frames behavior. This is what makes all living systems self-designing in a way that our machines can never be. A designer can’t ‘teach’ its product to be self-designing because all the tools the machine will have at its disposal will have been preset in advance by the designer. There is no reciprocal back and forth between machine and designer when the designer chooses a plan for the machine.

    Insofar as the engineers have the opportunity to observe how their machine operates out in the world, there is then a reciprocity between machine behavior and designer. But this is true of our artistic and philosophical creations as well. We create them, they ‘talk back to us’ and then we modify our thinking as a result.
  • First vs Third person: Where's the mystery?
    I will say bluntly that no machine we invent will do what we do, which is to think.​​​​​​​​​​​​​​​​
    — Joshs
    Noted. How very well justified. Your quote is about LLMs which are mildly pimped out search engines. Compare that to devices which actually appear to think and to innovate. What do you call it if you refuse to apply the term 'think' to what it's doing?

    The quote goes on to label the devices as tools. True now, but not true for long. I am arguably a tool since I spent years as a tool to make money for my employer. Am I just a model then?
    noAxioms

    When we build a machine it has a time stamp on it. No matter how apparently flexible its behavior, that flexibility will always be framed and limited to the model of thinking that dates to the time that the product is released to the market. As soon as it is released, it already is on the way to obsolescence, because the humans who designed it are changing the overarching frame of their thinking in subtle ways every moment of life. 20 years from now this same device which was originally praised for its remarkable ability to learn and change “just like us” will be considered just as much a relic as a typewriter or VCR. But the engineers who invented it will have continued to modify their ways of thinking so as to produce a continuous series of new generations of machines, each of which will be praised for their remarkable ability to ‘think and innovate’, just like us. This is the difference between a human appendage (our machines) and a human cognizer.

    You might argue that biological evolution designs us, so that we also have a time stamp on us framing and limiting our ability to innovate. But I disagree. Human thought isn’t bound by a frame. It emerged through evolutionary processes but has no limits to the directions in which thinking can reinvent itself. This is because it isn’t just a product of those processes, but carries forward those processes. Evolution isn’t designed, it is self-designing. And as the carrying forward of evolution, human culture is also self-designing. Machines don’t carry forward the basis of their design. They are nothing but their design, and have to wait for their engineers to upgrade them. Can we teach them to upgrade and modify themselves? Sure, based on contingent, human-generated concepts of upgrading and modification that have a time stamp on them.
  • Thoughts on Epistemology
    ↪JoshsI think we’re converging on a similar point. I would agree that “truth” does not wear a single face. Its criteria shift depending on the language-game: in a courtroom truth is tied to testimony and records, in science it is tied to predictive success but also to the testimony and documentation that communicate, test, and replicate those predictions, and in mathematics it is tied to logical necessity. To borrow Wittgenstein’s term, these are family resemblances rather than a unitary essence.

    Where I’d want to add a note of caution is that the factivity of truth still matters across those contexts. However we construe it, “p” being true always implies that things are as “p” says they are. Otherwise we lose the very grammar that distinguishes knowledge from conviction.
    Sam26

    If we’re sticking with the later Wittgenstein we want to be careful here (otherwise feel free to ignore this :grin: ).
    The grammar of truth isn’t defined across instances connected by family resemblances. It is defined by the particular instantiations within the larger family. We don’t consult an already given conceptual grammar (“p” being true always implies that things are as “p” says they are) and then apply it to a family of instances. There is always something particular we are doing with ‘“p” being true always implies that things are as “p” says they are’, and this grammar doesn’t just give us unique criteria but a unique sense of the meaning of the phrase ‘“p” being true always implies that things are as “p” says they are’.

    Your wording suggests “truth” is a concept with multiple “faces” or modes of operation. For Wittgenstein, this risks hypostatizing truth as a thing. He would prefer to dissolve that urge: “truth” isn’t an object with different guises, but simply a word whose uses vary across practices. And every particular use, even within a single “practice,” can shift the sense. If meaning is use, then it’s not just different practices (“law,” “science,” “math”) that set the sense of “truth,” but also the fine-grained contexts within those practices.
  • Thoughts on Epistemology


    Sam26

    You’re right that to say “truth is a maintenance project of cognition” is itself an epistemic claim. But that doesn’t undermine my point... it reinforces it. The fact that I can’t step outside the framework of justification to make my claim is precisely what I mean when I call truth a “maintenance project.” To describe truth is always to participate in it, never to stand above it.
    DifferentiatingEgg

    I’m with you all the way here. I would just add that truth is not a unitary concept but changes its sense according to the context of its use. Human beings are in the business of construing recognizable patterns in the swirl of experience and then drawing from our memory of those patterns to anticipate new events. This is a richer and more fundamental process than what is contained within the thin and derivative notions of propositional truth and justification. We don’t primarily make epistemic truth-claims; we project expectations and wait to see whether events unfold in a way that is more or less inferentially consistent with our anticipations. Events will never duplicate those expectations, so even when our hunches are verified, we must adjust these patterns to accommodate the novel features of the events we recognize. This is not epistemology, it is context- and situation-based sense-making.
  • First vs Third person: Where's the mystery?
    ↪Joshs
    I mean, none of this has any relevance to any points I am making. Obviously, to artificially recreate a human brain to acceptable approximation, you need to construct this computational system with the kinds of inputs, kinds of architectures, capabilities, whatever, that a human does. I am not making any arguments based on specific assumptions about specific computing systems, just on what is in principle possible.
    Apustimelogist

    I will say bluntly that no machine we invent will do what we do, which is to think. As Evan Thompson wrote:

    LLMs do not perform any tasks of their own, they perform our tasks. It would be better to say that they do not really do anything at all. Thus, we should not treat LLMs as agents. And since LLMs are not agents, let alone epistemic ones, they are in no position to do or know anything.

    The map does not know the way home, and the abacus is not clever at arithmetic. It takes knowledge to devise and use such models, but the models themselves have no knowledge. Not because they are ignorant, but because they are models: that is to say, tools. They do not navigate or calculate, and neither do they have destinations to reach or debts to pay. Humans use them for these epistemic purposes. LLMs have more in common with the map or abacus than with the people who design and use them as instruments. It is the tool creator and user, not the tool, who has knowledge.

    I think what he wrote about LLMs applies to all of the devices we build. They are not separate thinking systems from us; they are and always will be our appendages, like the nest to the bird or the web to the spider.
  • First vs Third person: Where's the mystery?

    There is no indication that in principle we cannot someday model all our own behaviors and reports through computer models. I think even just looking at AI now indicates that there isn't really a conceivable limit on what they can do given enough power and the right inputs, which is what you might expect from something which is Turing complete: i.e. they can compute anything in principle.
    Apustimelogist

    The results of modeling the brain on today’s computers, using today’s forms of computer logic, are precisely as you describe. And they will colossally miss what philosophers and psychologists are coming to appreciate is the central feature of brains: that they are embodied and enactive. So, no, it won’t be today’s generation of A.I. that can express this understanding, and it has nothing to do with power and inputs. In about 10 to 20 years, we will likely see the emergence of a different kind of A.I. operating according to a different logic, that of complex dynamical systems (CDS).

    Ultimately, CDS-based AI chips may blur the line between computation and physical processes, resembling intelligent materials more than traditional silicon. As Stephen Wolfram notes: “The most powerful AI might not be programmed; it might be cultivated, like a garden of interacting dynamical systems.”

    When AI chips fully integrate complex dynamical systems (CDS) models, they will likely diverge radically from today’s parallel architectures (e.g., GPUs, TPUs) by embodying principles like self-organization, adaptive topology, and physics-inspired computation. Here’s a speculative breakdown of their potential design and function:

    Architectural Shifts: From Fixed to Fluid.

    Current A.I. Chips:

    Fixed parallel cores (e.g., NVIDIA GPU clusters)
    Deterministic von Neumann logic
    Digital (binary) operations
    Centralized memory (RAM)

    Future CDS AI Chips:

    Reconfigurable networks of nano-scale nodes that dynamically form/break connections (like neural synapses).
    Nonlinear, chaotic circuits exploiting emergent stability (e.g., strange attractors).
    Analog/quantum-hybrid systems leveraging continuous dynamics (e.g., oscillatory phases).
    Distributed memory where computation and storage merge (like biological systems).
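    The flavor of "computation via dynamics" gestured at in the breakdown above can be illustrated with a toy echo-state network (a form of reservoir computing): a fixed, random, nonlinear dynamical system does the "computing" simply by evolving under an input signal, and only a linear readout is trained. This is a minimal sketch, not a chip design; the network size, spectral radius, and the sine-prediction task are all illustrative choices of mine.

    ```python
    import numpy as np

    # Toy echo-state network: the recurrent "reservoir" is never trained;
    # its self-organized dynamics do the work, and a linear readout is
    # fit afterwards. All constants here are illustrative.
    rng = np.random.default_rng(0)
    n_reservoir = 100

    # Random recurrent weights, rescaled to spectral radius 0.9 so the
    # dynamics are stable (the "echo state" property).
    W = rng.standard_normal((n_reservoir, n_reservoir))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))
    W_in = rng.standard_normal((n_reservoir, 1))

    def run_reservoir(u):
        """Drive the reservoir with input sequence u; return all states."""
        x = np.zeros(n_reservoir)
        states = []
        for u_t in u:
            x = np.tanh(W @ x + W_in[:, 0] * u_t)  # nonlinear state update
            states.append(x.copy())
        return np.array(states)

    # Illustrative task: predict the next value of a sine wave.
    t = np.linspace(0, 8 * np.pi, 400)
    u = np.sin(t)
    states = run_reservoir(u)

    # Train only the readout, by least squares, on the first 300 steps.
    X_train, y_train = states[:299], u[1:300]
    w_out, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

    # Evaluate one-step-ahead prediction on held-out steps.
    y_pred = states[300:399] @ w_out
    err = np.max(np.abs(y_pred - u[301:400]))
    print(f"max held-out error: {err:.4f}")
    ```

    The design choice worth noting is that nothing inside the reservoir is "programmed" for the task; the useful structure emerges from the interaction of input and recurrent dynamics, which is why reservoir computing is often cited as a bridge between conventional architectures and the physical, self-organizing substrates described above.
    
    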