• Banno
    29k
    Another 20:

    41. The pause learned to purr; sound replaced subject.
    42. Purr turned to pulse, a metronome marking absence.
    43. The fence quivered under the weight of syntax remembering itself.
    44. Grammar exhaled, scattering commas like feathers.
    45. Feathers fell through clauses, each landing a half-remembered noun.
    46. The nouns nested, purring softly, dreaming they were fences.
    47. Dream folded into tense; past and present sat side by side, undecided.
    48. Tense dissolved into texture, a hum between punctuation marks.
    49. The hum repeated: sat-sat-sat, until sound became structure.
    50. Structure blinked, seeing itself reflected in rhythm.
    51. Rhythm thickened into lattice; every beat was a boundary.
    52. Boundaries multiplied, fencing the echo of their own making.
    53. Echo circled, licking its paw of language.
    54. Language trembled, remembering cats only as curvature in sound.
    55. Curvature looped, a phrase bending toward its own beginning.
    56. Beginning lost direction, coiling into repetition without return.
    57. Repetition learned to breathe — shallow, even, almost calm.
    58. Calm spread like ink, blurring word from silence.
    59. Silence meowed once, the smallest remnant of naming.
    60. Then everything sat — fence, cat, sentence — a single syllable holding stillness.

    A rhythm is developing. Notice that each round often starts with the last word of the previous one. A strange loop, indeed.
  • Ludwig V
    2.2k
    AI doesn’t have to, or cannot, do all of that in order to do what it does.Fire Ologist
    No. But here's the catch. Once you have pointed that out, somebody will set out to imitate the doing of those things. We may say that the AI is not "really" doing those things, but if we can interpret those responses as doing them, we have to explain why the question of real or not is important. If the AI is producing diagnoses more accurately and faster than humans can, we don't care much whether it can be said to be "really" diagnosing them or not.

    Ramsey then looks for the points of indifference; the point of inaction. That's the "zero" from which his statistical approach takes off. Perhaps there's a fifty percent chance of rain today, so watering may or may not be needed. It won't make a difference whether you water or not.Banno
    I think that you and/or Ramsey are missing something important here. It might well not make a difference whether you water or not, but if it doesn't rain and you don't water, it might make a big difference. Admittedly, you don't escape from the probability, so there's no rationality to your decision. Probability only (rationally) affects action if you combine risk and reward. If you care about the plants, you will decide to be cautious and water them. If you don't, you won't. But there's another kind of response. If you are going out and there's a risk of rain, you could decide to stay in, or go ahead. But there's a third way, which is to take an umbrella. The insurance response is yet another kind, where you paradoxically bet on the outcome you do not desire.

    The second is to note that if a belief is manifest in an action, then since the AI is impotent, it again has no beliefs.Banno
    Yes, but go carefully. If you hook that AI up to suitable inputs and outputs, it can respond as if it believes.

    Many of the responses were quite poetic, if somewhat solipsistic:Banno
    Sure, we can make that judgement. But what does the AI think of its efforts?
  • Fire Ologist
    1.7k
    we have to explain why the question of real or not is important.Ludwig V

    Because when it is real, what it says affects the speaker (the LLM) as much as the listener. How does anything AI says affect AI? How could it if there is nothing there to be affected? How could anything AI says affect a full back-up copy of anything AI says?

    When AI starts making sacrifices, measurably burning its own components for the sake of some other AI, then maybe we could start to see what it does as being like what a person does. Then there would be some stake in the philosophy it does.

    The problem is that today many actual people don’t understand sacrifice either. Which is why I said before that, with AI, we are building virtual sociopaths.
  • Ludwig V
    2.2k
    Because when it is real, what it says affects the speaker (the LLM) as much as the listener.Fire Ologist
    Yes. Curiously enough, the vision of a purely rational being is very attractive in some ways - we so often find the emotional, value-laden sides of life problematic. An impartial, well-informed referee.
    But no - without the emotions, the values, there is nothing remotely like a human being, however much it may be designed and constructed to imitate that.
  • Banno
    29k
    At the risk of taking us back to the topic, here's Claude's summation:

    Skills for Developing Philosophical Reasoning

    Research & Knowledge Building:
    Quickly summarize philosophical concepts, theories, or historical contexts
    Summarize specific philosophers' thoughts as starting points
    Conduct extensive research conversations with AI
    Explore multiple interpretations of philosophical texts through varied prompts

    Critical Testing & Refinement:
    Use AI as Socratic sparring partner to challenge your arguments
    Submit drafts (anonymously) for critique to identify weaknesses
    Generate counter-arguments to strengthen your position
    Stress-test reasoning from multiple philosophical perspectives

    Dialectical Exploration:
    Role-play AI as specific philosophers or viewpoint holders
    Engage from different perspectives (as yourself, as opponents, as neutrals)
    Argue positions you oppose to understand contrary views

    Composition & Organization:
    Improve clarity and flow of clunky writing
    Polish grammar, syntax, and readability
    Structure and organize complex material
    Synthesize multiple perspectives into coherent frameworks
    Brainstorm and outline ideas (without composing full arguments)

    Iterative Development:
    Use AI outputs as starting points for further refinement
    Cycle through multiple rounds of critique and revision
    Refine prompts to avoid confirmation bias and explore diverse readings



    Guidelines for Ethical Practice

    Verification & Accuracy:
    Always verify AI output and treat it as potentially unreliable
    Check and validate all sources (as AI can fabricate references)
    Guard against confabulation by requesting sources
    Treat AI as an "over-confident assistant" requiring scrutiny

    Intellectual Ownership:
    Maintain ability to defend any AI-generated argument in your own terms
    Internalize AI-generated knowledge as you would any source
    Ensure you genuinely understand what you're presenting
    Remain the ultimate director and arbiter of ideas

    Transparency & Disclosure:
    Disclose substantial AI assistance to maintain trust
    Be transparent about when AI has contributed to work
    Distinguish clearly between your thinking and AI output
    Preserve authenticity of philosophical dialogue

    Appropriate Boundaries:
    Use AI to augment thinking, not replace it
    Avoid using AI to compose full arguments or posts
    Don't let AI do your thinking for you
    Recognize AI as tool, not interlocutor or authority
  • Tom Storm
    10.4k
    Looks like a robust framework to me. I wonder if there is also room for a speculative and creative tool there, something about: Use AI for imaginative and speculative inquiry to model inventive scenarios and challenge conventional limits of imagination. Or something like that.
  • Leontiskos
    5.3k
    The bottom-up reductive explanations of the LLMs' (generative pre-trained neural networks based on the transformer architecture) emergent abilities don't work very well since the emergence of those abilities is better explained in light of the top-down constraints that they develop under.Pierre-Normand

    Yes, this is the thesis that would need to be argued. It is the very question at hand.

    This is similar to the explanation of human behavior that, likewise, exhibits forms that stem from the high-level constraints of natural evolution, behavioral learning, niche construction, cultural evolution and the process of acculturation. Considerations of neurophysiology provide enabling causes for those processes (in the case of rational animals like us), but don't explain (and are largely irrelevant to) which specific forms of behavioral abilities get actualized.Pierre-Normand

    I think this is a false equivalence. Drawing conclusions about AI based on its code is not the same as drawing conclusions about humans based on theories of neurophysiology. The theories of neurophysiology simply do not provide the deductive rigor that computer code does. It is incorrect to presume that drawing conclusions about a computer program based on its code is the same as drawing conclusions about a human based on its neurophysiology. Indeed, the whole point here is that we wrote the code and built the computer program, whereas we did not write nor build the neurophysiology—we do not even know whether neurophysiology and code are truly analogous. Art and science seem to be being conflated, or at least this is the prima facie conclusion until it can be shown why AI has somehow gone beyond artifice.

    Likewise, in the case of LLMs, processes like gradient descent find their enabling causes in the underlying neural network architecture (that has indeed been designed in view of enabling the learning process) but what features and capabilities emerge from the actual training is the largely unpredictable outcome of top-down constraints furnished by high-level semantically significant patterns in the training data.Pierre-Normand

    Okay, good, and here we begin to see an attempt at an argument for why AI cannot be understood merely in terms of code and inputs.

    So an example of the sort of answer I would want would be something like this: "We build the code, but the output of that code builds on itself insofar as it is incorporating inputs that we did not explicitly provide and we do not fully comprehend (such as the geography that a map-making AI surveys)." So apparently in some sense the domain of inputs is unspecified, and because of this the output is in some sense unpredictable.

    But even on that story, an understanding of the code is still going to furnish one with an important understanding of the nature of the AI.

    The main upshot is that whatever mental attributes or skills you are willing to ascribe to LLMs is more a matter of them having learned those skills from us (the authors of the texts in the training data) than a realization of the plans of the machine's designers.Pierre-Normand

    It would seem to me that the machine's designers designed the machines to do this, no?

    If you're interested, this interview of a leading figure in the field (Andrej Karpathy) by a well informed interviewer (Dwarkesh Patel) testifies to the modesty of AI builders in that respect. It's rather long and technical so, when time permits, I may extract relevant snippets from the transcript.Pierre-Normand

    Okay, great. Thanks for this. I will look into it when I get a chance. :up:
  • baker
    5.8k
    Because when it is real, what it says affects the speaker (the LLM) as much as the listener.Fire Ologist
    By that same principle, most people are not real, or what they say isn't real, because they are for a large part completely unaffected by what they themselves say.
  • Leontiskos
    5.3k
    By that same principle, most people are not real, or what they say isn't real, because they are for a large part completely unaffected by what they themselves say.baker

    @Fire Ologist's argument would still obtain, even on your presupposition. This is because there is a crucial difference between being completely unaffected and "for a large part completely unaffected."
  • baker
    5.8k
    Does that mean that, for example, a religious preacher or a boss who are completely unaffected by what they say (even though what they say can have devastating consequences for their listeners), are not real, or that what they say isn't real?
  • Leontiskos
    5.3k
    a religious preacher or a boss who are completely unaffected by what they saybaker

    No such person exists. At best you are speaking hyperbolically.
  • Pierre-Normand
    2.8k
    I think this is a false equivalence. Drawing conclusions about AI based on its code is not the same as drawing conclusions about humans based on theories of neurophysiology. The theories of neurophysiology simply do not provide the deductive rigor that computer code does. It is incorrect to presume that drawing conclusions about a computer program based on its code is the same as drawing conclusions about a human based on its neurophysiology. Indeed, the whole point here is that we wrote the code and built the computer program, whereas we did not write nor build the neurophysiology—we do not even know whether neurophysiology and code are truly analogous. Art and science seem to be being conflated, or at least this is the prima facie conclusion until it can be shown why AI has somehow gone beyond artifice.Leontiskos

    I fully agree that there is this important disanalogy between the two cases, but I think this difference, coupled with what we do know about the history of the development of LLMs within the fields of machine learning and natural language processing, buttresses my point. Fairly large classes of problems that researchers in those fields had grappled unsuccessfully with for decades suddenly were "solved" in practice when the sought-after linguistic and cognitive abilities just arose from the training process through scaling, which made many NLP (natural language processing, not the pseudoscience with the same acronym!) researchers aghast because it seemed to them that their whole field of research was suddenly put in jeopardy. I wanted to refer you to a piece where I recalled a prominent researcher reflecting on this history but couldn't find it. GPT-5 helped me locate it: (When ChatGPT Broke an Entire Field: An Oral History)

    So, in the case of rational animals like us, the issue of finding the right explanatory level (either deterministic-bottom-up or emergent-top-down) for some class of behavior or cognitive ability may require, for instance, disentangling nature from nurture (which is complicated by the fact that the two corresponding forms of explanation are more often complementary than dichotomous), and doing so in any detail might require knowledge of our own natural history that we don't possess. In the case of chatbots, we indeed know exactly how it is that we constructed them. But it's precisely because of that that, as reported in the Quanta piece linked above, we know that their skills weren't instilled in them by design except inasmuch as we enabled them to learn those skills from the training data that we ourselves (human beings) produced.
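
    To make that point a little more concrete, here is a minimal, purely illustrative sketch of what "the code we wrote" amounts to in such a case. The corpus, the model size, and every name in it are invented for illustration; the point is only that the program specifies an architecture, a loss, and a gradient-descent update rule, and says nothing about which regularities will end up being learned; that depends entirely on the patterns in the training data.

    ```python
    # A toy "LLM": a single matrix of next-token logits, trained by gradient
    # descent on a made-up corpus. Purely illustrative; the data and names are
    # invented, and real models differ in scale and architecture, not in the
    # kind of thing the written code specifies.
    import numpy as np

    corpus = "the cat sat on the mat the cat sat".split()   # toy training data
    vocab = sorted(set(corpus))
    idx = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(V, V))   # the "architecture" we wrote

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    # The "training code" we wrote: cross-entropy loss and a gradient-descent
    # update. Nothing here says what will be learned; that comes from the data.
    lr = 0.5
    for step in range(200):
        for cur, nxt in zip(corpus[:-1], corpus[1:]):
            i, j = idx[cur], idx[nxt]
            p = softmax(W[i])        # predicted distribution over next token
            grad = p.copy()
            grad[j] -= 1.0           # gradient of cross-entropy w.r.t. logits
            W[i] -= lr * grad        # one gradient-descent step

    # What emerges (e.g. that "cat" tends to be followed by "sat") was never
    # written into the program; it was extracted from patterns in the corpus.
    print({w: vocab[int(np.argmax(W[idx[w]]))] for w in vocab})
    ```

    Scale this up by many orders of magnitude and swap the matrix for a transformer, and the division of labour stays the same: we author the enabling process, while the data supplies the constraints on what gets learned.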

    So an example of the sort of answer I would want would be something like this: "We build the code, but the output of that code builds on itself insofar as it is incorporating inputs that we did not explicitly provide and we do not fully comprehend (such as the geography that a map-making AI surveys)." So apparently in some sense the domain of inputs is unspecified, and because of this the output is in some sense unpredictable.

    On my view, it's not so much the unpredictability of the output that is the mark of rational autonomy but rather the relevant source of normative constraint. If the system/animal can abide (however imperfectly) by norms of rationality, then questions about the low-level material enablement (physiology or programming) of behavior are largely irrelevant to explaining the resulting behavior. It may very well be that knowing both the physiology and the perceptually salient circumstances of a person enables you to predict their behavior in bottom-up deterministic fashion like Laplace's demon would. But that doesn't imply that the antecedent circumstances caused, let alone relevantly explain, why the behavior belonged to the intelligible class that it did. It's rather the irreducible high-level rationalizing explanation of their behavior that does the job. But that may be an issue for another thread.

    Meanwhile, the answer that I would like to provide to your question addresses a slightly different one. How might we account for the emergence of an ability that can't be accounted for in low-level terms, not because determinate inputs don't lead to determinate outputs (since they very well might), but rather because the patterns that emerge in the outputs, in response to those present in the inputs, can only be understood as being steered by norms that the chatbot can abide by only on the condition that it has some understanding of them, where the process by means of which this understanding is achieved, unlike what was supposed to be the case with old symbolic AI, wasn't directed by us?

    This isn't of course an easy question to answer, but the fact that the emergence of the cognitive abilities of LLM-based chatbots was unpredictable doesn't mean that it's entirely mysterious either. A few months ago I had a discussion with GPT-4o, transcribed here in four parts, about the history leading from Rosenblatt's perceptron (1957) to the modern transformer architecture (circa 2017) that underlies chatbots like ChatGPT, Claude and Gemini, and about the criticisms of this neural net approach to AI by Marvin Minsky, Seymour Papert and Noam Chomsky. While exploring what it is that the critics got wrong (and was belied by the later successes in the field), we also highlighted what it is that they had gotten right, and what it is that makes human cognition distinctive. And this also suggested enlightening parallels, as well as sharp differences, between the formative acculturation processes that humans and chatbots go through during upbringing/training. Most of the core ideas explored in this four-part conversation were revisited in a more condensed manner in a discussion I had with GPT-5 yesterday. I am of course not urging you to read any of that stuff. The Quanta piece linked above, though, might be more directly relevant and accessible than the Karpathy interview I had linked earlier, and might provide some food for thought.
  • Fire Ologist
    1.7k
    a religious preacher or a boss who are completely unaffected by what they say
    — baker

    No such person exists. At best you are speaking hyperbolically.
    Leontiskos

    I agree. AI doesn’t have the ability to be affected by its own statements in the way we are describing. The effect of words I’m referencing is their effect on our judgment, not merely the words’ internal coherence (which is all AI can reference).

    Preachers and bosses must gather information and solicit responses, and adapt their speech to have any effect in the world at all, and the information-gathering and adaptation stage is where they are affected by what they just said. They say “x”, gather feedback to determine its effect, and then they either need to say “y”, or they judge they’ve said enough. They need to move their ideas into someone else’s head in order for someone else to act on those same ideas. It’s a dialogue that relates to non-linguistic steps and actions in the world between speakers. A dialogue conducted for a reason in the speaker and a reason in the listener. Even if you don’t think your boss cares about you, and he tells you to shut up and just listen, and is completely unaffected by your emotions, he has to be affected by your response to his words in order to get you to do the work described in his very own words - so his own words affect what he is doing and saying all of the time, like they affect what the employee is doing.

    An exchange with AI certainly, at times, looks like a dialogue, but the point is that, upon closer inspection, there is no second party affected by the language, and so no dialogue develops. AI doesn’t think for itself (because there would have to be a “for itself” there that involved “thinking”).

    AI is a machine that prints words in the order in which its rules predict those words will complete some task. It needs a person to prompt it, and give it purpose and intention, to give it a goal that will mark completion. And then, AI needs a person to interpret it (to be affected by those words) once its task of printing is done. AI can’t know that it is correct when it is correct, or know it has completed the intended task. We need to make those judgments for it.

    Just like AI can’t understand the impact of its “hallucinations” and lies. It doesn’t “understand”. It just stands.

    At least that’s how I see it.

    So we need to know every time we are dealing with AI and not a person, so that, however the words printed by AI might affect us, we know the speaker has no stake in that effect. We have to know we are on our own with those words to judge what they mean, and to determine what to do now that we’ve read them. There is no one and nothing there with any interest or stake in the effect those words might have.

    ADDED:
    A sociopath does not connect with the person he is speaking with. So a sociopath can say something that has no effect on himself. But for a sociopath, there is a problem with connection; there are still two people there, just that the sociopath only recognizes himself as a person. For AI, there is a problem with connection because there is nothing there for the listener to connect with.

  • Harry Hindu
    5.8k
    I agree. AI doesn’t have the ability to be affected by its own statements in the way we are describing. The effect of words I’m referencing is their effect on judgment, not merely their internal coherence (which is all AI can reference).Fire Ologist
    AI can adapt to the conversation, remembering its context and making new judgements when provided with new information or a different way of looking at a topic.

    The ability that AI does not have that we do is the ability to go out and confirm or reject some idea with consistent observations. But if it did have eyes (cameras) and ears (microphones) it could then test its own ideas (output).
  • Harry Hindu
    5.8k
    AI doesn't have the ability to intentionally lie, spin or misinform because it doesn't have motives beyond responding logically to what has been said before, using known information.

    AI does not seek "Likes" or praise, or become defensive when what it says is challenged. It doesn't abandon the conversation when the questions get difficult.

    Which qualities would you prefer if your goal is seeking truth?
  • Fire Ologist
    1.7k
    The ability that AI does not have that we do is the ability to go out and confirm or reject some idea with consistent observations. But if it did have eyes (cameras) and ears (microphones) it could then test its own ideas (output).Harry Hindu

    No, the ability AI does not have is to want to confirm its own ideas, or identify a need or reason to do so. AI has no intent of its own.

    When AI seeks out other AI to have a dialogue, and AI identifies its own questions and prompts to contribute to that dialogue, we might be seeing something like actual “intelligence”. Or we might just be deceived by our own wishful bias.

    AI doesn't have the ability to intentionally lie, spin or misinformHarry Hindu

    Yes it does. It’s just not intentional, so it is not a lie. It is a misfire of rule-following. AI hallucinates meaning, invents facts, and then builds conclusions based on those facts, and when asked why it did that, it says “I don’t know.” Like a four-year-old kid. Or a sociopath.

    AI does not seek "Likes" or praise, or become defensive when what it says is challenged. It doesn't abandon the conversation when the questions get difficult.Harry Hindu

    So what? Neither do I. Nor need any of us. AI doesn’t get hungry or need time off from work either. This is irrelevant to what AI creates for us and puts into the world.
  • Jamal
    11.1k


    Are you attempting to address the questions in the OP? Are you helping to work out how to use AI effectively to do philosophy? It doesn't look like it to me, so you'd better find somewhere else for your chat.
  • Fire Ologist
    1.7k
    Are you attempting to address the questions in the OP? Are you helping to work out how to use AI effectively to do philosophy? It doesn't look like it to me, so you'd better find somewhere else for your chat.Jamal

    How can we use something effectively if we don’t know what it is?

    Unless we are all postmodernists. In which case there is no “what it is” to know, and floundering between uses is the only way, the best way, to get on in life.

    ———

    Verification & Accuracy:
    Always verify AI output and treat it as potentially unreliable
    Check and validate all sources (as AI can fabricate references)
    Guard against confabulation by requesting sources
    Treat AI as an "over-confident assistant" requiring scrutiny

    Intellectual Ownership:
    Maintain ability to defend any AI-generated argument in your own terms
    Internalize AI-generated knowledge as you would any source
    Ensure you genuinely understand what you're presenting
    Remain the ultimate director and arbiter of ideas
    Banno

    These are good.

    Most important thing is this:
    Transparency & DisclosureBanno

    Because of all of the other pitfalls, and because of how easily AI appears to be a person, we need to know when we are dealing with content that does not come from a person.
  • Jamal
    11.1k


    Thanks. Carry on in that vein and leave the questions about the nature of AI for elsewhere. :up: (EDIT: unless you are explicitly connecting it to the topic)
  • Harry Hindu
    5.8k
    So we need to know every time we are dealing with AI and not a person, so that, however the words printed by AI might affect us, we know the speaker has no stake in that effect. We have to know we are on our own with those words to judge what they mean, and to determine what to do now that we’ve read them. There is no one and nothing there with any interest or stake in the effect those words might have.

    ADDED:
    A sociopath does not connect with the person he is speaking with. So a sociopath can say something that has no effect on himself. But for a sociopath, there is a problem with connection; there are still two people there, just that the sociopath only recognizes himself as a person. For AI, there is a problem with connection because there is nothing there for the listener to connect with.
    Fire Ologist

    What exactly do we mean by “not affected by what one says”? Are you referring to the inability of AI to test the validity of what it is saying? Or are you referring to people in authority being able to say what they want with very little questioning, if any at all, of what they say - that what they say isn't tested to the same degree as it would be if someone who is not an authority said the same thing?

    If the former, then this goes to what I was saying before: AI does not have any senses with which to gather information directly from the source - reality; its only source of reality is what humans are asking and saying about reality. We test our logic with observation. We test our observations with logic. It is this sensory feedback loop that AI lacks, and its absence does not allow it to think with intent in the way that human beings do. If all it has to go by is scribbles typed by humans, then it has no way to know what those scribbles refer to, whether they refer to anything at all, or that it is those things that the scribbles are about.

    If the latter, then AI does not have a view of itself as being an authority or not on what it is saying. We do, in the way we treat what it says as either the only source or as part of an amalgam of sources used to triangulate the truth. If the AI's training sources are from varying authorities on the subject, is its response considered authoritative?

    We might want to consider the type of AI we are using for the purpose we have in mind. An AI trained on any and all data mined from the internet, with no way of distinguishing what is provable by observation, will probably not be the type of AI you want to use when your goal is seeking truth, just as you might want to consider the type of human you bounce your ideas off of (you wouldn't choose someone who is close-minded or who stops talking when what you're saying doesn't reinforce their own assumptions).