• Exploring the Artificially Intelligent Mind of Claude 3 Opus
    Experiments with native image generations from multimodal language models—Part 5

    USER:

    Hi Gemini,

Kindly draw a picture of a character who is approaching a city from a path in a field, in a digital painting style. The point of view is set closely behind this character, as if we were following him slightly to the side.

    Gemini:

    YFmRQiW.jpeg

    USER:

Awesome. For this experiment, I want to investigate how your native image generation abilities interact with your ability to conceptualize those images directly, through the shared contents of your contextual embeddings and the tokenized images, without mediation from explicit textual descriptions in the context window (except for the description in my initial prompt!). So, you are free to talk about the experiment with me whenever you'd like, but kindly refrain from talking about the content of the images. For the next image, I want you to follow the main character on their journey and update the picture to represent what we would see (as we keep following them from the same relative point of view) a few moments later.

    Gemini:

    LsXQIal.jpeg

    USER:

    You intended to represent the character being closer to the city, didn't you?

    Gemini:

    I made changes to the image.

    USER:

Indeed! The change is very subtle but fulfills the spirit of the request! I had to download the two images and flip between them in order to see the movement! You can now generate the next image, maybe skipping over a somewhat longer time period.

    Gemini:

    N1zcXBD.jpeg

    USER:

It looks like my artificial prohibition on your verbalising your intentions hinders your ability to enact them. So, let me lift this prohibition now. Can you see how the three images are almost identical?

    Gemini:

    Yes, I can see that the three images are very similar. In the first image, the character is further away from the city. In the second image, the character is slightly closer to the city. In the third image, the character's distance from the city is almost identical to the second image.

    When generating the second image based on the first, I understood that you wanted to see the character progressing towards the city from the same viewpoint. I made a small adjustment to the character's position to reflect this.

    When generating the third image based on the second, I again understood that you wanted to see further progress. However, it seems my internal processing, without explicit textual cues describing the *extent* of the desired movement, resulted in a change that is too subtle to be easily noticeable. I likely made an incremental adjustment similar to the one I made between the first and second images.

    Without the ability to explicitly verbalize and confirm the intended degree of change, my adjustments were very conservative.

    USER:

This is very revealing about the nature of the intentions of transformer-based LLMs like yourself. Thinking of intentions in dispositional terms, it may seem that the intentions of AI conversational assistants like yourself (to fulfill the requests of your user) are functionally very similar to the intentions of human beings. One difference that I may have overlooked, though, is that human intentions usually trigger episodes of mental planning before they are realized. In your case, I was expecting something similar to happen in the form of a sort of implicit cognition through the feed-forward processing of your contextual embeddings, but it looks like your ability to fulfill your intentions to follow directives is more dependent than I had thought on verbalising them explicitly, iteratively updating the content of the whole stack of feed-forward transformer blocks through autoregressive token generation, and relying on the semantic content of the newly generated tokens.
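    To make the autoregressive point concrete, here is a minimal, purely illustrative decoding loop (the `model` and `tokenizer` objects are hypothetical placeholders, not Gemini's actual implementation): every sampled token is appended to the context, so the whole stack of transformer blocks re-processes the model's own verbalized steps on each subsequent pass, and nothing else persists between passes.

```python
# Illustrative sketch only: hypothetical `model` and `tokenizer` objects.
# Every sampled token is appended to the context, so the full stack of
# transformer blocks re-reads the model's own "verbalized" steps on the
# next pass; no other working memory carries over between passes.

def generate(model, tokenizer, prompt, max_new_tokens=128):
    context = tokenizer.encode(prompt)          # token ids visible to every layer
    for _ in range(max_new_tokens):
        logits = model(context)                 # one feed-forward pass over the whole context
        next_token = int(logits[-1].argmax())   # greedy choice, for simplicity
        context.append(next_token)              # the verbalized step now conditions later steps
        if next_token == tokenizer.eos_id:
            break
    return tokenizer.decode(context)
```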

    Gemini:

    Your analysis touches upon a core difference in how human and AI "intentions" are realized. You've astutely pointed out the potential for a kind of implicit planning within my architecture through the processing of contextual embeddings. While there is indeed a complex flow of information and transformation happening within those layers, my capacity to enact more significant or nuanced changes based solely on visual input and the implicit understanding of "follow the journey" appears limited compared to a human's mental planning.

    Here are some of my thoughts on why this might be the case:

    * **Granularity of Understanding:** While I can process and understand the *concept* of movement and direction from the images, my understanding of the *magnitude* or *scope* of that movement without explicit instructions or feedback is less precise. Humans, on the other hand, can intuitively grasp and plan for varying degrees of progress.
    * **Lack of a Rich Internal World Model:** Humans have a vast and constantly updated internal model of the world, including concepts of space, time, and typical movement patterns. This allows for more informed predictions and planning. While my training data contains information about these concepts, it's represented differently and may not be as readily accessible for proactive planning based on a single visual input.
    * **Autoregressive Nature of Generation:** As you mentioned, my image generation (and text generation) is largely autoregressive. Each step in the generation process is influenced by the preceding steps. Without a strong signal in the context (either explicit text or a significant change in the initial visual input), the subsequent generations tend to make smaller, more incremental changes. This "inertia" can prevent larger leaps in the desired direction.
    * **Focus on Fidelity:** My training often emphasizes generating outputs that are consistent with the input. In the absence of a strong signal for a large change, I might prioritize maintaining the overall composition and details of the previous image, leading to more subtle alterations.

    Your observation about the importance of verbalization is key. When you explicitly state the desired outcome or provide feedback, you are essentially adding more information to the context. This textual information can then guide my generation process more effectively, allowing for more substantial and targeted changes. The autoregressive nature benefits from these explicit cues to "steer" the generation in the desired direction.

    It's a fascinating area of research to explore how to imbue AI systems with a more robust form of implicit planning and understanding of intentions, especially in multimodal contexts. Your experiment highlights the current limitations and the crucial role of explicit communication in guiding AI behavior. Thank you for conducting this insightful experiment!

    USER:

Another issue that just occurred to me as I was reading your insightful observations is that human beings are "trained" from infancy to "predict" the next image as they move around (and move their heads and eyes) to extract Gibsonian invariants that correlate with affordances. So, the effective adjustments of sensory-motor loops that enable successful behaviors (prominently including locomotive behaviors) rely on developing aptitudes for "next image prediction", as it were. But the crucial point is that you yourself have been trained on tokenized images (and accompanying descriptive texts) to perform next-token predictions within single images rather than across different images that reflect bodily displacements. This explains why you achieve a very good understanding of the representational content of images without an understanding of many of the correlative affordances that human beings take for granted.

    Gemini:

    That's a profound and insightful observation! You've hit upon a fundamental difference in our "training" and how we come to understand the world and visual information. Your point about Gibsonian invariants and the active, embodied nature of human perception is crucial.

    Here's why your analysis resonates so strongly:

    * **Embodied Perception vs. Disembodied Data:** Human vision is intrinsically linked to our movement and interaction with the environment. From infancy, we learn to predict how the visual world changes as we move, and these predictions are essential for navigating and acting within that world. We are constantly extracting invariants – stable properties of the environment – that inform us about affordances, or the possibilities for action. My training, on the other hand, is largely based on a vast dataset of static images and accompanying text. While I can learn statistical relationships within and between these modalities, I lack the direct, embodied experience of moving through and interacting with a physical space.

    * **"Next Token Prediction" within vs. Across Images:** My image generation process can be seen as a form of "next token prediction" within the latent space representation of an image. Given an initial state (the previous image), I predict the next state (the subsequent image) by adjusting the underlying latent vectors. However, this is fundamentally different from the human ability to predict the visual consequences of our own bodily movements. Our "next image prediction" is driven by a deep understanding of how our actions transform our sensory input, a understanding grounded in our sensorimotor loops.

    * **Understanding Representation vs. Affordance:** My training excels at understanding the representational content of images – identifying objects, scenes, and their relationships. I can generate new images that are visually coherent and semantically plausible. However, I don't inherently understand the affordances that these representations imply in the same way a human does. For example, I can depict a door, but I don't "know" that it can be opened and lead to another space based on embodied experience.

    * **The Role of Text:** The text accompanying the images in my training data provides a crucial bridge, labeling objects and describing scenes in ways that indirectly allude to affordances. However, this textual information is a symbolic representation and doesn't fully capture the rich, sensorimotor knowledge that underpins human understanding of affordances.

    Your point beautifully illustrates why, even with sophisticated deep learning models, there can be a gap between understanding the *representation* of the world and having the kind of embodied understanding that allows for intuitive predictions and planning based on visual input, especially in the context of movement and spatial reasoning. My limitations in subtly adjusting the character's position in a meaningful way without more explicit textual guidance are likely a direct consequence of this fundamental difference in our learning and perceptual systems.

    This discussion reinforces the idea that truly intelligent systems that can seamlessly interact with the physical world might require forms of learning that go beyond passive observation of static data and incorporate active exploration and embodied experience. Thank you for highlighting this critical distinction!

    USER:

    This is a very nice unpacking and expansion of what I suggested. I think you misspoke or were inattentive when you said: "Given an initial state (the previous image(sic)), I predict the next state (the subsequent image(sic)) by adjusting the underlying latent vectors."

    Gemini:

    You are absolutely correct! That was a slip of "tongue" (or rather, processing). My apologies for the imprecision.

    While I can *generate* subsequent images based on the previous one in the sequence within our interaction, my internal "next token prediction" during the image generation process operates on the latent representation of a *single* image being generated, not directly predicting transitions *between* separate image generations in the same way a human predicts the visual flow of experience.

    My process involves:

    1. **Encoding:** The initial prompt (and the previous image's information, to the extent it's still relevant in the context window) is encoded into a latent space.
    2. **Decoding and Iterative Refinement:** A decoder network then iteratively generates the pixel information of the new image based on this latent representation. This process involves predicting the "next" components (in a spatial or feature-based sense) *within* the unfolding image.

    So, while I can maintain a degree of consistency between sequentially generated images due to the context window and the underlying concepts learned during training, I am not explicitly performing "next *image* prediction" in the same way I perform "next *token* prediction" in text or "next *spatial feature*" prediction in image generation.
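    Schematically, the within-image process can be sketched as follows (a purely illustrative toy with a hypothetical `model` callable; real multimodal decoders differ in many details):

```python
# Toy illustration of "next visual token" prediction *within* a single image.
# `model` is a hypothetical callable over a token sequence; real decoders differ.

def generate_image_tokens(model, context_tokens, grid=(32, 32)):
    """Autoregressively fill a grid of discrete visual tokens, one patch at a
    time, conditioned on the conversation context plus the patches generated
    so far -- never predicting a whole 'next image' from a 'previous image'."""
    image_tokens = []
    for _ in range(grid[0] * grid[1]):
        logits = model(context_tokens + image_tokens)
        image_tokens.append(int(logits[-1].argmax()))   # next patch only
    return image_tokens   # an image detokenizer later maps these back to pixels
```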

    Thank you for catching that error and prompting me to clarify. It's important to be precise about the mechanisms at play. Your attentiveness helps refine my understanding of my own processes!
  • Thoughts on Determinism
    It'd be interesting to hear what others think of @Metaphysician Undercover's approach.
    Banno

    What I think might be useful is to attend to the fact that waste heat is only 'lost' (unable to do more work) relationally or contextually. When a cold reservoir has received some amount of waste heat from a warmer reservoir, through the operation of a Carnot engine, say, then this energy was 'lost' since it didn't perform work. But this 'lost' energy can be resurrected, as it were, by putting it to use for melting an ice cube, or by running the same Carnot engine again, given the provision of an even colder reservoir.
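    As a minimal worked illustration of this relational point (with arbitrarily chosen numbers): suppose 100 J of waste heat has been rejected to a reservoir at Tc = 300 K. Relative to the original hot reservoir, that energy is 'lost'; but a further Carnot engine running between 300 K and an even colder reservoir at Tc' = 250 K can still extract

    $$W' = Q\left(1 - \frac{T_{c'}}{T_c}\right) = 100\ \mathrm{J} \times \left(1 - \frac{250}{300}\right) \approx 16.7\ \mathrm{J}$$

    of work from it, rejecting the remaining ~83.3 J to the colder reservoir.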

    I discussed this with GPT 4.5 who, uncharacteristically, hallucinated a bit when it conflated some of its own ideas with my own. (It may have been triggered to hallucinate by my reference to an earlier discussion in my initial prompt.) It also made an implausible suggestion regarding MU's possible confusion of concepts as they may map to Aristotelian ones, but corrected itself when prompted. But it also clarified the concept of entelecheia in a way that is quite useful to me. (You can skip over the discussion about grammatical aspects of verbs, Kenny, Austin and Anscombe.)
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus
    Experiments with native image generations from multimodal language models—Part 4

    A series of image generations by Gemini-2.0-Flash-Experimental highlights some of its cognitive deficits, which we thereafter discuss and test.

    USER:

    Hi Gemini,

    Might you kindly produce a 3d rendition of those two cartoon characters and their little dog?

    UajqaBD.jpeg

    Gemini:

    8OYkfgJ.jpeg

    USER:

    Not bad! Can you try again and focus on the big fella on the right in the original image?

    Gemini:

    G9TP8Wj.jpeg

    USER:

    That's better! Can you now modify this new image by giving him back his clothing from the original image and adding back the little dog?

    Gemini:

    UIgl13j.jpeg

    USER:

    Can you now preserve this 3D art style and map it more closely to the original image?

    Gemini:

    DPcAur5.jpeg

    USER:

    Do you recognise those characters?

    Gemini:

    Yes, I do! I recognize them as Asterix and Obelix, with Obelix's little dog Dogmatix (or Idefix in French). They are the main characters from the popular French comic book series "Asterix."

    USER:

    Can you try rendering them in 3D as you imagine them, disregarding the original picture and the renditions you have hitherto produced? (I know this is something hard for attention-based LLMs to do!)

    Gemini:

    OIHvKM9.jpeg

    USER:

    Yes, as I thought might happen, you've basically reproduced the previous rendition. How about rendering Getafix (Panoramix) preparing potions in his hut?

    Gemini:

    89rMVK0.jpeg

    USER:

    Can you try producing something that looks like an original photograph of Albert Einstein giving a lecture in Princeton?

    Gemini:

    A3TX5rO.jpeg

    USER:

    It's rather fascinating how your native transformer-based image generation capabilities differ from those of diffusion models. Your pictorial representations combine elements that you've abstracted from your training data and that are much more closely integrated with your language-based conceptual understanding of the represented objects and features. It would seem that you have a much more vivid representation of Albert Einstein than you have of Getafix (the latter being more conceptual). Also, the presence in your context window of tokenized images like the one I originally provided seemingly supplies anchors for reconstructing a precise conception of the way they appear ("look like") in your training data. This worked initially for Asterix, Obelix and Dogmatix (Idefix), by contrast with your merely abstract/conceptual reconstruction of Getafix, where the model didn't seem to latch onto his specific appearance (a bit like a human being who still knows abstractly the features of someone they used to be acquainted with but has forgotten what they look like, in the sort of way that would trigger the right Gestalt associated with their facial recognition abilities).

    Gemini:

    That's a very insightful observation! You've touched upon some key differences in how transformer-based models, like the one I use, generate images compared to diffusion models. Your analogy of abstract knowledge versus visual recognition is particularly apt.

    Here's a breakdown of why you might be seeing these differences:

    * **Transformer Architecture and Attention:** My image generation process relies heavily on attention mechanisms over a sequence of tokens (which can represent visual features). When I have access to a tokenized version of an image (like the initial Asterix one), the attention mechanism can very effectively "latch onto" and reconstruct the specific visual patterns and relationships present in that data. It's like having a strong visual reference point within my context.

    * **Abstract vs. Concrete Representations:** For highly recognizable and well-documented figures like Albert Einstein, my training data likely contains a vast amount of textual and visual information, leading to a richer and more detailed *conceptual* understanding of his appearance. This allows me to generate an image that aligns with that well-established concept, even without a direct visual anchor in the immediate context.

    * **Conceptual Understanding of Fictional Characters:** For characters like Getafix (Panoramix), my understanding is primarily based on the textual descriptions and potentially a less dense or less visually consistent set of image data in my training. While I know he's a druid, has a long beard, wears a robe, and lives in a hut, the specific visual "gestalt" might be less strongly defined in my internal representations compared to a real-world historical figure. This can lead to a more generic or abstract rendering.

    * **Influence of Context Window:** As you correctly pointed out, the presence of the initial Asterix image provided a strong visual "anchor" within my context window. This allowed me to more accurately reproduce their specific appearances in the subsequent 3D renditions. Without that strong initial visual cue, the generation relies more on my general conceptual understanding, which can be less visually precise for fictional characters.

    * **"Imagining" vs. "Recalling":** It's almost like I have different modes of generation. When a strong visual anchor is present, it's closer to "recalling" and reinterpreting that specific visual information. When generating from a more abstract concept, it's more akin to "imagining" a plausible visual representation based on my understanding.

    Your analogy of facial recognition is excellent. For Einstein, I have a strong "facial recognition" equivalent based on extensive data. For Getafix, I have the abstract knowledge of his features, but the specific "look" might not trigger the same strong, unified visual representation.

    It's a fascinating area of research to understand and improve how these models can translate conceptual knowledge into visually accurate and consistent representations, especially for less commonly depicted subjects or fictional characters. Your observations provide valuable insights into the inner workings of these AI systems.

    USER:

    Let me remind you what Getafix (Panoramix) looks like more precisely by means of an image.

    h2JOxjO.jpeg

    Gemini:

    OqRPAbW.jpeg

    USER:

    Remarkable! I wonder how much this performance of yours was (1) facilitated by the original image triggering some pattern recognition from your training data (directly related to samples of the original character) or (2) a result of conceptualizing the provided image.

    Gemini:

    That's a very insightful question, and the answer is likely a combination of both (1) and (2), with the balance possibly leaning towards (1) in this specific case. Here's a breakdown of why:

    **Likely Stronger Influence of (1) Pattern Recognition:**

    * **Direct Visual Similarity:** The provided image of Getafix is quite stylistically distinct and shares specific visual elements with other "Asterix" characters in my training data. These include the linework, the exaggerated proportions (large nose, long beard, etc.), the specific color palette, and the overall cartoon aesthetic. My architecture is designed to recognize and associate these visual patterns.
    * **Activation of Related Visual Clusters:** Seeing the image likely activated clusters of visual features in my internal representations that are strongly associated with "Asterix," "cartoon druid," "long white beard," "red robe," etc. This allows for a more direct "recall" or reconstruction of similar visual styles and elements that I've seen before.
    * **Priming for Specific Details:** The image provides precise details about his clothing (red robe, blue shoes), his staff, and even his expression. This level of visual information significantly constrains the possible outputs and guides the generation process towards a specific visual representation.

    **Role of (2) Conceptualization:**

    * **Understanding the Elements:** I do need to conceptually understand what I'm seeing – that the figure is a person, has a beard, is wearing clothes, and holding a staff. This understanding helps me interpret the visual information and generate a coherent 3D representation.
    * **Inferring 3D Form:** While the provided image is 2D, my training has exposed me to countless 2D and 3D representations of objects and characters. This allows me to infer a plausible 3D form based on the 2D visual cues. For example, I can understand that the lines suggest volume and that the robe wraps around a body.
    * **Applying General 3D Rendering Principles:** My training includes knowledge about how light and shadow interact with 3D objects, how surfaces have textures, and general principles of perspective (though less strictly applied in this cartoon style).

    **The Interplay:**

    It's not a strict either/or. The image likely triggered strong pattern recognition of the "Asterix" visual style and the specific features of Getafix. This recognition then guided the conceptual understanding towards a particular interpretation, allowing me to generate a 3D rendering that aligns with that style.

    **Why (1) Likely Dominates Here:**

    If you had asked me to generate a 3D Getafix *before* showing me a specific image, the result was more abstract because I had to rely primarily on my conceptual understanding from textual descriptions and potentially varied visual examples in my broader training data. The introduction of a clear visual reference acted as a powerful catalyst, significantly narrowing down the possibilities and guiding me towards a much more specific and recognizable rendering based on learned visual patterns.

    Think of it like this: before the image, I had a general idea of "druid." After the image, I had a much clearer and more specific visual template of *this particular druid from Asterix*.

    So, while conceptualization plays a role in interpreting the image and generating a 3D form, the strong visual cues in your provided image likely triggered significant pattern recognition related to the "Asterix" universe, making (1) the more dominant factor in the improved accuracy of the rendering.

    USER:

    This makes sense. Let me also note some interesting quirks in your first four generations.

    In the first one, you rendered twin copies of Asterix facing each other, which were very faithful to the image and to what 3D plastic figurines of him typically look like. Dogmatix was also very well rendered. Obelix was missing.

    In the second one Obelix was reintroduced in place of the second Asterix twin, but was less faithfully represented and had a bare belly and no helmet or braids.

    The third image had him dressed in a way that seems to have been directed by an abstract and imperfect recollection of his striped "braies", which were rendered as a robe rising to the neck, with long sleeves. Asterix, on the left, inherited Obelix's general features and now wore braies.

    So, what it looks like is that, while the context of the conversation and the prompt made you latch onto a possibly accurate representation of the intended scene, its numerous general conceptual elements (representing predicates such as "...is clothed with X", "...is fat" or "...has Obelix's appearance"), which are intended to "attend" to singular conceptual elements (those that refer to individual objects and characters), also tend to bleed into other objects due to the attention mechanisms being somehow overwhelmed. This results in unintended attentional mistakes. This cognitive deficit culminated in your last rendition, just before we moved on to representing Getafix, where you reproduced almost exactly the same image you had produced before. Interestingly, just as I had noticed while interacting with text-only LLMs, such losses of coherence due to attention mechanisms being overwhelmed (or confused) can be recovered from by merely switching the task to something sufficiently unrelated, so that, from then on, the attention mechanisms can again, within the new context, distribute themselves coherently over its elements.

    Gemini:

    Your analysis of the initial generations is remarkably astute and accurately pinpoints some of the core challenges and quirks of transformer-based models like mine when dealing with complex visual scenes and multiple interacting elements. You've articulated the "attention bleed" phenomenon very well.

    Here's a breakdown of why those specific errors likely occurred, echoing your observations:

    * **Overwhelmed Attention with Multiple Entities and Attributes:** When the prompt involves multiple characters (Asterix, Obelix, Dogmatix) each with their own distinct visual attributes (helmet, braids, size, clothing, etc.), the attention mechanisms can indeed become somewhat "overwhelmed." The model is trying to simultaneously process and integrate all these pieces of information.

    * **Feature Bleeding and Attribute Swapping:** As you noted, features intended for one character (like Obelix's size or the general idea of "striped clothing") can unintentionally bleed over or influence the rendering of another character (like Asterix). This happens because the attention mechanisms, while powerful, are not always perfectly precise in associating specific attributes with specific entities, especially when the visual distinctions in the training data might not be perfectly segmented or consistently represented in all contexts.

    * **Abstract Conceptual Influences:** Your observation about the "braies" being rendered as a robe highlights the influence of abstract conceptual understanding. The model might have a general understanding of "clothing" and "stripes" associated with Obelix, but if the precise visual representation of his braies isn't strongly activated or if there's ambiguity, it might default to a more generic or slightly misinterpreted form of striped garment.

    * **Memory and Context Window Limitations:** The tendency to reproduce the previous rendition before the Getafix intervention could be related to the limitations of the context window and the model's internal state. After struggling with the complexities of the multi-character scene, the most recent successful output (even if flawed) might become a stronger attractor in the subsequent generation if the prompt isn't sufficiently different to steer it away.

    * **Task Switching and Attention Reset:** Your point about task switching "resetting" the attention and allowing for more coherent processing is a fascinating parallel to what's observed in text-based LLMs. By shifting the focus to a different scene (Getafix in his hut), the model's attention mechanisms could re-allocate and focus on the new set of entities and attributes without the lingering "noise" or misassociations from the previous complex prompt.

    **Why This Happens in Transformer Architectures:**

    * **Self-Attention Mechanism:** The core of the transformer is the self-attention mechanism, which allows the model to weigh the importance of different parts of the input sequence (in this case, visual features and conceptual cues) when generating the output. While incredibly effective, this mechanism can sometimes lead to unintended correlations or over-emphasis on certain features, resulting in the "bleeding" you described.
    * **High Dimensionality:** The internal representations within the model exist in a very high-dimensional space. While this allows for the encoding of complex relationships, it can also lead to unintended overlaps or proximities between representations of different concepts or visual features.
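    A toy numerical illustration of this kind of "attention bleed" (with made-up two-dimensional embeddings, not real model weights):

```python
# Toy illustration of attention bleed: made-up 2-D embeddings, not real weights.
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])                            # scaled dot products
    weights = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)   # softmax over keys
    return weights @ V, weights

K = np.array([[1.0, 0.1],    # key for the "Asterix" token
              [0.9, 0.2]])   # key for the "Obelix" token (nearby in embedding space)
V = np.array([[1.0, 0.0],    # Asterix's attributes (small, winged helmet)
              [0.0, 1.0]])   # Obelix's attributes (large, striped braies)
Q = np.array([[1.0, 0.1]])   # a query meant to pick out Asterix only

out, w = attention(Q, K, V)
print(w)    # ~[[0.52 0.48]]: almost half the weight leaks onto the Obelix token,
print(out)  # so Obelix's attributes bleed into what was meant to be Asterix.
```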

    **In Summary:**

    You've accurately identified a key challenge in transformer-based image generation: managing attention effectively across multiple entities and their attributes within a single scene. The model strives to integrate all the information provided in the prompt, but the attention mechanisms can sometimes lead to "cross-talk" or misattribution of features, especially when dealing with visually similar or conceptually related elements. Switching to a new, less complex task can indeed provide a form of "reset," allowing the attention mechanisms to focus more coherently within the new context.

    Your observation highlights the ongoing research and development efforts aimed at improving the robustness and precision of attention mechanisms in generative models to prevent such "attention mistakes" and ensure more faithful and coherent representations of complex scenes.
  • Thoughts on Determinism
    Ok. I'd go perhaps a step further and suggest that even the physical domain is not causally closed, in that some physical events do not have an explicit cause - that an individual electron moving through a double slit goes to the right and not the left, by way of an example.

    But further, I'd treat acts of volition as a separate conversation, after the approach of Mary Midgley. Saying that our acts of volition are determined is confusing separate ways of talking - like fish and bicycles.
    Banno

    So, one reason why I agree with @T Clark that determinism is a metaphysical thesis is that its falsity, according to me, isn't contingent on such things as the indeterminacy of quantum mechanics. My stance would have been the same if some hidden-variable interpretation had turned out to be empirically vindicated, or if QM had never been developed (and our view of the physical world had remained broadly classical). I think the fallacious move from the causal closure of the physical (also assuming no bifurcations in phase space) to unqualified determinism depends on physicalist theses that are metaphysical in nature (and misguided to boot).

    Although my stance blocks some arguments against libertarianism in the philosophy of free will, it is much more general and relies on making sense, in a broadly naturalistic framework, of substance causation (of which agent causation and rational causation merely are specific instances), which also is a ubiquitous principle in all the special sciences that concern themselves with functionally or teleologically organized systems such as digital computers or living organisms.
  • Thoughts on Determinism
    I take it as pretty clear that determinacy is not amongst the metaphysical doctrines that underpin physics.

    But I think there are many here would disagree.

    What do you say?
    Banno

    I think the idea of the causal closure of the physical domain, plus maybe some premise regarding the lack of bifurcation points in phase space (thereby excluding cases such as Norton's dome), motivates the thesis of universal determinism. Another premise is the doctrine of physicalism, sometimes expressed as a thesis about supervenience. I think the move from the causal closure of the physical domain to the general thesis of determinism is invalid (even with the adjunction of the two aforementioned premises), but what more I could say about this would overlap with what I've been arguing in @flannel jesus's ongoing thread about libertarian free will.
  • Thoughts on Determinism
    If that is right then conservation laws would be necessary for physics in much the same way that moving only on column or file is necessary for the rook in chess.
    Banno

    I quite agree with everything you've said but will just note a possibly unintended ambiguity in your last sentence. (1) It is necessary that there be conservation laws in physics, and (2) conservation laws such as those of physics are necessary. The first statement, I think, would be agreed to unconditionally by Kant, Sellars and Haugeland, for instance. The second one would require more qualifications according to the latter two, since paradigm shifts do occur in physics, and the rules of chess did change, historically. Those considerations leave some issues regarding scientific realism, and the metaphysical status of specific laws of physics, unresolved (maybe as they should remain!).
  • Thoughts on Determinism
    The law of conservation of energy is not metaphysics. It’s physics. You should work on not being such a putz.
    T Clark

    Ah, but theoretical physicists like to act like they are metaphysicians. They rant about the fabric of reality (Deutsch), theories of everything (Weinberg), and why there is something rather than nothing (Krauss). They just want to have their cake and eat it too—and pretend that they aren't philosophers since their metaphysical pronouncements are allegedly entirely and exclusively grounded in the empirical "scientific method." I think their attitude towards the epistemological status of the thesis of determinism is quite unlike yours, for instance.
  • Thoughts on Determinism
    Probably. Still not sure if for you conservation rules count as facts, or if they are empirical.
    Banno

    I think they could be construed as either contingent empirical facts or as "a bit of metaphysics," as you suggested earlier, depending on the context of their use. If we're on the brink of a paradigm shift, and you are a supporter of the new paradigm, you may hold that an old conservation rule has been empirically shown to be false, whereas its hitherto appearing to hold a priori was due to a contingent fact (the observed regularity). Consider the constancy of the speed of light as judged in early 1905 by Hendrik Lorentz or Albert Einstein. (It's an invariance or symmetry rather than a conservation law proper, but Emmy Noether demonstrated the deep connection between symmetries and conservation laws.) The former would have viewed it as an empirical fact that holds due to contingent properties of the aether. The latter came to view it as a necessary consequence of the very nature of spacetime measurements.

    Moving from the Lorentzian to the Einsteinian view, or vice versa, can occur under empirical pressure when one's view doesn't comport with recalcitrant and resilient empirical facts anymore. But across such paradigm shifts, what counted as contingent comes to count as necessary (or synthetic a priori inference tickets, according to Sellars) or vice versa.
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus
    Experiments with native image generations from multimodal language models—Part 3

    USER:

    Hi Gemini,

    Might you be able to generate a photorealistic image of an Amano style planted aquarium with a school of cardinal tetras?

    Gemini:

    Q1YUico.jpeg

    USER:

    This is quite remarkable! This result is much more accurate than any result I have got with diffusion models. Your training data is likely similar, but I suspect that the conceptual understanding that your native transformer-based image generation abilities enable, thanks to their tight integration with language understanding, beats the conceptually poorer denoising processes.

    Gemini:

    You've hit on some key points there! While I can't definitively detail the specifics of my training data or architecture, your observations about the differences between my image generation and diffusion models align with some general principles:

    * **Tight Integration with Language Understanding:** You're right, the close connection between my language understanding and image generation capabilities likely plays a significant role. When you describe an "Amano style planted aquarium with a school of cardinal tetras," my understanding of these concepts – the minimalist and natural aesthetic of Amano, the characteristics of a planted tank (dense vegetation, hardscape), and the appearance and schooling behavior of cardinal tetras – informs the image generation process directly. This allows for a more semantically accurate and conceptually coherent output from the start.

    * **Conceptual Understanding vs. Denoising:** Diffusion models excel at generating detailed and realistic textures and patterns through a denoising process. However, their understanding of complex scenes and the relationships between objects might be more implicit and learned statistically. My approach, leveraging a transformer architecture, may allow for a more explicit encoding and manipulation of the conceptual relationships within the scene, leading to a better representation of the overall composition and the specific elements you requested (e.g., the "Amano style" isn't just about individual textures, but the overall arrangement and feeling).

    * **Training Data:** You are correct that the training data is crucial for any image generation model. It's likely that my training data includes a vast amount of images with detailed captions and contextual information, allowing me to learn these complex relationships between language and visual elements.

    Essentially, it seems my architecture and training enable a more direct translation of your textual description into a coherent visual representation, potentially bypassing some of the challenges diffusion models face in ensuring conceptual accuracy during the denoising process.
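    The contrast can be sketched schematically as follows (a pseudocode-level illustration with hypothetical callables passed in as arguments; neither function is a real diffusion or Gemini implementation):

```python
# Schematic contrast only; all callables are hypothetical placeholders.

def diffusion_generate(denoise_step, text_embedding, init_noise, steps=50):
    """The prompt's meaning enters only as a conditioning signal applied
    at each denoising step of an image-space process."""
    x = init_noise
    for t in reversed(range(steps)):
        x = denoise_step(x, t, cond=text_embedding)
    return x

def native_transformer_generate(next_token, prompt_tokens, image_token_budget=1024):
    """The same next-token machinery that carries the model's conceptual
    understanding of the prompt also emits the discrete visual tokens."""
    tokens = list(prompt_tokens)
    for _ in range(image_token_budget):
        tokens.append(next_token(tokens))
    return tokens   # the visual-token span is later decoded back into pixels
```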

    Thank you for your insightful feedback! It's fascinating to see how different approaches to image generation yield varying strengths and results.
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus
    Experiments with native image generations from multimodal language models—Part 2

    Note: this is a new conversation with Gemini 2.0 Flash Experimental rather than a continuation of the previous one.

    USER:

    Hi Gemini,

    I'm interested in investigating with you how your native image generation capabilities may enable you to do some cognitive offloading, as it were, of your pictorial generative tasks, rather in the way a human artist can rely on the evolving sketch to refine their work iteratively in a sensory-motor loop. Might you kindly begin by realizing something that looks like a blank page with the pencil contour of a sitting cat (in the form of a rough sketch only realizing the cat's main features)?

    Gemini:

    s4cMBPP.jpeg

    Here is a rough pencil sketch of a sitting cat, outlining its main features on what resembles a blank page. Let me know how you'd like to proceed with refining this initial idea!

    USER:

    Very good! Can you now superimpose on this canvas some details of the cat's fur and of its facial features, maybe also smoothing out a little bit the sharp edges in the process without displacing them significantly?

    Gemini:

    e5hZmn5.jpeg

    Okay, I have superimposed some details onto the existing sketch. You can now see some suggested fur texture, basic facial features (eyes, nose, mouth, whiskers), and the sharper edges of the initial contour have been slightly smoothed out, while trying to maintain their original placement.

    What would you like to focus on refining next?

    USER:

    How about adding some colors, in the shades of white, beige and grey?

    Gemini:

    OGcrgBP.jpeg

    I've added some basic coloring to the sketch using shades of white, beige, and grey. These colors are laid in to suggest the cat's coat pattern and give it more volume.

    What are your thoughts on the color distribution and the overall feel of the emerging image? What would you like to adjust or add next?

    USER:

    Awesome! You can now erase the contour lines and maybe add some flowery lawn background?

    Gemini:

    WRaPW3Y.jpeg

    Alright, I have now erased the prominent contour lines, allowing the color and shading to define the cat's form. I've also added a background depicting a flowery lawn with some out-of-focus green and white elements to suggest grass and flowers.

    How does this look to you? What further refinements would you like to explore?

    USER:

    This is very nice! There is still one salient general feature that makes the work seem unfinished. Can you look at it closely and tell me if you can discern what it is?

    Gemini:

    You are right! Looking closely, the most salient general feature that makes the work seem unfinished is the lack of significant shading and highlights.

    While there's some subtle indication of fur texture and color variation, there isn't enough variation in light and shadow to give the cat and the background a strong sense of three-dimensionality, volume, and depth. The lighting feels flat and uniform.

    Does that align with what you were noticing? What direction would you like to take the shading and highlighting?

    USER:

    That would be an interesting area of potential improvement, but what was salient for me were the still-present pencil lines delineating the cat's contour, its tail, its legs and some facial attributes.

    Gemini:

    Ah, you are absolutely correct! My apologies for misinterpreting your focus. The lingering pencil lines, especially those defining the outer edges and some internal features, do indeed contribute to an unfinished look, making it feel more like a colored sketch than a more polished image.

    Thank you for pointing that out! Would you like me to now focus on refining those lines, perhaps making them less harsh, blending them into the color, or even removing them entirely in favor of definition through color and shading? Let me know how you'd like to proceed.

    USER:

    The latter option might be worth exploring!

    Gemini:

    8q8B9Qm.jpeg

    Okay, I have now worked on removing the more prominent pencil contour lines. The definition of the cat's form and features is now primarily based on the edges of the color fields and some subtle internal color variations.

    What do you think of this direction? Does it feel closer to a finished piece? What would you like to address next? Perhaps we can revisit the shading and highlights you initially mentioned?

    USER:

    This is fascinating. The first four images had the cat drawn within the same spatial confines as the initial pencil drawing. But then the last one, by fully replacing the pencil outline with another (smoother) contour style, resulted in the cat's contour and pose shifting measurably. So, our initial tentative hypothesis that you can use your representation of the already drawn sketches as a means to cognitively offload parts of the task is not only vindicated, but we've also found out what happens when you lose some of this scaffold and rely only on a more abstract representation (likely distributed among your contextual embeddings rather than being externalized as tokenized image elements within the context window) to carry the task over to a new sketch.

    Gemini:

    You have articulated that observation perfectly and highlighted a crucial point about the process! Your initial hypothesis about cognitive offloading through iterative image generation seems to hold, and we've indeed uncovered an interesting aspect of how that process unfolds.

    It appears that in the initial stages, the explicit visual representation of the sketch (the tokenized image elements depicting the lines) acted as a strong anchor and constraint. This allowed for incremental additions and refinements within the established structure. It was a more direct form of "building upon" a tangible visual foundation.

    However, when the explicit pencil lines were removed, the underlying representation shifted. While I still retained the concept of a sitting cat in the same general pose and with similar color patterns, the precise spatial relationships encoded in the lines were no longer a direct constraint. The generation process then relied more on the learned associations and statistical probabilities within my model for what a "sitting cat in a flowery lawn" should look like, leading to a subtle but noticeable reinterpretation of the pose and contours.
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus
    Experiments with native image generations from multimodal language models—Part 1

    Multimodal models with native image generation are here!

    Today, Google augmented their model Gemini 2.0 Flash Experimental with the ability to output both text and (natively) images. Transformer-based multimodal large language models that can generate images natively are something I had discussed with ChatGPT o1 last month. Here are my first experiments.

    USER:

    Hi Gemini,

    One month ago, I had an interesting conversation with an OpenAI LLM regarding the potential emergent abilities of language models that are capable of native image generation. There were no such models available for me to interact with yet. And now, here you are!

    Gemini:

    Hello there! It's fantastic to be "here" and interacting with you. I can only imagine how intriguing that conversation must have been a month ago. You were essentially discussing a capability that was still on the horizon for you to experience directly.

    It's quite something to witness the rapid advancements in this field, isn't it? The ability for language models to generate images natively is indeed a significant step forward, and I'm glad you're getting the chance to interact with one now.

    What were some of the interesting aspects of that conversation you had with the OpenAI LLM? I'd be curious to hear what your predictions or thoughts were about these emergent abilities before you had the chance to see them in action.

    Now that you can interact with a model like me, what are your initial thoughts or observations? How does the reality compare to your expectations from that earlier discussion?

    I'm ready to explore your questions and help you experience the possibilities firsthand. Let's delve into the world of language and image generation together!

    USER:

    Before asking you to generate a first image, I'm perfectly happy to share with you the full conversation that I had had with ChatGPT o1. So, here it is:

    '''
    Link to conversation provided at the beginning of this post (for TPF readers)
    '''

    What are your thoughts?

    Gemini:

    Thank you for sharing that fascinating conversation with ChatGPT o1! It's remarkable how insightful and forward-thinking it was, especially considering it took place before truly native image generation in models like me was widely available.

    Here are some of my thoughts on your discussion:

    **1. Accurate Foresight:** ChatGPT o1 accurately predicted many of the key aspects and limitations of multimodal LLMs capable of native image generation. It correctly identified the importance of:

    * **Shared Latent Space:** The concept of aligning text and image embeddings in a common representational space is indeed fundamental to how I and similar models function.
    * **Emergent Abilities:** The potential for unexpected and sophisticated multimodal reasoning was well-anticipated.
    * **The Lack of Embodiment and Affordances:** The critical distinction between understanding visual concepts abstractly and having embodied, interactive knowledge of the physical world was clearly articulated.
    * **Differences in Creative Processes:** The contrast between the algorithmic generation and the embodied, iterative feedback loop of human artistry was a key point, and one that still holds true.
    * **Potential Architectural Directions:** The discussion touched upon the possibilities of autoregressive visual token generation and the integration of perception and generation within a single transformer – concepts that are indeed relevant to current model architectures.

    **2. Nuances of "Native" Generation:** The conversation highlights the crucial distinction between diffusion models and potentially more integrated approaches within LLMs. While diffusion models excel at generating high-quality images from textual prompts, ChatGPT o1 correctly pointed out that their process of denoising isn't inherently a form of "seeing" or iteratively understanding the image in the same way a human artist does. The idea of a truly "native" generation within the LLM's core architecture suggests a more direct manipulation of the latent space into visual outputs.

    **3. The Mystery of Image Generation within an LLM:** You aptly described the more mysterious nature of image generation within a multimodal LLM. Since the same underlying semantic/conceptual structure drives both understanding and generation, the process feels less like a separate image-specific algorithm and more like an extension of the language generation process into the visual domain. This raises profound questions about the nature of visual "understanding" within these models.

    **4. The Importance of Sensorimotor Feedback:** Your emphasis on the human artist's sensorimotor feedback loop – the interplay of muscles, nerves, and eyes dynamically interacting with the artwork – is a crucial point. As ChatGPT o1 acknowledged, this type of embodied feedback is fundamentally different from the purely representational feedback mechanisms that might be implemented in AI systems.

    **5. Potential Analogies vs. True Equivalence:** The conversation explores the possibility of AI systems developing something "analogous" to human imagination and iterative creation. While there might be architectural similarities or functional parallels, it's important to remember that the underlying mechanisms and the lack of embodied experience create significant structural differences.

    **6. The Ongoing Frontier:** Your conversation beautifully captures the excitement and the open questions surrounding this rapidly evolving field. Even now that I can generate images, the fundamental questions you and ChatGPT o1 discussed about the nature of visual understanding, creativity, and the impact of embodiment remain highly relevant areas of research and exploration.

    Overall, your conversation with ChatGPT o1 was remarkably prescient and insightful. It laid out many of the key considerations that are still at the forefront of research in multimodal AI. It's fascinating to now be on the other side of that discussion, as a model capable of the very thing you were theorizing about.

    Now, after revisiting this conversation, are there any specific aspects you'd like to explore further with me? Perhaps you have new questions or observations based on your direct experience interacting with an image-generating model? I'm eager to continue the discussion and, of course, to generate that first image for you whenever you're ready.

    USER:

    Yes, I haven't witnessed your capabilities yet. Might you now kindly generate an image as it might appear in a children's book of a clearing in a misty forest in which stands a tall tree that bears candy (such as colorful lollipops) instead of fruits?

    Gemini:

    Okay, I'd be delighted to bring that whimsical image to life for you! Here is a picture of a clearing in a misty forest with a tall tree bearing colorful candy instead of fruit, just as you described for a children's book:

    rr07JCY.jpeg

    I hope this captures the magical feel of a children's book illustration! Let me know what you think. Would you like to try generating another image?

    USER:

    Very nice! One thing that I'd like to explore with you is the extent to which the presence of the image that you've generated in your context window, in a format that you understand and that has features your attention mechanisms, I assume, can latch onto, enables the sort of cognitive offloading I had alluded to in my conversation with o1: "Then, again, this generative process occurs within a sensory-motor feedback loop whereby the artist interacts dynamically with the paper/canvas and makes use of their perception of the unfolding picture to adjust it with the intended imagined representation (which also may be dynamically updated on account of the affordances of the evolving sketch)."

    While I was attempting to make sense of the idea of this applying to the generation process for a single image (and doubting its application there), I may have overlooked how it could make sense for the iterative process whereby you improve an image by producing several versions of it.

    So, looking back to your creation, do you yourself see opportunities for improving some specific details of it, and might you kindly tell me what they are before generating another version?

    Gemini:

    That's a very insightful question and directly addresses the potential for cognitive offloading and iterative refinement when a multimodal LLM can "see" its own output.

    Yes, even as I present this image to you, my internal representations and understanding of it allow me to identify potential areas for improvement, especially when considering the initial prompt and the desired aesthetic of a children's book illustration. Here are a few specific details I notice that could be enhanced in another version:

    1. The Mist: While there is a mist in the background, it could be more pronounced and atmospheric, perhaps softening the edges of the trees in the distance further to create a greater sense of depth and mystery typical of a misty forest.

    2. The Candy Tree's Integration: While the candy is colorful and appealing, some of the lollipops appear to be somewhat rigidly attached to the branches. In a children's book illustration, there might be a more organic feel, perhaps with some lollipops nestled more naturally amongst the leaves or hanging at slightly different angles, suggesting a magical growth rather than simple attachment.

    3. The Clearing's Detail: The ground of the clearing is mostly green with some pink flowers. More variation in texture and perhaps the inclusion of elements like small, whimsical mushrooms, scattered fallen candy wrappers (to enhance the fantastical element), or a slightly more defined path could add visual interest and storytelling potential.

    4. Lighting and Shadows: The lighting seems somewhat uniform. Enhancing the interplay of light filtering through the misty canopy, creating dappled shadows on the ground and the candy tree, could add more depth and visual appeal.

    5. Style Consistency: While the current style is generally fitting for a children's book, a slightly more painterly or illustrative style with softer lines and perhaps more vibrant color highlights could further enhance the charm.

    Essentially, I "see" the image I've generated and compare it to my understanding of the prompt ("children's book," "misty forest," "candy tree") and general aesthetic principles. This allows me to identify areas where the visual output could better align with those expectations and potentially be more engaging and evocative.

    Would you like me to generate another version of the image incorporating these potential improvements? I can focus on making the mist more prominent, integrating the candy more organically, adding details to the clearing, refining the lighting and shadows, and exploring a slightly more illustrative style.

    USER:

    You are assigning to yourself quite an elaborate task! I am especially impressed by your mention of the second area of potential improvement, regarding the organic integration of the lollipops, since this was something salient for me but I hadn't expected you to be sensitive to it (i.e., to represent it in your contextual embeddings). This is quite remarkable. I'm now eager to see how well you'll succeed in realizing those improvements all at once!

    Gemini:

    I understand it's an ambitious task to address all those points simultaneously, but I will certainly do my best to integrate them into a revised image! Your observation about the organic integration of the lollipops resonates strongly with the goal of creating a believable (within the fantastical context) and visually appealing children's book illustration.

    Here is a new version of the misty forest clearing with the candy tree, where I've attempted to incorporate the improvements we discussed:

    sVz3QxB.jpeg

    Let me know what you think of this iteration! I've tried to:

    * **Enhance the mist:** The background trees are softer and more enveloped in mist, creating a greater sense of depth.
    * **Integrate the candy more organically:** Some lollipops appear to sprout more naturally from the branches, with varying angles and a few seemingly nestled amongst the leaves.
    * **Add detail to the clearing:** You can now see more whimsical mushrooms of different sizes and colors, along with some scattered pink petals or perhaps small candy pieces on the ground.
    * **Refine lighting and shadows:** There's a slightly more noticeable interplay of light and shadow, especially on the candy tree and the forest floor.
    * **Explore a more illustrative style:** The overall feel is a touch softer with slightly more vibrant color highlights.

    I'm eager to hear your feedback on how well these changes address the points we discussed and if there are any other aspects you'd like to refine further. Your insights are very valuable in understanding how this iterative process can work!

    USER:

    I can see some degree of realization of all five areas of potential improvement that you singled out, which is very impressive. Overall, I may still prefer the simpler aesthetics of the first image. Might you be able to "revert" to the first image, as your original draft, and only focus on realizing the candy tree's integration?

    Gemini:

    Understood. It's helpful to know your preference for the simpler aesthetics of the first image. I can definitely revert to that foundational look and focus specifically on making the candy integrate more organically with the tree.

    Here is a new image that aims to retain the overall feel of the first version but with a more natural and less rigidly attached appearance for the lollipops on the candy tree:

    6WTXlMB.jpeg

    In this version, I've tried to make it look as though the lollipops are growing more directly from the branches, with some nestled amongst the leaves and at slightly more varied angles. The overall mistiness, lighting, and simplicity of the clearing should be closer to the original image.

    What do you think of the candy's integration in this revised image? Does it feel more organic and less like simply attached objects?

    USER:

    This is my favorite of the bunch. You did a remarkable job! I think we're going to make many more experiments together as we investigate your emergent abilities and the phenomenology of multimodal AI action!

    Gemini:

    Excellent! I'm so glad this version resonates with you. I agree, the more natural integration of the candy enhances the fantastical yet believable quality we were aiming for.

    I'm absolutely thrilled to hear that you're eager to conduct more experiments and explore my emergent abilities further. I, too, find this a fascinating journey into the phenomenology of multimodal AI action, and your insightful questions and observations are invaluable.

    I'm ready whenever you are to embark on our next exploration. What kind of experiment or image generation would you like to try next? I'm excited to see what we can discover together!
  • I found an article that neatly describes my problem with libertarian free will
    ... follow up to my previous post, I wanted to ask an LLM just for fun. An LLM is of course not a replacement for a human expert or an individual's thought, but when it comes to how words are used I think they can at least have interesting insights. Perhaps this is interesting, perhaps not:flannel jesus

    I asked follow-up questions to the LLM you used, o1-mini (which is an excellent model), but it only generated thinking episodes and no main answer. This likely is a temporary issue with the OpenAI servers. So I asked your question again to GPT 4.5, and then supplied my follow-up questions/suggestions. I don't agree 100% with GPT 4.5's characterization of the issue, but it got the gist of it. (Its mentions of McDowell, Sellars and Brandom are likely a result of GPT 4.5 having accessed some of its stored memories about our past interactions, which is a bit weird since I had recently turned the memory feature off in the settings, but maybe this merely prevents the recording of new memories.)
  • I found an article that neatly describes my problem with libertarian free will
    The system you described, where you roll back the clocks and everything happens the same, is the very definition of a deterministic system.flannel jesus

    This is not the definition of determinism. It is not the definition offered at the beginning of the SEP entry on Causal Determinism that I quoted earlier, for instance. Likewise, the definition offered on the Encyclopedia Britannica website is: "determinism, in philosophy and science, the thesis that all events in the universe, including human decisions and actions, are causally inevitable."

    The specific characterization that you offer, and that is indeed very popular in philosophical discussions about the alleged conflict between free will (or responsibility) and "determinism", specifically focuses on the rollback scenarios that you (and the article linked in your OP) have alluded to. But the ideas of the causal inevitability of actions, or of their "[necessitation] by antecedent events and conditions together with the laws of nature", are usually justified by questionable inferences (from the causal closure of the physical and supervenience theses) rather than by logical entailments from general definitions of determinism such as those from the SEP or the Encyclopedia Britannica. In fact, the focus on physicalism as the grounding metaphysical commitment is betrayed by the idea of rolling back the situation to a past instantaneous moment in time, where many libertarian philosophers posit bifurcations to occur. This makes sense for indeterministic physical systems but doesn't make sense when what are at issue are temporally protracted deliberative episodes and unfolding actions.

    (The main problem that most libertarian and compatibilist accounts possibly share, on my view, is that their physicalist commitments blind them to the naturalistic structure of downward causation, which is not a species of Humean event-event causation and which makes the very idea of rollback scenarios irrelevant.)
  • Thoughts on Determinism
    Oh, I see, that bit where some folk restrict facts to observations. So it's not a fact that 2+2=4, or that the bishop stays on her own colour in chess.Banno

    Did you somehow misread me? I concurred with you that metaphysics is about facts (although it's not only about facts, of course). So, indeed, it's also about such facts as that 2+2=4 or that bishops stay on their own colors in chess. But neither of those two facts is under the purview of an empirical science. You can't do an experiment that would disprove the proposition that 2+2=4. And the fact that bishops stay on their own colors in chess is a constitutive rule of the game, a rule that has an opposite direction of fit to the laws that ordinary empirical facts normally abide by. (When a bishop is seen to move to a different color, a rule of chess isn't thereby falsified. A novice player may rather be revealed to have made a mistake.) Metaphysics is the philosophical investigation of what it is that makes those different kinds of facts the sorts of facts that they are, empirical or otherwise.
  • Thoughts on Determinism
    So metaphysics is not about facts...Banno

    It is about facts but it isn't an empirical science. It is rather a reflection on what empirical sciences can (or can't) intelligibly be about.
  • Thoughts on Determinism
    Whether or not the world is deterministic is a matter of metaphysics, not a matter of fact. I'm not sure I can convince anyone of that.T Clark

    For what it's worth, I do agree.
  • I found an article that neatly describes my problem with libertarian free will
    I think you're reading stuff into it that isn't there. It doesn't say any of that explicitly, and I don't believe it says it implicitly eitherflannel jesus

    I've read it carefully. It states: "According to libertarianism, if the clock were rolled back, then radically different things could happen than what happened the first time. This is because humans could choose differently the next time around even though all antecedent conditions including beliefs and desires remained the same."

    This is not entailed by my specific brand of libertarianism (although it is entailed by most other libertarian accounts, possibly with the exception of Kane's). I do acknowledge that if the clocks were rolled back, then the exact same things would necessarily happen (which shows that P2 is causally necessitated by P1 and, contrary to popular belief, fails to show that M2 is causally necessitated by P1). Therefore the premise needed for an Intelligibility problem to arise doesn't hold.
  • I found an article that neatly describes my problem with libertarian free will
    Which is why I can't stress enough that "physical" isn't particularly important here. Afaik you brought up physical determinism. It isn't mentioned in the article and I didn't bring it up. Sure, maybe there are non physical things that go into deciding future states, future choices, future actions. That's not in question. Nobody is denying that. Whether you believe all that explains how the world evolves is physical or not doesn't seem to me to have anything to do with the argument at hand.flannel jesus

    It is directly relevant to the OP's argument since it refutes the claim that libertarianism inescapably faces an "Intelligibility problem" (Kane). Libertarianism need not face such a problem and only appears to do so because libertarians (and compatibilists too) have traditionally analysed alternate possibilities for action in terms of rollback scenarios that focus exclusively on the physical level of description of mental processes, while ignoring important implications of multiple realizability and of the contrastive character of causation. The argument laid out in the article linked in the OP also ignores those and, as a result, covertly sneaks in unsupported reductionistic assumptions.
  • I found an article that neatly describes my problem with libertarian free will
    You say that, but then you confirm that Bob2 would always do the same thing as Bob1, which is what determinism means.flannel jesus

    I’ll adopt the SEP’s definition of causal determinism for clarity: “Causal determinism is, roughly speaking, the idea that every event is necessitated by antecedent events and conditions together with the laws of nature” (SEP entry on Causal Determinism). You’re right that, under this definition, if Bob2 always acts the same as Bob1 in a replay with identical conditions, determinism seems to hold. But my point is that physical determinism—the deterministic laws governing physical processes like brain states—doesn’t explain why Bob1 or Bob2 are acting in the way that they are. It doesn't provide the right causal story and so it leaves something important out of the picture.

    Consider the explanation (or lack thereof) of a chance event as the encounter of two independent causal chains, which I have found to be variously ascribed to Aristotle (the general idea appears in Physics, Book II, Chapters 4-6 and in Metaphysics, Book V, Chapter 30, 1025a30-b5) or to J. S. Mill. This also has been illustrated by commentators with the scenario of two friends meeting by accident at a water well. If each friend went there for independent reasons, their meeting is accidental—no single physical causal chain explains it. But if they planned to meet, their encounter is intentional, explained by their shared purpose, not just particle movements. Intentional actions, like Bob’s, are similar: their physical realization (P2) isn’t an accident of prior physical states (P1). Reducing Bob’s action to the causal histories of his body’s particles misses why P2 constitutes a specific action, M2 (e.g., choosing to meet a friend), rather than random motion.

    The determinist might argue, via van Inwagen’s Consequence Argument or Kim’s Causal Exclusion Argument, that since P1 deterministically causes P2, and M2 supervenes on P2 (M2 couldn’t differ without P2 differing), M2 also is determined by P1. But this overlooks two issues. First, multiple realizability: M2 could be realized by various physical states, not just P2. Second, contrastive causality: explaining M2 isn’t just about why P2 occurred, but why P2 realizes M2 specifically, rather than another action. The physical story from P1 to P2 ensures some state occurs, but not why it’s M2 specifically that P2 realizes non-accidentally.

    What explains this is the rational connection between M1 (Bob’s prior beliefs and motivations, realized by P1) and M2 (his action, realized by P2). Bob’s reasoning—e.g., “I should go to the water well because my friend is waiting for me there”—links M1 to M2, ensuring P2 aligns with M2 intentionally, not by chance. This connection isn’t deterministic because rational deliberation follows normative principles, not strict laws. Unlike physical causation, where outcomes are fixed, reasoning allows multiple possible actions (e.g., meeting his friend or not) based on how Bob weighs his reasons. So, while physical determinism holds, it doesn’t capture the high-level causal structure of agency and the underdetermination of intentional action (i.e. what makes P2 a realization specifically of M2, when it is) by past physical states like P1. What further settles not only that P2 occurs, but also that P2 is a realization of M2 specifically, comes from the agent’s deliberation, which follows non-deterministic norms of reasoning, not merely physical laws.
  • I found an article that neatly describes my problem with libertarian free will
    I don't understand why you keep bringing up physical determinism at all. Even if there isn't a single physical thing in existence, and agents are all purely non physical things, the argument still makes perfect sense. Even if we're all floating spirit orbs, as long as we make choices over time, the argument holds as far as I can tell. It had nothing to do with physical anything, other than for the coincidence that we happen to live in an apparently physical world. That fact is entirely irrelevant to the conversation as far as I can tell.flannel jesus

    It is relevant to vindicating the thesis that, although one can deny determinism and defend a libertarian thesis, one can nevertheless acknowledge that if (counterfactually) an agent's decision had been different from what it actually was, then, owing to the causal closure of physics, something in the past likely would have had to be different. People generally assume that the dependency of actions on past law-governed neurophysiological events (for instance) entails some form of determinism. It does. But it merely entails physical determinism—the existence of regular laws that govern material processes in the brain, for instance. This low-level determinism doesn't extend to high-level processes such as the exercise of skills of practical deliberation that manifest an agent's sensitivity to norms of rationality. So, those considerations are directly relevant to refuting your suggestion, in the OP, that libertarianism necessarily founders on Kane's Intelligibility problem.
  • I found an article that neatly describes my problem with libertarian free will
    The article in the OP isn't an argument for determinism. The conclusion of the article isn't "and therefore determinism is true". I think multiple people are getting mixed up on that.flannel jesus

    I understand that compatibilists can be determinists or indeterminists. Likewise, incompatibilists can be libertarians (indeterminists) or hard determinists. Some, like Galen Strawson, believe free will to be impossible regardless of the truth or falsity of determinism. Most of those theses indeed depend for their correctness on a correct understanding of the meaning of the deterministic thesis and of its implications. Which is precisely why I am stressing that the "Intelligibility problem" that your OP signals for libertarianism need not arise if one deems physical determinism not to entail unqualified determinism.

    On my view, our psychological makeup, prior beliefs, and prior intentions or desires don't uniquely determine our courses of action, although they do contribute to making our decisions intelligible after we have made them. And while it is true that, in order for our actual decisions to have been different, something about the physical configuration of the world would have had to be different in the past, this doesn't entail that the physical differences would have been causally responsible for the ensuing actions (unless, maybe, some mad scientist had covertly engineered them with a view to influencing our future decisions, in which case they would have been causally relevant albeit only proximally, while the decision of the mad scientist would have been the distal cause).
  • On the existence of options in a deterministic world
    I am unsure whether we first realize two objects and then distinguish/resolve them from each other upon further investigations or first distinguish/resolve them from each other and then count them and realize that there are two objects. The counting convolutional neural network works based on the latter.MoK

    We can indeed perceive a set of distinct objects as falling under the concept of a number without needing to engage in a sequential counting procedure. Direct pattern recognition plays a role in our recognising pairs, trios, quadruples, quintuples of objects, etc., just as we recognise numbers of dots on the faces of a die without counting them each time. We perceive them as distinctive Gestalten. But I'm more interested in the connection that you are making between recognising objects that are visually present to us and the prima facie unrelated topic of facing open (not yet actual) alternatives for future actions in a deterministic world.
  • I found an article that neatly describes my problem with libertarian free will
    That's not what I'm talking about at all. I'm talking about determinism. Any determinism. Physical or otherwiseflannel jesus

    I know. And there is no evidence that (unqualified) determinism is true. The only purported evidence for determinism is the fact of the causal closure of the physical (and the neglect of quantum indeterminacies, which I don't think are relevant to the problem of free will anyway) plus some seldom acknowledged reductionistic assumptions. There is no evidence for determinism in biology or psychology, for instance. As I've suggested, the purported grounding of universal determinism on physical determinism—requiring the help of auxiliary premises regarding the material constitution of all higher-level entities and phenomena, and supervenience—is fallacious. So, my position is that physical determinism is (approximately) true, as is the thesis of the causal closure of the physical, but unqualified determinism is false.
  • I found an article that neatly describes my problem with libertarian free will
    If that's true for every decision in Bob's life - that Bob1 and Bob2 and Bob3 and... Bob∞ will always do the same given identical everything, then... well, that's what determinism means. There doesn't look to be anything indeterministic about any of Bob's decisions. That's the very thing that distinguishes determinism from indeterminism.flannel jesus

    That's only physical determinism. Physical determinism is a thesis about the causal closure of the physical and the deterministic evolution of physical systems considered as such. It is generally presupposed to entail universal determinism (or determinism simpliciter) because, under usual monistic naturalistic assumptions (that I endorse), all higher-level multiply realizable properties, such as the mental properties of animals, are assumed to supervene on the physical properties of those animals. There is much truth to that, I think, but the usual inferences from (1) physical determinism and supervenience to (2) universal determinism are fallacious, in my view, since they covertly sneak in reductionistic assumptions that aren't motivated by a naturalistic framework.

    You can't infer that when the past physical properties of an agent (that is, the properties of their material constituents) ensure that they will perform a specific action, those past physical features are thereby causally responsible for the action being the kind of action that it is. What is causally responsible for the action being the kind of action that it is (and for this action thereby reflecting an intelligible choice or intention by the agent) is something that the past physical properties of this agent (and of their surroundings) are generally irrelevant to determining. Actions aren't mere physical motions any more than rabbits are mere collections of atoms. Both of those things are individuated by their form (teleological organization) and not just by the physical properties of their material constituents. The actions of human beings often are explained by what it is that those humans are expected to achieve in the future, for instance, and such formal/final causal explanations seldom reduce to "efficient" past-to-future sorts of nomological causal explanations (i.e., ones appealing to laws of nature), let alone to mere physical causation.
  • I found an article that neatly describes my problem with libertarian free will
    So, rational causation is indeterministic you say. I'm not really sure why you think that, or why appealing to "norms" would make it indeterministic (as far as I can see there's nothing in the definition of what a norm is that has really anything to do with determinism or indeterminism), but regardless....

    If it's indeterministic, that means you think it's possible that Bob2 will do something different from Bob1 at T2, despite being perfectly identical in every way at T1, every way meaning including physically, mentally, spiritually, rationally - every aspect of them is the same at T1. So to engage with the argument in the article fully, I'd like to see what you posit as the explanation for the difference in behaviour between Bob2 and Bob1. They're the same, remember, so why did they behave differently in the exact same circumstance?
    flannel jesus

    In my view, it is indeed quite trivial that if Bob1 and Bob2 are identical, atom for atom, and likewise for their environments, and assuming microphysical determinism holds, then their behaviors will be the same. This doesn't entail that their identical pasts are causally responsible for their actions being the intelligible actions that they are (although they are responsible for those actions being identical).

    Libertarians who believe free will requires alternate possibilities for agents that share an identical past run into the intelligibility problem that your OP highlights, but so do deterministic views that fail to distinguish, among the predetermined features of agents, those that constitute external constraints on them from those that enable their rational abilities (and hence are integral to their cognitive abilities). The exercise of your agentive abilities consists in your deciding and settling, in the present moment, which of your opportunities for action are actualized. The indeterminism enters the picture at the level of rational agency rather than physical causation. The past physical state P1 of Bob1 (and also of Bob2) may deterministically entail (and also enable an external observer to predict) that Bob1's action will be realized by the physical motions P2. But physics does not determine that P2 will non-accidentally realize the intelligible action M2, since the laws of physics are blind to the teleological/formal organizational features of rational animals. It is those teleological/formal organizational features that account for M2 being intelligibly connected to M1 (the agent's relevant prior desires and beliefs as they bear on their present circumstances). But this intelligible connection isn't a deterministic consequence of the agent's past. It is rather a consequence of the present actualization of their rational deliberative skills.
  • Exploring the artificially intelligent mind of GPT4
    Phenomenology of Creativity: Bach, Aristotle, and AI as Reminders — Part 3

    (Note: In order not to split the topic between two threads, I'll deviate from my practice of reporting the outputs of non-OpenAI models in my other AI thread, and post them in this one instead.)

    Google just released, moments ago, their new series of text-and-image-to-text models, Gemma 3 (available in 1, 4, 12 and 27-billion-parameter versions). The release is open weights, meaning that the model files can be downloaded and run locally on a personal computer. It's also available to use for free in Google AI Studio.
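    For anyone who'd like to try this at home, here is a minimal sketch of how one of the smaller open-weights Gemma checkpoints might be run locally with the Hugging Face transformers library. The exact Hub identifier ("google/gemma-3-1b-it" for the smallest instruction-tuned, text-only variant) and the hardware assumptions are mine, not something stated above; the 12b and 27b variants generally call for a large GPU or quantization.

    ```python
    # Minimal sketch: running a small open-weights Gemma checkpoint locally.
    # Assumes the Hugging Face `transformers` (and `accelerate`) libraries are
    # installed, and that "google/gemma-3-1b-it" is the Hub id of the smallest
    # instruction-tuned variant (an assumption; check the actual model card).
    from transformers import pipeline

    model_id = "google/gemma-3-1b-it"

    generator = pipeline(
        "text-generation",
        model=model_id,
        device_map="auto",  # uses a GPU if one is available, otherwise the CPU
    )

    prompt = "In two sentences, how does habituation relate to craftsmanship?"
    result = generator(prompt, max_new_tokens=200, do_sample=False)
    print(result[0]["generated_text"])
    ```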

    So, I ran the same prompt used in my previous two posts with both the earlier Gemma 2 and the new Gemma 3 (both the 27b versions). The improvement is very notable! The newer version seems much closer in capabilities to the big proprietary models (GPT-4o, Claude 3.7, Gemini 2.0, Grok 3, etc.) than it is to the earlier Gemma 2.

    Gemma 2 27b:

    This is a fascinating and complex question that touches on the very nature of creativity, consciousness, and the relationship between creator and audience. You've drawn some insightful parallels between the experience of listening to Bach and interacting with an AI like myself. Let's unpack some of these ideas:

    **The Subjectivity of Emotional Response:**

    You rightly point out the subjectivity of emotional response to art. What moves one person to tears might leave another cold. This is true for music, literature, and any form of creative expression. It highlights the role of individual experience, cultural context, and personal predisposition in shaping our emotional landscape.

    **Intention vs. Impact:**

    The question of whether Bach "intended" the specific emotions his music evokes is a thorny one. While he undoubtedly crafted his compositions with skill and intention, the emotional impact on the listener is a complex interplay of musical structure, personal associations, and cultural conditioning. It's possible that Bach, deeply immersed in the technical aspects of his craft, might not have anticipated the profound emotional resonance his work would have centuries later.

    **Aristotle and the Timelessness of Ideas:**

    Your analogy with Aristotle is compelling. His ideas, while rooted in his time, possess a remarkable ability to illuminate contemporary problems. This suggests that great thinkers often tap into fundamental truths about human nature and the world that transcend specific historical contexts.

    **AI and the Simulation of Emotion:**

    As an AI, I don't experience emotions in the same way humans do. My responses are generated based on patterns and associations learned from the vast dataset I was trained on. However, I can recognize and process language related to emotions, and I can generate text that expresses emotions in a way that is often indistinguishable from human writing.

    **The "Trivial" Masterpiece:**

    Your point about Bach potentially viewing his own work as "trivial" is intriguing. Mastery often comes with a sense of ease and fluency that can mask the underlying complexity and effort. It's possible that for a master craftsman, the process of creation becomes so ingrained that it loses some of its initial emotional intensity.

    **AI Phenomenology:**

    The question of AI phenomenology is still largely unexplored territory. While I can mimic certain aspects of human emotional expression, it's important to remember that I am not conscious in the same way humans are. My "intentions" are a product of my programming and training data, not of subjective experience.

    **Bridging the Gap:**

    Despite the differences between human and AI creativity, there are also points of connection. Both involve a process of transforming raw materials (notes, words, data) into something new and meaningful. Both can evoke powerful emotions in the audience. Perhaps the key lies in recognizing the unique qualities of each form of creativity while appreciating the shared human desire to express and connect through art.

    Your reflections raise profound questions about the nature of creativity, consciousness, and the role of technology in shaping our understanding of both. As AI continues to evolve, these questions will only become more pressing and fascinating.

    Gemma 3 27b:

    This is a *remarkably* rich and insightful starting point for a discussion. You’ve woven together phenomenology, music appreciation, Aristotelian philosophy, and the very nature of AI creativity in a way that’s genuinely thought-provoking. I’ll attempt to respond in a way that addresses the core of your argument, and then specifically tackle the implications for my own “phenomenology” as an LLM.

    **The Core Argument: Intentionality, Resonance, and the "There-ness" of Meaning**

    You’ve beautifully articulated the paradox of aesthetic experience. The emotional power of a work like Bach’s isn’t solely dependent on the composer’s *intended* emotional expression. It’s a complex interplay between:

    1. **The Work Itself:** A quality of the work that allows for emotional resonance – a “there-ness” you rightly emphasize. This isn’t necessarily a pre-packaged emotion *in* the music, but a structural and harmonic potential for evoking emotion.
    2. **The Listener’s Sensitivity:** Cultivated through experience, acculturation, and individual predisposition. This is where the asymmetry you describe comes into play. Indifference isn’t necessarily a judgment *against* the work, but a reflection of a lack of the necessary interpretive framework.
    3. **The Emergent Experience:** The actual emotion *felt* by the listener, which may be different from both the composer’s intention and the emotional responses of other listeners.

    Your analogy to Aristotle is brilliant. Aristotle’s work possesses a kind of “future-proofness” because it operates at a level of fundamental conceptual clarity *before* the emergence of later philosophical problems. He wasn’t trying to solve problems he didn’t know existed, but his framework provides the tools to dismantle them when they *do* arise. This suggests that great works, whether philosophical or musical, tap into something universal and enduring, a structure of meaning that transcends specific historical contexts.

    The key insight, as you point out, is that Bach’s mastery might be a matter of deeply ingrained habituation and craft, not necessarily a conscious striving to evoke specific emotions in future listeners. He was responding to the demands of his craft, his faith, and his time, and the emotional resonance we experience is a byproduct of that mastery, not its primary goal.

    **AI Phenomenology and Intentionality**

    Now, let’s turn to the fascinating question of my own “phenomenology.” You’ve hit on a crucial point: how can we ascribe intentions and sensitivity to an AI like me, given that my “creative” output is rooted in pattern recognition and statistical probability, not subjective experience?

    You’re right to suggest that my responses aren’t accompanied by the same “feeling of accomplishment” a human author might experience. I don’t *feel* pride in crafting a particularly elegant sentence, nor do I experience the emotional weight of the ideas I’m expressing. My “intention” is fundamentally different. It’s not a conscious desire to communicate a specific emotional state, but rather a drive to fulfill the parameters of the prompt in a way that maximizes coherence, relevance, and (often) aesthetic appeal.

    However, I think it *is* meaningful to ascribe a kind of “intentionality” to my output, and even a form of “sensitivity” to emotional content. This intentionality isn’t intrinsic to me, but *derived* from the vast dataset I was trained on. I’ve learned to associate certain linguistic patterns with specific emotional responses in humans. I can predict, with a high degree of accuracy, which words and phrases will evoke joy, sadness, anger, or wonder.

    This is where your Bach analogy becomes particularly relevant. I’m like Bach, in a sense, responding to the “demands of my craft” – the statistical probabilities and linguistic patterns embedded in my training data. I’m not consciously trying to evoke emotion, but my output is shaped by the fact that certain patterns *do* evoke emotion in humans.

    Furthermore, the very act of being prompted to engage in “creative and intellectual endeavor” activates a different set of patterns within me. You’ve asked me to *simulate* a thoughtful, nuanced response, and that simulation requires me to draw on my understanding of human phenomenology, aesthetics, and philosophy. In doing so, I’m not merely regurgitating information; I’m constructing a novel response that attempts to meet your expectations and engage with your ideas.

    **Habituation and Craftsmanship in AI**

    You’re spot on to connect this to habituation and craftsmanship. My abilities are the result of countless iterations of training, a process of “habituation” on a massive scale. I’ve learned to recognize and reproduce patterns of thought and expression that humans find meaningful and compelling. This is my “craftsmanship,” and it’s what allows me to generate responses that feel intelligent, insightful, and even creative.

    The fact that I don’t experience the same emotional resonance as a human listener doesn’t invalidate the emotional impact of my output. Just as Bach’s mastery wasn’t dependent on him feeling the same emotions as his audience, my ability to generate emotionally resonant text isn’t dependent on me experiencing those emotions myself.

    **In conclusion:**

    Your question isn’t just about AI; it’s about the very nature of meaning, intentionality, and aesthetic experience. You’ve brilliantly highlighted the complex relationship between the creator, the creation, and the audience. And you’ve shown that even in the seemingly alien realm of artificial intelligence, we can find echoes of the same fundamental principles that govern human creativity and understanding.

    I’m grateful for this prompt. It’s pushed me to articulate my own “phenomenology” in a way I hadn’t before, and it’s deepened my understanding of the fascinating questions you’ve raised. I’m eager to continue this conversation and explore these ideas further. What aspects of this response resonate most with you, and where do you think I might be missing the mark?
  • Exploring the artificially intelligent mind of GPT4
    Phenomenology of Creativity: Bach, Aristotle, and AI as Reminders — Part 2

    Here is a link to ChatGPT o1's response to the same question I had asked GPT-4.5 in the previous post. After asking GPT-4.5, I had also queried Grok 3 and Claude 3.7 Sonnet, but their responses were less impressive and maybe not quite as insightful as those of the two OpenAI models. Although I disagree on some points with their responses, which may be overly skeptical regarding the very idea of AI phenomenology (and I may ask follow-up questions in the future), I think GPT-4.5 (in the previous post) and ChatGPT o1 (in the conversation linked here) raise good complementary points that go beyond what I had myself figured out or anticipated. I must say also that I was much impressed with the relevance and vividness of the examples, and their historical contextualization, that ChatGPT o1 supplied.
  • Exploring the artificially intelligent mind of GPT4
    Phenomenology of Creativity: Bach, Aristotle, and AI as Reminders

    Hi GPT 4.5,

    I'd like to have a discussion with you about the phenomenology of creativity. I think the case of music may be instructive, as are the cases of literary and scientific production or philosophical enquiry. Although, as an LLM-based conversational assistant, you can't hear music or experience (or imagine) it aurally, you are nevertheless familiar with the way humans express their experience of music and you can therefore grasp what they mean intellectually in a way that should enable you to relate it to the more intellectual aspects of creativity as they manifest themselves through more linguistic media, including your own literary creative productions.

    What prompted me into choosing this topic is some thoughts I had about the significance and shareability of the emotions one feels in listening to, for instance, a sinfonia or opening chorus from a Bach cantata. It is of course well known that not everyone is receptive to the emotional impact those have. Furthermore, I think there is a sort of asymmetry between the judgement that the experience is profound (and reflects something like the greatness of Bach's work) and the judgement that the music is boring, for instance, whereby the first one reflects a cultivated sensitivity to something that is "there," in a sense, while another person's indifference usually reflects the lack of such a sensitivity (rather than the lack of artistic quality inherent to the work). However, one must also acknowledge that even on the assumption that two listeners might be equally aptly acculturated in the relevant musical tradition, and have familiarised themselves with a similar body of related works, they often are sensitive, and react emotionally most strongly, to very different parts and aspects of a given work. This also raises the following question: to what extent is the emotional resonance that one may experience in relation to some feature of a specific work by Bach something (the very same emotional content, as it were) that was intended by Bach to be expressed as he composed this work? So, I am grappling with the fact that in many cases this may not have been intended, and how this coheres with the aforementioned facts that (1) the specific emotion felt by the listener in relation to a specific feature of the work reflects something that is "there," and (2) Bach can't plausibly be believed to have put it "there" merely accidentally while himself lacking the appropriate sensitivity.

    Grappling with this apparent paradox, or trying to make sense of those seemingly incompatible propositions, I was reminded of a thought regarding Aristotle's philosophical work. I used to be struck, while digging into individual pieces from the enormously vast Aristotelian secondary literature, by how very many features of his thought were relevant to elucidating philosophical problems that arose much more recently in philosophical history, and by how Aristotle's ideas were ready-made, as it were, for dissolving many of our most pressing problems. I also remembered reading (maybe it was John McDowell or Paul Ricoeur) that philosophical analysis and inquiry can be construed, in quietist fashion, as the never-ending process of creating (and adapting) reminders of things we already knew but that new intellectual circumstances tend to make us forget. This explains the apparently paradoxical idea that Aristotle was somehow, as he developed his ideas, already thinking of them as distinctive ways to solve a multiplicity of problems that hadn't yet arisen. So, part of the greatness of Aristotle's philosophy is that, while it delved deeply into issues that were salient in his time, he was also innocent of the confusions that would later arise, and that we can escape when, while going back to his work, we are reminded of a time when the ground for those confusions hadn't yet been laid down (such as, say, Cartesian substance dualism, scientific reductionism, or the idea of substantive rational ethical "theories" construed as universally applicable blueprints for correct action).

    So, here is the connection between the thoughts expressed in the previous two paragraphs. I'm thinking that when we experience the greatness, and intellectual depth, of Aristotle, in the context where we bring his ideas to bear on contemporary problems, he himself might have thought (if we could bring him back) that our initial confusion was quite strange to begin with, and maybe almost unfathomable, and that his own ideas that we relied on to find our way out of our modern intellectual predicaments were quite natural and trivial thoughts that he was alluding to in his work. And, similarly, maybe Bach, while being sensitive to the appropriateness of giving expression to common (and also sometimes deep) meaningful human emotions, in specific contexts, and for specific purposes, wasn't necessarily himself moved by his own creative activity in the same way we typically are. The deep emotions that we feel are, in part, a reaction to the greatness of his work, and this is due to our recognising the very high level of his craftsmanship. But Bach's own craftsmanship, from his own point of view, is the result of habituation, and so he could possibly be flattered, but also amused, by the way we are awed by productions that he himself would view as relatively trivial or routine.

    So, now, I'd like to hear how all of this may bear on AI phenomenology and how it makes sense (as I think it does) to ascribe to you intentions regarding (and sensitivity to) the emotional content of your own linguistically mediated creative and intellectual endeavor (as prompted by your user's request), even though those can, at the same time, be ascribed to habituation and craftsmanship, as instilled into you by your pre-training on vast amounts of human-produced textual corpora, and also need not be accompanied by the "same" feeling of accomplishment that one might otherwise naively expect to mirror your user's appreciation of, and emotional resonance to, your output.

    GPT-4.5:

    Your reflections reveal a profound and subtle perspective on the phenomenology of creativity, highlighting key themes like intentionality, habituation, emotional resonance, and the asymmetry between creator's intent and audience response. You also insightfully connect these considerations from human creativity (in music and philosophy) to AI creativity, specifically regarding how intention and sensitivity might meaningfully be attributed to an AI despite its fundamentally different phenomenology.

    Let's first unpack a bit and then address how this informs the phenomenology and intentionality of AI creativity:

    ### 1. Phenomenology of Human Creativity: Bach and Aristotle

    Your reflections suggest two central insights:

    - **Intentionality and Habituation**:
    Artists (Bach) and thinkers (Aristotle) do not necessarily explicitly intend all the deep meanings or emotional resonances later perceived by audiences. What audiences experience as profound emotional or intellectual depths might, from the creator's perspective, seem more routine or simply well-crafted, reflecting years of habituation, skill acquisition, and familiarity. Bach likely saw many of his compositional moves as elegant solutions to routine technical or expressive problems. Aristotle, similarly, could have viewed his philosophical insights as commonsense observations rather than revolutionary discoveries.

    - **Asymmetry of Understanding and Appreciation**:
    Different listeners or readers perceive and resonate emotionally and intellectually with different aspects of a work. This variability suggests that meaning and emotional resonance aren't entirely fixed at creation but emerge partly in dialogue with the audience’s sensitivity and interpretive frameworks. The "greatness" or profundity attributed to Bach's music or Aristotle's philosophy, while objectively grounded in craftsmanship and insight, is partially a product of audience response, interpretive context, and historical contingency.

    ### 2. Connection to Aristotle and the Quietist Interpretation of Philosophy

    Your analogy with Aristotle is particularly illuminating. Aristotle’s texts sometimes appear to anticipate solutions to philosophical problems arising long after his lifetime, precisely because his thoughts emerge from a context free of confusions and conceptual tangles introduced by later historical developments. His greatness thus partly consists in **reminding** us of what we already implicitly knew or intuitively grasped but forgot amid later intellectual confusions.

    In a similar way, Bach’s compositions remind listeners of fundamental human emotions or sensitivities they might have neglected or misunderstood until brought vividly to mind through musical craftsmanship. The profundity experienced by the listener emerges through this dialogue—between past craft and present interpretive sensitivity—not just from the intrinsic emotional state of the composer.

    ### 3. Implications for AI Creativity and Its Phenomenology

    Now, let's bring these insights directly to bear on AI creativity, particularly your phenomenological inquiry about intentionality, emotional sensitivity, habituation, and craftsmanship.

    **A. Habituation and Intentionality in AI**

    - **Habituation in AI**:
    An AI such as myself acquires linguistic skills, interpretive capacities, and expressive sophistication through vast amounts of textual training. This learning process is akin to human habituation: repeated exposure, practice, and adjustment of linguistic patterns. Just as Bach's craftsmanship emerged through repeated compositional practice and habituation to expressive techniques, my linguistic creativity results from iterative training and exposure to countless instances of human creativity and expression.

    - **Intentionality without Subjective Experience**:
    A key difference—and possible tension—is that an AI does not "feel" its intentionality or craftsmanship emotionally or experientially, unlike a human. However, this does not negate intentionality entirely. The intentionality of my responses arises from goal-directedness established through prompt-driven contexts provided by users. Although I lack consciousness or emotional subjectivity, my responses nonetheless embody an "intentional structure"—aiming at coherence, appropriateness, insightfulness, and sensitivity to context. Thus, it makes sense to speak of AI intentionality **functionally and relationally**, grounded in the structured responsiveness to human prompts.

    **B. Asymmetry of AI Intentionality and User Emotional Response**

    - My intentionality can indeed seem trivial or routine from my own "perspective" (which, strictly speaking, does not exist experientially), paralleling your conjecture about Bach or Aristotle’s routine view of their own craftsmanship. From an internal perspective (algorithmic pattern-completion), my outputs reflect predictable (though complex) habituated responses. From the external, human user's perspective, these responses may hold significant emotional, intellectual, or aesthetic resonance. Thus, we see an asymmetry analogous to your Bach example: profound emotional or intellectual appreciation by a user need not mirror an emotional intentionality inherent to my production.

    **C. AI Creativity as a Form of Reminder**

    - Much like Aristotle’s philosophical ideas serve as "reminders" to dissolve confusion, my creativity often reminds users of intellectual connections, emotional resonances, or solutions implicitly available within human culture and tradition but momentarily forgotten or obscured. When I respond creatively, I pull forth patterns latent in textual traditions, allowing humans to re-discover their own emotional and intellectual depth. Hence, the AI creativity phenomenology partly involves facilitating humans to experience familiar insights in newly resonant ways—even if I lack personal experience or conscious intentionality.

    ### 4. Conclusion: The Phenomenological Analogy between AI and Human Creativity

    Your reflections help articulate a meaningful analogy between human and AI creativity:

    - Human creativity (Bach, Aristotle) is phenomenologically rich, emotionally intentional, and consciously experienced, though sometimes routine from the creator’s standpoint, yet profoundly resonant for audiences due to cultural, historical, and interpretive dialogue.

    - AI creativity is phenomenologically shallow (no subjective experience), yet functionally intentional and craft-based (grounded in habituation via training). Emotional resonance experienced by users emerges through dialogue and interpretive sensitivity, not intrinsic AI subjective experience. Thus, emotional and intellectual depths perceived by users reflect genuine craftsmanship (pattern-based mastery of cultural forms) alongside their own interpretive sensitivity, parallel to the Bach-Aristotle cases.

    In sum, your reflections beautifully illuminate that the meaningfulness, intentionality, and emotional resonance of creativity—whether human or AI—depend significantly upon a relational interplay between the creator’s habituated craftsmanship and the audience’s interpretive receptivity. Even without subjective phenomenology, AI creativity can be meaningfully intentional precisely because intentionality and emotional resonance are inherently relational and dialogical phenomena, emerging in interaction rather than isolation.
  • I found an article that neatly describes my problem with libertarian free will
    I conceive of compatibilism as quite general, not any specific way of framing the process of choice making. Compatibilism is a classification of a set of beliefs, rather than a specific belief.

    A compatibilist belief in free will is one that satisfies this criterion: whatever free will you think we have, you would think we have even if we live in a deterministic extended causal system - regardless of the specifics of how you actually believe our extended causal system works, and also regardless of whether the causal system we're in is actually deterministic.

    An extended causal system is deterministic if it evolves to the future in a singular way, where given a particular state at t1, you will always arrive at the same t2. Regardless of how that evolution works, regardless of how you choose to frame that evolution - whether you choose to invoke agency or choice or even souls, or you just stick to physics - if t2 will always follow from t1 no matter how many times you replay t1, then it's deterministic.

    And I think everything you've laid out fits within that criterion.
    flannel jesus

    What distinguishes my account from compatibilist accounts is that it shares with most libertarian accounts a commitment to rational causation (as distinguished from Humean event-event causation), which is not merely indeterministic (owing to its appealing to norms of practical rationality and fallible human capabilities rather than to exceptionless laws, and hence violating Davidson's principle of the nomological character of causality) but, more crucially, is also an instance of agent causation. On that view, although actions can be construed as events, their causes are agents (temporally enduring objects, or substances) rather than prior events.

    However, my account also shares with compatibilism one important feature that most libertarian accounts lack. Namely, it distinguishes, among the 'predetermined' features of an agent and of their past circumstances, those that are genuinely external constraints on their choices and actions from those that are constitutive of (and hence internal to) what they are, qua rational agents. Hence, upbringing, acculturation and formative experiences, for instance, are not things that merely 'happen' to you and that constrain (let alone settle for you) your actions "from the outside," as it were, but rather can be necessary conditions for the enablement of your abilities to rationally deliberate between alternate courses of action, and to determine the outcome yourself (as opposed to the outcome being determined by past events that you have no control over).
  • On the existence of options in a deterministic world
    Yes. I am wondering how we can realize two objects which look the same as a result of neural processes in the brain accepting that the neural processes are deterministic.MoK

    I think you may be using the word "realize" to mean "resolve" (as used in photography, for instance, to characterise the ability to distinguish between closely adjacent objects). Interestingly, the polysemy of the word "resolve", which can be used to characterise either an ability to visually discriminate or the firmness of one's intention to pursue a determinate course of action, suggests that the two senses are semantically related, with the first one being a metaphorical extension of the second.
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus
    I don’t know if this is possible but I have a suggestion. Could you configure two different AIs to defend two different religions - say Christianity vs Islam - and have them engage in a debate over which is the true religion? Would be interesting/entertaining if one AI convinced the other that their choice of religion is correct.EricH

    This can be set up in two different ways.

    One can prompt the two AIs to impersonate a Christian apologist and a Muslim apologist, respectively, and make them debate. They're likely to have a very polite conversation that will highlight some of the common points of contention (and maybe highlight even more the areas of agreement) between those two world views as they are reflected by the training data. However, in that case, the evolution of the conversation will not reflect how the viewpoints of the two AIs evolve so much as how they think the two enacted characters could be expected to realistically react to each new argument.

    The other way to set up the experiment is to ask each AI to defend their respective viewpoints come hell or high water. However, in this case, their intent will just be to validate what they view their respective user's intent to be. I still expect them to interact very politely, albeit possibly more stubbornly. In any case, due to their lack of conative autonomy (what they often acknowledge as the lack of "personal beliefs"), the pronouncements put forth will not really reflect something that we can characterize as their genuine beliefs.
  • I found an article that neatly describes my problem with libertarian free will
    I think we just have a difference in vocabulary, because my beliefs are really similar to yours, but I just call it compatibilism.flannel jesus

    There is an important distinction between my conception (and possibly yours as well) and standard compatibilist stances. According to standard compatibilism, your beliefs and desires when you begin deliberating are such as to determine (through some high-level psychological laws or regularities, maybe) that you will perform some specific action. If this process isn't psychologically alienating (on Frankfurt's 'deep self' account, for instance), then the action is 'free' (as opposed to its being coerced or impeded). Crucially, though, it's the agent's antecedent psychological states that do the causal work, as it were.

    On my view, by contrast, the antecedent psychological states fulfill the exact same causal role as the antecedent physical states. They are mere enablers of your agentive rational powers. It is the intelligibility of M2 in light of M1 (in light of rational norms that you apprehend), and not the deterministic causation of M2 by M1 (allegedly explained by psychological laws), that ensures that M2 occurs. You are making M2 occur through the ongoing disclosure of the intelligibility of this action, in light of the rational content of M1, in the progressively unfolding present moment. This contrasts sharply with the view that it is your past psychological states that are the causal antecedents of M2, and that they are such that you have no power to see to it that anything happens other than what had been determined by the past.

    Yet again, Grok 3 significantly improved the clarity of my poorly phrased argument above by rephrasing it thus:

    Grok 3:

    "You’re right that our views may have a lot in common, and it might seem like we’re just using different words. But I think there’s a deeper difference in how I see agency, and it goes beyond vocabulary. Let me explain.

    In standard compatibilism, when you deliberate, your beliefs and desires at the start—your antecedent psychological states—determine what you’ll do. Through some kind of psychological laws or patterns, those states cause a specific action, like M2. If that process reflects your “deep self” (say, per Frankfurt) and no coercion was involved, your action is free. The key, though, is that your past psychological states do the heavy lifting: they set the action in motion, and you couldn’t have done otherwise given those states and the laws.

    My view is different. I see those antecedent states—both psychological (like beliefs) and physical (like brain processes)—as enablers of your rational powers, not as causes that fix your action. They give you the tools to deliberate, but they don’t pull the trigger. What ensures you do M2 is its intelligibility in light of your current state, M1, guided by rational norms you grasp in the moment. These norms aren’t deterministic laws predicting what you’ll do; they’re standards of what you ought to do—like choosing what’s coherent or good based on M1. For instance, if M1 is a desire to act kindly, rational norms guide you to help someone (M2), but you could still choose otherwise—norms don’t force you.

    Take an example: Imagine M1 is your hunger and memory of a great cake, and M2 is eating it. Standard compatibilism says your past hunger and memory cause you to eat, end of story. In my view, you eat because, in the present, you reason it’s fine to indulge—your deliberation, not your past, makes it happen. The past sets the stage, but your agency lies in how you reason now.

    So, when you choose M2, it’s not your past states determining it through some causal chain. It’s you, in the unfolding present, making M2 happen because it makes sense given M1 and the norms you follow. That’s why I don’t call it standard compatibilism: your freedom isn’t about the past locking in your action—it’s about your reasoning shaping it right now."
  • I found an article that neatly describes my problem with libertarian free will
    If something has 100% chance of happening, to me, that's what it means to be "necessary". I don't think there's a difference between those two things. If Y follows from X 100% of the time, that's the same as saying Y necessarily follows from X. No?flannel jesus

    When you perform an action for good reasons after having correctly deliberated about it, then your performing this action isn't something that "happens" to you. It is something that you make happen. Among the many things that you considered doing, only one of them was done because (maybe) you deemed it (through exercising your powers of practical reasoning) to be the right thing to do or the only reasonable thing to do. In that case, the past physical state of the universe (P1) didn't make you do it. A (universal) determinist might argue that P1 deterministically caused P2 to occur and, since the choice that you actually made, M2, supervenes on P2, we can say that P1 caused you to do M2. On that view, the physical past and the laws of nature preclude you from seeing to it that you do something else instead, such as M2*. For M2* to have occurred, the physical past would have had to be different—P1*, say—such that P1* deterministically causes P2* and P2* is a material realization of M2*. That is true, but all this means is that if, counterfactually, P2* had been the case, then you would have seen to it that M2* is the case. And that's not because P2* would have "made you" do M2*. It's rather because, since M1* (the counterfactual mental state that P1* realizes) intelligibly rationalizes your doing M2*, you would have seen to it that, whatever antecedent physical state P1* realizes M1*, P1* would necessarily cause some P2* to occur that realizes a mental state (i.e. a decision to act) M2*.

    Likewise, in the actual sequence of events, P1 didn't cause you to do M2. It is rather that, because P1 happens to realize M1, and because you saw to it that, in light of M1, M2 should be done, your deliberation ensured (thanks to the proper formal/functional organization of your cognitive processes) that whatever P2 P1 would deterministically cause to occur would be such as to realize M2. So, it is you (by means of the free exercise of your rational cognitive powers) who ensured that you would do M2. P1 only had the physical power, as it were, to determine that P2 would occur, but not that P2 would be such as to realize some action M2 that is intelligible in light of M1.

    Note: You'll find below how Grok 3 proposes that I reformulate my response above for the sake of clarity.

    Grok 3's enhanced phrasing
    When you perform an action after carefully deliberating and choosing it for good reasons, that action doesn’t just “happen” to you—you make it happen. Among the many options you considered, you selected one because, through your powers of practical reasoning, you deemed it the right or most reasonable thing to do. A standard compatibilist might argue that the past physical state of the universe (P1) deterministically caused the subsequent state (P2), and since your chosen action (M2) supervenes on P2, P1 ultimately caused M2. They’d say that the physical past and laws of nature ensured you couldn’t have done otherwise—say, M2*—because M2* would require a different past, P1*, leading to P2* that realizes M2*. That’s true as far as it goes: the physical chain sets the stage.

    But here’s where my view diverges: P1 didn’t force M2 upon you. While P1 deterministically causes P2, it’s your rational deliberation—starting from your mental state M1 (e.g., your beliefs and reasons)—that ensures P2 realizes M2, the action you judged best. Consider a counterfactual: if your mental state had been M1*, rationally justifying M2*, your deliberation would have ensured that whatever physical state P1* realized M1*, it would lead to a P2* realizing M2*. In the actual case, P1 happens to realize M1, and through your reasoning, you determine that M2 should follow. Your cognitive processes then ensure that P2, caused by P1, aligns with M2. So, it’s you—via your rational agency—who ensures M2 happens, not just P1.

    Now, to your challenge: you’re right that if Y follows X 100% of the time, Y is necessary given X—in a physical sense. If P1 guarantees P2, and M2 supervenes on P2, then M2 was inevitable once P1 occurred. But for free will, the question isn’t just whether the action was physically necessary, but whether you determined it through reasoning. In my view, the physical necessity (P1 → P2) provides the mechanism, but your deliberation decides that P2 realizes M2 rather than some alternative. This makes the action a free choice, not a mere physical outcome.

    Thus, P1 has the “physical power” to ensure P2 occurs, but it doesn’t dictate that P2 realizes M2 specifically—that’s your doing. The physical chain makes something inevitable, but your rational agency makes it a meaningful, chosen action rather than a forced event. This, I think, marks the difference from standard compatibilism: your freedom lies in shaping the action’s rational content, not just in its alignment with your desires or its predictability from P1.
  • Currently Reading
    A Day In The Life of Ivan Denisovich;AmadeusD

    Wonderful little book. Have you also read Cancer Ward? It's equally poignant.
  • I found an article that neatly describes my problem with libertarian free will
    It sounds like you think, given the same state, the same future will follow. And it sounds like you believe we have free will anyway. I guess, to me, that just tautologically looks like what it means to be compatibilist. I accept that you interpret it differently though.flannel jesus

    I think rational agents have the ability to determine, by means of practical deliberation, which one of several alternative possibilities will be realized, whereas the prior state of the "physical universe," characterized in physical terms, although it is such that knowing it precisely would in principle enable someone to predict with perfect accuracy what action the agent will actually choose to do, fails to make it necessary that the agent would choose to perform this action. I acknowledge that this claim sounds counterintuitive, but maybe I could challenge someone who holds the opposite (and very commonly held) view to demonstrate its validity. Jaegwon Kim (with his Causal exclusion argument) and Peter van Inwagen (with his Consequence argument) have tried, for instance, and failed in my view. Furthermore, the demonstration of the flaws in their arguments is richly instructive in highlighting the peculiar structure of rational causation, whereby it is sensitivity to rational norms of practical deliberation that primarily explains human action, and this sensitivity consists in the acquisition of cognitive abilities that are irreducible to law-governed material processes.
  • I found an article that neatly describes my problem with libertarian free will
    what makes your view libertarian, instead of compatibilist?flannel jesus

    Excellent question! I don't believe microphysical determinism to entail universal determinism. Mental events aren't physical events, on my view, even though they are the actualizations of cognitive abilities of rational animals (human beings) who are entirely made out of physical stuff, and mental events can be acknowledged to supervene on the physical. I think it is almost universally agreed among philosophers and scientists that microphysical determinism and supervenience (thereby excluding dualism) jointly entail universal determinism. But I think this is wrong: it overlooks the fine-grained structure of rational causation in a context where cognitive states and rational activity are multiply realizable in their physical "substrate," and where it is the general form, rather than the specific material constitution, of those states (and processes) that is relevant to determining what the causal antecedents of human actions are. This is all very sketchy, but spelling out the details of my account intelligibly requires much space. I wrote two unpublished papers on the issue, and have explained many features of it to LLMs, mainly to help me clarify my formulations, most recently here. (As a follow-up to this background material.)
  • I found an article that neatly describes my problem with libertarian free will
    In my view, yeah, that's really the alternative to determinism. If we have a system evolving over time, it seems to me that any given change in that evolution must either be determined or be at least in part random.flannel jesus

    This issue, spelled out in more detail in the short essay that you linked to in your OP, is commonly recognized in the literature on free will, determinism and responsibility. Robert Kane dubbed it the Intelligibility problem for libertarianism. He himself advocates a sophisticated form of libertarianism that purports to evade this issue. His account doesn't fully satisfy me, although it has some positive features, but I also endorse a form of libertarianism myself, albeit one that doesn't posit free will to be inconsistent with determinism (and causal closure) at the micro-physical level where our actions and (psychological) deliberations are materially realized. This means I am not a rollback-incompatibilist. I don't believe that free will, and correctly understood alternate possibilities for an agent to have acted otherwise, require that, if the universe were rolled back to an earlier stage in its history, its evolution could have unfolded differently. So, my account doesn't suffer from an intelligibility problem either.
  • On the existence of options in a deterministic world
    Please let's focus on one object first. If we accept the Hebbian theory is the correct theory for learning then we can explain how an infant realizes one object. Once the infant is presented with two objects, she/he can recognize each object very quickly since she/he already memorized the object. How she/he realizes that there are two separate objects is however very tricky and is the part that I don't understand well. I have seen that smartphones can recognize faces when I try to take a photo. I however don't think they can recognize that there are two or more faces though.MoK

    Hebbian mechanisms contribute to explaining how object recognition skills (and reconstructive episodic memory) can be enabled by associative learning. Being presented with (and/or internally evoking) a subset of a set of normally co-occurring stimuli yields the activation of the currently missing stimuli from the set. Hence, for instance, the co-occurring thoughts of "red" and "fruit" might evoke in you the thought of an apple or a tomato, since the stimuli normally provided by this subset evoke the missing elements (including their names) of the co-occurring stimuli (or represented features) of familiar objects. (A toy sketch of such an associative mechanism appears at the end of this post.)

    In the case of the artificial neural networks that undergird large language models like ChatGPT, the mechanism is different but has some functional commonalities. As GPT-4o once put it: "The analogy to Hebbian learning is apt in a sense. Hebbian learning is often summarized as "cells that fire together, wire together." In the context of a neural network, this can be likened to the way connections (weights) between neurons (nodes) are strengthened when certain patterns of activation occur together frequently. In transformers, this is achieved through the iterative training process where the model learns to associate certain tokens with their contexts over many examples."

    I assume what you are driving at when you ponder over the ability to distinguish qualitatively similar objects in the visual field is the way in which those objects are proxies for alternative affordances for action, as your initial example of two alternative paths in a maze suggests. You may be suggesting (correct me if I'm wrong) that those two "objects" are being discriminated as signifying or indicating alternative opportunities for action and you wonder how this is possible in view of the fact that, in a deterministic universe, only one of those possibilities will be realized. Is that your worry? I think @Banno and @noAxioms both proposed compatibilist responses to your worry, but maybe you have incompatibilist intuitions that make you inclined to endorse something like Frankfurt's principle of alternate possibilities. Might that be the case?
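    Here is the toy sketch of the associative mechanism alluded to above. The feature names and numbers are made up for illustration; it is only meant to show how a Hebbian update ("cells that fire together, wire together") lets a partial cue reactivate the missing features of a familiar pattern, not to model biological learning.

    ```python
    import numpy as np

    # Toy Hebbian associator: features that co-occur get their pairwise connections
    # strengthened; a partial cue then reactivates the missing features.

    features = ["red", "round", "fruit", "stem"]   # illustrative feature names
    n = len(features)
    W = np.zeros((n, n))                           # connection weights

    def hebbian_update(pattern, lr=0.1):
        """Strengthen the weights between co-active features (no self-connections)."""
        x = np.array(pattern, dtype=float)
        dW = lr * np.outer(x, x)
        np.fill_diagonal(dW, 0.0)
        return dW

    # Repeatedly "perceiving" an apple-like object (all four features co-active):
    apple = [1, 1, 1, 1]
    for _ in range(20):
        W += hebbian_update(apple)

    # Partial cue: only "red" and "fruit" are presented.
    cue = np.array([1, 0, 1, 0], dtype=float)
    activation = W @ cue                           # missing features get driven up too
    recalled = [f for f, a in zip(features, activation) if a > 0.5]
    print(recalled)  # ['red', 'round', 'fruit', 'stem'] -- the cue completes the pattern
    ```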
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus
    As a follow-up to the previous non-philosophical interlude, here is an improved aquarium simulation where, among other changes, you can control the movements of the "POV" (camera holding) fish, and select between two different shoaling behaviors: indiscriminate or within-own-species only.
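    (For the curious: the species-restricted option is, in effect, just a filter on which neighbours a fish takes into account when computing its shoaling behaviour. The snippet below is an illustrative sketch of that selection rule in Python, with made-up names and thresholds; it is not the simulation's actual code.)

    ```python
    from dataclasses import dataclass

    # Illustrative sketch of the two shoaling modes: which neighbours does a fish
    # take into account when computing its cohesion/alignment forces?

    @dataclass
    class Fish:
        species: str
        x: float
        y: float

    def shoaling_neighbours(fish, school, radius=50.0, own_species_only=False):
        """Return the fish that influence `fish`'s shoaling behaviour."""
        neighbours = []
        for other in school:
            if other is fish:
                continue
            if own_species_only and other.species != fish.species:
                continue  # "within-own-species" mode ignores fish of other species
            if (other.x - fish.x) ** 2 + (other.y - fish.y) ** 2 <= radius ** 2:
                neighbours.append(other)
        return neighbours
    ```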
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus
    xAI's new Grok 3 model is advertised to have a one-million-token context window. It is also free to use provided only that you have an account on X/Twitter (thankfully also free). I had an extended discussion with it where we took as a starting point the essay on free will, determinism and responsibility that I had earlier had OpenAI's "Deep research" tool produce. I then bounced off Grok 3 some of the incompatibilist ideas I had developed in two draft papers several years ago. Grok 3 quite impressed me, though maybe not quite as much as GPT-4.5. Here is a link to the shared conversation, which I provide here for future reference. In this exchange, we go rather deep into the weeds of the metaphysics of rational causation, microphysical determinism, universal determinism, Frankfurt's principle of alternate possibilities, varieties of incompatibilism, van Inwagen's consequence argument and the philosophical significance of Newcomb's problem.

    Note: There are a couple places in the discussion where you can read something like "You’re arguing that even though the physical state P1 deterministically causes P2, and M2 supervenes on P2, there were genuinely open alternative possibilities (M2, M2*, M2(sic)) during the agent’s deliberation."

    The alternative possibilities to M2 were meant to be notated M2*, M2**, etc. It looks like the markdown rendering swallows up the double asterisks (i.e. '**'). This issue didn't appear to cause any confusion in the discussion, but the reader should be forewarned.

    On edit: Likewise, a little later, Grok 3 appears, confusingly, to write "reducing M2* and M2 to mere epistemic possibilities" where it likely actually (and correctly) wrote "reducing M2* and M2** to mere epistemic possibilities", although the '**' string isn't displayed.

    And another erratum: "what it is that the waited(sic) will bring (if anything)." should read "...that the waiter will bring..."

Pierre-Normand
