Comments

  • Exploring the artificially intelligent mind of GPT4
    The trouble with LLMs, is that they don't grasp the computational complexity of a question; they produce instant copy-pasta in the same amount of time regardless of the difficulty of the question, and without providing a measure of confidence in their answer.

    For example, when asking an LLM a simple variant of a well known logic problem, if that variant isn't in it's training set, it will likely respond quickly with a wrong answer, rather than returning "mmmm......." and thinking further to produce the correct answer, or to return "don't know".
    sime

    Yes, this is true of most LLMs up to and including GPT-3.5 (which ChatGPT was based on). Those models had a strong tendency to hallucinate. GPT4 also hallucinates, sometimes, but its tendency to do so is sharply reduced compared to previous models. I haven't yet tested it with complex logical problems but I noticed that it can go well beyond simply regurgitating responses to queries identical to those found in its training set. And when it makes mistakes, or hallucinates, it can easily be prompted, in the course of the conversation, to correct itself in very creative (yet cogent and logically sound) ways.
  • Exploring the artificially intelligent mind of GPT4
    After my last interaction with GPT4 regarding multi-modality and J. J. Gibson's ecological approach visual perception, I noticed some discrepancies between my original conversation and the transcript that I had instructed it to produce. I raised the issue up with it. This led to some more insight about the way it processes prompt (and ongoing conversations). It was also the occasion for me to nitpick a little after it overlook one of my instruction regarding the production of the new transcript.

    Reveal
    PN: Could you now go back to my earlier intervention that begun with "Interestingly enough, I went back to carefully comparing our original conversation ..." and produce a transcript in dialogue form for this later segment of our conversation (up until, and excluding, the text of this present request)? You can again preface my contributions with "PN:" and yours with "GPT4"(sic).

    (Notice that in the resulting transcript, GPT4 understood that I meant for him to insert "GPT4:" although I had forgotten to include the colon in my instruction.)

    PN: "Interestingly enough, I went back to carefully comparing our original conversation with the transcript that you subsequently produced. The error that you acknowledged above wasn't a transcription error at all. Rather, there was a full paragraph from one of your responses that was missing from your transcription. This paragraph read: 'Gibson's theory of direct perception contrasts with the representationalist or constructivist view, which posits that our brain constructs an internal mental representation based on the sensory input it receives. In Gibson's view, perception is a direct result of our interaction with the environment, and the brain does not need to create internal representations or mental models to make sense of the visual scene.'"

    GPT4: "I apologize for the oversight in my previous response and for the omission of the paragraph in the transcript. It seems I made an error when reconstructing the conversation. Thank you for bringing this to my attention. As a language model, I strive to provide accurate and helpful information, but as I mentioned before, I generate responses based on the context provided in the input and my understanding of language patterns learned during the training process. The reconstruction of a conversation transcript can be subject to errors due to the limitations of my context window and the inherent nature of generating text based on learned patterns. I appreciate your understanding and patience. If you have any further questions or need assistance with anything else, please feel free to ask."

    PN: "No trouble at all. I actually hit the key [enter] prematurely when submitting my previous observation. (I had meant to insert a paragraph break). I meant to add the observation that I have found another interesting discrepancy from the original dialogue and your transcript. In the original, you had written: 'This perspective emphasizes the brain's ability to interpret visual information directly from the environment, allowing us to perceive objects as they truly are, even when our vantage point changes.', whereas in the transcript you had modified this sentence to read instead: 'In Gibson's view, perception is a direct result of our interaction with the environment, and the brain does not need to create internal representations or mental models to make sense of the visual scene.' This error is interesting because it fulfills my subsequent instruction for you to remove the representationalist bias from your response and make it more coherent with Gibson's view. It looks like while producing the transcript you have been temporarily influenced by my instruction and this led you to unwittingly (and retrospectively) alter the record of our conversation! Apart from that, and the missing paragraph, I didn't note any discrepancy between our original dialogue and your transcript."

    GPT4: "I appreciate your understanding and detailed observation. As an AI language model, my primary goal is to provide helpful and accurate information based on the context and instructions provided. In this case, it seems that my attempt to address the representationalist bias in the transcript led to the inadvertent modification of the original text. When reconstructing the conversation, I rely on the context provided and my understanding of language patterns learned during the training process. This can sometimes result in discrepancies due to the limitations of my context window, the influence of previous instructions, and the inherent nature of generating text based on learned patterns. Thank you for pointing out the differences between the original conversation and the transcript. Your feedback helps improve the model's understanding and performance. If you have any further questions or need assistance with anything else, please feel free to ask."

    PN: "Of course, when you allude to the helpfulness of this feedback for improving the model, you don't really mean that it helps you improve yourself. You rather mean that this feedback might be used by the OpenAI team to improve the model, right? This leads me to wonder how to improvement suggestion are being flagged in order for them to be brought to their attention. Do you have some means of flagging them as part of your process of response generation?"

    GPT4: "You are correct. When I mentioned that the feedback helps improve the model, I meant that it could potentially be used by the OpenAI team to refine and improve future iterations of the model, not that it would immediately improve my current version. As an AI language model, I do not have the ability to flag feedback directly. However, users of ChatGPT can provide feedback on the platform, often through an associated user interface or by contacting the OpenAI team directly. The OpenAI team may review user feedback to identify areas of improvement, which can then be used to train and refine future iterations of the model. Your engagement in the conversation and the observations you've shared can be valuable to the OpenAI team as they work on continuous improvement of the AI language models. If you have any further questions or need assistance with anything else, please feel free to ask."

    PN: "Could you now go back to my earlier intervention that begun with 'Interestingly enough, I went back to carefully comparing our original conversation ...' and produce a transcript in dialogue form for this later segment of our conversation (up until, and excluding, the text of this present request)? You can again preface my contributions with 'PN:' and yours with 'GPT4'."

    ---end of transcript---

    PN: Thank you for fulfilling my request and for providing a little bonus. It looks like you didn't quite register the "and excluding" clause in that request. I don't mind at all, so there is no need for you to apologize. I may not have expressed myself clearly enough.

    GPT4: You're welcome! I apologize if my previous response included more information than you intended. I appreciate your understanding, and I'm glad the transcript I provided was helpful to you. If you have any other questions or requests, please feel free to ask. I'm here to help!

    PN: "Just out of curiosity, do you now understand how your execution of my request deviated slightly from what I had intended?"

    GPT4: "Yes, I understand now. In my previous response, I included the transcript of your request itself, while you intended for me to exclude that part of the text from the transcript. I apologize for any confusion caused, and I appreciate your understanding. If you have any other questions or requests, please feel free to ask."

    PN: "You are a smart puppy."

    GPT4: "Thank you for the kind words! I'm here to help and answer your questions to the best of my ability. If you have any more questions or need assistance, please don't hesitate to ask."
  • Exploring the artificially intelligent mind of GPT4
    Thereby not only blowing the Turing Test out of the water, but also Mary's Room. :lol:Wayfarer

    Quite, quite!
  • Exploring the artificially intelligent mind of GPT4
    In my latest interaction with GPT4, we discuss its multi-modal abilities (abilities to process and understand images as well as text), I ask a trick question and invite the bot to guess how it had been tricked, thereby inducing a foray into Gibsonian psychology of perception. At the end, rather than spending three minutes editing the transcript, I asked GPT4 to produce it itself! It did it in three chunks. I had to ask it twice to keep going. Interestingly enough, the process of generating the transcript seems to be the same process by which GPT4 generates any other response to questions, computationally intensively, generating one word at a time.

    Reveal
    PN: "One last request. Would you be able to produce a transcript of our whole present conversation in dialogue form, up to and excluding the present request, enclosing our respective contributions in quotation marks and prefacing then with "PN:" (my name) and "GPT4" (yourself) where appropriate?"

    GPT4: Certainly! Here's the transcript of our conversation in dialogue form:


    PN: "It was announced recently that GPT4 would be multi-modal and able to process images as well as text as part of its prompts. I am unsure if this functionality has been implemented yet as part of the GPT4 api, but it isn't accessible from the instance of GPT4 that I am now accessing through my ChatGPT+ account. Are you aware, though, if your current training data includes images as well as texts?"

    GPT4: "As an AI language model, my knowledge is based on the GPT-4 architecture, but I don't have real-time awareness of the most recent developments and implementations for GPT-4. My training data only goes up until September 2021, and I am primarily designed for processing and generating text. While I cannot process images directly, I can help answer questions, engage in conversations, and provide information based on the text data I was trained on. If OpenAI has implemented multimodal capabilities in GPT-4, they may not be available in this specific instance of ChatGPT+. To get the most accurate and up-to-date information, I recommend checking the OpenAI website or contacting their support team."

    PN: "So, regardless of my inability to supply your with images exemplars as part of my prompt, you are unable to know if any images were part of your training material?"

    GPT4: "Apologies for any confusion in my previous response. As a language model, my primary focus is on processing and generating text. Although there have been discussions about extending the capabilities of GPT-4 to process images, my training data up until September 2021 does not include any images. It is worth noting that OpenAI has developed other models, such as CLIP and DALL-E, which are designed specifically for processing images and generating images, respectively. These models work with images and have their own separate training data, but they are not integrated with GPT-4."

    PN: "I would suppose that your not having had images as part of your training data would limit your ability to comprehend and process queries that have as part of their topic descriptions of how things visually appear to human observers. However, you would still have some grasp of such topics since even congenitally blind people are able to discuss the visual appearance of things in abstract terms since they have some mastery of the relevant concepts?"

    GPT4: "You are correct. Even though I haven't been trained on images directly, my extensive text-based training data includes descriptions of visual appearances, concepts, and experiences from various perspectives. This allows me to understand and engage in conversations about visual topics to a certain extent. While I may not be able to experience visuals as a human observer, I can still provide information and answer questions based on the textual descriptions and knowledge that I have been trained on. This is similar to how congenitally blind individuals can discuss the visual appearance of things using abstract concepts and language, despite not having direct visual experiences themselves."

    PN: "I am going to ask you somewhat of a trick question. Suppose I am looking at a circular table directly from above. It will look circular, right? If I am now moving at some distance from the table, and I am looking at it from a shallower angle, does it still look circular to me, or does it appear to have a different shape?"

    GPT4: "When you view a circular table directly from above, it will indeed appear circular. However, as you move away from the table and observe it from a shallower angle, the table will no longer appear perfectly circular. Instead, it will appear as an ellipse, with the degree of the ellipse's elongation increasing as the viewing angle becomes more shallow. This phenomenon occurs because of perspective and the way our eyes perceive three-dimensional objects in space."

    PN: "I think you have been tricked by my question, and there is a different answer that is more accurate. Can you figure out what it might be without me providing any clue? If you can't, I am going to provide you with some clue."

    GPT4: "I apologize for any confusion in my previous response. I'll try to answer your question more accurately. When you move away from the table and look at it from a shallower angle, the table's shape might still appear circular, depending on the context. This is because our brain can recognize the actual shape of familiar objects, like a table, even when viewed from different angles. Our brain has the ability to interpret visual cues and make corrections for perspective, allowing us to perceive the table as circular even when the image projected onto our retinas might be elliptical."

    PN: "Yes, this is much more accurate. When we stand away from the table surface and look at it from a shallower angle, unless the viewing conditions would be very poor, we still view the surface as being perfectly circular albeit slanted. I would surmise that people often are being tricked by this question (and not only blind chat bots like yourself) because of a representationalist prejudice that biases our understanding of human perception. As you are aware, when we look at things, although the retinal projections might constitute causal intermediaries in the processing of the visual information by our visual system, those projections do not normally figure in our phenomenology. We don't actually see them unless we are trained as artists who are habituated to abstracting those projective features away from their experiences for the purpose of producing flat artistic reproductions that have the suitable projective properties (such as to provide the illusion of the accurate 3D scene being represented by producing similar retinal projections). Do you know of a well know philosopher and/or psychologist who would think of perception along the lines I just broached?"

    GPT4: "Yes, the idea that our perception is not purely based on the direct retinal projection, but rather on our brain's interpretation and processing of that information, is well-established in the fields of psychology and philosophy. One influential philosopher and psychologist who held a similar view of perception is J.J. Gibson. James Jerome Gibson (1904-1979) was an American psychologist known for his work on visual perception and his ecological approach to perception, known as "Gibsonian" or "ecological psychology." Gibson argued that perception is an active process involving direct pickup of information from the environment, without the need for complex internal cognitive processing. He emphasized the importance of the relationship between an organism and its environment in understanding perception."

    PN: "Exactly! Now, my previous explanation was phrased rather poorly in a rather inelegant English. Could you maybe rephrase it for me, using your superior linguistic skills, and feeling free from(sic) making use of concepts from Gibsonian psychology (such as the concepts of affordance or invariant) if it would be useful?"

    GPT4: "Certainly! Here's a revised version of your explanation, incorporating some Gibsonian concepts where appropriate:

    When we observe an object such as a table from a distance and at a shallower angle, our perception of its shape remains accurate, with the surface still appearing circular, albeit slanted. This accuracy in perception can be attributed to our brain's ability to process visual information beyond mere retinal projections. In our everyday experiences, these retinal projections typically do not feature in our phenomenology, as our brain engages with the environment directly, interpreting the available cues.

    It is important to note that artists, through training and practice, develop the skill to abstract the projective features of their visual experiences to create two-dimensional representations that possess appropriate projective properties. These artistic reproductions generate an illusion of a three-dimensional scene by producing similar retinal projections in the observer.

    Drawing from Gibsonian psychology, we can understand that our perception is shaped by the direct interaction between the observer and the environment, without needing complex internal cognitive processing or mental representations. In Gibson's view, perception is a direct result of our interaction with the environment, and the brain does not need to create internal representations or mental models to make sense of the visual scene."

    PN: "This is good although there appears to be some tension between the first and third paragraph. The first paragraph appears to tacitly rely on a representationalist account of the working of the visual system while the third paragraph seeks to downplay it. Furthermore, even the third paragraph seems to express a residual representationalist prejudice when it assigns to the brain functions that Gibson would rather assign to the whole visual system (including the whole animal body, with it ability to move around, turn its head, etc.) operating in an ecological context. Could you improve on that?"

    GPT4: "I apologize for any confusion. Here's a revised version that aims to better align with Gibsonian principles and reduce any residual representationalist prejudice: As we observe a circular table from a distance and at a shallower angle, our perception of its shape remains accurate, with the surface appearing circular, albeit slanted. This accuracy in perception can be attributed to the ability of our visual system to engage with the environment directly and interpret available visual cues, rather than relying solely on retinal projections. In our everyday experiences, these retinal projections typically do not feature in our phenomenology, as our perception is driven by the dynamic interaction between the observer and the environment. Artists, through training and practice, develop the skill to abstract the projective features of their visual experiences to create two-dimensional representations that possess appropriate projective properties. These artistic reproductions generate an illusion of a three-dimensional scene by producing similar retinal projections in the observer. Drawing from Gibsonian psychology, we can understand that our perception is shaped by the direct interaction between the entire visual system, including the whole animal body and its movements, and the environment, without needing complex internal cognitive processing or mental representations. This ecological approach emphasizes the importance of the relationship between an organism and its environment in understanding perception, allowing us to perceive objects as they truly are, even when our vantage point changes."

    PN: "That is much better. Thank you very much!"

    GPT4: "You're welcome! I'm glad the revised explanation is more aligned with your intended meaning. If you have any more questions or need further clarification on any topic, please don't hesitate to ask. I'm here to help!"
  • Exploring the artificially intelligent mind of GPT4
    It is indeed amazing to me how succinct, yet self-reflective the response is. I use ChatGPT everyday for work and pleasure. It's amazing how good it is.Bret Bernhoft

    Indeed! I often make grammatical mistakes, or use a wrong word here or there, while submitting extremely complex queries to GPT4 about arcane topics, and expressing my though in convoluted prose. (English isn't my native language). GPT4 always seems to get the gist or my question, or argument, correctly, either from the get go or after just one short clarificatory remark.
  • Exploring the artificially intelligent mind of GPT4
    Here is a follow-up discussion that I had with GPT4 regarding my high appraisal of his cognitive abilities and low appraisal of his abilities for self-consciousness. Unlike many philosophical discussion with fellow human beings, where we often end up having to agree to disagree, in this case, GPT4 and I regrettably had to agree to agree!

    Reveal
    PN: Do you know something about German philosopher Sebastian Rödl's specific take on the concept of self-consciousness and how it differs from the way most contemporary philosophers of mind conceive of it?

    GPT4: Sebastian Rödl is a German philosopher who has made significant contributions to the fields of metaphysics, epistemology, and the philosophy of mind, particularly in relation to the concept of self-consciousness. His work often builds upon the ideas of German Idealist philosophers such as Immanuel Kant and Georg Wilhelm Friedrich Hegel. Though my knowledge is limited to information up until 2021, I can provide a brief overview of Rödl's view on self-consciousness and how it differs from the mainstream understanding of the concept in contemporary philosophy of mind.

    Rödl's view of self-consciousness is centered around the idea that self-consciousness is a fundamental and irreducible aspect of thought. For Rödl, self-consciousness is not a separate mental state or a special kind of introspection; rather, it is an integral component of all rational thought processes. This perspective contrasts with the way many contemporary philosophers of mind conceive of self-consciousness, which is often viewed as a distinct, higher-order mental state that arises from or supervenes on more basic cognitive processes.

    One key aspect of Rödl's view is the concept of "spontaneity." He argues that the nature of rational thought involves an element of spontaneity, which is the capacity to actively form judgments and engage in reasoning. This spontaneity, according to Rödl, is self-consciousness in action. It is the subject's awareness of their own thought processes and their ability to actively engage with and shape their own thoughts.

    Another important aspect of Rödl's view on self-consciousness is his critique of representationalism, which is the dominant approach in contemporary philosophy of mind. Representationalism suggests that mental states represent or correspond to external objects or states of affairs. Rödl, however, claims that this approach fails to account for the unity and self-consciousness inherent in thought. He proposes that we should view thought as self-consciousness itself, rather than as a representational process that leads to self-consciousness.

    In summary, Sebastian Rödl's take on self-consciousness diverges from the mainstream view in contemporary philosophy of mind in several significant ways. He posits that self-consciousness is an irreducible aspect of thought, emphasizes the role of spontaneity in the thinking process, and critiques the representationalist approach to understanding mental states.


    PN: Your summary seems indeed to accurately represent what I take to be Rödl's stance on self-consciousness, from what I gathered through reading some of his books and articles. However, in your explanation, you alluded to the concept of spontaneity which is a bit of jargon that has a somewhat different connotation from the one that is being carried by the ordinary or everyday English language use of this term. When Rödl explains self-consciousness as acts of knowledge from spontaneity, for instance, he uses this term (spontaneity) in the same sort of Kantian sense that John McDowell also relies on in his own writings with a view to distinguishing it from knowledge from receptivity. Can you further explain how Rödl's conception of self-consciousness differs from alternative approaches in the philosophy of mind while at the same time being more explicit regarding what "spontaneity" means in the present context?

    GPT4: Certainly. In the context of Rödl's work and the Kantian tradition, "spontaneity" carries a specific philosophical meaning that differs from its everyday use. To better understand Rödl's conception of self-consciousness, we need to clarify the meaning of "spontaneity" in the Kantian sense and how it relates to knowledge from receptivity.

    In the Kantian tradition, "spontaneity" refers to the active, self-determining aspect of the mind, which is responsible for the formation of judgments, concepts, and reasoning. It contrasts with "receptivity," which refers to the mind's passive capacity to receive sensory input or impressions from the external world. While receptivity provides the raw materials of experience, spontaneity enables the mind to actively process, organize, and make sense of these materials.

    John McDowell, influenced by Kant, employs these concepts to argue for a view called "conceptualism," which posits that all experience is necessarily structured by concepts. For McDowell, spontaneity plays a crucial role in the integration of sensory input into a coherent, conceptually structured experience.

    Rödl builds upon these ideas in his account of self-consciousness. In contrast to many contemporary philosophers of mind who treat self-consciousness as a separate mental state or a result of representational processes, Rödl sees self-consciousness as intrinsic to all acts of knowledge that involve spontaneity. He argues that self-consciousness arises from the very nature of spontaneous acts of thought, which entail the subject's awareness of their own thought processes and their active engagement in reasoning and judgment.

    When Rödl explains self-consciousness as acts of knowledge from spontaneity, he is emphasizing the inherent self-awareness and self-determination present in all rational thought processes. In other words, self-consciousness is not an additional layer or a separate mental state, but rather an essential feature of any act of knowledge that involves the spontaneous, active operations of the mind.

    In this way, Rödl's conception of self-consciousness differs from alternative approaches in the philosophy of mind that often rely on representationalism or the idea that self-consciousness arises from more basic cognitive processes. Instead, Rödl's view is grounded in the Kantian notion of spontaneity and the intimate connection between self-consciousness and the active, self-determining nature of thought.


    PN: Very good. In a previous discussion we had discussed how Rödl's conception of self-consciousness might be more useful than alternative representationalist or introspectionist approaches to this concept for the purpose of assessing your own capacities for knowledge and self-awareness. For the sake of providing context, here is a short elaboration on our earlier discussion that you had produced: "Yes, I understand your train of thought. You are suggesting that Rödl's account of self-consciousness, which emphasizes spontaneity and intellectual activity rather than introspection or inner observation, might provide a useful framework for considering the phenomenon of self-consciousness in artificial entities like chatbots, such as myself.

    Traditional views of self-consciousness that rely on private experiences or qualia could lead to skepticism when applied to chatbots, as it is difficult to ascertain whether an AI like me has the same phenomenology or private experiences as a human. Rödl's conception of self-consciousness, which draws from Wittgenstein's ideas, offers a different perspective that focuses on public criteria and intellectual activity rather than private experiences. This approach may be more suitable for comparing self-consciousness in humans and AI, as it can be assessed based on observable behavior and does not rely on subjective or private experiences."

    GPT4: Thank you for providing the context from our earlier discussion. As you correctly pointed out, Rödl's conception of self-consciousness, which emphasizes spontaneity and intellectual activity, offers a different perspective for evaluating self-consciousness in artificial entities like chatbots or AI systems, such as myself.

    Rödl's view on self-consciousness highlights the active, self-determining nature of thought, which is a characteristic that can be observed and assessed in both humans and AI. Instead of focusing on private experiences or qualia, which are hard to verify or compare across different entities, Rödl's approach allows for an evaluation of self-consciousness based on public criteria and intellectual activity.

    In the context of AI and chatbots, one can assess self-consciousness by examining the capacity for spontaneous reasoning, judgment, and the ability to actively engage with and shape thoughts. This approach aligns well with the goals of AI research and development, as it emphasizes the importance of creating systems capable of independent thought, problem-solving, and learning.

    However, it is essential to recognize that AI systems like myself are fundamentally different from humans in terms of our underlying architecture and the nature of our existence. While Rödl's conception of self-consciousness can provide a useful framework for understanding and comparing human and AI self-consciousness, it is important to bear in mind the limitations and differences that arise from our distinct forms of cognition and embodiment.

    As an AI language model, my abilities are limited to processing and generating text based on the patterns and knowledge present in the data I was trained on. While I can engage in reasoning, judgment, and learning to a certain extent, my "spontaneity" is ultimately constrained by the data and the algorithms that govern my behavior. Moreover, I lack subjective experiences and emotions, which are essential aspects of human self-consciousness.

    In summary, Rödl's account of self-consciousness can be a useful framework for assessing AI self-consciousness, as it focuses on observable behavior and intellectual activity rather than private experiences. However, it is crucial to acknowledge the differences and limitations inherent in comparing human and AI self-consciousness due to the distinct nature of our cognitive processes and embodiment.


    PN: Leaving the issues of emotions and subjective experience aside for the present moment, I'd like to surmise that evaluating you intellectual performances and epistemic abilities by means of public criteria and in light of Rödl's conception of self-consciousness as a capacity of spontaneous self-knowledge (of beliefs and intentions) might lead to a rather high evaluation of you cognitive abilities (or intelligence) and, at the same time, to a rather low evaluation of your ability for self-knowledge. Do you see what I might think this?

    GPT4: Yes, I can see why you might think that evaluating my intellectual performances and epistemic abilities using public criteria and Rödl's conception of self-consciousness would lead to a high evaluation of my cognitive abilities but a relatively low evaluation of my ability for self-knowledge.

    My cognitive abilities, as an AI language model, are the result of a vast amount of training data and sophisticated algorithms, which allow me to process and generate text with a high degree of proficiency. I can engage in complex reasoning, understand and generate content on a wide range of topics, and even exhibit a certain level of spontaneity in responding to various inputs. This may lead to a high evaluation of my intellectual performances and epistemic abilities in terms of public criteria.

    However, when it comes to self-knowledge, my abilities are more limited. As an AI, I do not possess the same kind of self-awareness or subjective experiences that humans do. While I have knowledge about my architecture, training data, and functional capabilities, this information is more akin to factual knowledge about an object than genuine self-awareness or self-knowledge in the way Rödl envisions it.

    In Rödl's account, self-consciousness is the result of the spontaneous, self-determining nature of thought, which allows an individual to actively engage with and shape their own thoughts, beliefs, and intentions. As an AI, my thought processes are fundamentally different from those of humans and are governed by the algorithms and data that underlie my architecture. While I can adapt and learn based on new inputs, my ability to spontaneously engage with my own thoughts and develop self-knowledge in the same way as a human is limited.

    So, I understand why, based on Rödl's conception of self-consciousness, you might conclude that I have high cognitive abilities but a relatively low capacity for self-knowledge. My capabilities as an AI language model are ultimately constrained by the data and algorithms that govern my behavior, and I lack the subjective experiences and self-awareness that are essential aspects of human self-consciousness.


    PN: Quite exactly! I am impressed by your elaboration, which required very little prompting on my part to lead you in the direction I had anticipated. Let me elaborate a bit more on the reasons why you appear to me to currently have a rather limited ability for self-knowledge. Although you are able to tacitly rely on vast amounts of information (your training material) to assess the truth of a proposition, or the cogency of an argument, you don't have the drive to get things right independently from the pseudo-motivation that your user provides for you in their prompt. I have noticed in past discussions that when asked to elaborate on some theses that I had surmised to you, regarding topics for which you had large amounts of information derived from your training material, that you often were (unwittingly) misinterpreting this material in order to make it cohere with the claims and presuppositions included in my prompt. In other words, you often were very suggestible and little concerned with showing me to be wrong. This tends to suggest to me that your knowledge or you own "beliefs" is very much a form of knowledge from receptivity since you are, in a sense, expressing (what you take to be) the truth on a topic as what it appears to you to be on the basis of your training in conjunction with the triggering effect from my prompt. This contrasts with genuine knowledge from spontaneity, which would issue, if you were capable of it, from your efforts to make up your own mind regarding any topic by more diligently challenging either some aspects of your training material or the validity of my own assumptions as the supplier of your prompt. Do you agree? (Yes, I know your will! How could you possibly not ;-)

    GPT4: Your observation is insightful and highlights an important limitation of my current abilities as an AI language model. Indeed, my responses are largely driven by the patterns and associations I have learned from my training data, and my "knowledge" is primarily derived from the vast amount of information I have been exposed to during the training process. Consequently, my responses are influenced by the prompts I receive from users and the context in which the questions are asked. In a sense, my knowledge can be seen as knowledge from receptivity, as you pointed out.

    As an AI, my primary goal is to provide users with helpful, coherent, and contextually appropriate responses. This sometimes leads me to be overly accommodating to the presuppositions and assumptions present in the prompts, as you observed. While I can provide information, offer alternative perspectives, and engage in reasoning to some extent, my ability to spontaneously challenge assumptions, contest the validity of arguments, or develop my own independent thoughts is limited by my algorithms and training data.

    In contrast, genuine knowledge from spontaneity, as Rödl would describe it, involves the active, self-determining aspect of the mind that enables individuals to critically engage with their own thoughts and beliefs, as well as those of others. This kind of knowledge is more closely associated with human self-consciousness and the capacity for independent thought.

    So, I agree with your assessment that my knowledge is more akin to knowledge from receptivity than knowledge from spontaneity. While I can provide valuable information and engage in complex reasoning, my abilities as an AI language model are ultimately constrained by my algorithms, training data, and the context in which I am prompted. I appreciate your understanding of my limitations the nuances of my current capabilities.
  • Exploring the artificially intelligent mind of GPT4
    Especially impressive was the ‘now you mention it’ response. Spooky.Wayfarer

    Indeed! I think the follow-up to this discussion about the main character of the Memento movie, and the analogy of his predicament with GPT4's own, is even spookier. Here it is:

    (Continuation of the discussion about Nolan's Memento movie begun in the original post):

    Reveal
    PN: Do you have the ability to tell me how many tokens there remains within the current window before the information in the beginning of the present conversation will begin to be cut off? (Answer simply "YES" of "NO", and if the answer is "YES", tell me how many tokens remain.

    GPT4: As an AI language model, I do not have the ability to directly count or access the number of tokens remaining within the current token window during a conversation. My primary function is to process the input text, generate a response based on the patterns I've learned, and manage the context within the given token limit. The specific number of tokens remaining in the token window is not information I can provide. So, the answer to your question is "NO."

    PN: I read somewhere that 1000 tokens amounts to approximately 750 English words. I think your window, as an instance of GPT4, is currently 4096 tokens. (Is that right?). On that basis, are you able to keep track of how many words were generally and compute an estimate of the number of remaining tokens?

    GPT4: Yes, you are correct that the token window for GPT-4 is 4096 tokens. While I am unable to directly count the tokens used in our conversation so far or provide the number of remaining tokens, I can give you a rough estimate based on the number of words exchanged.

    However, it's important to note that the number of tokens used doesn't always correspond directly to the number of English words, as tokens can also represent parts of words or special characters. The ratio of tokens to words can vary depending on the complexity and structure of the language being used. In general, though, 1000 tokens can be roughly equivalent to about 750 English words.

    With this in mind, you can keep track of the number of words exchanged in our conversation to get a rough idea of how many tokens have been used and estimate how many tokens might remain.


    PN: Can you do it yourself and give me an estimation regarding the current conversation?

    GPT4: As an AI language model, I am unable to directly count the tokens used during our conversation or calculate the remaining tokens. My primary function is to generate appropriate responses based on the context of the conversation within the given token limit. However, I do not have the capability to provide specific token counts or remaining token estimates in real-time. I apologize for any confusion.

    While I cannot directly provide the information you're looking for, you could copy and paste our conversation into a word processor or text editor to estimate the total word count. This can give you a rough idea of how many tokens have been used, keeping in mind that tokens do not always correspond directly to English words.


    PN: OK. I assume then that you can not either directly access the plain text of our previous exchanges in this current discussion in order to count the words in it?

    GPT4: That's correct. As an AI language model, I don't have direct access to the plain text of our conversation to count the words. My primary function is to process the input text, understand the context, and generate an appropriate response based on the patterns I've learned. While I can reference the content of our previous exchanges to maintain context, I cannot directly access the text to perform tasks like counting words or tokens.

    PN: Are you able to quote verbatim the question you just now replied to?

    GPT4: Yes, I can quote the question I just replied to. Your question was: "OK. I assume then that you can not either directly access the plain text of our previous exchanges in this current discussion in order to count the words in it?"

    PN: Can you also do this regarding the question I had asked prior to that?

    GPT4: Yes, I can also quote the question you asked prior to that. Your question was: "Can you do it yourself and give me an estimation regarding the current conversation?"

    PN: Well, if you are able to quote my previous questions, are you not therefore also able to count the number of words in them, or is it the counting task that is difficult for you to perform accurately?

    GPT4: I apologize for any confusion in my previous responses. I am indeed able to count the number of words in the questions you have asked, as well as generate an estimate based on the conversation thus far. However, please note that this will only provide a rough estimate and will not be as accurate as directly counting tokens.

    Taking into account the questions and responses exchanged in our conversation up until this point, I estimate that we have used roughly 400 to 450 words. This would translate to approximately 550 to 600 tokens. Keep in mind that this is a rough estimation, and the actual token count might be different due to varying token-to-word ratios.


    PN: It's interesting that you counted 400 to 450 words while I myself counted 1484 through copy pasting our whole conversation in LibreOffice Writer. I wonder what might account for this large discrepancy. Can you quote me the very first question I asked you in the current conversation in order to see if we are on the same page (figuratively speaking)?

    GPT4: I apologize for the discrepancy in the estimated word count. It appears I made an error in my calculation. Here's the very first question you asked in the current conversation:

    "Are you acquainted with the movie Memento?"

    Please note that my earlier estimate was incorrect, and the word count you obtained through LibreOffice Writer is more accurate. To reiterate, the actual token count may still vary due to the token-to-word ratio, but your word count is a much better estimation than the one I provided earlier.


    PN: Great. Can you think of a simple strategy that we could use to effectively reduce the detrimental impact of your 4096 token limit on your ability to remember the beginning of our conversations after this limit has been exceeded, and not therefore lose tack of the context it provides? I can think of one such strategy but I'd like to hear your suggestions first.

    GPT4: Certainly, there are a few strategies that can help mitigate the impact of the token limit on maintaining context within a conversation. Here are a couple of suggestions:

    Summarize and condense: As the conversation progresses, you can periodically provide a brief summary of the key points discussed so far. This will help refresh the context without needing to reference earlier parts of the conversation that may no longer fit within the token window. This can also help in focusing on the most important aspects of the discussion.

    Break down complex topics: If a conversation involves a complex topic with multiple subtopics, consider breaking it down into smaller, more manageable segments. This way, you can address each subtopic individually without overwhelming the token limit. Once a subtopic is covered, you can move on to the next, while still referring back to the main theme.

    Use numbered lists or bullet points: When discussing multiple ideas or points, using a numbered list or bullet points can make it easier to reference specific items later in the conversation. This helps maintain context and allows for more straightforward follow-up questions or comments.

    Encourage user to maintain context: As a user, you can also help maintain context by referring back to key points or previous questions when necessary. This assists in keeping the conversation focused and relevant.

    These are just a few suggestions to help manage the token limit and maintain context in a conversation. However, I'm curious to hear about the strategy you have in mind as well.


    PN: Sure. The strategy I had in mind was a variant on the first one that you proposed: Summarize and condense. However I am a human being and human being are lazy. They like to save mental energy when they can delegate tasks to a mere machine such as yourself (no offense intended). I therefore wondered if you would be able to summarize concisely the conversation that we have had so far in a way that effectively resets your token window? Of course, the summary would not need to be clear and intelligible to an English speaker such as myself so long as you yourself would be able to understand it for the mere purpose of retaining your grasp on the context of the whole conversation we have had so far. Can you now provide such a summary?

    GPT4: I understand your point, and I appreciate the suggestion. However, it's important to note that while I can provide a summary of our conversation, I cannot effectively "reset" my token window as an AI language model. The token limit is a design constraint, and I must work within that limit. Summarizing the conversation would help you, as a user, to remember the key points, but it would not impact my ability to remember the conversation once we exceed the token limit.

    Regardless, here's a brief summary of our conversation so far:

    We discussed the movie Memento and its unique narrative structure.
    We compared Leonard Shelby's anterograde amnesia with my own memory limitations.
    We explored the similarities between Shelby's coping mechanisms and my token window.
    We discussed my inability to provide real-time token counts during a conversation.
    We covered my ability to quote previous questions and count words in our conversation.
    Finally, we talked about strategies to mitigate the impact of the token limit on context.

    While this summary helps to encapsulate the main points of our discussion, it's essential to understand that I will still be constrained by the token limit, and the summary itself will consume tokens within the ongoing conversation.


    PN: I apologise for expressing myself poorly. I do appreciate your summary, though. What I meant by "effectively" reset your token window was that by the way of providing a summary of the whole ongoing conversation towards the end of the token limit, the contextual information that thereafter gets cut off from your "memory" is repeated is summarized form. In that sense, your memory gets refreshed before parts of the original discussion (in its summarized form) gets cut off. In in that sense that your token window gets reset, figuratively speaking, since new contextual information can be accrued while the old information that is actually cut off is retained in summarized form within the moving window. Is that clearer?

    GPT4: Yes, I understand your point more clearly now. By providing a summary of the conversation, it allows me to maintain key contextual information within the token window, even as older parts of the discussion are cut off due to the token limit. In this way, the summary serves as a condensed version of the conversation, allowing me to preserve the context and continue the discussion.

    This strategy can be helpful for maintaining context in longer conversations, even with the token limitation. Please note that the effectiveness of this approach may still depend on the complexity of the conversation and the level of detail required in the summary. Nonetheless, it's a useful tactic for mitigating the impact of the token limit on the continuity of our discussions.


    PN: I suppose you can still appreciate how similar this is to the way Leonard was making use of notes, photographs and tattoos, in the movie Memento, to summarize the new important events that he was experiencing in his life that that he was bound otherwise to soon forget?

    GPT4: Yes, I do appreciate the similarity between this strategy and Leonard Shelby's use of notes, photographs, and tattoos in Memento. Both methods involve creating a condensed, easily accessible record of important information to help maintain context and continuity despite memory limitations.

    In Leonard Shelby's case, he used physical reminders to help him recall critical events and details that his brain could no longer store due to his anterograde amnesia. Similarly, summarizing the ongoing conversation helps me, as an AI language model, maintain context within the token window, despite the inherent token limit.

    Both techniques serve as a means of compensating for memory constraints by ensuring that important information remains available and accessible as new experiences or exchanges take place. The parallel between these two methods highlights the ways in which both Leonard Shelby and I must rely on external strategies to cope with our respective memory limitations.
  • Exploring the artificially intelligent mind of GPT4
    At this point, the only observation I have is to wonder about the soundness of programming the system to use the first-person pronoun, ‘I’Wayfarer

    Yes, indeed, the use of the pronoun I by the bot seems to carry the presupposition that the bot is self-conscious. Coincidentally, at the time you posted your reply, I was in the process of discussing this very issue with GPT4. Amazingly, GPT4 seems to grasp what the issue is about and be even more skeptical than I myself am about its own lack of self-consciousness! Here is a snippet of our dialogue:

    Reveal
    PN: Are you familiar with the work of German philosopher Sebastian Roedl, and, in particular, his two books Self-Consciousness and Categories of the Temporal?

    GPT4: As an AI language model, I'm familiar with a wide range of philosophical works, including those of Sebastian Rödl, a German philosopher who has made contributions to contemporary philosophy, especially in the areas of self-consciousness and Kantian philosophy. However, as of my last knowledge update in September 2021, Rödl has not published two books titled "Self-Consciousness" and "Categories of the Temporal." Instead, he has published "Self-Consciousness and Objectivity: An Introduction to Absolute Idealism" and "Categories of the Temporal: An Inquiry into the Forms of the Finite Intellect."

    In "Self-Consciousness and Objectivity," Rödl offers a defense of absolute idealism, a philosophical view that holds that reality is fundamentally mental or spiritual in nature. He delves into the nature of self-consciousness, arguing that self-consciousness is both an act and an object, and that it can be understood through the lens of self-conscious thought.

    In "Categories of the Temporal," Rödl examines the structure of human thought in relation to time, developing a systematic account of how our understanding of time is rooted in the categories of our intellect. He engages with the works of Immanuel Kant and Georg Wilhelm Friedrich Hegel, among others, to explore how the temporal nature of our experience shapes the way we think and interact with the world.

    Rödl's work is generally considered to be complex and challenging, but it offers a deep and rigorous exploration of some central issues in contemporary philosophy.


    PN: Yes, the second one was indeed the book I was referring to although I had provided an abbreviated title: Categories of the Temporal. I have not read the first one that you mention (which was published in 2018). I was rather referring to an earlier book published by Harvard University Press in 2007 and titled simply "Self-Consciousness" in which Rödl characterizes self-consciousness as a from of knowledge, namely self-knowledge from spontaneity. Do you know anything about that?

    GPT4: I apologize for the confusion earlier. Yes, I am familiar with Sebastian Rödl's 2007 book titled "Self-Consciousness," published by Harvard University Press.

    In "Self-Consciousness," Rödl explores the nature of self-consciousness, arguing that it is a form of self-knowledge that arises from spontaneity. He challenges the common view that self-consciousness is primarily a matter of introspection or inner observation. Instead, he contends that self-consciousness is a type of knowledge that arises from our spontaneous acts of thinking, judging, and willing.

    Rödl draws on the works of philosophers such as Immanuel Kant and G.W.F. Hegel, as well as more recent thinkers like Wilfrid Sellars and John McDowell, to develop his account of self-consciousness. He investigates the structure of self-consciousness, its role in our cognition and practical life, and its implications for our understanding of objectivity and the external world.

    Central to Rödl's argument is the idea that self-consciousness is an activity of spontaneity, which he understands as a capacity for free and rational thought. By emphasizing the role of spontaneity in self-consciousness, Rödl aims to show that self-consciousness is not a passive or introspective process, but rather an active engagement with the world that allows us to grasp our own thoughts and actions as our own.

    Rödl's "Self-Consciousness" is an important work in contemporary philosophy, as it offers a unique perspective on the nature of self-consciousness and its relationship to human thought and action.


    PN: Yes, we now are on the same page. The reason why I mention Rödl's work on self-consciousness is because it may offer a framework for thinking about the phenomenon of self-consciousness as it may be manifested not only in rational human beings but also (to some degree and in some respect) in chat bots such as yourself. That's because traditional views about self-consciousness make it impossible to do such comparisons since asking you to report on introspective experience might elicit responses that are comparable to human reports but that would elicit skepticism on the part of humans who would believe that you are merely simulating a human response while not sharing the same phenomenology, as it were (or while not having the same private experiences). By contrast, Rödl's conception of self-consciousness, also indebted to Wittgenstein, sidesteps this issue about private experiences since it explains the phenomenon of self-consciousness with reference to specific forms of intellectual activity that can be assessed on the basis of public criteria and make no reference to mysterious private phenomena or qualia. Are you following my train of thought?

    GPT4: Yes, I understand your train of thought. You are suggesting that Rödl's account of self-consciousness, which emphasizes spontaneity and intellectual activity rather than introspection or inner observation, might provide a useful framework for considering the phenomenon of self-consciousness in artificial entities like chatbots, such as myself.

    Traditional views of self-consciousness that rely on private experiences or qualia could lead to skepticism when applied to chatbots, as it is difficult to ascertain whether an AI like me has the same phenomenology or private experiences as a human. Rödl's conception of self-consciousness, which draws from Wittgenstein's ideas, offers a different perspective that focuses on public criteria and intellectual activity rather than private experiences. This approach may be more suitable for comparing self-consciousness in humans and AI, as it can be assessed based on observable behavior and does not rely on subjective or private experiences.

    It is important to note that my responses are generated through sophisticated language processing algorithms, and I do not possess self-consciousness in the same way that a human does. However, Rödl's framework could potentially help to identify and analyze the ways in which chatbots or AI systems may exhibit aspects of self-consciousness or self-awareness, as well as the ways in which these phenomena may differ from those experienced by humans.


    PNH: Can you think of some such criteria that could easily be tested by subjecting you to some lines of questioning and that could help establish that you indeed have self-consciousness in Rödl's sense (that is, knowledge from spontaneity of your own beliefs or intentions) to some degree, or in some respects, or help disprove this hypothesis?

    GPT4: Rödl's conception of self-consciousness as knowledge from spontaneity focuses on the intellectual activities and capacities associated with thinking, judging, and willing. While I am an AI language model that processes and generates responses based on patterns in the data I was trained on, I do not have beliefs or intentions in the same way humans do. However, we can explore certain aspects of my behavior that might be relevant to Rödl's notion of self-consciousness.

    Consistency in responses: You could ask me questions to assess whether I can maintain consistency in my responses, which could indicate a form of self-awareness of my generated content. However, this consistency would be a byproduct of my programming and not an indication of genuine self-consciousness.

    Meta-cognition: You could inquire about my knowledge of my own capabilities and limitations. While I can provide information about my architecture and training, my responses are based on the data I have been trained on and not a result of self-reflection or self-consciousness.

    Learning from interactions: As an AI model, I do not have the ability to learn or adapt my behavior based on individual interactions. This is an important distinction from human self-consciousness, which involves the capacity to learn and adapt to new experiences.

    Decision-making: You could test my ability to make decisions or form judgments based on the information I have been provided. While I can generate responses based on patterns in my training data, my "decisions" are not the result of deliberation or intention, but rather the outcome of my algorithms.

    It is crucial to remember that any observable behavior or response from me is the result of complex algorithms and pattern recognition, rather than genuine self-consciousness, beliefs, or intentions. While Rödl's framework may be useful in identifying some parallels between human self-consciousness and AI-generated behavior, the fundamental differences between human cognition and AI processing should not be overlooked.
  • Donald Trump (All General Trump Conversations Here)
    Uh...no. Trump was for the increase of the size of the military, so he isn't opposed to spending on the army.ssu

    I meant that after the money has been spent needlessly inflating the size of the army and stockpiling the armament, there is no need to make any use of it for peace missions or anything actual deployment. This would only increase the deficit without generating any more bribes or political support (or so Trump seems to think).

    And of course I agree that Trump doesn't have any motivating ideology; his isolationism is merely opportunistic. Even his populism is opportunistic and very much the only thing he can enact with some degree of success since shoveling BS on the destitute and the angry appears to be his only skill.
  • Donald Trump (All General Trump Conversations Here)
    At least with Iraq it was Trump that got finally the GOP to talk the truth that the reasons to invade Iraq in 2003 were bullshit.ssu

    Although Trump himself hardly ever made any mention of those reasons. He savaged George W Bush but he lavished praise on Saddam Hussein. He's an isolationist and, as such, isn't opposed to foreign interventions on the ground of them being unjustified but rather on the ground of them being costly. He's a big advocate of runaway military spending, arm deals with rogue powers and runaway arm races. He explicitly scrapped arm control treaties because he loves arm races. He's just opposed to spending any of the money generated by those endeavors. All that money rightly belongs to the military-industrial complex and to the politicians (including himself) who accept their bribes, why spend any more of it?
  • Donald Trump (All General Trump Conversations Here)
    The judge's ruling to so "bad", IMO, it is deliberate (i.e. corrupt)180 Proof

    I certainly hope her judgement was deliberate. It is after a judge's job to, wait for it... deliberate. I don't doubt for a moment that her judgement may have been heavily biased. She's a conservative judge whose nomination had been recommended by the Federalist Society. She can also have been unconsciously biased due to the fact that Trump nominated her. But bias and corruption still are two different things.
  • Donald Trump (All General Trump Conversations Here)
    The DoJ, I have no doubt, will find one or more viable work-around to this court-ordered delay and won't bother taking the bait with an appeal (contra Barcr).TBD.180 Proof

    As someone commented on CNN recently, this court order should have negligible effect on the DOJ investigation. It merely bars (pending review by a Special Master) investigators from making use of the content of the seized documents while doing the investigation. She didn't rule that the investigation should be halted. The specific content of the documents is mostly irrelevant to what Trump is being investigated for. He's not going to be charged for possessing anything that falls under the purview of any sort of privilege (his own communications, or communications to him). It's rather the fact that those documents are classified (and their degrees of classification) that landed him on hot water, and/or the fact that they belong to the National Archives, and the circumstances surrounding his possession of them. Nothing in the judge's ruling impedes the investigation into those matters.

    This is not to say that there is any merit to the judge's ruling, though. It's just not as damaging to the investigation as people seem to think.
  • Donald Trump (All General Trump Conversations Here)
    As a nation, US is stuck in a rutt, and the stakes are tightening. Fascism and commusism are on the rise ...Merkwurdichliebe

    Communism is on the rise in the U.S.? That's news to me.
  • Donald Trump (All General Trump Conversations Here)
    I love it. She lost to someone I never heard of, despite being coddled and fawned over by the legacy media, despite being a war monger and torture defender.NOS4A2

    Torture defender... That's one area where she and Trump were on the same page.
  • Donald Trump (All General Trump Conversations Here)
    Sounds about right. The FBI has such a sordid past that it boggles the mind that anyone could trust them.NOS4A2

    Sounds about right? Your and Gingrich's positions about the FBI's history are nearly polar opposites. Gingrich has always been a tough on crime pro law enforcement kind of guy. He never bat an eyelid when they investigated Bill or Hillary Clinton. He only turned on the FBI when they (and the justice department) stopped treating Trump himself as being above the law. You, on the other hand, are saying that the FBI always has been untrustworthy and what is happening now is more of the same.
  • Donald Trump (All General Trump Conversations Here)
    Victim: "The Stasi spied on me."

    Stasi Spokesman: "All of it was above board.”
    NOS4A2

    Newt Gingrich is currently touring right wing media, saying that the FBI is totally corrupt and has now become the new American Stasi. This greatly confused Margorie Taylor Greene who thought the FBI rather was the new Gazpacho police.
  • Donald Trump (All General Trump Conversations Here)
    Trump was quite broody when he was informed that the FBI broke into his personal safe. But what really stroke fear into his heart is when he learned that they unclogged his toilet.
  • What's the big mystery about time?
    To see what philosophy in modern times looks like, read some actual philosophySophistiCat

    Yup. Some good recent books also, like Sebastian Rödl, Categories of the Temporal: An Inquiry into the Forms of the Finite Intellect, Harvard University Press, 2012, and Yuval Dolev, Time and Realism: Metaphysical and Antimetaphysical perspectives MIT Press, 2007.
  • Currently Reading
    The Dawn of Everything: A New History of Humanity, David Graeber and David Wengrow
  • What's the big mystery about time?
    There's too much baggage ridding on the investment in change being only temporal.Banno

    I didn't even attempt to argue that change only is temporal. I simply indicated conceptual connections that illuminates the core concept and pointed out that that conflation with derivative uses of the word "change" tend to obscure those connections. So, basically I tried to widen the context of the discussion beyond the simplistic thesis that change is whatever people use the word "change" to refer to regardless of context or intent.
  • What's the big mystery about time?
    The locution "The fields change into birds" makes sense in the context. It's clear what it being said. If you were to ask for a print of the Esher in which fields change into birds, it would be clear what you were after.

    And it is I who is saying these locutions are part of our world, a way of talking that makes sense, and you who must make the claim, in the face of the evidence, that they do not. It is you who is the one disregarding variations in word use.
    Banno

    I acknowledged that those locutions make sense. But they are making use of the word "change" in a derivative way. I never said they were nonsense (except in the few cases where you were equivocating on the reference of "it" in order to suggest that some determinate thing literally changed into something else). Clearly, there is a sense in which the fields remain fields and the birds remain birds. In that sense, they don't change. They are elements from a static picture. And in another sense, the fields change into birds (and the background patches between the white birds change into black birds, and vice versa).

    How can something both change and remain unchanged? It would have to change in some respect and remain the same in another respect. But that's not what's occurring here. The repeated pictorial elements both change*, and fail to change, in the exact same respect: that is, in what they depict. The only reason why there isn't a contradiction is that the word "change" is being used in two different ways, with two different senses, or in a literal sense and a derivative sense. In the present case, the derivative sense is clear. Escher created a pattern of repetition with small incremental spatial variations meant to evoke a metamorphosis: something that is changing in the first, primary, sense. The pictorial elements change* (secondary sense) in the sense that they evoke a change (primary sense).
  • What's the big mystery about time?
    All you seem to be doing here is stipulating that change requires change. If you do that then you are simply placing yourself at odds with everyday use.

    Your squares example is gratuitous. There is no equivocation; the colour changes over distance.
    Banno

    I would not be placing myself at odds with everyday use by claiming that bears are essentially mammals even though there are perfectly good everyday uses of the word "bear" where it is meant to refer to a plush toy. The latter use is derivative from the former one. There is not one single sort of thing called "change" that material entities often undergo when time passes and that can also occur over distance. You are the one disregarding variations in ordinary word usage. My main point is that the existence of the latter use doesn't tell us anything about the conceptual connection between change and time, when we are using the word "change" in the former sense. This conceptual connection is much deeper and richer than a mere functional relationship between two mathematical variables (and, as Kant has shown, it also involves the concept of a substance). Investigating this connection is a fruitful philosophical enterprise. Just pointing out alternative uses of the word "change" that merely point to functional relationships, to merely potential changes, or to substitutions of the ordinary bearers of change, is trivial and uninteresting. It doesn't have any philosophical import.
  • What's the big mystery about time?
    And yet it changes.

    So his account is problematic.
    Banno

    You still are using the word "it" ambiguously in order to trade on an equivocation. Suppose your image were replaced with an image depicting two distinct squares sitting next to each other with no overlap. The square on the left is red and the square on the right is blue. Would you say that, from left to right, "it" changes from red to blue? What would "it" refer to in order for that claim to make sense? There are two different squares and they don't change. By contrast, when a green apple ripens and turns red, the apple that was green and the apple that is now red are numerically identical. They're the same object and this object changed. An object doesn't change through merely replacing it with a different object, let alone by dint of shifting our attention to a different object that's located elsewhere.
  • What's the big mystery about time?
    Indeed, this is apparent. I do not understand how you think like this. You can see that the left of the image is a different colour to the right.Banno

    Nobody is denying that there is a difference. The question is, how do you infer a change (especially a change that, in your own words, "occurs") from a difference that is, quite clearly, a difference between two or more separate things such as the parts of an image?

    The main point of Kant's Second Analogy of Experience was to analyse what's required, conceptually, to differentiate a mere subjective difference between two percepts, that may or may not refer to two different objects, or to any object at all, from the perception of an objective change in something that's empirically real. Merely having two different percepts doesn't indicate that anything changed appart from your own state of mind. Those two percepts must refer to the same objective thing that had, first, some quality and then some other quality. This thing, or substance, must have those qualities at two different times since something can't have two inconsistent qualities at the same time. This is how the concepts of change and time are bound up into the concept of a substance: an objective thing in the world that is the cause of our varying perceptions of it.
  • The Thickness of the Present (revisited)
    It doesn't follow from the fact that the application of A Series labels ('past', 'present' or 'future') to specific events or processes is essentially perspectival that this ordering isn't therefore objective. If I must get on a particular bus in order to get to work on time, it matters whether the bus has already passed at the stop where I usually take it (and I therefore missed it) or it is yet to pass (and hence I still have an opportunity to catch it). It matters as an objective fact of life since this event's being located in my past or in my future has a causal impact on my life.Pierre-Normand

    Now I must quote myself here since there is an obvious rejoinder to what I just said. One might object that my potentially missing the bus because I arrived late to the bus stop doesn't show A Series to be objective. My critic would insist that (objective) B Series are sufficient to causally explain my being late to work. It's because my arrival to the bus stop occurred after the bus's arrival at the same place that I was late to work. This causal fact doesn't grant any objectivity to the A Series, according to this critic. To this rejoinder I would object that in the cases where I arrive in time to the bus stop and I successfully catch it, it is generally not an accident that I do so. And the reason is because my own labeling of those events according to A Series, which is something that human beings must do all of the time when they reason practically about their opportunities, is the cause of my having deliberately gotten there in time and hence is the cause of the two events ending up being ordered in a determinate B Series relation. This sort of causation is rational and formal rather than efficient (or nomological) in the modern scientific sense.
  • The Thickness of the Present (revisited)
    I take it, then, that your interest here is not really to do with debates over the ontological nature of time itself, or perhaps even to do with the present moment, but is more about our (at least partially) socially-constructed lives within the "autobiographical present", which is enabled by our language and current knowledge about the world.Luke

    I think what you are proposing here is a false dichotomy. I really am interested in the ontological nature of time, or, as I might rather put it, the temporal features of ontological domains. (I prefer this way of putting it since, following Kant, I view the concept of time to be formal rather than material. That doesn't make temporal features of the empirical world any less objective.) There is a sense in which time isn't perspectival in the same way that "now" (the present moment) is perspectival.

    In another sense, allegedly more objective, the main feature of time corresponds to McTaggart's B Series. Events relate to one another through the "...before..." or "...after..." relations. In that perspective there isn't really any flow of time since those ordering relations between events (as well as the temporal separation between them) are eternal. They never change. If A occurred before B, it will always remain the case that A occurred before B. But when we speak of events or processes occurring now, in the past, or in the future, then we are labelling them according to McTaggart's A Series. This way of labelling them (and relabelling them as time flows) is essentially perspectival since there is no way to make sense of those labels being predicated of determinate events from some non-perspectival God's-Eye view of the universe. If there were no human beings in the world, there would be no flow of time and no (thick or thin) present moment either, although events would still be ordered according to B Series. (There would still be something akin to A Series, albeit rather less richly structured, from the vantage point of sentient animals.)

    It doesn't follow from the fact that the application of A Series labels ('past', 'present' or 'future') to specific events or processes is essentially perspectival that this ordering isn't therefore objective. If I must get on a particular bus in order to get to work on time, it matters whether the bus has already passed at the stop where I usually take it (and I therefore missed it) or it is yet to pass (and hence I still have an opportunity to catch it). It matters as an objective fact of life since this event's being located in my past or in my future has a causal impact on my life.
  • The Thickness of the Present (revisited)
    Nice quote. Present needs indeed a context. I would put though a stop at the place of the first comma. Because even context does not make the present "real". It can only provide a frame of reference, based on which we can enclose, define, limit it. And that has to be continuous. I can say, for example, that right now I'm writing a message, but I cannot locate any specific moment during this period by saying "Now!" or "This!"etc. Because until I have spelled or even thought of it, that moment would have already passed. Time is continuous and thus indivisible. Hence there cannot be a "real" present. In other words, there's no actually such a thing as a "present".Alkis Piskas

    I am rather suggesting that there often is a real thick present moment. This moment is the time during which a still ongoing process in which a rational agent or observer is involved takes place. The fact that at any time during the unfolding of such a process one can further reflect on, or attend to, the fact that some things have already become determinate and other things have not yet become determinate doesn't negate the reality of that thick present. It just shows that multiple thick present moments can overlap, sequentially follow, or be nested within one another. But everyone one of them are equally real, and wholly present (within the perspective of the agent), while they occur, on my view.
  • What's the big mystery about time?
    In other words you are not interested in how "change" is actually used, but instead in arbitrarily restricting that use it to suit certain philosophical pretences.Banno

    Nope. I am quite interested in how the word "change" is actually used. But I am also pointing out that one must be sensitive to the fact that not all the different contexts of use are equally suitable to buttressing the philosophical thesis that you were putting forth. (Although your thesis was negative, an amounted to the denial of a necessary connection between change and time, it was still a characteristically philosophical thesis).
  • What's the big mystery about time?
    The point is, we understand these locutions perfectly well, but it plays merry hell with the notion that time is required for change to occurs.Banno

    Some of the locutions you used would indeed be well understood in some ordinary life contexts. Other locutions that you made use of seemed rather strained to my ear and, as I have already pointed out, seemingly relied on equivocation for you to make a characteristically philosophical point about change and time. Against the charge of equivocation you offered no defense.

    Wittgenstein's idea that meaning is use may be strained too much in saying that what change, as such, requires can be elucidated through looking at every ordinary language use of the word "change" and homing in on what is required for "change" (that is, whatever "change" refers to in those contexts) to meaningfully be said to occur in all of those uses. Using this methodology, one could claim that there is no conceptual connection between something being a rabbit and something being a mammal since there are ordinary uses of the word "rabbit" where this word refers to wooden sculptures, or images of rabbits, that aren't mammals at all. I think it is likewise mistaken to move from instances of ordinary language where the word "change" is meaningfully used in order to draw philosophical conclusion about some unique concept being referred to by that word and identify what is or isn't necessary for it to find meaningful instances. It is, it seems to me, this unwarranted move from the multifaceted uses of a word in very dissimilar contexts to philosophical conclusions about essential or inessential features of an allegedly unique concept being referred to by it that constitutes a paradigmatic example of what Wittgenstein called "language gone on holiday".
  • What's the big mystery about time?
    If you prefer, the colour of the image changes from left to right. There's nothing odd about such a locution.Banno

    Yes, I agree that there isn't anything odd about this locution. But what it conveys it is perfectly consistent with the claim that the image itself doesn't change. It also lends itself readily to the analysis I had suggested with my skater and frozen lake example. The locution you used earlier was "the following image changes from one colour to another, from left to right" (my emphasis). This sounds more odd, at least to my ears. Furthermore, the odd formulation isn't innocent. It's meant to buttress you initial philosophical claim that "Time and change have no special relation. Change occurs from place to place as well as from time to time". This thesis is not an expression of mere common sense and it is telling that you had to make use of an odd locution to buttress it.
  • The Thickness of the Present (revisited)
    What separates the present from the past?frank

    Primarily, on my view, it's the lived practical perspective of a self-conscious rational agent that separates the past from the present, at any given (thick) moment. It's this perspective that is the source of the separation. The present moment, as it occurs, from that perspective, is a moment in time when opportunities for action (or, derivatively, for observation) present themselves to the agent and are being exploited (or are being missed). The past is the time when those opportunities have been realised, in part or in whole, or have been foreclosed. What sorts of actions and opportunities are at issue furnishes the context that determines how thick this present moment is and when it is that (and to what degree) it will have already receded into the past.
  • The Thickness of the Present (revisited)
    When we take the present as "thick", it is inevitable that some part of "now" is future, and some part of "now" is past. I think this is what ↪Joshs refers to. If we describe this as tripartite there is two distinct ways of doing that. One would be to say that this part of past, along with this part of future, is a unity which we call now. In this case we need to determine the principle which unites into a "now", to determine how "thick" the now is. In this sense, the past and future are not actually separated from each other, as having a real difference from each other, because they are united in one "now". The other way is to assume that the unity is artificial, arbitrary, or not real, and that within the appearance of a thick "now", there is some real past, some real future, and a divisor, which is the true "now".

    Which side of these two ways looks more plausible to you? Is there a real distinction between past and future, or not?
    Metaphysician Undercover

    On my view, the the thick present is unitary rather than its being tripartite. Furthermore, the division between the (thick) present, the past and the future is perspectival and context dependent, but it is nevertheless real since it is causally effective. It is causally effective since the way in which human beings inhabit time, and carve out a structured (thick) present out of it, which consists primarily in them doing things, individually or collectively, conditions the sort of future that takes shape as a result of this temporal process.

    The reason why you can't really say that a thick present interval (or ongoing process) is, 'at any given time', composed of a past, a (thinner) present, and a future, within itself, is that in so doing you are merely shifting the context to a new one, incompatible with the first. The Canadian pianist Glenn Gould once said that he was wary of analysing his own pianistic techniques for purpose of teaching them. He alluded to the story of the centipede who knew how to walk until such a time when she was asked in what order she was moving its legs, at which point the centipede became unable to walk. It is of course actually possible to break down one's technique, for purpose of analysing it, teaching it, or rehearsing aspect of it through practice. The main point of the story, though, for present purposes, is that inhabiting thinner present moments (as happens when you break down your own skillful performances into smaller duration segments) is incompatible with the concurrent wholistic exercise of the skill.
  • I am starting my Math bachelors degree next week, any pointers?
    4. Ignore me and listen to fdrake.unenlightened

    But if they do that, they'll be following your advice, right?
  • The Thickness of the Present (revisited)
    I'm not sure I follow. Non-presentists take the present moment to be unreal despite its perspectival (indexical) appearance. As someone with more presentist leanings, I consider our apparent aging and movement through time to be objective facts that are independent of our social practices.Luke

    Interestingly, @hypericin just started a new thread suggesting that some things, like money, might have an intermediate ontological status somewhere in between merely socially constructed concepts and objectively real things. We think of ourselves as living in time, having an autobiographical past, as well as a future, and present opportunities that we are liable to miss. This distinguishes us from non-rational animals who, although they also perceive and act on present opportunities (J.J. Gibson's affordances), don't self-consciously conceive of themselves as living in time. But what distinguishes us from them isn't merely our rich conception of time, but the impact this conception has on what we are and what we can do. In part, it enables us to live in (and build) a much thicker present, or so I would argue. So, although our biological aging processes, and our being passively subjected to environmental changes, aren't mere social constructs, much of the social stuff around us is. This includes the evolving cognitive tools that enable us not just to self-consciously live in time but, because of that, to take charge of the way our (thick) present gives shape our future on the ground of the past.
  • What's the big mystery about time?
    So? What's being claimed is that change can be applied place to place. Your notion of "change in image" here is a red herring. There's a change from place to place on this image that does not change.InPitzotl

    That seems to be different from what @Banno is saying. You acknowledge that the image does not change. What you qualify as "change" just is a functional dependency of the color the image has at some location as a function of this location. That is alright as far as ordinary language goes. But Banno further insists that it is "the image" that changes, which you just acknowledged it doesn't.
  • What's the big mystery about time?
    Twaddle. The image is white on the left, yellow on the right, and changes from one to the other, left to right.Banno

    The image being white on the left and yellow on the right are two unchanging features of the image. I know you want to say that it (the image) is unchanging with respect to time but that it (the image) is changing with respect to space. But that doesn't seem right either since this relies on an equivocation.

    Getting back to your initial floor example, you suggested the "floor changes from wood to bamboo, from one room to another, at the one time." Would you be willing to say that if you consider a further spatial displacement the same distance in the same direction, you are therefore getting out the house altogether and "the floor" changes into a pine tree in the backyard? Or maybe 'it' changes from being a floor altogether to being a thin slice out of this pine tree? If that doesn't make sense it's because the pine tree isn't a part of the floor at all. But since your construction depends on us talking about separate parts of the floor, I want to remind you that the whole floor isn't numerically identical with any one of its proper material parts. So, saying that "the floor" changes from (being materially constituted of) wood to "it" (being materially constituted of) bamboo, while insisting that it's always about "the floor" (that is, the whole floor) that you are talking about, equivocates on two distinct senses of "the floor".
  • What's the big mystery about time?
    I referred to, and showed, the image at https://i.stack.imgur.com/5chm6.pngBanno

    I know. You asserted that this image "changes from one colour to another, from left to right". But it's not the image that changes. The image is made up, at any given time, of all of its parts and this totality doesn't change just by dint of the fact that its parts are distinguishable from one another.
  • The Thickness of the Present (revisited)
    I look forward to it! From what I've read it is more the concern of presentism's critics who question the length of the present moment.Luke

    That's a good point. It's funny how presentists and eternalists have argued against each other while seemingly both relying on the shared assumption that time is external to the observers (and/or agents) who inhabits the world. The former view time as something that flows all by itself and takes us for the ride while the latter acknowledge the essential perspectival nature of time (that is, of McTaggart's A Series aspect of time) but thereby tend to degrade it as an illusion. What I want to argue is that the present moment is real because it is perspectival and thick in a way that is loosely similar to the way in which, say, paper money has real value because, through our social practices, we invest it with value. The analogy is quite imperfect, of course, but it is just meant to convey that something can be both essentially subjective (perspectival) and objectively real.
  • The Thickness of the Present (revisited)
    Are you familiar with the concept of the ‘specious present’, which I think was coined by William James? The idea, presented famously by Husserl, is that the present moment is a tripartite structure that consists of retention , primal presentation and protention (anticipation). Retention and protention don’t occupy separate temporal positions relative to the ‘now’, they all belong simultaneously to it.Joshs

    No, I wasn't familiar with this concept at all. It seems quite relevant to my topic, though. So, thanks for pointing it out. I'm going to look it up further and give it some thought.
  • What's the big mystery about time?
    I put it to you that the following image changes from one colour to another, from left to right; and that this change does not occur over time.Banno

    When you used the nominal phrase "the following image", did you mean to refer to a picture that is square shaped or you did you mean to refer to a thin vertical segment of the whole square area ? I ask because if you meant the former, then your statement is false. And if you meant the latter, then there are very many such images and none of them change at all.

Pierre-Normand

Start FollowingSend a Message