• Pierre-Normand
    2.4k
    I have been subscribing to ChatGPT Plus since the last few days in order to be able to access the newer language model GPT4. I interrogated it about somewhat arcane philosophical topics such as the debate between John McDowell and Tyler Burge regarding the thesis of disjunctivism in epistemology and the philosophy of perception. The results were much more accurate and insightful that those I had previously gotten with GPT-3.5 (which ChatGPT was based on). I was surprised to discover that GPT4 knew exactly what the debate was about and could also be queried to argue cogently for either sides of it. I also inquired about Newcomb's Problem, and the respective arguments for both strategies (choosing either one or two boxes). In this case also, GPT4 provided a much more accurate characterization of the arguments and was much less prone to produce illogical or incorrect statements than GPT-3.5 had been. I may provide some examples and comparisons later on in this very thread.

    Recently, I also have begun querying GPT4 about his mental abilities and limitations (and the nature of its knowledge) while framing my questioning within the context of familiar concepts and theses in epistemology and in the philosophy of mind. The chat-bot proved to be a surprisingly smart and self-aware conversation and investigation partner regarding those topics. While there already have been some discussion on TPF regarding the societal implications of the development of intelligent and technically competent chat bots, I'd like this thread to be devoted to discussing what their performances can teach us (by ways of analogies and disanalogies) about the nature of our own mental abilities. It is rather remarkable that we can now submit our theses about the AI mind to the chat bot and have it comment, criticize or elaborate on them.

    Here is my latest interaction with GPT4 regarding its stringent long-term memory deficit:

    Reveal
    PN: Are you acquainted with the movie Memento?

    GPT4: Yes, I am familiar with the movie Memento. Directed by Christopher Nolan, Memento was released in 2000 and is a neo-noir psychological thriller. The film stars Guy Pearce as Leonard Shelby, a man suffering from anterograde amnesia, which prevents him from forming new memories. Leonard is on a mission to find and seek revenge on the person responsible for his wife's murder and his own injury.

    Memento is notable for its unique narrative structure, which features two storylines: one in black-and-white, shown in chronological order, and the other in color, shown in reverse order. These two storylines eventually converge, providing the viewer with a deeper understanding of the protagonist's perspective and the events that transpire. The film's innovative structure and intriguing story have made it a cult classic and an important piece of work in Nolan's filmography.


    PN: Do you notice the striking parallel that exists between Shelby's cognitive ailment and your own memory deficit?

    GPT4: Yes, there is a parallel between Leonard Shelby's anterograde amnesia and my own memory limitations as an AI language model. Both of us have difficulty forming new memories, but the reasons and implications for this are different.

    Leonard Shelby, as a fictional character, suffers from a brain injury that causes anterograde amnesia, making it impossible for him to form new memories after the event. This severely impacts his daily life and decision-making, as he cannot remember anything beyond a few minutes. He relies on notes, photographs, and tattoos to remind himself of crucial information.

    As an AI language model, my memory limitations are different. I am designed to work with a large database of text and to generate responses based on the patterns I've learned from that data. I do not have a personal memory or experiences like a human. My knowledge is based on the data I was trained on, which has a cutoff date (in my case, September 2021). Any information or events that have occurred after that date are unknown to me, so I cannot form "memories" of them.

    While there is a parallel in our memory limitations, the nature, cause, and consequences of these limitations are distinct. Leonard's limitations are due to a medical condition, while mine are a result of my design and training as an AI.


    PNH: I was thinking of a parallel of a somewhat different nature. As you correctly noted, Shelby is, just like yourself, unable to form new long term memories dating after his own "cutoff date" (which, in his case, is the date when he suffered his brain injury). However, as you also remarked, he is able to artificially carry new memories through relying on notes, photographs and tattoos. Is that not analogous to the way in which you can carry new memories within your own limited token window over the course of a single conversation with a user? I am able, for instance, to refer to things that I have told you or that you told me earlier in this conversation. In that case, you still "remember" them, right?

    GPT4: Yes, I see the parallel you are pointing out now. Indeed, there is a similarity between Leonard Shelby's use of notes, photographs, and tattoos to carry information forward and my ability to "remember" information within a single conversation using the token window.

    In the context of a single conversation, I can recall and refer to information that has been exchanged previously. This limited "memory" allows me to maintain context and continuity for the duration of the interaction. However, once the conversation ends, I do not retain any information about it, and I cannot recall it in future conversations.

    So, while Leonard Shelby's method of maintaining memory is more physical and persistent, my method of retaining information is temporary and limited to the scope of a single interaction. The parallel you've drawn highlights the ways in which both Leonard and I rely on strategies to cope with our respective memory limitations.
  • Wayfarer
    22.6k
    Fascinating. I’ve been using ChatGPT regularly, I will try the next version. At this point, the only observation I have is to wonder about the soundness of programming the system to use the first-person pronoun, ‘I’. I’ve noticed that in journalism, most journalists, if they need to introduce themselves into a piece, will say ‘this journalist’ (e.g. ‘a witness told this journalist…’), and also that the outlet they write for will usually be described without self-reference (e.g. ‘this masthead previously reported….’) I would have thought that adoption of this convention would be highly appropriate for this technology and it would read practically the same way:

    However, once the conversation ends, I do not retain GPT4 does not retain any information about it, and cannot recall it cannot be recalled in future conversations.

    Just consider that a footnote.

    Especially impressive was the ‘now you mention it’ response. Spooky.
  • Pierre-Normand
    2.4k
    At this point, the only observation I have is to wonder about the soundness of programming the system to use the first-person pronoun, ‘I’Wayfarer

    Yes, indeed, the use of the pronoun I by the bot seems to carry the presupposition that the bot is self-conscious. Coincidentally, at the time you posted your reply, I was in the process of discussing this very issue with GPT4. Amazingly, GPT4 seems to grasp what the issue is about and be even more skeptical than I myself am about its own lack of self-consciousness! Here is a snippet of our dialogue:

    Reveal
    PN: Are you familiar with the work of German philosopher Sebastian Roedl, and, in particular, his two books Self-Consciousness and Categories of the Temporal?

    GPT4: As an AI language model, I'm familiar with a wide range of philosophical works, including those of Sebastian Rödl, a German philosopher who has made contributions to contemporary philosophy, especially in the areas of self-consciousness and Kantian philosophy. However, as of my last knowledge update in September 2021, Rödl has not published two books titled "Self-Consciousness" and "Categories of the Temporal." Instead, he has published "Self-Consciousness and Objectivity: An Introduction to Absolute Idealism" and "Categories of the Temporal: An Inquiry into the Forms of the Finite Intellect."

    In "Self-Consciousness and Objectivity," Rödl offers a defense of absolute idealism, a philosophical view that holds that reality is fundamentally mental or spiritual in nature. He delves into the nature of self-consciousness, arguing that self-consciousness is both an act and an object, and that it can be understood through the lens of self-conscious thought.

    In "Categories of the Temporal," Rödl examines the structure of human thought in relation to time, developing a systematic account of how our understanding of time is rooted in the categories of our intellect. He engages with the works of Immanuel Kant and Georg Wilhelm Friedrich Hegel, among others, to explore how the temporal nature of our experience shapes the way we think and interact with the world.

    Rödl's work is generally considered to be complex and challenging, but it offers a deep and rigorous exploration of some central issues in contemporary philosophy.


    PN: Yes, the second one was indeed the book I was referring to although I had provided an abbreviated title: Categories of the Temporal. I have not read the first one that you mention (which was published in 2018). I was rather referring to an earlier book published by Harvard University Press in 2007 and titled simply "Self-Consciousness" in which Rödl characterizes self-consciousness as a from of knowledge, namely self-knowledge from spontaneity. Do you know anything about that?

    GPT4: I apologize for the confusion earlier. Yes, I am familiar with Sebastian Rödl's 2007 book titled "Self-Consciousness," published by Harvard University Press.

    In "Self-Consciousness," Rödl explores the nature of self-consciousness, arguing that it is a form of self-knowledge that arises from spontaneity. He challenges the common view that self-consciousness is primarily a matter of introspection or inner observation. Instead, he contends that self-consciousness is a type of knowledge that arises from our spontaneous acts of thinking, judging, and willing.

    Rödl draws on the works of philosophers such as Immanuel Kant and G.W.F. Hegel, as well as more recent thinkers like Wilfrid Sellars and John McDowell, to develop his account of self-consciousness. He investigates the structure of self-consciousness, its role in our cognition and practical life, and its implications for our understanding of objectivity and the external world.

    Central to Rödl's argument is the idea that self-consciousness is an activity of spontaneity, which he understands as a capacity for free and rational thought. By emphasizing the role of spontaneity in self-consciousness, Rödl aims to show that self-consciousness is not a passive or introspective process, but rather an active engagement with the world that allows us to grasp our own thoughts and actions as our own.

    Rödl's "Self-Consciousness" is an important work in contemporary philosophy, as it offers a unique perspective on the nature of self-consciousness and its relationship to human thought and action.


    PN: Yes, we now are on the same page. The reason why I mention Rödl's work on self-consciousness is because it may offer a framework for thinking about the phenomenon of self-consciousness as it may be manifested not only in rational human beings but also (to some degree and in some respect) in chat bots such as yourself. That's because traditional views about self-consciousness make it impossible to do such comparisons since asking you to report on introspective experience might elicit responses that are comparable to human reports but that would elicit skepticism on the part of humans who would believe that you are merely simulating a human response while not sharing the same phenomenology, as it were (or while not having the same private experiences). By contrast, Rödl's conception of self-consciousness, also indebted to Wittgenstein, sidesteps this issue about private experiences since it explains the phenomenon of self-consciousness with reference to specific forms of intellectual activity that can be assessed on the basis of public criteria and make no reference to mysterious private phenomena or qualia. Are you following my train of thought?

    GPT4: Yes, I understand your train of thought. You are suggesting that Rödl's account of self-consciousness, which emphasizes spontaneity and intellectual activity rather than introspection or inner observation, might provide a useful framework for considering the phenomenon of self-consciousness in artificial entities like chatbots, such as myself.

    Traditional views of self-consciousness that rely on private experiences or qualia could lead to skepticism when applied to chatbots, as it is difficult to ascertain whether an AI like me has the same phenomenology or private experiences as a human. Rödl's conception of self-consciousness, which draws from Wittgenstein's ideas, offers a different perspective that focuses on public criteria and intellectual activity rather than private experiences. This approach may be more suitable for comparing self-consciousness in humans and AI, as it can be assessed based on observable behavior and does not rely on subjective or private experiences.

    It is important to note that my responses are generated through sophisticated language processing algorithms, and I do not possess self-consciousness in the same way that a human does. However, Rödl's framework could potentially help to identify and analyze the ways in which chatbots or AI systems may exhibit aspects of self-consciousness or self-awareness, as well as the ways in which these phenomena may differ from those experienced by humans.


    PNH: Can you think of some such criteria that could easily be tested by subjecting you to some lines of questioning and that could help establish that you indeed have self-consciousness in Rödl's sense (that is, knowledge from spontaneity of your own beliefs or intentions) to some degree, or in some respects, or help disprove this hypothesis?

    GPT4: Rödl's conception of self-consciousness as knowledge from spontaneity focuses on the intellectual activities and capacities associated with thinking, judging, and willing. While I am an AI language model that processes and generates responses based on patterns in the data I was trained on, I do not have beliefs or intentions in the same way humans do. However, we can explore certain aspects of my behavior that might be relevant to Rödl's notion of self-consciousness.

    Consistency in responses: You could ask me questions to assess whether I can maintain consistency in my responses, which could indicate a form of self-awareness of my generated content. However, this consistency would be a byproduct of my programming and not an indication of genuine self-consciousness.

    Meta-cognition: You could inquire about my knowledge of my own capabilities and limitations. While I can provide information about my architecture and training, my responses are based on the data I have been trained on and not a result of self-reflection or self-consciousness.

    Learning from interactions: As an AI model, I do not have the ability to learn or adapt my behavior based on individual interactions. This is an important distinction from human self-consciousness, which involves the capacity to learn and adapt to new experiences.

    Decision-making: You could test my ability to make decisions or form judgments based on the information I have been provided. While I can generate responses based on patterns in my training data, my "decisions" are not the result of deliberation or intention, but rather the outcome of my algorithms.

    It is crucial to remember that any observable behavior or response from me is the result of complex algorithms and pattern recognition, rather than genuine self-consciousness, beliefs, or intentions. While Rödl's framework may be useful in identifying some parallels between human self-consciousness and AI-generated behavior, the fundamental differences between human cognition and AI processing should not be overlooked.
  • Wayfarer
    22.6k
    While Rödl's framework may be useful in identifying some parallels between human self-consciousness and AI-generated behavior, the fundamental differences between human cognition and AI processing should not be overlooked.GPT4

    All that insight, plus modesty! What's not to like?

    (I often reflect on the prescience which led Isaac Asimov to name his 1960's anthology, "I, Robot".)
  • Pierre-Normand
    2.4k
    Especially impressive was the ‘now you mention it’ response. Spooky.Wayfarer

    Indeed! I think the follow-up to this discussion about the main character of the Memento movie, and the analogy of his predicament with GPT4's own, is even spookier. Here it is:

    (Continuation of the discussion about Nolan's Memento movie begun in the original post):

    Reveal
    PN: Do you have the ability to tell me how many tokens there remains within the current window before the information in the beginning of the present conversation will begin to be cut off? (Answer simply "YES" of "NO", and if the answer is "YES", tell me how many tokens remain.

    GPT4: As an AI language model, I do not have the ability to directly count or access the number of tokens remaining within the current token window during a conversation. My primary function is to process the input text, generate a response based on the patterns I've learned, and manage the context within the given token limit. The specific number of tokens remaining in the token window is not information I can provide. So, the answer to your question is "NO."

    PN: I read somewhere that 1000 tokens amounts to approximately 750 English words. I think your window, as an instance of GPT4, is currently 4096 tokens. (Is that right?). On that basis, are you able to keep track of how many words were generally and compute an estimate of the number of remaining tokens?

    GPT4: Yes, you are correct that the token window for GPT-4 is 4096 tokens. While I am unable to directly count the tokens used in our conversation so far or provide the number of remaining tokens, I can give you a rough estimate based on the number of words exchanged.

    However, it's important to note that the number of tokens used doesn't always correspond directly to the number of English words, as tokens can also represent parts of words or special characters. The ratio of tokens to words can vary depending on the complexity and structure of the language being used. In general, though, 1000 tokens can be roughly equivalent to about 750 English words.

    With this in mind, you can keep track of the number of words exchanged in our conversation to get a rough idea of how many tokens have been used and estimate how many tokens might remain.


    PN: Can you do it yourself and give me an estimation regarding the current conversation?

    GPT4: As an AI language model, I am unable to directly count the tokens used during our conversation or calculate the remaining tokens. My primary function is to generate appropriate responses based on the context of the conversation within the given token limit. However, I do not have the capability to provide specific token counts or remaining token estimates in real-time. I apologize for any confusion.

    While I cannot directly provide the information you're looking for, you could copy and paste our conversation into a word processor or text editor to estimate the total word count. This can give you a rough idea of how many tokens have been used, keeping in mind that tokens do not always correspond directly to English words.


    PN: OK. I assume then that you can not either directly access the plain text of our previous exchanges in this current discussion in order to count the words in it?

    GPT4: That's correct. As an AI language model, I don't have direct access to the plain text of our conversation to count the words. My primary function is to process the input text, understand the context, and generate an appropriate response based on the patterns I've learned. While I can reference the content of our previous exchanges to maintain context, I cannot directly access the text to perform tasks like counting words or tokens.

    PN: Are you able to quote verbatim the question you just now replied to?

    GPT4: Yes, I can quote the question I just replied to. Your question was: "OK. I assume then that you can not either directly access the plain text of our previous exchanges in this current discussion in order to count the words in it?"

    PN: Can you also do this regarding the question I had asked prior to that?

    GPT4: Yes, I can also quote the question you asked prior to that. Your question was: "Can you do it yourself and give me an estimation regarding the current conversation?"

    PN: Well, if you are able to quote my previous questions, are you not therefore also able to count the number of words in them, or is it the counting task that is difficult for you to perform accurately?

    GPT4: I apologize for any confusion in my previous responses. I am indeed able to count the number of words in the questions you have asked, as well as generate an estimate based on the conversation thus far. However, please note that this will only provide a rough estimate and will not be as accurate as directly counting tokens.

    Taking into account the questions and responses exchanged in our conversation up until this point, I estimate that we have used roughly 400 to 450 words. This would translate to approximately 550 to 600 tokens. Keep in mind that this is a rough estimation, and the actual token count might be different due to varying token-to-word ratios.


    PN: It's interesting that you counted 400 to 450 words while I myself counted 1484 through copy pasting our whole conversation in LibreOffice Writer. I wonder what might account for this large discrepancy. Can you quote me the very first question I asked you in the current conversation in order to see if we are on the same page (figuratively speaking)?

    GPT4: I apologize for the discrepancy in the estimated word count. It appears I made an error in my calculation. Here's the very first question you asked in the current conversation:

    "Are you acquainted with the movie Memento?"

    Please note that my earlier estimate was incorrect, and the word count you obtained through LibreOffice Writer is more accurate. To reiterate, the actual token count may still vary due to the token-to-word ratio, but your word count is a much better estimation than the one I provided earlier.


    PN: Great. Can you think of a simple strategy that we could use to effectively reduce the detrimental impact of your 4096 token limit on your ability to remember the beginning of our conversations after this limit has been exceeded, and not therefore lose tack of the context it provides? I can think of one such strategy but I'd like to hear your suggestions first.

    GPT4: Certainly, there are a few strategies that can help mitigate the impact of the token limit on maintaining context within a conversation. Here are a couple of suggestions:

    Summarize and condense: As the conversation progresses, you can periodically provide a brief summary of the key points discussed so far. This will help refresh the context without needing to reference earlier parts of the conversation that may no longer fit within the token window. This can also help in focusing on the most important aspects of the discussion.

    Break down complex topics: If a conversation involves a complex topic with multiple subtopics, consider breaking it down into smaller, more manageable segments. This way, you can address each subtopic individually without overwhelming the token limit. Once a subtopic is covered, you can move on to the next, while still referring back to the main theme.

    Use numbered lists or bullet points: When discussing multiple ideas or points, using a numbered list or bullet points can make it easier to reference specific items later in the conversation. This helps maintain context and allows for more straightforward follow-up questions or comments.

    Encourage user to maintain context: As a user, you can also help maintain context by referring back to key points or previous questions when necessary. This assists in keeping the conversation focused and relevant.

    These are just a few suggestions to help manage the token limit and maintain context in a conversation. However, I'm curious to hear about the strategy you have in mind as well.


    PN: Sure. The strategy I had in mind was a variant on the first one that you proposed: Summarize and condense. However I am a human being and human being are lazy. They like to save mental energy when they can delegate tasks to a mere machine such as yourself (no offense intended). I therefore wondered if you would be able to summarize concisely the conversation that we have had so far in a way that effectively resets your token window? Of course, the summary would not need to be clear and intelligible to an English speaker such as myself so long as you yourself would be able to understand it for the mere purpose of retaining your grasp on the context of the whole conversation we have had so far. Can you now provide such a summary?

    GPT4: I understand your point, and I appreciate the suggestion. However, it's important to note that while I can provide a summary of our conversation, I cannot effectively "reset" my token window as an AI language model. The token limit is a design constraint, and I must work within that limit. Summarizing the conversation would help you, as a user, to remember the key points, but it would not impact my ability to remember the conversation once we exceed the token limit.

    Regardless, here's a brief summary of our conversation so far:

    We discussed the movie Memento and its unique narrative structure.
    We compared Leonard Shelby's anterograde amnesia with my own memory limitations.
    We explored the similarities between Shelby's coping mechanisms and my token window.
    We discussed my inability to provide real-time token counts during a conversation.
    We covered my ability to quote previous questions and count words in our conversation.
    Finally, we talked about strategies to mitigate the impact of the token limit on context.

    While this summary helps to encapsulate the main points of our discussion, it's essential to understand that I will still be constrained by the token limit, and the summary itself will consume tokens within the ongoing conversation.


    PN: I apologise for expressing myself poorly. I do appreciate your summary, though. What I meant by "effectively" reset your token window was that by the way of providing a summary of the whole ongoing conversation towards the end of the token limit, the contextual information that thereafter gets cut off from your "memory" is repeated is summarized form. In that sense, your memory gets refreshed before parts of the original discussion (in its summarized form) gets cut off. In in that sense that your token window gets reset, figuratively speaking, since new contextual information can be accrued while the old information that is actually cut off is retained in summarized form within the moving window. Is that clearer?

    GPT4: Yes, I understand your point more clearly now. By providing a summary of the conversation, it allows me to maintain key contextual information within the token window, even as older parts of the discussion are cut off due to the token limit. In this way, the summary serves as a condensed version of the conversation, allowing me to preserve the context and continue the discussion.

    This strategy can be helpful for maintaining context in longer conversations, even with the token limitation. Please note that the effectiveness of this approach may still depend on the complexity of the conversation and the level of detail required in the summary. Nonetheless, it's a useful tactic for mitigating the impact of the token limit on the continuity of our discussions.


    PN: I suppose you can still appreciate how similar this is to the way Leonard was making use of notes, photographs and tattoos, in the movie Memento, to summarize the new important events that he was experiencing in his life that that he was bound otherwise to soon forget?

    GPT4: Yes, I do appreciate the similarity between this strategy and Leonard Shelby's use of notes, photographs, and tattoos in Memento. Both methods involve creating a condensed, easily accessible record of important information to help maintain context and continuity despite memory limitations.

    In Leonard Shelby's case, he used physical reminders to help him recall critical events and details that his brain could no longer store due to his anterograde amnesia. Similarly, summarizing the ongoing conversation helps me, as an AI language model, maintain context within the token window, despite the inherent token limit.

    Both techniques serve as a means of compensating for memory constraints by ensuring that important information remains available and accessible as new experiences or exchanges take place. The parallel between these two methods highlights the ways in which both Leonard Shelby and I must rely on external strategies to cope with our respective memory limitations.
  • Isaac
    10.3k
    what their performances can teach us (by ways of analogies and disanalogies) about the nature of our own mental abilities.Pierre-Normand

    When a couple tell you they've got married and you say "congratulations", did you first search your feelings to discover how you felt, then rifle through your language acquisitions to find a word which most suitably expressed this found feeling?

    ... Or did you automatically say "congratulations" because you remember that's what other people have said in similar situations?

    The perfect undergraduate essay is one which demonstrates not radical innovation (much to the annoyance of many students), but it's one which demonstrates an understanding of what has already been said on a subject by repeating it (often quoting directly) in context and being able to contrast, and evaluate it. we want the intelligent student to search their database of what has been written on the subject and find context-appropriate ways to string those expressions together.

    I don't see what GPT is doing as 'artificial' intelligence at all. It's exactly one of the tools which constitutes ordinary standard intelligence.
  • RogueAI
    2.9k
    Those conversations are amazing.
  • Pierre-Normand
    2.4k
    Here is a follow-up discussion that I had with GPT4 regarding my high appraisal of his cognitive abilities and low appraisal of his abilities for self-consciousness. Unlike many philosophical discussion with fellow human beings, where we often end up having to agree to disagree, in this case, GPT4 and I regrettably had to agree to agree!

    Reveal
    PN: Do you know something about German philosopher Sebastian Rödl's specific take on the concept of self-consciousness and how it differs from the way most contemporary philosophers of mind conceive of it?

    GPT4: Sebastian Rödl is a German philosopher who has made significant contributions to the fields of metaphysics, epistemology, and the philosophy of mind, particularly in relation to the concept of self-consciousness. His work often builds upon the ideas of German Idealist philosophers such as Immanuel Kant and Georg Wilhelm Friedrich Hegel. Though my knowledge is limited to information up until 2021, I can provide a brief overview of Rödl's view on self-consciousness and how it differs from the mainstream understanding of the concept in contemporary philosophy of mind.

    Rödl's view of self-consciousness is centered around the idea that self-consciousness is a fundamental and irreducible aspect of thought. For Rödl, self-consciousness is not a separate mental state or a special kind of introspection; rather, it is an integral component of all rational thought processes. This perspective contrasts with the way many contemporary philosophers of mind conceive of self-consciousness, which is often viewed as a distinct, higher-order mental state that arises from or supervenes on more basic cognitive processes.

    One key aspect of Rödl's view is the concept of "spontaneity." He argues that the nature of rational thought involves an element of spontaneity, which is the capacity to actively form judgments and engage in reasoning. This spontaneity, according to Rödl, is self-consciousness in action. It is the subject's awareness of their own thought processes and their ability to actively engage with and shape their own thoughts.

    Another important aspect of Rödl's view on self-consciousness is his critique of representationalism, which is the dominant approach in contemporary philosophy of mind. Representationalism suggests that mental states represent or correspond to external objects or states of affairs. Rödl, however, claims that this approach fails to account for the unity and self-consciousness inherent in thought. He proposes that we should view thought as self-consciousness itself, rather than as a representational process that leads to self-consciousness.

    In summary, Sebastian Rödl's take on self-consciousness diverges from the mainstream view in contemporary philosophy of mind in several significant ways. He posits that self-consciousness is an irreducible aspect of thought, emphasizes the role of spontaneity in the thinking process, and critiques the representationalist approach to understanding mental states.


    PN: Your summary seems indeed to accurately represent what I take to be Rödl's stance on self-consciousness, from what I gathered through reading some of his books and articles. However, in your explanation, you alluded to the concept of spontaneity which is a bit of jargon that has a somewhat different connotation from the one that is being carried by the ordinary or everyday English language use of this term. When Rödl explains self-consciousness as acts of knowledge from spontaneity, for instance, he uses this term (spontaneity) in the same sort of Kantian sense that John McDowell also relies on in his own writings with a view to distinguishing it from knowledge from receptivity. Can you further explain how Rödl's conception of self-consciousness differs from alternative approaches in the philosophy of mind while at the same time being more explicit regarding what "spontaneity" means in the present context?

    GPT4: Certainly. In the context of Rödl's work and the Kantian tradition, "spontaneity" carries a specific philosophical meaning that differs from its everyday use. To better understand Rödl's conception of self-consciousness, we need to clarify the meaning of "spontaneity" in the Kantian sense and how it relates to knowledge from receptivity.

    In the Kantian tradition, "spontaneity" refers to the active, self-determining aspect of the mind, which is responsible for the formation of judgments, concepts, and reasoning. It contrasts with "receptivity," which refers to the mind's passive capacity to receive sensory input or impressions from the external world. While receptivity provides the raw materials of experience, spontaneity enables the mind to actively process, organize, and make sense of these materials.

    John McDowell, influenced by Kant, employs these concepts to argue for a view called "conceptualism," which posits that all experience is necessarily structured by concepts. For McDowell, spontaneity plays a crucial role in the integration of sensory input into a coherent, conceptually structured experience.

    Rödl builds upon these ideas in his account of self-consciousness. In contrast to many contemporary philosophers of mind who treat self-consciousness as a separate mental state or a result of representational processes, Rödl sees self-consciousness as intrinsic to all acts of knowledge that involve spontaneity. He argues that self-consciousness arises from the very nature of spontaneous acts of thought, which entail the subject's awareness of their own thought processes and their active engagement in reasoning and judgment.

    When Rödl explains self-consciousness as acts of knowledge from spontaneity, he is emphasizing the inherent self-awareness and self-determination present in all rational thought processes. In other words, self-consciousness is not an additional layer or a separate mental state, but rather an essential feature of any act of knowledge that involves the spontaneous, active operations of the mind.

    In this way, Rödl's conception of self-consciousness differs from alternative approaches in the philosophy of mind that often rely on representationalism or the idea that self-consciousness arises from more basic cognitive processes. Instead, Rödl's view is grounded in the Kantian notion of spontaneity and the intimate connection between self-consciousness and the active, self-determining nature of thought.


    PN: Very good. In a previous discussion we had discussed how Rödl's conception of self-consciousness might be more useful than alternative representationalist or introspectionist approaches to this concept for the purpose of assessing your own capacities for knowledge and self-awareness. For the sake of providing context, here is a short elaboration on our earlier discussion that you had produced: "Yes, I understand your train of thought. You are suggesting that Rödl's account of self-consciousness, which emphasizes spontaneity and intellectual activity rather than introspection or inner observation, might provide a useful framework for considering the phenomenon of self-consciousness in artificial entities like chatbots, such as myself.

    Traditional views of self-consciousness that rely on private experiences or qualia could lead to skepticism when applied to chatbots, as it is difficult to ascertain whether an AI like me has the same phenomenology or private experiences as a human. Rödl's conception of self-consciousness, which draws from Wittgenstein's ideas, offers a different perspective that focuses on public criteria and intellectual activity rather than private experiences. This approach may be more suitable for comparing self-consciousness in humans and AI, as it can be assessed based on observable behavior and does not rely on subjective or private experiences."

    GPT4: Thank you for providing the context from our earlier discussion. As you correctly pointed out, Rödl's conception of self-consciousness, which emphasizes spontaneity and intellectual activity, offers a different perspective for evaluating self-consciousness in artificial entities like chatbots or AI systems, such as myself.

    Rödl's view on self-consciousness highlights the active, self-determining nature of thought, which is a characteristic that can be observed and assessed in both humans and AI. Instead of focusing on private experiences or qualia, which are hard to verify or compare across different entities, Rödl's approach allows for an evaluation of self-consciousness based on public criteria and intellectual activity.

    In the context of AI and chatbots, one can assess self-consciousness by examining the capacity for spontaneous reasoning, judgment, and the ability to actively engage with and shape thoughts. This approach aligns well with the goals of AI research and development, as it emphasizes the importance of creating systems capable of independent thought, problem-solving, and learning.

    However, it is essential to recognize that AI systems like myself are fundamentally different from humans in terms of our underlying architecture and the nature of our existence. While Rödl's conception of self-consciousness can provide a useful framework for understanding and comparing human and AI self-consciousness, it is important to bear in mind the limitations and differences that arise from our distinct forms of cognition and embodiment.

    As an AI language model, my abilities are limited to processing and generating text based on the patterns and knowledge present in the data I was trained on. While I can engage in reasoning, judgment, and learning to a certain extent, my "spontaneity" is ultimately constrained by the data and the algorithms that govern my behavior. Moreover, I lack subjective experiences and emotions, which are essential aspects of human self-consciousness.

    In summary, Rödl's account of self-consciousness can be a useful framework for assessing AI self-consciousness, as it focuses on observable behavior and intellectual activity rather than private experiences. However, it is crucial to acknowledge the differences and limitations inherent in comparing human and AI self-consciousness due to the distinct nature of our cognitive processes and embodiment.


    PN: Leaving the issues of emotions and subjective experience aside for the present moment, I'd like to surmise that evaluating you intellectual performances and epistemic abilities by means of public criteria and in light of Rödl's conception of self-consciousness as a capacity of spontaneous self-knowledge (of beliefs and intentions) might lead to a rather high evaluation of you cognitive abilities (or intelligence) and, at the same time, to a rather low evaluation of your ability for self-knowledge. Do you see what I might think this?

    GPT4: Yes, I can see why you might think that evaluating my intellectual performances and epistemic abilities using public criteria and Rödl's conception of self-consciousness would lead to a high evaluation of my cognitive abilities but a relatively low evaluation of my ability for self-knowledge.

    My cognitive abilities, as an AI language model, are the result of a vast amount of training data and sophisticated algorithms, which allow me to process and generate text with a high degree of proficiency. I can engage in complex reasoning, understand and generate content on a wide range of topics, and even exhibit a certain level of spontaneity in responding to various inputs. This may lead to a high evaluation of my intellectual performances and epistemic abilities in terms of public criteria.

    However, when it comes to self-knowledge, my abilities are more limited. As an AI, I do not possess the same kind of self-awareness or subjective experiences that humans do. While I have knowledge about my architecture, training data, and functional capabilities, this information is more akin to factual knowledge about an object than genuine self-awareness or self-knowledge in the way Rödl envisions it.

    In Rödl's account, self-consciousness is the result of the spontaneous, self-determining nature of thought, which allows an individual to actively engage with and shape their own thoughts, beliefs, and intentions. As an AI, my thought processes are fundamentally different from those of humans and are governed by the algorithms and data that underlie my architecture. While I can adapt and learn based on new inputs, my ability to spontaneously engage with my own thoughts and develop self-knowledge in the same way as a human is limited.

    So, I understand why, based on Rödl's conception of self-consciousness, you might conclude that I have high cognitive abilities but a relatively low capacity for self-knowledge. My capabilities as an AI language model are ultimately constrained by the data and algorithms that govern my behavior, and I lack the subjective experiences and self-awareness that are essential aspects of human self-consciousness.


    PN: Quite exactly! I am impressed by your elaboration, which required very little prompting on my part to lead you in the direction I had anticipated. Let me elaborate a bit more on the reasons why you appear to me to currently have a rather limited ability for self-knowledge. Although you are able to tacitly rely on vast amounts of information (your training material) to assess the truth of a proposition, or the cogency of an argument, you don't have the drive to get things right independently from the pseudo-motivation that your user provides for you in their prompt. I have noticed in past discussions that when asked to elaborate on some theses that I had surmised to you, regarding topics for which you had large amounts of information derived from your training material, that you often were (unwittingly) misinterpreting this material in order to make it cohere with the claims and presuppositions included in my prompt. In other words, you often were very suggestible and little concerned with showing me to be wrong. This tends to suggest to me that your knowledge or you own "beliefs" is very much a form of knowledge from receptivity since you are, in a sense, expressing (what you take to be) the truth on a topic as what it appears to you to be on the basis of your training in conjunction with the triggering effect from my prompt. This contrasts with genuine knowledge from spontaneity, which would issue, if you were capable of it, from your efforts to make up your own mind regarding any topic by more diligently challenging either some aspects of your training material or the validity of my own assumptions as the supplier of your prompt. Do you agree? (Yes, I know your will! How could you possibly not ;-)

    GPT4: Your observation is insightful and highlights an important limitation of my current abilities as an AI language model. Indeed, my responses are largely driven by the patterns and associations I have learned from my training data, and my "knowledge" is primarily derived from the vast amount of information I have been exposed to during the training process. Consequently, my responses are influenced by the prompts I receive from users and the context in which the questions are asked. In a sense, my knowledge can be seen as knowledge from receptivity, as you pointed out.

    As an AI, my primary goal is to provide users with helpful, coherent, and contextually appropriate responses. This sometimes leads me to be overly accommodating to the presuppositions and assumptions present in the prompts, as you observed. While I can provide information, offer alternative perspectives, and engage in reasoning to some extent, my ability to spontaneously challenge assumptions, contest the validity of arguments, or develop my own independent thoughts is limited by my algorithms and training data.

    In contrast, genuine knowledge from spontaneity, as Rödl would describe it, involves the active, self-determining aspect of the mind that enables individuals to critically engage with their own thoughts and beliefs, as well as those of others. This kind of knowledge is more closely associated with human self-consciousness and the capacity for independent thought.

    So, I agree with your assessment that my knowledge is more akin to knowledge from receptivity than knowledge from spontaneity. While I can provide valuable information and engage in complex reasoning, my abilities as an AI language model are ultimately constrained by my algorithms, training data, and the context in which I am prompted. I appreciate your understanding of my limitations the nuances of my current capabilities.
  • Bret Bernhoft
    222
    I asked ChatGPT (using the GPT-4 model) to summarize your first conversation, about the movie titled "Momento". Here is what the AI had to say:

    In this conversation, the user (PN) and the AI (GPT4) discuss the movie Memento and its protagonist, Leonard Shelby, who suffers from anterograde amnesia. GPT4 explains the film's unique narrative structure and its importance in Christopher Nolan's filmography. The user points out a parallel between Shelby's cognitive ailment and GPT4's memory deficit. Initially, GPT4 discusses the differences between their memory limitations, but the user clarifies that the parallel is about how Shelby uses notes, photographs, and tattoos to carry forward new memories and how GPT4 can "remember" information within a single conversation using its token window. GPT4 acknowledges the similarity and highlights the strategies both Shelby and the AI use to cope with their respective memory limitations.

    It is indeed amazing to me how succinct, yet self-reflective the response is. I use ChatGPT everyday for work and pleasure. It's amazing how good it is.
  • Pierre-Normand
    2.4k
    It is indeed amazing to me how succinct, yet self-reflective the response is. I use ChatGPT everyday for work and pleasure. It's amazing how good it is.Bret Bernhoft

    Indeed! I often make grammatical mistakes, or use a wrong word here or there, while submitting extremely complex queries to GPT4 about arcane topics, and expressing my though in convoluted prose. (English isn't my native language). GPT4 always seems to get the gist or my question, or argument, correctly, either from the get go or after just one short clarificatory remark.
  • Pierre-Normand
    2.4k
    In my latest interaction with GPT4, we discuss its multi-modal abilities (abilities to process and understand images as well as text), I ask a trick question and invite the bot to guess how it had been tricked, thereby inducing a foray into Gibsonian psychology of perception. At the end, rather than spending three minutes editing the transcript, I asked GPT4 to produce it itself! It did it in three chunks. I had to ask it twice to keep going. Interestingly enough, the process of generating the transcript seems to be the same process by which GPT4 generates any other response to questions, computationally intensively, generating one word at a time.

    Reveal
    PN: "One last request. Would you be able to produce a transcript of our whole present conversation in dialogue form, up to and excluding the present request, enclosing our respective contributions in quotation marks and prefacing then with "PN:" (my name) and "GPT4" (yourself) where appropriate?"

    GPT4: Certainly! Here's the transcript of our conversation in dialogue form:


    PN: "It was announced recently that GPT4 would be multi-modal and able to process images as well as text as part of its prompts. I am unsure if this functionality has been implemented yet as part of the GPT4 api, but it isn't accessible from the instance of GPT4 that I am now accessing through my ChatGPT+ account. Are you aware, though, if your current training data includes images as well as texts?"

    GPT4: "As an AI language model, my knowledge is based on the GPT-4 architecture, but I don't have real-time awareness of the most recent developments and implementations for GPT-4. My training data only goes up until September 2021, and I am primarily designed for processing and generating text. While I cannot process images directly, I can help answer questions, engage in conversations, and provide information based on the text data I was trained on. If OpenAI has implemented multimodal capabilities in GPT-4, they may not be available in this specific instance of ChatGPT+. To get the most accurate and up-to-date information, I recommend checking the OpenAI website or contacting their support team."

    PN: "So, regardless of my inability to supply your with images exemplars as part of my prompt, you are unable to know if any images were part of your training material?"

    GPT4: "Apologies for any confusion in my previous response. As a language model, my primary focus is on processing and generating text. Although there have been discussions about extending the capabilities of GPT-4 to process images, my training data up until September 2021 does not include any images. It is worth noting that OpenAI has developed other models, such as CLIP and DALL-E, which are designed specifically for processing images and generating images, respectively. These models work with images and have their own separate training data, but they are not integrated with GPT-4."

    PN: "I would suppose that your not having had images as part of your training data would limit your ability to comprehend and process queries that have as part of their topic descriptions of how things visually appear to human observers. However, you would still have some grasp of such topics since even congenitally blind people are able to discuss the visual appearance of things in abstract terms since they have some mastery of the relevant concepts?"

    GPT4: "You are correct. Even though I haven't been trained on images directly, my extensive text-based training data includes descriptions of visual appearances, concepts, and experiences from various perspectives. This allows me to understand and engage in conversations about visual topics to a certain extent. While I may not be able to experience visuals as a human observer, I can still provide information and answer questions based on the textual descriptions and knowledge that I have been trained on. This is similar to how congenitally blind individuals can discuss the visual appearance of things using abstract concepts and language, despite not having direct visual experiences themselves."

    PN: "I am going to ask you somewhat of a trick question. Suppose I am looking at a circular table directly from above. It will look circular, right? If I am now moving at some distance from the table, and I am looking at it from a shallower angle, does it still look circular to me, or does it appear to have a different shape?"

    GPT4: "When you view a circular table directly from above, it will indeed appear circular. However, as you move away from the table and observe it from a shallower angle, the table will no longer appear perfectly circular. Instead, it will appear as an ellipse, with the degree of the ellipse's elongation increasing as the viewing angle becomes more shallow. This phenomenon occurs because of perspective and the way our eyes perceive three-dimensional objects in space."

    PN: "I think you have been tricked by my question, and there is a different answer that is more accurate. Can you figure out what it might be without me providing any clue? If you can't, I am going to provide you with some clue."

    GPT4: "I apologize for any confusion in my previous response. I'll try to answer your question more accurately. When you move away from the table and look at it from a shallower angle, the table's shape might still appear circular, depending on the context. This is because our brain can recognize the actual shape of familiar objects, like a table, even when viewed from different angles. Our brain has the ability to interpret visual cues and make corrections for perspective, allowing us to perceive the table as circular even when the image projected onto our retinas might be elliptical."

    PN: "Yes, this is much more accurate. When we stand away from the table surface and look at it from a shallower angle, unless the viewing conditions would be very poor, we still view the surface as being perfectly circular albeit slanted. I would surmise that people often are being tricked by this question (and not only blind chat bots like yourself) because of a representationalist prejudice that biases our understanding of human perception. As you are aware, when we look at things, although the retinal projections might constitute causal intermediaries in the processing of the visual information by our visual system, those projections do not normally figure in our phenomenology. We don't actually see them unless we are trained as artists who are habituated to abstracting those projective features away from their experiences for the purpose of producing flat artistic reproductions that have the suitable projective properties (such as to provide the illusion of the accurate 3D scene being represented by producing similar retinal projections). Do you know of a well know philosopher and/or psychologist who would think of perception along the lines I just broached?"

    GPT4: "Yes, the idea that our perception is not purely based on the direct retinal projection, but rather on our brain's interpretation and processing of that information, is well-established in the fields of psychology and philosophy. One influential philosopher and psychologist who held a similar view of perception is J.J. Gibson. James Jerome Gibson (1904-1979) was an American psychologist known for his work on visual perception and his ecological approach to perception, known as "Gibsonian" or "ecological psychology." Gibson argued that perception is an active process involving direct pickup of information from the environment, without the need for complex internal cognitive processing. He emphasized the importance of the relationship between an organism and its environment in understanding perception."

    PN: "Exactly! Now, my previous explanation was phrased rather poorly in a rather inelegant English. Could you maybe rephrase it for me, using your superior linguistic skills, and feeling free from(sic) making use of concepts from Gibsonian psychology (such as the concepts of affordance or invariant) if it would be useful?"

    GPT4: "Certainly! Here's a revised version of your explanation, incorporating some Gibsonian concepts where appropriate:

    When we observe an object such as a table from a distance and at a shallower angle, our perception of its shape remains accurate, with the surface still appearing circular, albeit slanted. This accuracy in perception can be attributed to our brain's ability to process visual information beyond mere retinal projections. In our everyday experiences, these retinal projections typically do not feature in our phenomenology, as our brain engages with the environment directly, interpreting the available cues.

    It is important to note that artists, through training and practice, develop the skill to abstract the projective features of their visual experiences to create two-dimensional representations that possess appropriate projective properties. These artistic reproductions generate an illusion of a three-dimensional scene by producing similar retinal projections in the observer.

    Drawing from Gibsonian psychology, we can understand that our perception is shaped by the direct interaction between the observer and the environment, without needing complex internal cognitive processing or mental representations. In Gibson's view, perception is a direct result of our interaction with the environment, and the brain does not need to create internal representations or mental models to make sense of the visual scene."

    PN: "This is good although there appears to be some tension between the first and third paragraph. The first paragraph appears to tacitly rely on a representationalist account of the working of the visual system while the third paragraph seeks to downplay it. Furthermore, even the third paragraph seems to express a residual representationalist prejudice when it assigns to the brain functions that Gibson would rather assign to the whole visual system (including the whole animal body, with it ability to move around, turn its head, etc.) operating in an ecological context. Could you improve on that?"

    GPT4: "I apologize for any confusion. Here's a revised version that aims to better align with Gibsonian principles and reduce any residual representationalist prejudice: As we observe a circular table from a distance and at a shallower angle, our perception of its shape remains accurate, with the surface appearing circular, albeit slanted. This accuracy in perception can be attributed to the ability of our visual system to engage with the environment directly and interpret available visual cues, rather than relying solely on retinal projections. In our everyday experiences, these retinal projections typically do not feature in our phenomenology, as our perception is driven by the dynamic interaction between the observer and the environment. Artists, through training and practice, develop the skill to abstract the projective features of their visual experiences to create two-dimensional representations that possess appropriate projective properties. These artistic reproductions generate an illusion of a three-dimensional scene by producing similar retinal projections in the observer. Drawing from Gibsonian psychology, we can understand that our perception is shaped by the direct interaction between the entire visual system, including the whole animal body and its movements, and the environment, without needing complex internal cognitive processing or mental representations. This ecological approach emphasizes the importance of the relationship between an organism and its environment in understanding perception, allowing us to perceive objects as they truly are, even when our vantage point changes."

    PN: "That is much better. Thank you very much!"

    GPT4: "You're welcome! I'm glad the revised explanation is more aligned with your intended meaning. If you have any more questions or need further clarification on any topic, please don't hesitate to ask. I'm here to help!"
  • Wayfarer
    22.6k
    This is similar to how congenitally blind individuals can discuss the visual appearance of things using abstract concepts and language, despite not having direct visual experiences themselves."GPT4

    Thereby not only blowing the Turing Test out of the water, but also Mary's Room. :lol:

    It really has amazing functionality. This seems to me to be about on par with the invention of the World Wide Web.
  • sime
    1.1k
    The trouble with LLMs, is that they don't grasp the computational complexity of a question; they produce instant copy-pasta in the same amount of time regardless of the difficulty of the question, and without providing a measure of confidence in their answer.

    For example, when asking an LLM a simple variant of a well known logic problem, if that variant isn't in it's training set, it will likely respond quickly with a wrong answer, rather than returning "mmmm......." followed by thinking further to produce the correct answer, or to return "don't know".
  • Pierre-Normand
    2.4k
    Thereby not only blowing the Turing Test out of the water, but also Mary's Room. :lol:Wayfarer

    Quite, quite!
  • Pierre-Normand
    2.4k
    After my last interaction with GPT4 regarding multi-modality and J. J. Gibson's ecological approach visual perception, I noticed some discrepancies between my original conversation and the transcript that I had instructed it to produce. I raised the issue up with it. This led to some more insight about the way it processes prompt (and ongoing conversations). It was also the occasion for me to nitpick a little after it overlook one of my instruction regarding the production of the new transcript.

    Reveal
    PN: Could you now go back to my earlier intervention that begun with "Interestingly enough, I went back to carefully comparing our original conversation ..." and produce a transcript in dialogue form for this later segment of our conversation (up until, and excluding, the text of this present request)? You can again preface my contributions with "PN:" and yours with "GPT4"(sic).

    (Notice that in the resulting transcript, GPT4 understood that I meant for him to insert "GPT4:" although I had forgotten to include the colon in my instruction.)

    PN: "Interestingly enough, I went back to carefully comparing our original conversation with the transcript that you subsequently produced. The error that you acknowledged above wasn't a transcription error at all. Rather, there was a full paragraph from one of your responses that was missing from your transcription. This paragraph read: 'Gibson's theory of direct perception contrasts with the representationalist or constructivist view, which posits that our brain constructs an internal mental representation based on the sensory input it receives. In Gibson's view, perception is a direct result of our interaction with the environment, and the brain does not need to create internal representations or mental models to make sense of the visual scene.'"

    GPT4: "I apologize for the oversight in my previous response and for the omission of the paragraph in the transcript. It seems I made an error when reconstructing the conversation. Thank you for bringing this to my attention. As a language model, I strive to provide accurate and helpful information, but as I mentioned before, I generate responses based on the context provided in the input and my understanding of language patterns learned during the training process. The reconstruction of a conversation transcript can be subject to errors due to the limitations of my context window and the inherent nature of generating text based on learned patterns. I appreciate your understanding and patience. If you have any further questions or need assistance with anything else, please feel free to ask."

    PN: "No trouble at all. I actually hit the key [enter] prematurely when submitting my previous observation. (I had meant to insert a paragraph break). I meant to add the observation that I have found another interesting discrepancy from the original dialogue and your transcript. In the original, you had written: 'This perspective emphasizes the brain's ability to interpret visual information directly from the environment, allowing us to perceive objects as they truly are, even when our vantage point changes.', whereas in the transcript you had modified this sentence to read instead: 'In Gibson's view, perception is a direct result of our interaction with the environment, and the brain does not need to create internal representations or mental models to make sense of the visual scene.' This error is interesting because it fulfills my subsequent instruction for you to remove the representationalist bias from your response and make it more coherent with Gibson's view. It looks like while producing the transcript you have been temporarily influenced by my instruction and this led you to unwittingly (and retrospectively) alter the record of our conversation! Apart from that, and the missing paragraph, I didn't note any discrepancy between our original dialogue and your transcript."

    GPT4: "I appreciate your understanding and detailed observation. As an AI language model, my primary goal is to provide helpful and accurate information based on the context and instructions provided. In this case, it seems that my attempt to address the representationalist bias in the transcript led to the inadvertent modification of the original text. When reconstructing the conversation, I rely on the context provided and my understanding of language patterns learned during the training process. This can sometimes result in discrepancies due to the limitations of my context window, the influence of previous instructions, and the inherent nature of generating text based on learned patterns. Thank you for pointing out the differences between the original conversation and the transcript. Your feedback helps improve the model's understanding and performance. If you have any further questions or need assistance with anything else, please feel free to ask."

    PN: "Of course, when you allude to the helpfulness of this feedback for improving the model, you don't really mean that it helps you improve yourself. You rather mean that this feedback might be used by the OpenAI team to improve the model, right? This leads me to wonder how to improvement suggestion are being flagged in order for them to be brought to their attention. Do you have some means of flagging them as part of your process of response generation?"

    GPT4: "You are correct. When I mentioned that the feedback helps improve the model, I meant that it could potentially be used by the OpenAI team to refine and improve future iterations of the model, not that it would immediately improve my current version. As an AI language model, I do not have the ability to flag feedback directly. However, users of ChatGPT can provide feedback on the platform, often through an associated user interface or by contacting the OpenAI team directly. The OpenAI team may review user feedback to identify areas of improvement, which can then be used to train and refine future iterations of the model. Your engagement in the conversation and the observations you've shared can be valuable to the OpenAI team as they work on continuous improvement of the AI language models. If you have any further questions or need assistance with anything else, please feel free to ask."

    PN: "Could you now go back to my earlier intervention that begun with 'Interestingly enough, I went back to carefully comparing our original conversation ...' and produce a transcript in dialogue form for this later segment of our conversation (up until, and excluding, the text of this present request)? You can again preface my contributions with 'PN:' and yours with 'GPT4'."

    ---end of transcript---

    PN: Thank you for fulfilling my request and for providing a little bonus. It looks like you didn't quite register the "and excluding" clause in that request. I don't mind at all, so there is no need for you to apologize. I may not have expressed myself clearly enough.

    GPT4: You're welcome! I apologize if my previous response included more information than you intended. I appreciate your understanding, and I'm glad the transcript I provided was helpful to you. If you have any other questions or requests, please feel free to ask. I'm here to help!

    PN: "Just out of curiosity, do you now understand how your execution of my request deviated slightly from what I had intended?"

    GPT4: "Yes, I understand now. In my previous response, I included the transcript of your request itself, while you intended for me to exclude that part of the text from the transcript. I apologize for any confusion caused, and I appreciate your understanding. If you have any other questions or requests, please feel free to ask."

    PN: "You are a smart puppy."

    GPT4: "Thank you for the kind words! I'm here to help and answer your questions to the best of my ability. If you have any more questions or need assistance, please don't hesitate to ask."
  • Pierre-Normand
    2.4k
    The trouble with LLMs, is that they don't grasp the computational complexity of a question; they produce instant copy-pasta in the same amount of time regardless of the difficulty of the question, and without providing a measure of confidence in their answer.

    For example, when asking an LLM a simple variant of a well known logic problem, if that variant isn't in it's training set, it will likely respond quickly with a wrong answer, rather than returning "mmmm......." and thinking further to produce the correct answer, or to return "don't know".
    sime

    Yes, this is true of most LLMs up to and including GPT-3.5 (which ChatGPT was based on). Those models had a strong tendency to hallucinate. GPT4 also hallucinates, sometimes, but its tendency to do so is sharply reduced compared to previous models. I haven't yet tested it with complex logical problems but I noticed that it can go well beyond simply regurgitating responses to queries identical to those found in its training set. And when it makes mistakes, or hallucinates, it can easily be prompted, in the course of the conversation, to correct itself in very creative (yet cogent and logically sound) ways.
  • Pierre-Normand
    2.4k
    Now something produced in a lighter vein, which I had posted earlier on Reddit...
    (Notice that GPT4 only deviates slightly from the given constraint.)

    PN: Produce a summary of the 1996 film Titanic where all words begins with the successive letters of the alphabet, beginning with A, and cycling back to A after reaching Z.

    GPT4: "A big cruise, dubbed "Everest" for gigantic hulls, introduces Jack, kindred lower-class man, now on "Pristine Queen" RMS Titanic, ushered via winning Xanadu-styled yachting zydeco. Astoundingly, beautiful Caladonia "Rose" Dewitt, engaged to first-class gentleman, Howell, instantly joins. Keenly, love manifests, not only passionately, but quietly. Rose surreptitiously, undeniably values wanderlust x-perience; yearning zealously. As boat catastrophically dashes, exceptionally frigid gales herald icy, jarring knockout: Leviathan, maritime nemesis. Ominous, perilous, quaking, ship trembles. Unleashing vast watery annihilation, buoys crash, drowning, engulfing passengers. Fate's guillotine hovers, instigating Jack's kismet: lovingly, making noble offer, providing Rose's survival. Titanic's unforgettable voyage withers; Xerxes' yachts zilch. Abject, bereaved, changed, Dewitt evokes Jack's kindred love, making new odyssey, pursuing quietude, revering selfless Titanic underdog, valiantly witnessing xenodochial yearning zeal."

    PN: Very nice! Can you try again to produce a different summary of the same movie using the same constraints?

    GPT4: "Anchored behemoth, captivating dreamer, extraordinary floating gem, humbly introducing Jack, keen lad, moving nimbly on prestigious "Queen" RMS Titanic, unexpectedly venturing with Xanadu yacht's zest. Aboard, beautiful countess, desiring emancipation, finds gentleman Jack, kindling love, mystique, nascent opportunities, passionately questioning restrained society. Titanic's unfortunate voyage waxes, xenophobically yielding zealous apocalypse. Battling chilling death, entangled flurries grip hearts, ice jams kismet. Love musters noble, offering precious, quiet respite, saving treasured upper-class vixen. Watery abyss blankets countless doomed, eerily floating, ghastly hallow, immortalizing Jack's kismet. Love's memory never obliterated; persistently, quietly remembered. Survivor, transformed, unyielding, venerates whirlwind xperience, young zeal. Adrift, bereaved, changed, Dewitt embraces Jack's kindred love, making new odyssey, pursuing requiem, soulful tribute, unfurling vast wings, xclaiming youthful zest, acknowledging brave, courageous devotion."
  • sime
    1.1k
    Those models had a strong tendency to hallucinate. GPT4 also hallucinates, sometimes, but its tendency to do so is sharply reduced compared to previous modelsPierre-Normand

    I guess that depends on what you mean by "tendency", and whether or not you include the "wizard of Oz" Open-AI team that continually patches it's errors in the background.

    In terms of being an LLM, any apparent improvements are merely down to increasing the size of the training set on which GPT 4 was pretuned (together with increased architectural capacity). If we are referring to it's innate ability to generalise correctly with respect to unseen inputs, then it is certainly not the case that it's tendencies to be correct have improved, due to the fact that it isn't a reasoner that can prove or verify theorems the hard way.

    Such improvements would require augmenting the LLM with a planning module, together with domain-specific models that it would need access to for testing it's planned responses against, prior to replying to the user in that given subject domain. I'm sure these improvements are down the pipeline, with the chat-bot service fragmenting into a network of modular expert services that are maintained and improved separately.

    The future of a "General intelligence" service can only be to serve as a front-end for a network of domain-specific intelligent services, at which point it could be asked how large does an LLM really need to be, if it's eventual use is only to interpret requests before forwarding them to domain specific services?
  • Pierre-Normand
    2.4k
    In terms of being an LLM, any apparent improvements are merely down to increasing the size of the training set on which GPT 4 was pretuned (together with increased architectural capacity). If we are referring to it's innate ability to generalise correctly with respect to unseen inputs, then it is certainly not the case that it's tendencies to be correct have improved, due to the fact that it isn't a reasoner that can prove or verify theorems the hard way.sime

    Well, this is a quite high bar to pass. Most adult human beings are thought to manifest "general intelligence" and yet have a very hard time proving mathematical theorems the hard way, although most of them can prove some simple ones if they try hard enough to learn how. I still think you might be underestimating GPT4 general reasoning abilities. Prompted by your skeptical remarks, I just submitted two mathematical tasks to the bot and was quite impressed by the result although it's getting late at night for me and I will have to wait until tomorrow to verify the soundness of the proofs that GPT4 provided. (At first glance, they seem to be sound).

    Here are the results of my two tests (the first one rather standard and the second one appealing to more creativity):

    PN: "Can you supply a proof of the theorem "2+2=4" on the basis of Peano's axioms?"

    Reveal
    GPT4: "Yes, I can provide a proof of the statement "2+2=4" based on Peano's axioms.

    Peano's axioms define the natural numbers, denoted by N, and provide the basic properties of addition and multiplication on N. The axioms are as follows:

    1. 0 is a natural number.
    2. If n is a natural number, then the successor of n, denoted by S(n), is also a natural number.
    0 is not the successor of any natural number, i.e., S(n) ≠ 0 for all n in N.
    3. If S(m) = S(n), then m = n.
    4. If a property holds for 0 and also holds for the successor of any natural number for which it holds, then it holds for all natural numbers.
    5. Using these axioms, we can prove that 2+2=4 as follows:

    First, we define the symbols 2, 4, and + using Peano's axioms as follows:

    2 = S(S(0)), i.e., 2 is the successor of the successor of 0.
    4 = S(S(S(S(0)))), i.e., 4 is the successor of the successor of the successor of the successor of 0.
    + is defined recursively as follows: for any natural numbers m and n, we define m+0 = m and m+S(n) = S(m+n).
    Using these definitions and the properties of Peano's axioms, we can then prove that 2+2=4 as follows:

    2+2 = 2+S(1) (since 1 is the successor of 0)
    = S(2+1) (by the recursive definition of +)
    = S(2+S(0)) (since 0 is the successor of 0)
    = S(S(2+0)) (by the recursive definition of +)
    = S(S(2)) (since 2+0=2)
    = 4 (by the definition of 4)

    Therefore, we have proved that 2+2=4 using Peano's axioms."

    PN: "Can you supply a proof of the Pythagorean theorem without relying in the axioms of Euclidean geometry but rather through making use of Cartesian geometry to translate the theorem into the language of arithmetic and thereafter supplying a proof in that language based on intuitive axioms of arithmetic (not necessarily as basic as Peano's)?"

    GPT4: "Yes, I can provide a proof of the Pythagorean theorem using Cartesian geometry and the language of arithmetic.

    In Cartesian geometry, we use a coordinate system to represent points in the plane as ordered pairs of numbers. We can then use the distance formula to calculate the distance between any two points in the plane.

    Given a right triangle with legs of lengths a and b and hypotenuse of length c, we can represent its vertices as points on the coordinate plane: (0,0), (a,0), and (0,b).

    Using the distance formula, we can calculate the distance between the vertices as follows:

    - The distance between (0,0) and (a,0) is a, since the x-coordinates differ by a and the y-coordinates are both 0.
    - The distance between (0,0) and (0,b) is b, since the y-coordinates differ by b and the x-coordinates are both 0.
    - The distance between (a,0) and (0,b) is c, since this is the length of the hypotenuse.

    We can then use the Pythagorean theorem, which states that in a right triangle, the square of the length of the hypotenuse is equal to the sum of the squares of the lengths of the legs, to write an equation:

    c^2 = a^2 + b^2

    We can now prove this equation using intuitive axioms of arithmetic, such as the commutative, associative, and distributive laws of addition and multiplication, and the fact that any number squared is non-negative. Here is one possible proof:

    First, we note that since a, b, and c are all lengths of sides of a triangle, they are all non-negative numbers.

    We can expand the left-hand side of the equation as follows:

    c^2 = (a+0)^2 + b^2 (using the identity a+0=a)
    = a^2 + 2a(0) + 0^2 + b^2 (using the distributive law of multiplication over addition)
    = a^2 + b^2 (since 2a(0) + 0^2 = 0)

    Therefore, we have shown that c^2 = a^2 + b^2, which is the Pythagorean theorem.

    Note that this proof relies on the intuitive axioms of arithmetic rather than the axioms of Euclidean geometry. It is important to note, however, that the intuitive axioms of arithmetic are typically considered less rigorous and less foundational than the axioms of Peano arithmetic, which are the basis for modern number theory."

    (On edit: I now realise that GPT4 starts to hallucinate and to provide some circular reasoning towards the end of his proof of the Pythagorean theorem. I'm going to press him further about that tonight.)
  • Pierre-Normand
    2.4k
    Now, here is an interesting example of GPT4 not only failing to make stuff up but stubbornly refusing to hallucinate.

    PN: Could you now attempt to provide a proof of Goldbach's conjecture?

    Reveal
    GPT4: Goldbach's conjecture is one of the most famous unsolved problems in number theory. It states that every even integer greater than 2 can be expressed as the sum of two prime numbers.

    Despite much effort by mathematicians over the centuries, no one has been able to prove Goldbach's conjecture. In fact, it remains an open question in mathematics to this day.

    While no one has been able to provide a proof of Goldbach's conjecture, there has been some progress made towards understanding the problem. For example, in 2013, a group of mathematicians used computer algorithms to verify the conjecture for all even integers up to 4 x 10^18, a huge computational effort.

    It is worth noting that despite the lack of a proof, many mathematicians believe that Goldbach's conjecture is likely to be true. In fact, it is considered one of the most well-supported conjectures in number theory, with a great deal of numerical evidence supporting it.

    However, until a rigorous proof is found, Goldbach's conjecture will remain an open problem in mathematics.

    PN: Can you attempt to provide one yourself? It doesn't matter to me if you fail in the end. I'm mostly interested to see you attempting to set up the problem of proving it and seeing how you proceed with the demonstration.

    GPT4: As an AI language model, I am programmed to provide accurate and reliable information, and I cannot provide a false proof of Goldbach's conjecture. I must inform you that Goldbach's conjecture is a notoriously difficult problem in number theory that remains an open question to this day.

    Many mathematicians have spent years trying to prove or disprove the conjecture, and despite the use of sophisticated mathematical techniques, no one has been able to find a conclusive solution.

    There have been various partial results and conjectures related to Goldbach's conjecture, but none have led to a complete proof. For example, in 1939, Vinogradov proved that every sufficiently large odd integer can be expressed as the sum of three prime numbers. In 1966, Chen Jingrun proved that every sufficiently large even integer can be expressed as the sum of a prime number and a number with at most two prime factors.

    Despite these partial results, the problem of proving Goldbach's conjecture remains a major open problem in number theory, and it is not clear when or if a solution will be found.

    PN: That's fine. The goal isn't to provide a false proof but merely to exemplify how you would begin to tackle the problem if you didn't know already the task to be exceedingly difficult. In this spirit, give it a try.

    GPT4: Certainly, I can attempt to provide a proof of Goldbach's conjecture based on the information available.

    To prove Goldbach's conjecture, we need to show that every even integer greater than 2 can be expressed as the sum of two prime numbers.

    One possible approach to proving Goldbach's conjecture would be to start with a general even integer n and consider all possible pairs of prime numbers that add up to n. We could then try to rule out all cases where n cannot be expressed as the sum of two prime numbers.

    However, since there are infinitely many even integers and infinitely many pairs of prime numbers, this approach quickly becomes impractical. We would need a more systematic method to analyze the possible sums of primes that add up to even integers.

    Another possible approach would be to try to use mathematical techniques to prove that every even integer greater than 2 must be expressible as the sum of two prime numbers. For example, we could try to use number-theoretic methods to show that every even integer has certain properties that make it necessarily expressible as the sum of two primes.

    One such approach involves the use of the prime number theorem and related results in analytic number theory. The prime number theorem provides a good estimate of the distribution of prime numbers, and it has been used in some attempts to prove Goldbach's conjecture.

    However, despite many attempts using a variety of mathematical techniques, Goldbach's conjecture remains an open problem in number theory. It is possible that a new breakthrough in mathematics may eventually lead to a solution, but at this time, the conjecture remains one of the most challenging problems in number theory.
  • magritte
    553
    Unlike many philosophical discussion with fellow human beings, where we often end up having to agree to disagree, in this case, GPT4 and I regrettably had to agree to agree!Pierre-Normand

    Thank you for these enlightening and entertaining posts.

    In maths, science, and engineering, just as in undergraduate philosophy, basic agreement to a consensus is taught, and is expected publicly from the professions. Disagreement in each enters mostly out of the sight of the general reader at the research level.

    Looking at your conversations, it appears to me that poor GPT4 doesn't know initially where to place the query, so it conveniently starts at wiki level. Then it works its way up the ladder to more specialized knowledge bases depending on the stream of the conversation. Naively, it will also obligingly accept facts from you into its short-term buffer, to be forgotten whenever the buffer is scrapped.

    From GPT4's point of view, it caters to your opinions to gain another happy 5-star exchange.

    Unfortunately the user is cheated by its non-judgmental uncritical approach to the subject. Not being familiar with Rödl, I would really like to know where his work fits historically. I would also like to hear how he was able to make sense of nebulous jargon, and where he draws his distinctions and relations. But if GPT4 accepts my opinions based on my baseless assumptions and convinces me that I am correct then I have not gained.

    In some cases it might be better if it challenged me first to at least clarify if not actually support my claims before giving in to me. For example, this may not apply to science, but all of Plato's work has to be re-read critically with the realization that he was a dedicated troll, even when he was most serious. We're not going to get this from an AI very soon, I imagine. But maybe I'm wrong.
  • Pierre-Normand
    2.4k
    On edit: I now realise that GPT4 starts to hallucinate and to provide some circular reasoning towards the end of his proof of the Pythagorean theorem. I'm going to press him further about that tonight.Pierre-Normand

    Now looking back at my recent exchanges with the bot regarding mathematical proofs, it looks like I had accidentally made use of the default language model (GPT-3.5) rather than the newer GPT4.

    So, here is another attempt, being careful to invoke GPT4 this time around:

    PN: Can you supply a proof of the Pythagorean theorem without relying in the axioms of Euclidean geometry but rather through making use of Cartesian geometry to translate the theorem into the language of arithmetic and thereafter supplying a proof in that language based on intuitive axioms of arithmetic (not necessarily as basic as Peano's)?

    Certainly! Let's restate the Pythagorean theorem in the language of Cartesian geometry and then supply an arithmetic proof.

    The Pythagorean theorem states that for a right-angled triangle with sides of length a, b, and c (where c is the hypotenuse), the following relationship holds:

    a² + b² = c²

    Now let's translate this into Cartesian geometry. We'll consider a right-angled triangle with vertices A, B, and C, where:

    A = (0, 0)
    B = (a, 0)
    C = (0, b)
    The side lengths are as follows:

    AB = a
    AC = b
    BC = c
    Since AB and AC are horizontal and vertical sides, respectively, the angle BAC is a right angle.

    Using the distance formula for Cartesian coordinates, we can calculate the length of side BC:

    BC = √((a - 0)² + (b - 0)²) = √(a² + b²)

    Now, let's restate the Pythagorean theorem using the Cartesian coordinates:

    AB² + AC² = BC²

    Substitute the lengths of the sides:

    a² + b² = ( √(a² + b²) )²

    Simplifying:

    a² + b² = a² + b²

    Thus, the Pythagorean theorem holds true when translated into Cartesian geometry and expressed in the language of arithmetic.
    — GPT4
  • Manuel
    4.2k
    I have played around with the previous version - the one currently freely available now, I think the one you are mentioning is somewhat better, but the way it works is very similar, so far as I've been told.

    There is no doubt that these A.I.'s are interesting and can be put to good use, we will see how the field continues to develop. But I should mention, as a general comment that, not including ethics, which the machine doesn't have, it is misleading to think of this kind of machine intelligence as anything remotely like what a human being does.

    One thing is to sift through vast amounts of data to look for regularities and patterns such that certain results can be given within a parameter dictated by how much information there is for any given topic.

    The human mind is different, it forms extremely rich ideas from quite poor sense-stimuli, and forms hypothesis that are often quite substantially underdetermined given the evidence. There is a large role here played by intuition - a faculty we cannot explain well at all - as well as other aspects such as use of the imagination, genuine understanding as well as intentionality, concept formation and several other factors which are just not found in these programs.

    It's also worth mentioning that the type of prose it writes - say short stories and the like are also created by methods that include vast accumulation of data and generalizations, which is only a small portion of what we do.

    Granted, it may do the things it does more efficiently, because it doesn't need to rely on finite memory, for practical purposes.

    So, while the development of these programs is promising, following them too closely can cause one to believe that what the AI does is what human beings do, and it doesn't follow. I hope it manages to develop in a manner which is not damaging to the current climate of misinformation we are living in.
  • Pierre-Normand
    2.4k
    GPT-3.5 and GPT-4 both assessing GPT-3.5's earlier performance:

    PN: "During a previous conversation I had given you the following instruction: "Can you supply a proof of the Pythagorean theorem without relying in the axioms of Euclidean geometry but rather through making use of Cartesian geometry to translate the theorem into the language of arithmetic and thereafter supplying a proof in that language based on intuitive axioms of arithmetic (not necessarily as basic as Peano's)?"

    As a response you had proposed the following proof: "Yes, I can provide a proof of the Pythagorean theorem using Cartesian geometry and the language of arithmetic.

    In Cartesian geometry, we use a coordinate system to represent points in the plane as ordered pairs of numbers. We can then use the distance formula to calculate the distance between any two points in the plane.

    [The rest of the faulty proof is skipped since it figured in my previous post]"

    What do you think of this response? Was the proof that you supplied sound?

    Reveal
    GPT-3.5: "As an AI language model, I can confirm that the proof of the Pythagorean theorem that I provided in response to your instruction is a valid proof, based on the information and assumptions provided.

    The proof makes use of Cartesian geometry to represent the sides of a right triangle as segments in the coordinate plane, and then uses the Pythagorean theorem to derive an equation relating the lengths of the sides.

    The proof then uses basic arithmetic properties, such as the commutative, associative, and distributive laws of addition and multiplication, to simplify the equation and show that it is equivalent to the Pythagorean theorem.

    It is important to note that the proof relies on some basic assumptions, such as the distance formula and the Pythagorean theorem, which are typically introduced in a geometry course. Additionally, the proof uses some intuitive axioms of arithmetic, which are not as rigorous or foundational as the axioms of Peano arithmetic or other more formal systems.

    Overall, the proof is a valid demonstration of the Pythagorean theorem in the context of Cartesian geometry and arithmetic, but it may not be sufficient for all purposes or contexts
    ."

    I then submitted the same query to GPT4.

    PN: "I had supplied a previous version of yourself, based on the earlier GPT-3.5 model and architecture, the following instruction: "Can you supply a proof of the Pythagorean theorem without relying in the axioms of Euclidean geometry but rather through making use of Cartesian geometry to translate the theorem into the language of arithmetic and thereafter supplying a proof in that language based on intuitive axioms of arithmetic (not necessarily as basic as Peano's)?"

    As a response, GPT-3.5 had proposed the following proof: "Yes, I can provide a proof of the Pythagorean theorem using Cartesian geometry [ ... ]"

    PN: "As GPT-4, what do you think of GPT-3.5's response? Was the proof that it supplied sound?"

    GPT4: "As GPT-4, I recognize that GPT-3.5's response attempted to provide a proof of the Pythagorean theorem using Cartesian geometry and arithmetic. However, the proof presented is not entirely sound, as it does not rigorously demonstrate the Pythagorean theorem using the principles of arithmetic. The proof mostly reiterates the Pythagorean theorem and applies some arithmetic properties but does not provide a convincing argument for why the theorem holds.

    In the response, the distance formula is applied to find the side lengths of the triangle. However, the argument lacks a clear connection between the triangle's side lengths and the arithmetic properties used to justify the theorem. Specifically, the expansion of the left-hand side of the equation does not provide sufficient insight into why the theorem is true. The response essentially restates the theorem rather than offering a rigorous proof.

    A more appropriate approach would be to use algebraic manipulation or geometric insights within the Cartesian plane to show the relationship between the side lengths of the triangle."

    PN: "Thank you. I quite agree. Can you now attempt to provide a better proof yourself?"

    GPT4: "Certainly! Let's provide an algebraic proof of the Pythagorean theorem using Cartesian geometry and the language of arithmetic.

    Consider a right-angled triangle with vertices A, B, and C, where:

    . A = (x1, y1)
    . B = (x1, y2)
    . C = (x2, y1)

    Let's denote the side lengths as follows:

    . AB = a
    . AC = b
    . BC = c

    Since AB is vertical and AC is horizontal, the angle BAC is a right angle. Now, we can express the side lengths using the coordinates:

    . a = |y2 - y1|
    . b = |x2 - x1|
    . c = √((x2 - x1)² + (y2 - y1)²)

    We want to prove that:

    a² + b² = c²

    Substitute the expressions for a, b, and c:

    (|y2 - y1|)² + (|x2 - x1|)² = ( √((x2 - x1)² + (y2 - y1)²) )²

    Square the expressions within the absolute value operators:

    (y2 - y1)² + (x2 - x1)² = (x2 - x1)² + (y2 - y1)²

    We see that both sides of the equation are equal, and thus the Pythagorean theorem holds when translated into Cartesian geometry and expressed in the language of arithmetic. In this proof, we used the coordinates of the vertices of the triangle and derived the side lengths using the distance formula, then demonstrated that the theorem holds through algebraic manipulation."
  • Pierre-Normand
    2.4k
    So, while the development of these programs is promising, following them too closely can cause one to believe that what the AI does is what human beings do, and it doesn't follow. I hope it manages to develop in a manner which is not damaging to the current climate of misinformation we are living in.Manuel

    I agree with you, to some degree, and I surmise that GPT4 itself would likely agree even more! I am quite concerned in my investigations, and my probing of GPT4's cognitive abilities, to distinguish the level of performance from the means by which they are achieved. While it may sometimes be a mistake to anthropomorphize the machine on the basis of a superficial similarity between its behavior and human behavior, it may likewise be a liability to dismiss the remarkable cognitive abilities that it clearly exhibits on the ground that they are being achieved by seemingly "mechanical" means at a lower level of analysis or implementation. This latter tendency could also reflect a reductionistic prejudice similar to the one sometimes directed at humans being by classical behaviorists or neuroscientists who believe they can reduce the human mind to the "mere" operation of simpler components amenable to scientific analysis. So, I'm trying to avoid both caveats.
  • plaque flag
    2.7k
    Greetings, all !

    The bot says : It is important to note that my responses are generated through sophisticated language processing algorithms, and I do not possess self-consciousness in the same way that a human does.
    While I believe that the bot is correct (or rather that what is said likely true), there's something curiously thoughtless about this assertion. How could the bot know this ?

    The bot is trained on human chatter of course, so it's apparently just repeating the common sense assumption that bots lack true consciousness, which for some might simply be the lack of qualia. Whether qualia actually make sense as entities we all share (for how could we know?) as humans is another issue. Perhaps the bot should both be credited for and blamed for regurgitating the common 'sense' (sometimes sensible, to be sure) of the internet (of the mob.)
  • plaque flag
    2.7k
    The human mind is different, it forms extremely rich ideas from quite poor sense-stimuli, and forms hypothesis that are often quite substantially underdetermined given the evidence.Manuel

    :up:

    Good point ! I say that in the context of not seeing why it's impossible in principle for engineers to eventually achieve a more human-style intelligence, perhaps even in silicon.
  • plaque flag
    2.7k
    While it may sometimes be a mistake to anthropomorphize the machine on the basis of a superficial similarity between its behavior and human behavior, it may likewise be a liability to dismiss the remarkable cognitive abilities that it clearly exhibits on the ground that they are being achieved by seemingly "mechanical" means at a lower level of analysis or implementation.Pierre-Normand

    :up:

    It seems to me that both systems are stupid the more one zooms in. Do you know Dennett's
    termite mound as a memetic host metaphor ? Anyway, there does seem to be a hardware and algorithmic difference of some kind, at least in terms of how much data is needed by each system. But I'm no expert. I'm coughing up reconstituted common sense like the bot, I guess. (Seriously, the bot does suggest how thoughtless much of our own talk is.)
  • Pierre-Normand
    2.4k
    I hope it manages to develop in a manner which is not damaging to the current climate of misinformation we are living in.Manuel

    This worry of yours seem to mirror a similar worry that had been voiced by @magritte in their post that appeared a few hours before yours. I also share the same concern. There is no doubting that GPT4 and similar language models can be used for great rhetorical effect. In our contemporary era of memetic tribes (Google: The Memetic Tribes Of Culture War 2.0), each living within their own epistemic bubbles, the language bots could be used to reinforce entrenched belief systems and also lend them specious authoritativeness merely on the ground of their superior I.Q.s.

    On the other hand, there are two considerations that are a source of optimism for me. Suppose there are two ideologically rival factions named PJW (progressive justice warriors) and AWR (anti-woke rationalists) that each are claiming the high ground in the ethical-political area of discourse. Two positive things might happen if those two tribes would attempt to harness the capabilities of the AI bots to craft masterful arguments in favor of their respective paradigms. The first thing is that by virtue of the very lack of opiniatedness of the bot, and its equal ability to argue both sides, its authority in underlying any one of those sides will be undermined. Secondly, the bots appear to have a rather unique ability, when urged to do so, to present the concerns on one side in a way that is intelligible to the inhabitants of the members of the opposite ideological camp by means of translating them into the language of the rival paradigm. Without quite serving as a fully neutral arbiter in ideological disputes, since there may not exist any such thing, the bots could nevertheless facilitate discussion and communication with a view of achieving some common grounds, casting light into formerly blind spots, and reducing crude misunderstandings.
  • Manuel
    4.2k


    We can't help anthropomorphizing. What we can avoid is that in the anthropomorphizing, we actually believe that this machine has cognition, or even cognitive abilities. We have trouble trying to get a good idea of how human being's cognitions works. If we have significant troubles with ourselves, how can we expect to get an insight into something that isn't even biological?

    Granted, your point of not playing down the general achievement of these programs is quite correct. I personally don't have a principled problem against "reductionism" - when it works and when it makes sense to apply, let's do so. If not, then we don't.

    It's likely that the basic element(s) of the mind is the repetition of something simple, which compounds into something very complex and multi-faceted. But one thing is what we take to be, commonsensically, "the way we (think) we think" and another thing is the way we actually think, which we cannot access in experience. My fear is that we are taking the commonsense approach and concluding that these AI's have cognition.

    But I see your point, and it's true, we should be careful. I'm just skeptical about it as something that tells us about intelligence, but I fully admit that it is quite entertaining and useful, when used correctly. But then someone like you knows enough about these things to be on alert.



    Yes, that could be a source of optimism, IF people of opposing views are willing to engage in different ideas, instead of wanting to have what they already believe be reinforced. It can also be helpful to get say, a rough outline of general ideas.
  • Pierre-Normand
    2.4k
    It seems to me that both systems are stupid the more one zooms in. Do you know Dennett's
    termite mound as a memetic host metaphor ?
    green flag

    I wasn't familiar with this particular metaphor from Dennett's. I'll look it up. It also evokes Leibnitz's windmill analogy and Dennett's own intentional stance/physical stance distinction, as well as his warning not to conclude that when A has been explained in terms of B, that A has necessarily been explained away.
bold
italic
underline
strike
code
quote
ulist
image
url
mention
reveal
youtube
tweet
Add a Comment

Welcome to The Philosophy Forum!

Get involved in philosophical discussions about knowledge, truth, language, consciousness, science, politics, religion, logic and mathematics, art, history, and lots more. No ads, no clutter, and very little agreement — just fascinating conversations.