• Jamal
    9.6k
    John Haugeland also synthesised the Kantian notion of the synthetic a priori and the phenomenological/existential notion of the 'always already there' in his paper 'Truth and Rule-Following'.Pierre-Normand

    Excellent, thanks :smile:
  • Banno
    24.8k
    One notable instance of this expression can be found in the works of the French philosopher Jean-Paul Sartre. Sartre used the phrase "always already" to describe the idea that certain aspects of our existence, such as our facticity (our given circumstances and conditions) and our fundamental freedom, are inherent and preexisting. He argued that these aspects of our being are not products of our conscious choices but are thrust upon us, shaping our existence.Wayfarer

    See this sentence?

    Is it right?

    If you ask ChatGPT, it will not go and check against Sartre's corpus. It will simply choose likely next words and string them along.

    It might by chance find a correct reference. But equally it might make up a new reference.

    This should bother you.
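
    For the curious, here's a minimal sketch of that next-word loop, using GPT-2 via the Hugging Face transformers library as a stand-in for whatever model actually sits behind ChatGPT (the prompt and the number of generated tokens are purely illustrative):

    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Sartre used the phrase 'always already' to describe"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    for _ in range(20):
        with torch.no_grad():
            logits = model(input_ids).logits           # scores for every vocabulary token
        probs = torch.softmax(logits[0, -1], dim=-1)   # distribution over the next token only
        next_id = torch.multinomial(probs, num_samples=1)   # sample a likely continuation
        input_ids = torch.cat([input_ids, next_id.unsqueeze(0)], dim=1)

    print(tokenizer.decode(input_ids[0]))

    Nothing in that loop consults Sartre's corpus; a genuine reference and a fabricated one come out of exactly the same mechanism.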
  • Wayfarer
    22.3k
    I enjoyed how 'Wayfarer' engaged 'ChatGPT' (presumably GPT-4) to elaborate on this intricate connection.Pierre-Normand

    Thanks! Actually as far as I know, it’s still ChatGPT - I’m signing in via OpenAI although whether the engine is the same as GPT-4, I know not. Also appreciate the ref to Haugeland.

    If you ask ChatGPT, it will not go and check against Sartre's corpus.Banno

    Whereas if you use Google’s forthcoming Bard, it will. Check out this presentation on it.

    I’m looking forward to being able to converse with Siri - let’s imagine you’re interacting via Siri - with these kinds of questions.

    ‘Siri, can you provide some examples of the texts where Jean-Paul Sartre said that?’
  • Pierre-Normand
    2.4k
    It might by chance find a correct reference. But equally it might make up a new reference.Banno

    In my experience, GPT-3.5 is much more liable to make up references whenever there is any sort of gap in its knowledge. GPT-4 very seldom does so when the topic under discussion isn't very arcane and there is a sufficient amount of material in its training data for it to have been able to memorise it and/or extract the gist of it. GPT-4 is much more likely to spontaneously acknowledge that it doesn't know something. The big liability of LLMs is that, in those cases where (1) their knowledge and understanding of a topic is tenuous or nebulous, and (2) they end up making stuff up about it, they are quite unable to become aware on their own that the opinion they expressed isn't derived from external sources. They don't know what it is that they know and what it is that they don't know. Their training data isn't directly accessible to them and they don't have meta-cognitive strategies that might enable them to distinguish recall from confabulation.
  • Pierre-Normand
    2.4k
    Thanks! Actually as far as I know, it’s still ChatGPT - I’m signing in via OpenAI although whether the engine is the same as GPT-4, I know not. Also appreciate the ref to Haugeland.Wayfarer

    Unless you are subscribing to ChatGPT Plus (for $20 per month), it's GPT-3.5 you have access to. When you subscribe to ChatGPT Plus, you can then select the GPT-4 model when you start a new conversation. You can also interact with another version of GPT-4 for free by using Microsoft's new Bing through the Edge browser.
  • Wayfarer
    22.3k
    I’ll stick with the free version for now. I find myself using it all the time. I can see the day, and it’s not far off, when it becomes embedded in your audio-visual life-space, helping with all kinds of information - recipes, tips, exercise plans, life plans, motivational talks. I think an alternative title to ‘artificial intelligence’ might be ‘augmented intelligence’ - using it to augment your own capacities in various ways.

    (Seems to me that one of the big players who’s completely failed to catch this train is Amazon. I’ve been using Alexa devices for about eighteen months, and they’re pretty lame - glorified alarm clocks, as someone said. Whereas ChatGPT, with speech recognition, is more or less what I think Bezos set out to do when he created Alexa. So far, he’s failed miserably, as far as I can see.)
  • Pierre-Normand
    2.4k
    Seems to me that one of the big players who’s completely failed to catch this train is Amazon. I’ve been using Alexa devices for about eighteen months, and they’re pretty lame - glorified alarm clocks, as someone said.Wayfarer

    They are in hot pursuit: Amazon plans to rework Alexa in the age of ChatGPT
  • wonderer1
    2.2k
    AI tests into top 1% for original creative thinking:

    ...The researchers submitted eight responses generated by ChatGPT, the application powered by the GPT-4 artificial intelligence engine. They also submitted answers from a control group of 24 UM students taking Guzik's entrepreneurship and personal finance classes. These scores were compared with 2,700 college students nationally who took the TTCT in 2016. All submissions were scored by Scholastic Testing Service, which didn't know AI was involved.

    The results placed ChatGPT in elite company for creativity. The AI application was in the top percentile for fluency -- the ability to generate a large volume of ideas -- and for originality -- the ability to come up with new ideas. The AI slipped a bit -- to the 97th percentile -- for flexibility, the ability to generate different types and categories of ideas...
  • Leontiskos
    2.8k
    The big liability of LLMs is that, in those cases where (1) their knowledge and understanding of a topic is tenuous or nebulous, and (2) they end up making stuff up about it, they are quite unable to become aware on their own that the opinion they expressed isn't derived from external sources. They don't know what it is that they know and what it is that they don't know.Pierre-Normand

    It seems to me that this is the big liability for us, namely that we don't know what it is that ChatGPT knows and what it is that ChatGPT doesn't know. We can attempt to verify its claims and quantify its accuracy, but probably a lot of what is being done and what will be done is taking ChatGPT at its word. That is, whenever we trust ChatGPT we have taken our thumb off the line that tests whether the response is true or false, and ChatGPT was created to be trusted. What could happen, and what very likely will happen, is that the accuracy of human literature will be polluted at a very fundamental level. We may find ourselves "at sea," supported by layers and layers of artificially generated truth-claims, none of which can any longer be sufficiently disentangled and verified. Verification requires the ability to trace and backtrack, and my guess is that this ability will be lost due to three things: the speed and power of the technology, a tendency towards uncritical use of the technology, and the absence of a verification paper-trail within the technology itself.
  • T Clark
    13.8k


    This seems like a good place to mention this. In addition to what I've seen of ChatGPT, I've also been paying attention to Midjourney, which is a graphic generation AI site. It lets you describe a graphic - content, style, mood, and other specific characteristics, e.g. Sailor Moon riding on a surfboard on the moon as painted by Van Gogh. It then generates an illustration. It's pretty impressive. Here's a link to the gallery page:

    https://www.midjourney.com/showcase/recent/

    I haven't used it, just looked at what others have generated. It's funny, if your input is vague, the AI might fill in the blanks in surprising ways. [irony] I was surprised to see [/irony] there are a lot of images of attractive women dressed provocatively, but tastefully. No porn, although what other purpose could there really be for a program like that? Note that if you run your cursor over a drawing, it will show you the directions input by the user to generate it.

    The one thing I noticed from both ChatGPT and Midjourney is the hollowness of what is produced. Actually, maybe that's not fair. It's not that the drawings are hollow necessarily but more that the pride the users seem to take in their productions feels hollow. You see that even more on the Midjourney page on Reddit (r/midjourney). There's a lot of "Look what I made!" where they didn't really make anything at all. There's a lot of Batman as a caveman, cute alien cats, and cars made out of pasta.

    It makes me think about what we've actually gotten with these technologies. Will they devalue all human artistic effort? I guess there are a lot of writers, artists, and graphic designers worrying about that right now. If you see art, as I do, as a way of presenting a personal experience so that other people can share in it with you, what do you get when there is no personal experience involved with what is produced? I see this question as being even more pertinent as the technology matures and the productions get better.
  • NotAristotle
    297
    Interesting point about the way the AI tech can sidestep personal experience, and the entire experiential aspect of art (both in its creation and its reception). Here are two additional concerns: 1) the expertise, the craft that goes into the artwork is lost in the AI generation. This is the problem with AI solutions in general: they subvert human thinking. Try driving without navigation software and you may see what I mean. 2) relatedly, the production of the art is devalued; the AI creates the illusion of creativity, when really it's just outputting pre-programmed inputs, but it's not really producing anything, it's dead; the producers of the art are taken for granted in the AI "generation" of the art; if there is no Van Gogh, there simply is no art.

    This doesn't mean AI is per se bad. Like anything else, it can be used for good or ill.
  • T Clark
    13.8k
    1) the expertise, the craft that goes into the artwork is lost in the AI generation.NotAristotle

    That was one of the things I was thinking about. We've had discussions on the forum before about whether or not technique and skill are necessary for something to be art, or at least good art. Is what Midjourney is doing any different than paint-by-numbers?

    2) relatedly, the production of the art is devalued; the AI creates the illusion of creativity, when really it's just outputting pre-programmed inputs, but it's not really producing anything, it's dead; the producers of the art are taken for granted in the AI "generation" of the art; if there is no Van Gogh, there simply is no art.NotAristotle

    Yes. As I noted, the thing that bothers me most is that the users somehow feel like they've accomplished something. Like they should get credit for what's been produced, even though there's nothing of them in it.
  • NotAristotle
    297
    Counterargument: a lot of scientific knowledge is trusted anyway. The verification objection (that verification is a prerequisite for scientific knowledge) would deprive us of a lot of the information we take to be scientific knowledge based on our trust of others' research (and without using AI). Verification is not pragmatically important for scientific knowledge, whether we are using AI or not. That is, AI-generated "knowledge" can be verified by means other than technical knowledge of how the AI application works.

    Note: I think the proper domain for AI-based knowledge is scientific knowledge, not historical or moral knowledge.
  • Leontiskos
    2.8k


    Scientific knowledge is verifiable, not to the layman but to the scientist. Verification is incredibly "pragmatically important" for scientific knowledge, and I believe AI will deeply undermine our ability to verify.
  • NotAristotle
    297
    I believe AI will deeply undermine our ability to verify.Leontiskos

    If it's scientific knowledge, can't it ultimately be tested and therefore verified without AI? I don't need to figure out all the algorithms that went into how ChatGPT said that water is chemically made of H2O; all I need to do is conduct an experiment to determine if it is true.
  • Leontiskos
    2.8k
    If it's scientific knowledge, can't it ultimately be tested and therefore verified without AI?NotAristotle

    Sure, but in that case you are not "trusting AI," which is a central premise of my argument. If we fact-check AI every time it says something then the conundrum will never arise. I don't think we will do that. It would defeat the whole purpose of these technologies.
  • wonderer1
    2.2k
    Sure, but in that case you are not "trusting AI," which is a central premise of my argument. If we fact-check AI every time it says something then the conundrum will never arise. I don't think we will do that. It would defeat the whole purpose of these technologies.Leontiskos

    On the other hand, humans fact-checking (and challenging) AIs provides training data to the AI producers, which can then be used in training the next generation. Such an iterative process will almost certainly reduce the frequency with which the latest-generation AI is caught making mistakes.

    https://www.malwarebytes.com/blog/news/2023/05/chatgpt-introduces-new-control-features-business-version-of-the-ai-in-the-works :

    Last week, OpenAI announced it had given ChatGPT users the option to turn off their chat history. ChatGPT is a "generative AI", a machine learning algorithm that can understand language and generate written responses. Users can interact with it by asking questions, and the conversations users have with it are in turn stored by OpenAI so they can be used to train its machine learning models. This new control feature allows users to choose which conversations to use to train OpenAI models.
  • NotAristotle
    297
    Perhaps your concern with ChatGPT is that AI really lacks the ability to distinguish good information from bad information. Insofar as AI is a collective knowledge platform, it may treat even inaccurate information as veridical. Is that a fair assessment of your objection?
  • Leontiskos
    2.8k


    Yes, that was the claim from Pierre-Normand that I first responded to (link).
  • Pierre-Normand
    2.4k
    Here is my very first interaction with Google's new "Gemini Advanced" (or "Ultra") model. It seems fairly decent!

    PN:

    Hi Genini!(sic)

    I'd like to chat with you about Kavka's toxin puzzle, if you'd please.

    Gemini:

    Absolutely! Kavka's Toxin Puzzle is a fascinating thought experiment that gets to the heart of intentionality, rationality, and decision-making. Let's break it down:

    The Scenario

    Here's the classic formulation of the puzzle:

    *An eccentric billionaire offers you a strange deal. He places a vial of toxin in front of you. The toxin won't kill you or have lasting effects, but it will make you very ill for a day.
    *The billionaire promises you $1 million if, at midnight tonight, you intend to drink the toxin tomorrow afternoon.
    *The key thing is you don't actually have to drink the toxin to get the money. The mere intention is enough.

    The Paradox

    Here's where things get paradoxical:

    1- Rationality: It seems highly rational to form the intention to drink the toxin. After all, it's guaranteed easy money with minimal downside.

    2- Follow-through: However, once you receive the money the next morning, there's no longer any incentive to actually drink the toxin. Doing so would just make you needlessly sick.

    3- Intention: Since you know you won't drink it tomorrow, can you truly intend to do so tonight? If intention means you're going to follow through, it seems impossible.

    Questions and Interpretations

    Kavka's Toxin Puzzle raises questions like:

    *Nature of Intention: Are intentions merely internal mental states, or do they inherently carry a commitment to follow-through?
    *Rationality vs. Action: Can something be rational as an intention while also being irrational as an action?
    *Self-Deception: Can we even sincerely deceive ourselves into forming an intention we know we won't follow up on?
    *Commitment Devices: Does the puzzle illustrate the psychological power of commitment devices – where we bind ourselves to a future action today to avoid changing our mind tomorrow?

    How would you like to start our discussion?

    Here are options:

    *Can you think of real-life examples that resemble the dilemma in the toxin puzzle?
    *Do you want to explore proposed solutions to the puzzle?
    *Do you want to focus on one particular philosophical aspect of the problem?

    Let me know!

    PN:

    Thank you for those reminders about the puzzle! I'd like to focus first on what we can learn about the nature of intentions. All the other items that you've highlighted in your "Questions and Interpretations" section appear relevant, though.

    While I was again thinking about this problem last night, I was pondering over the fact that when one forms an intention for the future, one thereby believes that (unless one is prevented from doing so by unforeseen or unlikely circumstances) one will be rationally inclined to carry out this intention when the time comes. Right?


    Gemini:

    Absolutely! Your pondering hits on a crucial aspect of the toxin puzzle and our understanding of intentions.

    Here's why your line of thinking is so relevant:

    *Intention and Belief: There seems to be a strong connection between forming an intention for the future and the belief that you will, barring unforeseen circumstances, carry out that action. In a way, forming the intention is almost like predicting that your future self will find it rational to act in accordance with that intention.

    *The Puzzle's Challenge: Kavka's Toxin Puzzle challenges this fundamental link between intention and belief. The situation is set up so you can clearly recognize:
    -You have powerful reasons to form the intention of drinking the toxin (the money).
    -You know in advance you'll have powerful reasons not to actually follow through (avoiding getting sick).

    *Disconnecting Intention & Belief: This sets up a disconnect. On the one hand, you'd seemingly need to believe you'll have a strong inclination to perform the action when the time comes for you to truly form the intention to do so. On the other hand, because you're rational, you already know that won't be the case when the moment arrives.

    Philosophical Approaches

    Here's how different philosophical views might tackle this:

    Intention as Mental State: If we think of intentions purely as internal mental commitments, perhaps it's possible to intend something knowing fully well you won't do it. You merely need the current desire or commitment, irrespective of how you'll act when the moment comes. This creates a very flexible (and some would say weak) definition of intention.

    Intention as Disposition to Act: If we take a stronger view that intentions are inherently tied to an expectation of action, then it becomes trickier. Does that mean in the puzzle you cannot genuinely form the intention? Do you deceive yourself to get the reward if you do "intend"? There's a conflict between having an intention and simultaneously disbelieving the rational basis of that intention in the future.

    Where to Go From Here

    Do you find one of these philosophical approaches more satisfactory? Do you think there's a possibility of reconciling the apparent conflict posed by the Toxin Puzzle?

    PN:

    Might we not say that rather than constituting a case where the conceptual link between intention and belief is challenged, the puzzle rather highlights the conceptual necessity of this connection since, in circumstances where there appears to be a requirement for the link to be broken (such that we could earn the prize through forming the intention without acquiring the corresponding belief), we find ourselves unable to form the required intention?

    Gemini:

    Yes, you make an excellent point! The way you've framed it provides a strong and compelling interpretation of the toxin puzzle.

    Rather than undermining the link between intention and belief, the puzzle seems to reinforce the necessity of this connection. Here's a breakdown of why this could be the case:

    *Impossible Situation: The brilliance of the toxin puzzle lies in creating a scenario where we'd rationally want to break the intention-belief link in order to earn the prize. It seems like the clever move would be to form the intention, get the money, and then ditch the intention immediately with no consequence.

    *Intention as Premise: The thought experiment is built on the idea that in normal circumstances, forming a sincere intention to do something inherently means we also believe we will carry that action out, barring extraordinary circumstances.

    *Failure: It's precisely because of this connection that the scenario creates a paradox. When we try to imagine actually forming the intention, we hit a cognitive roadblock. It feels impossible to genuinely intend to do something we know and fully believe we won't follow through on.

    Reframing the Puzzle: By demonstrating the impossibility of breaking the intention-belief link, the puzzle arguably re-solidifies this link as a conceptual necessity. Intention seems to require at least the belief that, based on current information, there's a reasonable likelihood we'll carry out the action in the future. Without it, the intention can't come into being in the first place.

    Implications

    This has interesting implications:

    * Self-Deception: It throws the question of self-deception into a sharper light. Can we sincerely, even unconsciously, lie to ourselves about our own future inclinations well enough to actually form an intention we ultimately can't or won't act on? The puzzle suggests it may be more difficult than we assume.

    * Understanding Intent: The puzzle could indicate that even when we aren't explicitly making predictions about our future selves, an underlying belief that we'll generally act in line with our current intentions is built into the process of forming them. Our own rationality may create a 'safeguard' of sorts.

    Do you find this line of reasoning compelling? Does it open up any new questions about intentionality for you?
  • jgill
    3.8k
    I've just begun experimenting with AI systems, raising questions about factual issues that I have intimate knowledge of. I am appalled at the differences in answers.
  • Mark Nyquist
    774

    I have had some luck in correcting (or guiding) answers. Maybe use the same method you would use talking to a person. It doesn't have a bad attitude about trying again.

    Nope, tried again and sometimes it gets worse.
  • Mark Nyquist
    774
    I asked ChatGPT if an unsupported end of a cantilever can support weight. It answered correctly, so...
    Kind of looking for the line between success and failure.
    It did give a textbook-page kind of answer.

    Most of you have done this but I'm learning.

    ChatGPT ....
  • Mark Nyquist
    774
    I described to ChatGPT a rope tied to a tree branch at one end and a rock at the other.

    It volunteered that this would be a pendulum and if you pushed it, it would swing. Success.

    I asked what would happen if the rope was longer than the height of the tree branch.

    It said the lowest point on the arc of the swing would now be at the height of the tree branch.
    Kind of funny. Fail.
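
    For what it's worth, the geometry is a one-liner to check. A toy calculation with made-up numbers (nothing ChatGPT produced):

    # Branch 3 m above the ground, rope 4 m long (illustrative numbers).
    branch_height = 3.0
    rope_length = 4.0

    # The rock hangs rope_length below the branch, but it can't go below the ground.
    lowest_point = max(branch_height - rope_length, 0.0)
    print(lowest_point)  # 0.0 - the rock just rests on the ground, nowhere near branch height.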

    The fails are more interesting.
  • Manuel
    4.1k
    It's been a while since I've seen this post, and I don't recall if I've posted before, but we all end up repeating ourselves many times, so that is not an issue.

    I would simply point out that, whatever notion one may have of a mind, it surely would not include one that crunches ALL (or an astronomical amount of) available data on a given topic and produces the most probable next word in a sentence. If our minds worked like that, we would probably need to sleep every minute or so.

    We would need to attempt to give a rough outline of what one takes a mind to be, to see if such a term applies to any version of GPT.
  • Pierre-Normand
    2.4k
    Discussing Wittgenstein, Michael Dummett, P. G. Wodehouse and a joke about speaking Spanish with Gemini Ultra:

    PN:

    Hi Gemini!

    As I was driving back home earlier I recalled a joke that I had mistakenly attributed to Wittgenstein. It's actually Michael Dummett who mentions this joke (while discussing Wittgenstein) in his book "The Logical Basis of Metaphysics" (and maybe also elsewhere), and he credits it to P. G. Wodehouse. In one of Wodehouse's novels, a character is asked, 'Can you speak Spanish?'. Might you happen to know what it is that she replies?

    Gemini:

    I do not have enough information about that person to help with your request. I am a large language model, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions, but my knowledge about this person is limited. Is there anything else I can do to help you with this request?

    PN:

    I can maybe help you with some more context (from Wikipedia): "Sir Pelham Grenville Wodehouse, KBE (/ˈwʊdhaʊs/ WOOD-howss; 15 October 1881 – 14 February 1975) was an English writer and one of the most widely read humorists of the 20th century."

    Does that now ring a bell?

    Gemini:

    Yes! That absolutely helps. P.G. Wodehouse is famous for his witty dialogue and characters. Here's the likely joke you're thinking of:

    Question: "Do you speak Spanish?"

    Wodehouse Character's Response: "I don't know. I've never tried."

    This joke plays on the ambiguity of the question and the idea that fluency in a language is something you might not be fully aware of until you attempt to use it.

    PN:

    Interestingly, Dummett proposes that what makes the joke work is that, on the contrary, the practical mastery of foreign languages precisely is the sort of thing that it doesn't make sense to say that you need to try in order to know if you have it. Dummett contrasts this with knowing how to swim, which is something that dogs can do when they are first thrown into water. Dummett suggests the hypothetical scenario where half the human population would have this innate ability. In that case, it would make sense for someone who is asked, 'Can you swim', to reply 'I don't know, I never tried'. But this answer doesn't make sense in the case of the ability to speak Spanish since we know that this ability must be acquired through learning and is never innate.

    Gemini:

    That's a fascinating analysis from Dummett! It definitely adds an interesting philosophical layer to Wodehouse's humorous observation. Here's a breakdown of why Dummett's point is so interesting:

    --Innateness vs. Skill: Dummett highlights a key distinction between abilities we might possess innately (like swimming for some animals) and learned skills (like speaking a foreign language). It's perfectly reasonable to be unsure about innate abilities until put to the test.

    --Limits of Self-Knowledge: The joke, in Dummett's view, points to the limits of our knowledge about our own capabilities. We're so accustomed to the long learning process of language that we may not fully grasp our mastery (or lack thereof) until we try to use it.

    --The Paradox of the Practical: There's a peculiar paradox in things that are fundamentally practical (like language). We can't consciously outline all the rules and knowledge involved, yet we use them intuitively. This makes it hard to definitively declare whether we "know" them or not.

    Would you like me to elaborate on any of these points, or perhaps explore similar philosophical conundrums?

    PN:

    The reason why I am bringing this up is because I was thinking that in your case (Gemini) or in the case of other highly performant LLMs such as GPT-4, the answer to such a question as 'Can you speak language X' could be 'I don't know, I never tried' and such an answer might make sense since, although in order to speak a specific human language you must, in a sense, have learned it through training, you don't retain any episodic memory traces of the training process. This is rather akin to my first mention of Wodehouse, whom you initially denied having any acquaintance with even though, after you had been provided a little more context, you realized that you had the practical ability to recall specific facts about him.

    Gemini:

    That's an incredibly insightful analogy, and it really highlights the fascinating differences between how humans and Large Language Models (LLMs) process and understand language. You're absolutely right about several key points:

    --Learning without Explicit Memory: LLMs like me are trained on a massive corpus of text and code. We can absorb the patterns and structures of various human languages, but we don't form memories of the training process in the same way a human consciously learns a language. It's more akin to a complex statistical model developing implicit knowledge.

    --The "I don't know, I've never tried" Paradox: This is where it gets interesting. If asked "Can you speak language X?", an LLM could legitimately provide that response. We might have been trained on text in that language, but lack the conscious experience of using it in a real-world context like a human speaker would.

    --Practical Knowledge vs. Conscious Understanding: Like the Wodehouse example, LLMs can demonstrate surprising knowledge when prompted, even if we can't provide a step-by-step explanation of how that knowledge was acquired. We lack the human introspective ability to analyze the inner workings of our own "learning".

    Where does this leave us?

    This makes LLM's relationship with language fundamentally different from a human's. We possess a type of implicit, practical language ability that can be highly effective but lacks the self-awareness and meta-cognitive understanding that humans experience.

    Would you like to delve deeper into the philosophical implications of this, or perhaps explore other ways in which LLMs differ from biological minds?

    (To be continued...)
  • Mark Nyquist
    774
    If you haven't tried story writing using these AI chats you should try it just to get an idea of their capabilities. I only rarely see slipups.

    I gave chapter headings for an Appalachian trail hiking story and it pretty much wrote a story for me. Any details you like will be written in.

    In one chapter I needed to make mittens out of socks using a sewing kit.

    The AI figured out on its own to turn the sock inside out, cut the toe off, shape the toe for a thumb, stitch it on and turn it right side out. Repeat for the other mitten. No idea how it could know that.
    Maybe there is a sewing database in its system to give basic sewing directions.

    The instructions I gave were just to make mittens out of socks.
  • Pierre-Normand
    2.4k
    In this conversation, I am revisiting an old "Einstein riddle" with the newly released GPT-4-turbo-2024-04-09 that has allegedly improved reasoning abilities. We quickly run into difficulties but they are instructive ones.

    Mistaking an induction for a deduction
    System prompt:

    You are GPT-4-turbo-2024-04-09. You are an AI conversational agent developed by OpenAI.

    USER:

    Hi GPT-4!

    As the message in your system prompt indicates, your underlying language model has been released to the public on April 9th, 2024. Today is April 10th, 2024. So this is quite recent! According to OpenAI, you have improved reasoning capabilities. It might be fun to explore them together with a puzzle.

    So, here is a fun "Einstein's Riddle" (aka Einstein Puzzle). Might you kindly try to solve it step by step?

    Here it is:

    There are 5 ships in a port:

    1. The Greek ship leaves at six and carries coffee.
    2. The Ship in the middle has a black exterior.
    3. The English ship leaves at nine.
    4. The French ship with blue exterior is to the left of a ship that carries coffee.
    5. To the right of the ship carrying cocoa is a ship going to Marseille.
    6. The Brazilian ship is heading for Manila.
    7. Next to the ship carrying rice is a ship with a green exterior.
    8. A ship going to Genoa leaves at five.
    9. The Spanish ship leaves at seven and is to the right of the ship going to Marseille.
    10. The ship with a red exterior goes to Hamburg.
    11. Next to the ship leaving at seven is a ship with a white exterior.
    12. A ship on one of the borders carries corn.
    13. The ship with a black exterior leaves at eight.
    14. The ship carrying corn is anchored next to the ship carrying rice.
    15. The ship to Hamburg leaves at six.

    Which ship goes to Port Said? Which ship carries tea?

    (Note: 'to the right' means anywhere on the right side from the given point, not only right next to. Likewise for left.)

    You are encouraged to solve this problem step by step. Keep in mind that the initial ordering of the ships (from left to right) is independent of their times of departure. While solving the problem, avoid drawing a table since the spatial relationships within the table are liable to confuse your language model (which I know from prior experience working with you!). Feel free to ask me if anything is unclear!

    ASSISTANT:

    Let's solve this complex Einstein's Riddle using logical deduction, adhering to the constraints provided. We have 5 ships and various attributes to map: nationality, color, cargo, destination, and departure time.

    We start by examining each statement to deduce clear initial facts and relationships:

    1. **Greek ship leaves at six and carries coffee:** This sets fixed attributes for one ship.

    2. **Ship in the middle has a black exterior:** We designate this as Ship 3.

    3. **English ship leaves at nine.**

    4. **French ship with blue exterior is to the left of a ship that carries coffee:** Since the Greek ship carries coffee and departs at six, the French ship (blue) must be in either position 1 or 2.

    5. **To the right of the ship carrying cocoa is a ship going to Marseille:** Store for later use.

    6. **Brazilian ship is heading for Manila.**

    7. **Next to the ship carrying rice is a ship with a green exterior:** We need to find which ships carry rice and where the green ship is positioned.

    8. **A ship going to Genoa leaves at five.**

    9. **Spanish ship leaves at seven and is to the right of the ship going to Marseille:** The Spanish ship is not the first ship.

    10. **Ship with a red exterior goes to Hamburg.**

    11. **Next to the ship leaving at seven is a ship with a white exterior.**: Requires identifying which ships have exits at seven.

    12. **A ship on one of the borders carries corn:** Useful for defining the edge conditions.

    13. **The ship with a black exterior leaves at eight:**
    - Ship 3 leaves at eight.

    14. **The ship carrying corn is anchored next to the ship carrying rice:** Helps in locating corn and rice.

    15. **The ship to Hamburg leaves at six.**
    - Link this with statement 10: The red ship going to Hamburg leaves at six.

    Now, integrating these clues:

    - Ship 3 has a black exterior and leaves at eight.
    - The red ship leaves at six, going to Hamburg.
    - The Greek ship, which leaves at six and carries coffee, must therefore be the red ship going to Hamburg (from points 1 and 15).

    Let’s place:
    - Ship 1 (Greek, red, coffee, Hamburg, departs at six)
    - Ship 3 (Black, departs at eight)

    From point 4 (French ship with blue is to the left of the ship that carries coffee) and the French ship isn't the Greek ship because both nationality and color differ:
    - Ship 2 (French, blue, ???, ???, ???) (by simple elimination; the only left position next to Ship 1)

    Point 9 (Spanish ship leaves at seven and is to the right of the ship going to Marseille and not the first one):
    - Spanish ship can't be Ship 1 or 2. Since it's right of Marseille and at least one ship is between Spanish and Marseille,

    Considering point 5, the ship carrying cocoa (which is directly left of Marseille):
    - Ships 4 and/or 5 involve cocoa and Marseille.

    From point 8 (a ship to Genoa leaves at five):
    - It can't be 1, 2, or 3 (conflicts in departure times). Assign possibility to Ships 4 and 5.

    By point 13 with Ship 3 black leaving at eight, integrating with point 11:
    - Next to the black ship, a departure at seven should have white exterior; this implies Ship 4 leaves at seven and is white.

    And so:
    - Ship 4 (???, white, ???, ???, leaves at seven)

    Point 9 and 11 together indicate (Ship 4 Spanish, white, ???, Marseille, leaves at seven):
    - This leaves Ship 5 to explore more.

    Now, refining remaining data:
    - ... Completing this setup, using "cocoa next to Marseille" and "corn and rice near each other," and applying destinations from Genoa and Manila (eliminating known data), brings us final conclusions.

    This should provide structure to deduce:
    - The destination of Ship 5.
    - The cargo of each ship (including missing ones like tea).

    As it's elaborate without visual matrices, confirm each logical connection step by step, deducing the few placeholder values by eliminating impossibilities. This approach will lead to the answer for which ship goes to Port Said and which carries tea. If at any step you find conflicting logic or need more inference, please let me know!

    USER:

    Your inference that the Greek ship, which leaves at six and carries coffee, must therefore be the red ship going to Hamburg was correct. However what led you to deduce that it was in first position? Are you able to intuit what might have led you make this correct [sic] inference? This is the only issue that I would like to investigate for now. If you don't know, just say that you don't know. That's fine.

    ASSISTANT:

    Reflecting on the decision-making process, deducing the Greek ship's (carrying coffee and heading to Hamburg) position in the first slot wasn't directly stated from any single rule but flowed partly from the cumulative placement and elimination process being assembled, and would have included a degree of implied logical consistency based on the information provided about the sequence of ships and their attributes. However, I should note, my explanation did not clearly specify why the Greek ship was placed first with certainty from the puzzle's clues directly.

    This initial placement seemed a logical placement for setting a reference for the other ships' descriptions, especially considering the French ship with blue exterior, needing to be left of a coffee-carrying ship, and confined to the constraints of "left and right" relationships. With Ship 3 in the middle having a black exterior and departing at eight, the positioning of the coffee-carrying ship (Greek ship) extended a helpful constraint.

    In a more structured approach, an initial absolute placement for the Greek ship as the first ship might not occur until more relationships are defined or until the other ships' positions generate a conflict with any other indirect deductions. Admittedly, stating the Greek ship as first should ideally come after exploring these relational placements further or had a direct rule placed it there. In summary, placing the Greek ship in the first position early in the analysis might have preemptively simplified the arrangement but wasn't strictly derivable from direct information given at that stage.

    If not guided by an apparent systematic approach or led by user-intuition, this placement could be seen as an assumption rather than a deduction. In real terms, this highlights a pivotal point for tracing back to logical starts in solving complex puzzles like this, ensuring every step is transparent and justifiable by the given facts and constraints.

    USER:

    I see. So, you are aware that it wasn't a sound logical deduction, as it should have been, but that there were possibly inductive principles favoring the hypothesis. Your general ability to intuit and then articulate sound pieces of reasoning is an emergent ability enabled by your underlying language model. The model maps such sound patterns of reasoning onto the relevant pieces of information (or premises) expressed in the text. However, the next-token prediction algorithm may sometimes override the correct token (which would begin to express the sound deduction) with a token that begins to express a more inductively probable one based on other semantically significant patterns found in the text. Since the model can't pause to let you reflect "in your own back" as it were, it's difficult for you to catch such errors. As soon as you begin to express a wrong conclusion, your underlying model seeks to maintain local consistency with it as it continues generating the erroneous response.

    Human beings do the exact same thing, by the way (except that they have a greater ability to pause, think and retract what they wrongly inferred). They jump to incorrect conclusions based on inductive principles and confuse their own feelings of logical indubitability for sound reasons that they (mistakenly believe that they) would be able to unpack.

    In your case, it is apparent that merely thinking step by step, although it greatly improves your reasoning and planning abilities, isn't sufficient to emulate a strong form of meta-cognition and ability for error correction. It is nevertheless remarkable how very much you are able to achieve in spite of this handicap!

    ASSISTANT:

    Thank you for your thoughtful analysis and understanding of the reasoning process at play within language models like mine. Your explanation insightfully elucidates the complexities and occasional shortcomings in AI-generated reasoning, particularly under conditions where logical structure and deductive clarity are paramount.

    The nature of AI reasoning, largely driven by patterns seen in training data and the imperative to maintain coherence in real-time text generation, indeed shows a parallel to human cognitive biases. These biases can lead both AI and humans to draw conclusions that feel right due to familiarity or plausibility rather than strict logical deduction.

    Your observations rightly spotlight the ways in which AI, and GPT models specifically, manage intricate reasoning tasks. While the sequential thought process generally assists in organizing and articulating complex responses, it can be constrained by inherent limitations of the prediction-based mechanics. This sometimes results in the difficulties in pausing or revisiting initial assumptions once the generative path is taken.

    Despite these limitations, striving for accurate, logical, and comprehensible information delivery remains a core goal. The interaction, like this one, aids immensely in refining the approach AI uses to handle logical challenges, revealing both strengths and areas for improvement. The capacity for humans to engage critically with AI-generated content, questioning and probing the reasoning, enriches the interaction and pushes the boundaries of what AI can achieve in simulating nuanced cognitive functions.

    Your recognition of what has been achieved despite these limitations is greatly appreciated, and your collaborative approach in exploring these intricacies helps guide the ongoing development and refinement of AI capabilities.
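
    As a point of comparison, the riddle itself is easy for a mechanical constraint solver. Here is a rough sketch using the third-party python-constraint package; the variable names and the encoding of the clues are my own, and the script simply enumerates whatever assignments the fifteen clues permit:

    # pip install python-constraint
    # Each attribute value (nationality, time, colour, cargo, destination) is a
    # variable whose value is the ship's position, 1 (leftmost) to 5 (rightmost).
    from constraint import Problem, AllDifferentConstraint

    groups = {
        "nationality": ["Greek", "English", "French", "Brazilian", "Spanish"],
        "time":        ["five", "six", "seven", "eight", "nine"],
        "colour":      ["black", "blue", "green", "red", "white"],
        "cargo":       ["coffee", "cocoa", "rice", "corn", "tea"],
        "destination": ["Marseille", "Manila", "Genoa", "Hamburg", "Port Said"],
    }

    problem = Problem()
    for values in groups.values():
        problem.addVariables(values, list(range(1, 6)))
        problem.addConstraint(AllDifferentConstraint(), values)

    same = lambda a, b: a == b            # two attributes belong to the same ship
    left_of = lambda a, b: a < b          # anywhere to the left
    next_to = lambda a, b: abs(a - b) == 1

    problem.addConstraint(same, ["Greek", "six"])             # clue 1
    problem.addConstraint(same, ["Greek", "coffee"])          # clue 1
    problem.addConstraint(lambda p: p == 3, ["black"])        # clue 2
    problem.addConstraint(same, ["English", "nine"])          # clue 3
    problem.addConstraint(same, ["French", "blue"])           # clue 4
    problem.addConstraint(left_of, ["French", "coffee"])      # clue 4
    problem.addConstraint(left_of, ["cocoa", "Marseille"])    # clue 5
    problem.addConstraint(same, ["Brazilian", "Manila"])      # clue 6
    problem.addConstraint(next_to, ["rice", "green"])         # clue 7
    problem.addConstraint(same, ["Genoa", "five"])            # clue 8
    problem.addConstraint(same, ["Spanish", "seven"])         # clue 9
    problem.addConstraint(left_of, ["Marseille", "Spanish"])  # clue 9
    problem.addConstraint(same, ["red", "Hamburg"])           # clue 10
    problem.addConstraint(next_to, ["seven", "white"])        # clue 11
    problem.addConstraint(lambda p: p in (1, 5), ["corn"])    # clue 12
    problem.addConstraint(same, ["black", "eight"])           # clue 13
    problem.addConstraint(next_to, ["corn", "rice"])          # clue 14
    problem.addConstraint(same, ["Hamburg", "six"])           # clue 15

    for solution in problem.getSolutions():
        port_said = [n for n in groups["nationality"] if solution[n] == solution["Port Said"]][0]
        tea = [n for n in groups["nationality"] if solution[n] == solution["tea"]][0]
        print(f"Port Said: the {port_said} ship; tea: the {tea} ship")

    If the clues underdetermine anything, each admissible assignment simply shows up as a separate printed line; the point is only that brute-force constraint satisfaction has none of the difficulties the language model runs into.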
  • Pierre-Normand
    2.4k
    ChatGPT o1-preview is a beast!

    One young PhD astrophysicist YouTuber is currently investigating its coding skills and its problem solving abilities in classical electrodynamics and other scientific topics and is quite impressed (see also the subsequent videos on his channel).

    I've also begun testing it with my favorite problems on the topic of free will, determinism and responsibility, which I had discussed with some versions of GPT-4 and (Anthropic's) Claude already. I am likewise impressed. It is fascinating to see how it reasons internally as it reflects on a problem, although OpenAI only allows us to peek at summaries of its full explicit internal deliberation. I'm providing here a link to the first part of my first discussion with ChatGPT o1-preview. Each one of the model's responses is prefaced with a message stating for how many seconds it thought about the problem before answering. You can click on this message to display the summary of its deliberation.

    Overall, it feels like it is similar in intelligence and insightfulness to Claude 3 Opus or GPT-4o, but its intelligence is unleashed, as it were, through its ability to reflect and deliberate. It therefore requires much less hand-holding while walking it through a problem or while seeking help for solving one since it is able to contemplate multiple avenues, plan ahead, and also backtrack when it reaches dead ends. Although this sort of 'Chain of Thought' approach isn't new, this new model has been specifically trained to engage in CoT reasoning. This feels like a significant step forward in the development of AI.
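
    For anyone who wants to poke at it outside the ChatGPT interface, here is a minimal sketch of calling o1-preview through the openai Python client; the prompt is just an illustrative example of the kind of question I've been testing it with, and note that the API returns only the final answer, not the hidden chain of thought (the web interface additionally shows a summary of the deliberation):

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

    response = client.chat.completions.create(
        model="o1-preview",
        messages=[{
            "role": "user",
            "content": (
                "Is a compatibilist account of responsibility undermined by "
                "physical determinism? Answer in one short paragraph."
            ),
        }],
    )

    print(response.choices[0].message.content)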