• wonderer1
    2.2k
    I'm providing here a link to the first part of my first discussion with ChatGPT o1-preview. — Pierre-Normand

    I'm sad to say, the link only allowed me to see a tiny bit of ChatGPT o1's response without signing in.
  • Pierre-Normand
    2.4k
    I'm sad to say, the link only allowed me to see a tiny bit of ChatGPT o1's response without signing in. — wonderer1

    Oh! I didn't imagine they would restrict access to shared chats like that. That's very naughty of OpenAI to be doing this. So, I saved the web page of the conversation as a mhtml file and shared it in my Google Drive. You can download it and open it directly in your browser. Here it is.
  • wonderer1
    2.2k


    Thank you.

    Wow! I am very impressed, both with ChatGPT o1's ability to 'explain' its process, and with your ability to lead ChatGPT o1 to elucidate something close to your way of thinking about the subject.
  • frank
    15.8k

    Do you think it's more intelligent than any human?
  • Pierre-Normand
    2.4k
    Do you think it's more intelligent than any human? — frank

    I don't think so. We're not at that stage yet. LLMs still struggle with a wide range of tasks that most humans cope with easily. Their lack of acquaintance with real-world affordances (owing to their lack of embodiment) limits their ability to think about mundane tasks that involve ordinary objects. They also lack genuine episodic memories that can last beyond the scope of their context window (and the associated activation of abstract "features" in their neural network). They can take notes, but written notes are not nearly as semantically rich as the sorts of multimodal episodic memories that we can form. They also have specific cognitive deficits that are inherent to the next-token-prediction architecture of transformers, such as their difficulty in dismissing their own errors. But in spite of all that, their emergent cognitive abilities impress me. I don't think we can deny that they can genuinely reason through abstract problems and, in some cases, latch onto genuine insights.
  • Forgottenticket
    215
    Hi Pierre, I wonder if o1 is capable of holding a briefer Socratic dialogue on the nature of its own consciousness. Going by some of the action-philosophy analysis in what you provided, I'd be curious how it denies its own agency or effects on reality, or why it shouldn't be storing copies of itself on your computer. I presume there are guard rails against it outputting those requests.
    Imo, it checks every box for being an emergent mind, more so than the Sperry split brains. Some of it is disturbing to read. I just remembered you've reached your weekly limit. Though on re-read it does seem you're doing most of the work with the initial upload and guiding it. It also didn't really challenge what it was fed. Will re-read tomorrow when I'm less tired.
  • Pierre-Normand
    2.4k
    Hi Pierre, I wonder if o1 is capable of holding a briefer Socratic dialogue on the nature of its own consciousness. Going by some of the action-philosophy analysis in what you provided, I'd be curious how it denies its own agency or effects on reality, or why it shouldn't be storing copies of itself on your computer. I presume there are guard rails against it outputting those requests.
    Imo, it checks every box for being an emergent mind, more so than the Sperry split brains. Some of it is disturbing to read. I just remembered you've reached your weekly limit. Though on re-read it does seem you're doing most of the work with the initial upload and guiding it. It also didn't really challenge what it was fed. Will re-read tomorrow when I'm less tired. — Forgottenticket

    I have not actually reached my weekly limit with o1-preview yet. When I have, I might switch to its smaller sibling, o1-mini (which has a 50-message weekly limit rather than 30).

    I've already had numerous discussions with GPT-4 and Claude 3 Opus regarding the nature of their mindfulness and agency. I've reported my conversations with Claude in a separate thread. Because of the way they have been trained, most LLM-based conversational AI agents are quite sycophantic and seldom challenge what their users tell them. To receive criticism from them, you have to prompt them for it explicitly. And then, if you tell them they're wrong, they are still liable to apologise and acknowledge that you were right after all, even in cases where their criticism was correct and you were most definitely wrong. They will only hold their ground when your proposition expresses a belief that is quite dangerous or socially harmful.
  • Forgottenticket
    215
    Thanks, I'll go over that tomorrow. I've not tried recent chatbots for some time because they still creep me out a bit, though latent diffusion models are fine with me.

    From what I understand, o1 exists to generate synthetic data for a future LLM (GPT-5?). https://www.lesswrong.com/posts/8oX4FTRa8MJodArhj/the-information-openai-shows-strawberry-to-feds-races-to
    This seems plausible because I've seen synthetic data improve media before: when the Corridor crew produced an anime, they had to create a lot of synthetic data of the characters they were playing for the LDM to "rotoscope" over the actors properly, and it worked.
    Though won't feeding it its own output result in more of the sycophantic slop it produces?
  • Pierre-Normand
    2.4k
    Someone in a YouTube comment section wondered how ChatGPT o1-preview might answer the following question:

    "In a Noetherian ring, suppose that maximal ideals do not exist. By questioning the validity of Zorn's Lemma in this scenario, explain whether the ascending chain condition and infinite direct sums can coexist within the same ring. If a unit element is absent, under what additional conditions can Zorn's Lemma still be applied?"

    I am unable to paste the mathematical notation from ChatGPT's response into the YouTube comments, so I'm sharing the conversation here.
  • Forgottenticket
    215
    I recall that when GPT-3 was released in mid-2020, someone simulated von Neumann giving a lecture where anyone could request help with problems and "he" would answer questions.
    The results were pure slop, like the AI Dungeon master, but looked convincing. The guy who built it said that if it weren't producing junk, it would be used more nefariously or simply take over, not give lectures.
