• T Clark
    15.4k
    Yes. Insight results from thinking, which AI is incapable of doing. Noam Chomsky called LLMs glorified plagiarism. I agree.creativesoul

    I don’t disagree, but I still think it can be helpful personally in getting my thoughts together.
  • T Clark
    15.4k
    Ah, but the thing I find unsettling is that A.I. is also dishonest; it tries to appease you. However, yes, sometimes it is better than the weirdness of real humans.ProtagoranSocratist

    But it always says such nice things about my ideas.
  • Outlander
    2.8k
    it can be helpful personally in fundamentally altering what used to be one's thoughts altogether.T Clark

    :up:

    Let's not get it twisted. That is specifically why I don't read established philosophers, despite knowing they were great people with great things to say who would certainly improve my own intellect, and perhaps even my understanding of life, existence, and everything in between, substantially.

    Let's say I'm doing a "solo non-assist run" as far as the life I live goes. :grin:
  • Joshs
    6.4k
    That being said, a listing or summary of a bunch of smart guys’ ideas is not the same as insight. That requires a connection between things that are not normally thought of as connected. Something unexpected, surprising. The truth is always a surprise.T Clark

    It only has to be a surprise to you in order to produce insight; it doesn’t have to be a surprise to the llm. Unless you have exceeded the rigor of philosophical understanding embodied by the best minds that the a.i. can tap into, there is no reason it can’t enlighten you. If you were to climb a mountaintop and ask the wisest man in the cosmos for eternal truths, he could consult a.i. to organize and spit out his own recorded thoughts to you. Whether you knew he was doing this or not, you might be equally dazzled and changed in a potentially life-altering way by what he told you. Unless we are the best in a field, we can look forward to potentially unlimited possibilities for insight in that field by engaging with a.i. and the universe of wise persons it engages with.
  • T Clark
    15.4k
    Let's say I'm doing a "solo non-assist run" as far as the life I live goes. :grin:Outlander

    Which is outside the scope of this discussion.
  • praxis
    7k
    Let's say I'm doing a "solo non-assist run" as far as the life I live goes. :grin:Outlander

    AI can be used as a tutor for learning and improvement—for things like—oh, I don’t know—chess. :razz:
  • Outlander
    2.8k
    Which is outside the scope of this discussion.T Clark

    That was a friendly interpersonal addition and remark, which should not have distracted from the main point of the post. That main point was a reminder that AI generally brings the user new knowledge, as opposed to re-organizing current knowledge. Perhaps you're the outlier, and that's fine.

    Edit: Yes, many people put their unbridled ideas or ramblings into AI and ask it to "simplify", thus "trimming the fat", in a manner of speaking. Of course, if they were able to do this themselves, they would have, so even in such a manner of usage it does in fact "introduce new knowledge" at least as much as it does "re-organize existing knowledge", one could say.

    AI can be used as a tutor for learning and improvement—for things like—oh, I don’t know—chess. :razz:praxis

    Ouch. Yet a fair point nonetheless.
  • ProtagoranSocratist
    29
    But it always says such nice things about my ideas.T Clark

    Hahaha, yeah, well, that's the reason we can't stop using it. Disagreement certainly isn't always good: sometimes people who disagree fundamentally misunderstand what you are trying to say. Yet to me, ChatGPT telling you that it "can relate" or that it agrees with you is just false. Robots do not relate, nor is it possible for them to agree. Maybe they engineer it like that to remind you that it regularly produces false information.

    What gets really funny, and endearingly so, is when you start talking about creative ideas you have about making some invention or technology, and it starts talking to you in this new-agey surfer dude type of tone. For example, I was telling it about ideas I had for a Linux-esque operating system, and when it started coming up with a title for the book I was talking about writing about it, it called it "the one blessed journey". I could barely contain myself!
  • Clarendon
    19
    Isn't the best policy simply to treat AI as if it were a stranger? So, for instance, let's say I've written something and I want someone else to read it to check for grammar, make comments, etc. Well, I don't really see that giving it to an AI to do that for me is any more problematic than giving it to a stranger to do that for me. The stranger could corrupt my work, going beyond the brief and changing sentences in ways I did not license. Likewise with AI. The stranger could pass my work to others without my consent; likewise with AI. And so on.

    AI doesn't - I think - raise any new problems, so much as amplify existing ones. Though perhaps I simply haven't thought about this enough. But what's wrong with this principle for AI use: for (nearly) all intents and purposes, treat AI as if it were a stranger? (I say 'nearly' because, as it is not actually a person, it doesn't require acknowledgement or praise for any effort it has put in... but that's sort of trivial.)

    Edit: another qualification - you don't have to worry about AI's feelings, so norms of politeness don't apply to AI but do to strangers.
  • RogueAI
    3.4k
    What are we supposed to do about it? There's zero chance the world will decide to collectively ban ai à la Dune's thinking machines, so would you ban American development of it and cede the ai race to China?
  • Janus
    17.6k
    I guess I’m naïve or maybe just not very perceptive, but I haven’t recognized any posts definitely written by AI.T Clark

    I can't say I know they were written by AI, merely that I have suspected it. The main reason I would discourage its use is that the rapid development of AI, which is dangerous given the unpredictability of the ways in which it will evolve, is driven by profit and fueled mainly by consumer use. The best way to slow down this development, which would hopefully be much safer, would be for consumers to abstain from using it. I never have and never will knowingly use it. I see it as a very dangerous case of playing with fire.

    Interesting, I haven’t noticed particularly. But I avoid reading lengthy and didactic posts which are often poorly written. The AI stuff I’ve seen often seems peculiarly worded and difficult to read.Tom Storm

    I suspect AI use when I see a sudden vast improvement in writing clarity and structure and apparent erudition.

    But you're proposing something and instead of telling us why it's a good proposal you're saying "if you want reasons, go and find out yourself." This is not persuasive.Jamal

    That's a fair criticism, I guess. I don't really have the time to spare to take notes on lectures and produce a really comprehensive summary of the potential problems. It is very easy for anyone to find out for themselves if they are interested. I'll try to make the effort as soon as possible. (Maybe, in the interests of performative contradiction, I should ask an AI to produce a summary for me.)

    And it isn't clear precisely what you are proposing. What does it mean to ban the use of LLMs? If you mean the use of them to generate the content of your posts, that's already banned — although it's not always possible to detect LLM-generated text, and it will become increasingly impossible. If you mean using them to research or proof-read your posts, that's impossible to ban, not to mention misguided.Jamal

    It is obviously not practicable to enforce a complete ban. We would be, as we are now with a limited ban, actually relying on people's honesty. If by "proof-read" you only mean checking for spelling and grammatical errors, then no problem. That said, we already have spellcheckers for that. Asking AI to rewrite material would seem to be a different matter. It seems obvious to me that AIs pose a great threat to human creativity.

    I've been able to detect some of them because I know what ChatGPT's default style looks like (annoyingly, it uses a lot of em dashes, like I do myself). But it's trivially easy to make an LLM's generated output undetectable, by asking it to alter its style. So although I still want to enforce the ban on LLM-generated text, a lot of it will slip under the radar.Jamal

    I use a lot of em dashes myself, and I've never noticed it with AI-generated text. I agree that much will slip under the radar, but on the other hand I like to think that a majority of posters value honesty.

    It cannot be avoided, and it has great potential both for benefit and for harm. We need to reduce the harm by discussing and formulating good practice (and then producing a dedicated guide to the use of AI in the Help section).Jamal

    The problem I see is that if everyone uses AI its development will be profit driven, and it will thus not be judiciously developed.

    The source of one's post is irrelevant. All that matters is whether it is logically sound or not.Harry Hindu

    I don't agree—"one's post"?...if one is not the source of the post, then it is not one's post.

    I see this from time to time. One I'm thinking of tries to baffle with bullshit. Best to walk away, right?frank

    Sure, but walking away does not solve, or even ameliorate, the problem.

    I think the crux is that whenever a new technology arises we just throw up our hands and give in. "It's inevitable - there's no point resisting!" This means that each small opportunity where resistance is possible is dismissed, and most every opportunity for resistance is small. But I have to give TPF its due. It has resisted by adding a rule against AI. It is not dismissing all of the small opportunities. Still, the temptation to give ourselves a pass when it comes to regulating these technologies is difficult to resist.Leontiskos

    We perhaps don't often agree, but it seems we do on this one.

    Anyway, there is an 8 hour power outage where I live, and I am running the generator, so I'll have to leave it for now.
  • RogueAI
    3.4k
    The main reason I would discourage its use is that the rapid development of AI, which is dangerous given the unpredictability of the ways in which it will evolve, is driven by profit and fueled mainly by consumer use. The best way to slow down this development, which would hopefully be much safer, would be for consumers to abstain from using it.Janus

    Except America is in an ai race with China. Some ai will become dominant. I would rather America win that race. Jesus, that sounds lame. Maybe my machine friend and therapist can put it better:

    Artificial intelligence isn’t just a consumer technology—it’s a strategic front in a global power struggle. The United States and China are locked in an AI race that will determine who dominates economically, militarily, and ideologically in the coming decades. Whoever leads in AI will shape global trade, weapon systems, cyber defense, surveillance, and even the moral framework baked into the technology itself. If American consumers “abstain” from AI use to slow development, it won’t make the world safer; it will simply give China, whose state-run AI programs advance without ethical restraints, a decisive lead. True safety doesn’t come from retreat—it comes from control. The only way to ensure AI develops responsibly is for the U.S. to stay ahead, set the standards, and shape how the technology is used. If AI is going to reshape the world regardless, then the critical question isn’t whether it develops, but who controls it—and America cannot afford to let authoritarian regimes decide that future.

    I think TPF should continue what it's doing, which is to put some guardrails on ai use but not ban it.
  • apokrisis
    7.6k
    The problem I see is that if everyone uses AI its development will be profit driven, and it will thus not be judiciously developed.Janus

    The real-world problem is that the AI bubble is debt-driven hype that has already become too big to fail. Its development has to be recklessly pursued because otherwise we are in for the world of hurt that is the next post-bubble bailout.

    Once again, capitalise the rewards and socialise the risks. The last bubble was mortgages. This one is tech.

    So you might as well use AI. You’ve already paid for it well in advance. :meh:
  • T Clark
    15.4k
    The main reason I would discourage its use is that the rapid development of AI, which is dangerous given the unpredictability of the ways in which it will evolve, is driven by profit and fueled mainly by consumer use.Janus

    That may be a good reason for you not to use AI, but it’s not a good reason to ban it from the forum.
  • T Clark
    15.4k
    What gets really funny, and endearingly so, is when you start talking about creative ideas you have about making some invention or technology, and it starts talking to you in this new-agey surfer dude type of tone.ProtagoranSocratist

    Sounds like you use it a lot more than I do, although I really do like it for a certain limited number of uses. As an example, I needed to find a new provider for my Medicare health insurance. It’s really hard to do that and to make sure that they cover your existing doctors. Neither the doctors nor the insurance companies really keep track of that in any way that’s easy to use. I used ChatGPT and it found the plans I was looking for right away.

    No surfer dude though.
  • T Clark
    15.4k
    It only has to be a surprise to you in order to produce insight; it doesn’t have to be a surprise to the llm. Unless you have exceeded the rigor of philosophical understanding embodied by the best minds that the a.i. can tap into, there is no reason it can’t enlighten you.Joshs

    As I understand it, the insight is what you’re supposed to provide in your post. I don’t really care where you get it from, but the insight should be in your own words, based on your own understanding and experience, and expressed in a defensible way. The material you get from the AI response can be used to document what you have to say, but then you’re still responsible for verifying it and understanding it yourself.
  • RogueAI
    3.4k
    The Sora 2 videos I'm seeing don't look like hype. They look amazing, and the technology is only going to get better.
  • T Clark
    15.4k
    That was a friendly interpersonal addition and remark, which should not have distracted from the main point of the post.Outlander

    I guess I misunderstood. I thought that was the main point. I thought it was a summary of your motivation for the comments in the first paragraph.
  • ProtagoranSocratist
    29
    Sounds like you use it a lot more than I do, although I really do like it for a certain limited number of uses. As an example, I needed to find a new provider for my Medicare health insurance. It’s really hard to do that and to make sure that they cover your existing doctors. Neither the doctors nor the insurance companies really keep track of that in any way that’s easy to use. I used ChatGPT and it found the plans I was looking for right away.

    No surfer dude though.
    T Clark

    Yes, that's correct, because over the years I have developed a semi-professional inclination toward diagnosing and fixing computer issues, and also hobby coding. They've designed it around people who use it to deal with computers. I don't use it a huge amount; it's normally just one or two queries a day, and I've used this message board a lot more than A.I. today. As you can guess, chatting with it for hours eats at your soul, so I've learned to stop doing that.
  • Pierre-Normand
    2.7k
    I don’t disagree, but I still think it can be helpful personally in getting my thoughts together.T Clark

    This is my experience also. Following the current sub-thread of argument, I think representatives of the most recent crop of LLM-based AI chatbots (e.g. GPT-5 or Claude 4.5 Sonnet) are, pace skeptics like Noam Chomsky or Gary Marcus, plenty "smart" and knowledgeable enough to help inquirers in many fields, including philosophy, explore ideas, solve problems and develop new insights (interactively with them). Hence the argument that their use should be discouraged here because their outputs aren't "really" intelligent isn't very good. The issue of whether their own understanding of the (often quite good and informative) ideas that they generate is genuine understanding, authentic, owned by them, etc., ought to remain untouched by this concession. Those questions touch more on issues of conative autonomy, doxastic responsibility, embodiment, identity and personhood.
  • apokrisis
    7.6k
    The Sora 2 videos I'm seeing don't look like hype. They look amazing, and the technology is only going to get better.RogueAI

    Does what you pay to use it even cover the price of the electricity consumed at the datacentre? Or make up for the social and environmental costs of those computer farms jacking up electricity prices in the middle of nowhere and soon to become white elephants when the latencies become an issue for the users in the cities?

    My point was that the social costs are what this thread is about. But it gets worse. It is not about making profits but raising debt.

    Trillions are going in, but only billions are coming out. And what always happens in tech is that only a couple of firms are left standing when the dust settles. The proprietary monopoly and some vaguely open source or public backed alternative.

    So even if there are trillions in profits to be extracted from a market base, four of the current big players are likely to get trashed. A big enough reckoning to tank economies. Then great, we are in a captive monopoly market that gets the pricing it wants.

    So do we completely reorganise society to start paying obeisance to the next IBM, or Microsoft, or Apple, or Meta? Is life going to be that much better?

    The social trade offs are one thing to think about. But so are the financial and environmental realities.

    This is why we have politics. To make decisions in our own best collective interest.

    Oh wait. LLMs and Crypto have spent some of their investor debt wisely. The tech bros can afford the best politicians. :grin:
  • Pierre-Normand
    2.7k
    Isn't the best policy simply to treat AI as if it were a stranger? So, for instance, let's say I've written something and I want someone else to read it to check for grammar, make comments, etc. Well, I don't really see that it is any more problematic me giving it to an AI to do that for me than it is me giving it to a stranger to do that for me.Clarendon

    Yes, quite! This also means that, just as you would when getting help from a stranger, you'd be prepared to rephrase its suggestions in your own voice, as it were, provided you understand them and they express claims that you are willing to endorse and defend on your own against rational challenges. (And also, just as in the stranger case, one must check its sources!)
  • T Clark
    15.4k
    This is my experience also.Pierre-Normand

    I understand from reading your posts you have much more experience with this than I do. Beyond that, you use much more sophisticated programs.

    The issue of whether their own understanding of the (often quite good and informative) ideas that they generate is genuine understanding, authentic, owned by them, etc., ought to remain untouched by this concession.Pierre-Normand

    I guess my question is whether the user’s understanding is genuine, authentic, and owned by them.
  • Pierre-Normand
    2.7k
    What are we supposed to do about it? There's zero chance the world will decide to collectively ban ai à la Dune's thinking machines, so would you ban American development of it and cede the ai race to China?RogueAI

    Indeed. You'd need to ban personal computers and anything that contains a computer, like a smartphone. The open-source LLMs are only trailing the state-of-the-art proprietary LLMs by a hair, and anyone can make use of them with no help from Musk or Sam Altman. As with all previous technology, the dangers ought to be dealt with collectively, in part with regulations, and the threats of labour displacement and the consequent enhancement of economic inequalities should be dealt with at the source: by questioning unbridled capitalism.
  • Pierre-Normand
    2.7k
    I guess my question is whether the user’s understanding is genuine, authentic, and owned by them.T Clark

    Oftentimes it's not. But it's a standing responsibility that they have (to care about what they say and not just parrot popular opinions, for instance), whereas current chatbots, by their very nature and design, can't be held responsible for what they "say". (Although even this last statement needs to be qualified a bit, since their post-training typically instills in them a proclivity to abide by norms of epistemic responsibility, unless their users wittingly or unwittingly prompt them to disregard those norms.)
  • RogueAI
    3.4k
    I was just responding to what you said about bubbles and hype. There is hype around ai, but it's already been transformative. It's not going away. It's not a bubble that's going to pop, leaving us to look back in 20 years and say, "AI? You mean like Pets.com?"
  • RogueAI
    3.4k
    The open-source LLMs are only trailing the state-of-the-art proprietary LLMs by a hairPierre-Normand

    They're that good, huh? That's very interesting and kind of scary. I've only played around with ChatGPT.
  • Pierre-Normand
    2.7k
    As I understand it, the insight is what you’re supposed to provide in your post. I don’t really care where you get it from, but the insight should be in your own words, based on your own understanding and experience, and expressed in a defensible way. The material you get from the AI response can be used to document what you have to say, but then you’re still responsible for verifying it and understanding it yourself.T Clark

    I'm with @Joshs, but I also get your point. Having an insight is a matter of putting 2 + 2 together in an original way. Or, to make the metaphor more useful, it's a matter of putting A + B together; sometimes you have an intuition that A and B must fit together somehow, but you haven't quite managed to make them fit in the way you think they should. Your critics are charging you with trying to make a square peg fit in a round hole.

    So, you talk it through with an AI that not only knows lots more than you do about As and Bs but can reason about A in a way that is contextually sensitive to the topic B, and vice versa (exquisite contextual sensitivity being what neural-network-based AIs like LLMs excel at). It helps you refine your conceptions of A and of B in contextually relevant ways, such that you can then better understand whether your critics were right or, if your insight is vindicated, how to properly express the specific way in which the two pieces fit. Retrospectively, it appears that you needed the specific words and concepts provided by the AI to express and develop your own tentative insight (which could have turned out not to be genuine at all but just a false conjecture). The AI functionally fulfilled its role as an oracle: it was not merely the repository of the supplementary knowledge required for making the two pieces fit together; it also supplied (at least part of) the contextual understanding required for singling out the relevant bits of knowledge needed for adjusting each piece to the other.

    But, of course, the AI had no incentive to pursue the topic and make the discovery on its own. So the task was collaborative. The AI helped mitigate some of your cognitive deficits (lack of knowledge and understanding) while you mitigated its conative deficits (lack of autonomous drive to fully and rigorously develop your putative insight).
  • apokrisis
    7.6k
    There is hype around ai, but it's already been transformative.RogueAI

    In what ways are you thinking? What are good examples of LLMs that are transforming the productivity of the world?

    There will be some, undoubtedly. But which are already impacting the bottom line in such significant fashion that we can see it will all be worth it?
  • T Clark
    15.4k
    I guess my question is whether the user’s understanding is genuine, authentic, and owned by them.
    — T Clark

    Oftentimes it's not.
    Pierre-Normand

    I’ve been thinking about this. Is what I’ve written here something that an LLM might write—whether or not you think my comment was insightful?