• Harry Hindu
    5.8k
    Much of what all of us do is "parrot." Not many people can come up with an original idea to save their life.Sam26
The objective in thinking for yourself is to take every idea you hear from others with a grain of salt, and to even question your own ideas constantly. I have come up with certain ideas on my own only to find out that others came up with them as well. Some minds do think alike given the same kinds of experiences.
  • Sam26
    3k
    The objective in thinking for yourself is to take every idea you hear from others with a grain of salt, and to even question your own ideas constantly.Harry Hindu

    If you take every idea with a grain of salt, you’ll never move beyond hesitation. Critical thinking isn’t about doubting everything, it’s about knowing when doubt is justified. In logic, mathematics, or physics, for instance, constant suspicion would paralyze learning; you suspend doubt provisionally because the framework itself has earned trust through rigor.

    In a philosophy forum, though, caution makes sense. Most participants lack grounding in epistemology, logic, or linguistic analysis, so what passes for argument is often just speculation dressed up as insight. Honestly, you could gain more from interacting with a well-trained AI than from sifting through most of what appears here; it would at least give you arguments that hold together.
  • Fire Ologist
    1.7k
    Just what we need to add to the [online] world - more sociopaths that make errors and lie about them.Fire Ologist

    Maybe “sociopaths” is unnecessary. Wouldn’t want to scare any children.

    AI is a tool. Like a hammer, it can do good or destroy, on purpose or accidentally.

    What worries me is that people will cede authority to it without even asking themselves whether that is appropriate.Ludwig V

    They surely will, because sheep are easily calmed by things that sound authoritative.

    ———

    It occurs to me: isn’t a book, in a sense, AI? It’s information received from a non-human thing. We read a book and ingest the text. We treat the words in a book as if they come from an “intelligence” behind them, or we can judge the veracity and validity of the text qua text with or without any concern for what is behind it. We can also refuse to take the author as authority, and fact-check and reconstruct our own analysis.

    For instance, is a reference to Pythagoras in the Pythagorean theorem of any significance whatsoever when determining the length of one side of a triangle? Is it essential to our analysis of “It is the same thing to think as it is to be” that we know who said it first? Context might be instructive if one is having trouble understanding the theory, but it might not matter at all once one sees something useful in the text. We create a new context by reading and understanding text.

    (This is related to @Banno’s point on his other thread.)

    So banning any reference to AI would be like banning reference to any other author. (I said “like” for a reason - this doesn’t mean AI is an author the same way we are authors - that is another question.)

    What concerns the philosopher qua philosopher most is what is said, not who (or now, what) says it. I think.

    This is not to say we shouldn’t disclose the fact that AI is behind text we put our names on (if we use AI). That matters a lot. We have to know whether we are dealing with AI or not.

    But I genuinely don't believe using it helps anyone to progress thought further.Moliere

    Don’t we have to wait and see? It’s a new tool. Early 20th-century mathematicians could have said the same thing about calculators. We didn’t need AI before to do philosophy, so I see your point, but it remains to be seen whether it will be any help to anyone or not.

    The conclusions in philosophic arguments matter, to me. It is nice to think that they matter to other people as well. (But is that essential?) Regardless, I would never think the conclusions printed by an LLM matter to the LLM.

    So the interaction (“dialogue”) with AI and my assessment of the conclusions of AI are inherently lonely, and exist nowhere in the world except my own head, until I believe a person shares them, and believe I am dialoguing with another person in the world who is applying his/her mind to the text.

    Bottom line to me is that, as long as we do not lie about what comes from AI and what comes from a person, it is okay to use it for whatever it can be used for. And secondly, no one should kid themselves that they are doing philosophy if they can’t stare at a blank page and say what they think philosophically with reference to nothing else but their own minds. And thirdly, procedurally, we should be able to state in our own words and/or provide our own analysis of every word generated by AI, like every word written by some other philosopher, or we, along with the AI, risk not adding anything to the conversation (meaning, you take a massive risk of not doing philosophy, or not doing it well, when you simply regurgitate AI without adding your own analysis).
  • Paine
    3k

    My pup tent is located somewhere on your hill. Kafka must also be nearby:

    He eats the droppings from his own table; thus he manages to stuff himself fuller than the others for a little, but meanwhile he forgets how to eat from the table; thus in time even the droppings cease to fall. — Kafka, Reflections, 69, translated by Willa and Edwin Muir
  • Athena
    3.6k
    But I genuinely don't believe using it helps anyone to progress thought further. Go ahead with the next phase, I'll be waiting on my hill of luddites for the prodigals to return ;)Moliere

    That is like saying riding horses can't be fun, when you don't ride horses. How could you know the joy of riding a horse if you don't ride? How could you experience the joy of using AI as much as I do if you don't use it? What can you know of the future that is being opened up if you withdraw from the change instead of participating in it?

    This morning, I came across an AI explanation that was biased and disappointing. If I were disappointed by AI explanations 50% of the time, I would not think so highly of it, but at the moment, I think it has enriched my life a lot. For me, it has replaced Wikipedia because it captures the explanation of a subject so concisely and is relatively free of the biases that are more apt to show up in Wikipedia. I will still use and support Wikipedia, but it isn't my favorite right now. For me, the difference is like that between a better camera that produces more detailed pictures with brighter colors and an old Brownie camera with black-and-white film. :confused:
  • Athena
    3.6k
    AI is a tool. Like a hammer, it can do good or destroy, on purpose or accidentally.Fire Ologist

    AI is like a hammer? That is like saying humans are like apes. I think we evolved from that line, but humans have changed the planet in dramatic ways, and apes have not. The potential for AI to act on its own might make it different from a hammer.
  • Athena
    3.6k
    ↪Moliere Much of what all of us do is "parrot." Not many people can come up with an original idea to save their life.Sam26

    That may be true, but the first person who showed up at the protest in Portland, Oregon, dressed as a frog has started a wonderful movement of being creative and fun in this moment of high tensions. I was not looking forward to Saturday's No Kings Day march until I figured out how to use the Mad Hatter's tea party to make my statement. I am looking forward to what creative people are doing. This is such a marvelous human thing to do, and that is something to celebrate.

    I asked what AI can create and it says...
    AI can create a wide range of original content, including text (stories, essays, code), images, audio (music, spoken words), and video by learning patterns from vast datasets. It also creates data-driven insights through analysis and prediction, develops personalized user experiences in areas like shopping, and generates functional outputs such as spreadsheets and automated tasks, effectively acting as a powerful tool for creativity, productivity, and automation.

    I really look forward to insights based on patterns, but hopefully with less human bias. I think it may do better than humans. However, I am not comfortable with giving it the power to make decisions and act on them without flesh-and-blood human control and judgment. Like, No Kings Day is about our liberty to govern ourselves free of tyranny. I am not willing to give that up. :wink:
  • Sam26
    3k
    I won't comment on the political part of your post because I think we're very far apart. However, in the future I can see where humans will merge with AI, so we'll probably become one with machines, probably biological machines.
  • baker
    5.8k
    I think, given the dangers of AI, and the ways in which prominent members of this site have used it to make themselves look smarter than they really are, that its use should be banned altogether on this site.Janus

    I say outsmart the AIs and their faithful users. That doesn't necessarily mean stopping AI use altogether, but using them only sparingly and deliberately. Most of all, it means lowering or otherwise changing one's goals in life.

    To me, using AIs, especially LLMs for everyday things or for work is like using drugs to get the energy and the focus necessary to do one's work. Occasionally, this can be a last resort, but is not sustainable in the long run. If one cannot do one's job on one's own, consistently, then one has a job that is too demanding and that will eventually get one into trouble in one way or another.

    It's quite pointless to discuss the ethics of using AIs, because people will use them, just like they use drugs, and once it starts, it is impossible to rein it in. But what one can do is rethink whether one really wants to spend one's hard earned time with people who use AIs, or drugs, for that matter.
  • baker
    5.8k
    In a philosophy forum, though, caution makes sense. Most participants lack grounding in epistemology, logic, or linguistic analysis, so what passes for argument is often just speculation dressed up as insight. Honestly, you could gain more from interacting with a well-trained AI than from sifting through most of what appears here; it would at least give you arguments that hold together.Sam26

    Which is easily remedied by cultivating good character for oneself.

    People of substance don't post much on internet forums.
  • Athena
    3.6k
    I won't comment on the political part of your post because I think we're very far apart. However, in the future I can see where humans will merge with AI, so we'll probably become one with machines, probably biological machines.Sam26

    That sounds like the Sumerian notion of many gods and humans being created to serve them. I am against merging humans with machines; however, our industrial society did exactly that! And our hierarchical patriarchy has maintained humans exploiting humans. There is an excellent website explaining the ancient mythology and how the Hebrews reworked it, giving us more freedom and human dignity than the original mythology gave us.

    The Industrial Age merged humans with machines. Our Industrial economy/society made humans extensions of the machines. Union workers risked their lives in a fight for better working conditions and wages when the flood of workers needing jobs made them cheap labor.

    We took that a step further when we got on the path of the military-industrial complex. We see humans doing jobs, but this is a computer-driven reality, except that the computer is not made of inorganic material. The increasingly centralized computer has human components, like the Borg of Star Trek. All those workers are controlled by policies that come with the founding of each bureaucracy/machine. The jobs are explained in detail, and the workers are dispensable because the new person who does the job will do it the same as the person who left the job. It is policy set in the past that controls the present.

    Joseph Campbell, the guru of mythology, said humanity needs mythology and that Star Trek is the best mythology for our time. However, my understanding of the human computer governing us comes from studying Public Policy and Administration at the U of O. The US adopted the Prussian models of bureaucracy and education. That is what makes the Military/Industrial Complex that Eisenhower warned us about.

    Whatever, if people don't want AI running things, they need to be aware of our evolution that made us extensions of machines and now attempts to manage every aspect of our lives, just as Tocqueville, writing around 1830 after the French Revolution and his visit to the US, warned would happen.
  • Athena
    3.6k
    In a philosophy forum, though, caution makes sense. Most participants lack grounding in epistemology, logic, or linguistic analysis, so what passes for argument is often just speculation dressed up as insight. Honestly, you could gain more from interacting with a well-trained AI than from sifting through most of what appears here; it would at least give you arguments that hold together.
    — Sam26

    Which is easily remedied by cultivating good character for oneself.

    People of substance don't post much on internet forums.
    baker

    Do you guys ever experience hypobaric hypoxia from being so high above everyone else?
  • baker
    5.8k
    Do you guys ever experience hypobaric hypoxia from being so high above everyone else?Athena

    Now what did I just say about cultivating good character for oneself?
  • Janus
    17.6k
    Don't mistake the speculative misuse of ideas for the ideas themselves. AI is no longer in the realm of “mental masturbation,” it’s already reshaping science, mathematics, and even philosophy by generating proofs, modeling complex systems, and revealing previously inaccessible patterns of thought. To dismiss that as delusory is to confuse ignorance of a subject with the absence of rigor within it.Sam26

    You are misunderstanding. My comments re "mental masturbation" were specifically targeting text like the response made to @Number2018 by ChatGPT. I think use of AIs in science and math is fine. In my view those are just the kinds of disciplines AIs should be trained on. Of course they have to be trained on basic pattern recognition initially. I don't know, and would need to look into, what they initially were specifically trained on before being released "into the wild". Now that they are out there, they are being trained on whatever content is to be found in their casual interactions with people.

    The irony is that the very kind of “rigorous analysis” you claim to prize is being accelerated by AI. The most forward-looking thinkers are not treating it as a toy but as a new instrument of inquiry, a tool that extends human reasoning rather than replacing it. Those who ignore this development are not guarding intellectual integrity; they’re opting out of the next phase of it.Sam26

    Can you name a few of those "forward-looking thinkers"? As I said in the OP, my main objections are that it was irresponsibly released before being properly understood, and that it is being used without acknowledgement to make posters on these forums look smarter than they are. They will also have a horrendous environmental impact. But I accept that their continued use and evolution is now inevitable, and, unfortunately, unpredictable. It is a case of playing with fire.

    Out of time now, I'll try to respond when I have more time.
  • Sam26
    3k
    Can you name a few of those "forward-looking thinkers"?Janus

    There are those who view AI as an epistemic tool, something that extends, rather than replaces, human inquiry. There's a long list of people who fit the bill. For example, Nick Bostrom and Luciano Floridi have been working on the conceptual implications of AI for ethics, cognition, and the philosophy of information. Vincent Müller and Mariarosaria Taddeo have been exploring how AI reshapes the logic of justification and responsibility in scientific reasoning. On the cognitive side, Joscha Bach treats AI systems as experimental models of mind, ways to probe the nature of understanding. Even researchers outside philosophy, in fields like computational linguistics and mathematical discovery, are beginning to treat AI as a genuine collaborator capable of generating new proofs and hypotheses.

    Maybe we use books, dictionaries, philosophical papers, editors, and scientific discoveries to make us look smarter than we are. You see this all the time in forums, even without AI, so it's nothing new. Besides, do you really care about the psychology of someone who's writing about what they think?
  • Jamal
    11k
    But I genuinely don't believe using it helps anyone to progress thought furtherMoliere

    What does it mean to "progress thought"? According to any sense I think of, using an LLM certainly can help in that direction. As always, the point is that it depends how it's used, which is why we have to work out how it ought to be used, since rejection will be worse than useless.

    Anyway, here is one example: ask it to critique your argument. This is an exercise in humility and takes your thoughts out of the pugilistic mode and into the thoughtful, properly philosophical mode. It's a form of Socratization: stripping away the bullshit, re-orientating yourself towards the truth. Often it will find problems with your argument that can only be found when interpreting it charitably; on TPF this often doesn't happen, because people will score easy points and avoid applying the principle of charity at all costs, such that their criticisms amount to time-wasting pedantry.

    Relatedly, let's say you're on TPF, criticizing Nietzsche's anti-egalitarianism. Before you hit the submit button you can ask an LLM to put forth the strongest versions of Nietzsche's position so you can evaluate whether your criticism stands up to it, and then rewrite your criticisms (yourself). How can this be inferior to—how does this require less thought than—hitting the submit button without doing that? Granted that it's good to take the long way round and go and consult the books, but (a) one could spend an infinite length of time on any one post, reading all the books in the world just to produce a single paragraph, so we have to draw the line somewhere, and (b) using the LLM in this way will direct you towards books and papers and the philosophers you didn't know about who you can learn from.

    Another example: before LLMs it used to be a tedious job to put together an idea that required research, since the required sources might be diverse and difficult to find. The task of searching and cross-referencing was, I believe, not valuable in itself except from some misguided Protestant point of view. Now, an LLM can find and connect these sources, allowing you to get on with the task of developing the idea to see if it works.

    I myself want to discourage its use amongst students as much as possible. I want them to be able to think for themselves.

    AI is just a way to not do that.
    Moliere

    A lot of people think it is, and it's clear to me that it can be. We are at the point now where its general use is stigmatized because it has, understandably, been used by students to cheat. I think it's clear that we need to think about it in a more fine-grained way.

    The world has a tool that will be used, by more and more people. The important task now is stigmatizing improper use of this tool, and encouraging responsible use. As I said in the other thread, stigmatizing all use of it will be counterproductive, since it will cause people to use it irresponsibly and dishonestly.
  • Jamal
    11k
    @Moliere Let's say you object to some of the points I've made above. For example, I can see that you might push back against this:

    Another example: before LLMs it used to be a tedious job to put together an idea that required research, since the required sources might be diverse and difficult to find. The task of searching and cross-referencing was, I believe, not valuable in itself except from some misguided Protestant point of view. Now, an LLM can find and connect these sources, allowing you to get on with the task of developing the idea to see if it works.Jamal

    But your pushback is potentially constructive, in that it can help us decide on which uses of LLMs are good and which are bad. The unconstructive way, I think, is in just wishing the toothpaste were back in the tube.
  • Fire Ologist
    1.7k
    The potential for AI to act on its own might make it different from a hammer.Athena

    You sell hammers way too short, and maybe give AI way too much credit.

    AI is a tool. Like a hammer, it can do good or destroy, on purpose or accidentally.
    — Fire Ologist
    Athena

    You say “act on its own”; and I said “accidentally”.

    So you don’t think AI is a tool? What else is “artificial” but some sort of techne - the Greek root for technology and for hand-tooling? AI is a word sandwich machine. It obviously is a device we’ve built like any other machine that does measurable work - it just now takes a philosopher to measure the work AI does.
  • Baden
    16.6k
    I've added the note: NO AI-WRITTEN CONTENT ALLOWED to the guidelines and I intend to start deleting AI written threads and posts and banning users who are clearly breaking the guidelines. If you want to stay here, stay human.
  • javi2541997
    6.8k
    I've added the note: NO AI-WRITTEN CONTENT ALLOWED to the guidelines and I intend to start deleting AI written threads and posts and banning users who are clearly breaking the guidelines. If you want to stay here, stay human.Baden

    Well-put! :up: :100:
  • Pierre-Normand
    2.8k
    I've added the note: NO AI-WRITTEN CONTENT ALLOWED to the guidelines and I intend to start deleting AI written threads and posts and banning users who are clearly breaking the guidelines. If you want to stay here, stay human.Baden

    I assume, but I also mention it here for the sake of precision, that the clause "(an obvious exceptional case might be, e.g. an LLM discussion thread where use is explicitly declared)" remains applicable. I assume also (but may be wrong) that snippets of AI-generated stuff, properly advertised as such, can be quoted in non-LLM discussion threads as examples, when it is topical and when it isn't a substitute for the user making their own argument.
  • Baden
    16.6k


    Yes, that's correct.
  • Baden
    16.6k


    Thanks, javi. :pray: (I've written some more on this in Banno's AI discussion).
  • unenlightened
    9.9k
    Do you guys ever experience hypobaric hypoxia from being so high above everyone else?Athena

    If I say 'yes', will it make you look up to me?
  • Outlander
    2.8k
    Do you guys ever experience hypobaric hypoxia from being so high above everyone else?Athena

    If popular aphorisms are to be trusted, it's quite lonely at the top. But at least they're nice. That, or desperate to trap another unwitting soul so as to alleviate their loneliness and deprive another of that nearly forgotten feeling of what it was, once upon a time, when one knew so little yet could dream of so much. :cry:
  • Ludwig V
    2.2k
    Much of what all of us do is "parrot." Not many people can come up with an original idea to save their life.Sam26
    Literally parroting is often a waste of time. But formulating existing ideas for oneself, discussing and debating them, and playing with them are all part of understanding them. This is worthwhile in its own right, and is often a necessary prerequisite for coming up with one's own worthwhile ideas.

    The irony of the “information” super highway.Fire Ologist
    Actually, on further thought, I'm beginning to think that the real fault lies with the naivety of thinking that the internet would be immune from all the varieties of human behaviour. Almost everything that goes on is normal behaviour - on steroids.

    The irony of calling its latest advancement “intelligent”. We demean the intelligence we seek to mimic in the artificial, without being aware we are doing so.Fire Ologist
    Many people seem to think that the point of AI is to mimic human intelligence. I can't understand that, except as a philosophical exercise. We have, I would say, a quite reasonable supply of human intelligence already. There are plenty of things that AI can do better and quicker than humans. Why don't we work with those?

    I've added the note: NO AI-WRITTEN CONTENT ALLOWED to the guidelines and I intend to start deleting AI written threads and posts and banning users who are clearly breaking the guidelines. If you want to stay here, stay human.Baden
    That seems a bit radical. What does bother me a bit is how one can identify what is and isn't written by AIs. Or have you trained an AI to do that?
  • Outlander
    2.8k
    Much of what all of us do is "parrot." Not many people can come up with an original idea to save their life.Sam26

    Because it's all been said and done before. The average person in the past 50 years comes from a multi-sibling household with TV or Internet or otherwise endless forms of entertainment that people a mere few centuries ago never had. Nobody has to think anymore. Not really. Other than the basic desires and how they relate to one's safety, gain, and resulting comfort in life.

    Philosophy:
    "There is no such thing as a new idea. It is impossible. We simply take a lot of old ideas and put them into a sort of mental kaleidoscope."
    - Mark Twain

    Religion:
    "There is nothing new under the Sun."
    - Ecclesiastes

    I mean, what you're suggesting is akin to creating a bonfire underwater. Even if you did, what good or purpose could ever come from it? :chin:
  • praxis
    7k
    @Jamal @Baden

    Regarding the new policy, sometimes when I’ve written something that comes out clunky I run it through an AI for “clarity and flow” and it subtly rearranges what I’ve written. Is that a no-no now?
  • Harry Hindu
    5.8k
    If you take every idea with a grain of salt, you’ll never move beyond hesitation. Critical thinking isn’t about doubting everything, it’s about knowing when doubt is justified. In logic, mathematics, or physics, for instance, constant suspicion would paralyze learning; you suspend doubt provisionally because the framework itself has earned trust through rigor.

    In a philosophy forum, though, caution makes sense. Most participants lack grounding in epistemology, logic, or linguistic analysis, so what passes for argument is often just speculation dressed up as insight. Honestly, you could gain more from interacting with a well-trained AI than from sifting through most of what appears here; it would at least give you arguments that hold together.
    Sam26

    "With a grain of salt" is a 1600s direct translation from Modern Latin "cum grano salis", and salis is genitive of sal, which, in addition to ‘salt’, figuratively means "ntellectual acuteness, good sense, shrewdness, wit.

    The Latin phrase is found in English literature in the 1600s and 1700s, and salis appears to precisely mean ‘good sense, intelligence’.

    My point was that to avoid parroting others, you should be skeptical of what they say, not that you should avoid logic and reason.
  • Harry Hindu
    5.8k
    My comments re "mental masturbation"Janus
    Seems like philosophy itself could be labeled as mental masturbation.

    Of course they have to be trained on basic pattern recognition initially. I don't know and would need to look into what they initially were specifically trained on before being released "into the wild". Now that they are out there they are being trained on whatever content is to be found in their casual interactions with people.Janus
    Dude, the content from human beings trained in pseudo-science and other nonsense seen on this forum is available every day for you to read, without any AI. If anything, posters should run their ideas through AI before posting their zany ideas to humans, which would eliminate the time wasted reading nonsensical posts.