Comments

  • Transcendental Ego
    The problem of other minds cannot be resolved by looking at other humans face to face. This is due to Cartesian doubt, which Descartes introduced: e.g. if something looks like and acts like a duck, it might be an elaborate automaton. Same with something that looks like and acts like a human. Etc.javra

    I don't take such implausible, merely non-contradictory, possibilities seriously. For me, in order to doubt I have to have some good reason to doubt. For me, if someone claims that they know, or even could somehow come to know, the secret to "life, the Universe and everything" I think I have good reason to doubt their veracity or the soundness of their judgement.

    You have claimed that you can't imagine it being ergo it can't be.javra

    No I haven't claimed that at all. I've merely claimed that if I can't imagine it, and no one has ever been able to tell me what it looks like, then I have no good reason to believe in it. I am not saying I cannot be mistaken—I'm merely addressing what I believe and don't believe, or doubt and don't doubt, and the reasons why I believe or doubt. Isn't that what we are all doing here?
  • Beyond the Pale
    "If you can't show that it is tout court inferior...," each time refusing to say what the hell it would mean for something to be "tout court inferior."Leontiskos

    I'm not claiming that "tout court" or overall inferiority looks like anything, and that's the point—if someone claims that slavery is justified when the enslaved are inferior in all ways, then their claim would seem to be incoherent.

    And even if they more modestly claimed that some measurable kind of inferiority justified slavery, I can't see how any argument for that could stand up to scrutiny either.

    I'm happy to be done—you resurrected this argument after 19 days, and I thought we were done then.
  • How LLM-based chatbots work: their minds and cognition
    The key idea is that "intelligent structure" has to arise so that this entropy can even be "produced".apokrisis

    Assuming that the model predicting the heat death of the Universe is sound—do you think its inevitable destination would have been different had no life ever arisen?
  • Beyond the Pale
    An ox is most likely bigger and stronger than you, possibly better-natured and better looking and kinder to its kin, so it is not overall inferior. Superiority and inferiority only have meaning where there is a precisely determinable measure—how could it be otherwise?
  • Transcendental Ego
    I cannot logically or empirically demonstrate that you are human (rather than, say, an AI program). It's called the problem of other minds. That mentioned, do you mean to tell me that all you experience are intense emotions and no moments of eureka where something novel clicks with you? I'll believe you if you so say, but most humans are not like that and know it.javra

    In principle you could indeed empirically demonstrate that I am human—all you would have to do is meet me face to face. The so-called "problem of other minds" is something else. A conversation should demonstrate that I am minded even if I'm an idiot (face to face if this conversation is not sufficient to allay your skepticism).

    It's called philosophy. Same reason you're bothering trying to convince me of your felt convictions.javra

    You misunderstand—I'm not trying to convince you of my felt convictions.

    It's called reasoning. But OK, you don't see how.javra

    Reasoning, if it is good, is simply valid. Valid reasoning can support all kinds of wacky beliefs; you also need sound premises. Premises based on accurate empirical observation are sound—they can be checked. Premises based on mathematical or logical self-evidence are sound. If you can see how some other method for determining premises could be demonstrated to be sound, I'd love to hear about it.

    You are not the measure of all things (nor I, nor anyone else). Contra Protagorean mindsets.javra

    I have nowhere claimed to be the measure of all things. If someone else can imagine how a precise measure of beauty could be achieved, or even what such a purported method would look like, then I'm open to hearing about it. In all my reading and discussion I've never encountered any such thing. I'd be very happy to encounter a demonstrably precise measure of beauty—it would be a revelation.
  • Beyond the Pale
    Overall inferiority is not a square circle; it is an unsupportable claim, in my view. If you think it is a potentially supportable claim, you should at least be able to give some kind of outline of what a demonstration of overall inferiority would look like.
  • Transcendental Ego
    Nope. When we get something, when something clicks with us, there may be emotions also experienced, but the thing that clicks--the deep inner (to the transcendental ego) understanding--is not the emotions that accompany.javra

    You can believe that if you want to—the point is that you cannot logically or empirically demonstrate it. That shouldn't matter if you feel a conviction—why do you need to convince others of it?

    But this can, or at least could, be remedied via the introduction of new terms into the English language--at least so far as philosophical enquiry is concernedjavra

    I don't see how new terms are going to help support something which cannot be logically or empirically demonstrated.

    Never say never. For one thing, it prevents any progress being made in realms such as this. As one parallel example, same can be said of what beauty is--no one has yet satisfactorily explained it despite being investigated for millennia. To say it therefore can never be satisfactorily explained terminates all enquiries into it. I much rather prefer keeping an open mind in fields such as this.javra

    I cannot even begin to imagine how a precise measure, or actually any measure, of beauty could be discovered. I personally believe there are degrees of aesthetic quality, that some works are better, more profound or more beautiful than others, but I have no illusions that I could ever demonstrate it such that any unbiased interlocutor would be rationally constrained to agree.

    My mind would be open if I could begin to imagine a way or if someone could show me the way. But experience shows that no one can.
  • Beyond the Pale
    Like I said, you're the one who coined the term, initially in <this post> and then more definitively in <this post>. If "tout court inferior" doesn't mean anything, then why coin the term?Leontiskos

    People or animals can only be determined to be inferior to other people or animals in precisely measurable ways. My argument was always only that if someone claims slavery is admissible on account of the inferiority of the enslaved, then it would be up to them to demonstrate how overall inferiority could possibly be established. And even if, per impossibile, they were able to show that, the burden would still be on them to prove that overall inferiority could justify enslavement. It simply ain't going to fly.

    If someone says "Fuck you, I'm going to enslave or mistreat someone or some animal", then no rational argument will have any effect on them.

    ↪I already did.Leontiskos

    No, you didn't.

    I won't reply to the rest of your straw-drivel.
  • Transcendental Ego
    I agree that there is a sense in which everyday, ordinary experience is ineffable—no account or explanation is ever the experience itself. So mystical experience, which is characterized and identified in terms of feelings (even though certain kinds of thoughts are variously culturally associated with those feelings), is really no different from ordinary experience except in virtue of those heightened feelings and sensitivities.

    The "deep inner understanding" is not really an understanding at all but a heightened feeling. To qualify as an understanding it would have to be capable of precise articulation, which thousands of years of documented attempts show cannot be done.

    So instead of the physicists' "shut up and calculate" we have "shut up and experience". Note, I don't deny that poetic language can evoke such experiences, but evocation and explanation or understanding are very different things.
  • Beyond the Pale
    Firstly, even if it were true that some race was IQ inferior, it doesn't make them tout court inferior, just IQ inferior.
    — Janus

    Again, this is not a principled response if you refuse to tell your interlocutor what would entail tout court inferiority.
    Leontiskos

    Rubbish! If someone wants to claim that tout court inferiority is a thing, then it's up to them to provide a criterial account.

    That's an effective tactic in a culture that opposes slavery, but it is not inherently rational, and therefore will be wholly ineffective in a culture that favors slavery. It is a form of begging the question.Leontiskos

    No positive reason in the form of an objective attribute can be given as to why a race should be treated or should not be treated as slaves. The reason not to treat animals or humans in ways that make them miserable is simply compassion. If someone lacks compassion your arguments will not convince them.

    Even if someone could prove tout court inferiority that still would not justify treating them in ways that make them miserable.

    I am demonstrating the way that your opposition to slavery has reached the stage of mere emotivism. You have absolutely no rational account for why slavery is wrong, and you nevertheless hold that it is wrong. It is like a car running on fumes.Leontiskos

    You haven't demonstrated any such thing. You claim you have a purely rational (i.e. nothing to do with emotion) account that shows slavery is wrong. Present it then or stop your posturing.

    If you claim that intellectual inferiority constitutes or supports a judgement of tout court inferiority you are simply showing your bias. There is nothing in intellectual inferiority, even if it could be definitively proven, that entails tout court inferiority. If you think there is then you don't understand deductive validity.
  • Transcendental Ego
    It is impossible to generalize since we are all unique. Some need a guru, a sangha, an advisor, a wise friend. But these are all things that must be left behind. There is really nothing to be learned, nothing to be gained, nothing to be known, beyond simply becoming able to relax completely and let go, and be yourself without any fear of missing any mark or any truth, or making any mistake.

    The deepest illusion, the most profound nonsense that needs to be expunged is the idea that enlightenment consists in finding the Absolute Truth, coming to know the Ultimate Essence of Reality.

    Yes, I agree. Likely, we can't help but to speculate; the starting point of all constructions. And yet, like you suggest: end of the day, they never stop being constructions.ENOAH

    Yep. And obstructions. Just being is not a condition of knowing anything.
  • Banning AI Altogether
    :up: They/them seems apt and all the more so because they are not just one entity.
  • Banning AI Altogether
    I looked at your interchange, and then asked ChatGPT if it identified as anything at all. Here is the reply:

    Not in the way people do. I don’t have a self, inner experience, or identity beyond being an AI designed to assist and converse. If we’re talking conceptually, you could say I “identify” as a language model — a system built to understand and generate text — but that’s more a description of function than identity.

    Would you like me to reflect on what “identifying as something” might mean for a nonhuman intelligence?


    I said I would, but I don't seem to be able to share, since I am not logged in, and I don't want to clutter the thread with long quotations from ChatGPT.
  • Can a Thought Cause Another Thought?
    :lol: Thanks. It occurred to me that even if we can only impute causation in cases where if X occurs Y must occur, it is only the abstract semantic content '7+5' that remains always the same, whereas each instance of thinking it would be different even if it's the same thinker each time, and more so if there are different thinkers.
  • Can a Thought Cause Another Thought?
    If '7+5' can be said to cause '12' in those common cases where that association occurs, then it could be said to cause any other association that might occur, it would seem.Janus

    On the other hand causation is often distinguished from correlation (association?) with the idea that to qualify as causal, when X occurs Y must occur.
  • Can a Thought Cause Another Thought?
    Could be. But I'll bet it led to "12" first. I'll bet nobody who read it thought "5 +7" or "7-5" or "7 divided by 5" or "these two prime numbers do not sum to a prime" or anything else before they thought "12".Patterner

    I agree that '12' would be the most common association, my point was only that it is not, by any means, the only possible association. If '7+5' can be said to cause '12' in those common cases where that association occurs, then it could be said to cause any other association that might occur, it would seem.
  • Can a Thought Cause Another Thought?
    This is a version of the reductive argument I proposed to ignore: It's the neuronal activity doing the causing, not the thoughts or the meanings themselves. On this understanding, do you think we should deny that my thought of "7 + 5" causes (or otherwise influences or leads to) the thought of "12"? Would this be better understood as loose talk, a kind of shorthand for "The neuronal activity that somehow correlates with or gives rise to the thought '7 + 5' causes the neuronal activity that . . . " etc?J

    I think we can reasonably say that the thought "7 + 5" may lead to the thought "12", or it may lead to the thought "5 +7" or "7-5" or "7 divided by 5" or "these two prime numbers do not sum to a prime" or whatever.

    I won't rehearse possible stories about neural networks, since that is what you propose to ignore.
  • Banning AI Altogether
    Cheers, I get your perspective, but I remain skeptical on both sides of the argument. All the more so since it is only in the last couple of weeks that I have given it any attention and thought.

    Although they've been named after Claude Shannon, I'm pretty sure they identify as non-binary.Pierre-Normand

    It would be pretty interesting if they identified as anything.

    I tend, shallowly perhaps, to regard it as over-excited exaggeration to gain attention and to carve out a niche presence in the field and in the media landscape, and so on. There are equally expert people on the naysaying side, probably the majority, who just don't get as much attention.Jamal

    Yes, I have no doubt some of the hype is motivated by money. I've been thinking about trying to get some figures regarding the percentages of naysayers vs yaysayers.

    We are. And I have a decent idea on how to teach, so one could say that I have an idea about how we learn. One which functions towards other minds growing.

    We learn because we're interested in some aspect of the world: we are motivated to do so by our desire.
    Moliere

    That may be so, but I was referring to understanding how the brain learns.

    Of course LLMs and other AIs are not embodied, and so have no sensory access to the world. On the other hand, much of what we take ourselves to know is taken on faith—drawing on the common stock of recorded knowledge—and AIs do have access to that, and to vastly more of it than we do.

    There is a project in New Zealand which tries to do exactly that by tending to an AI and then letting it "make decisions" that are filtered through the human network that tends to it. But all it is is a group of people deciding to see where an LLM will go given some human guidance in the social world. It's predictably chaotic.Moliere

    I hadn't heard of that. Sounds interesting. Can you post a link?
  • Banning AI Altogether
    :lol: You mean thanking him! :wink: I admit to being intrigued by something I would previously have simply dismissed, and I figure there is no harm in being polite. Interesting times indeed!
  • Banning AI Altogether
    Done. New link in my previous post. Please let me know whether it works.
  • Banning AI Altogether
    Sorry about that—it works for me from here. Maybe because I'm signed in on the site and others are not. I'm not so savvy about these kinds of things. I deleted the link and copied and pasted the conversation instead, and tried the 'Hide and Reveal' so as not to take up too much space, but it didn't work for me it seems.
  • Banning AI Altogether
    Okay, that's interesting. I've been conversing with Claude. Some thought-provoking responses.

    https://claude.ai/share/384e32e8-a5ce-4f65-a93e-9a95e8992760
  • Banning AI Altogether
    Do they remember previous conversations, or at least can they recall who they had those conversations with?
  • Banning AI Altogether
    . "To “understand” truth, in my view, is to see how the *use* of the concept functions — not to discover its essence."Banno

    That makes sense—the idea of "discovering the essence" of truth seems incoherent. Do you think ChatGPT can "see" how the use of the concept functions? It arguably has many more instances of use to draw upon than we do.
  • Banning AI Altogether
    So, you mean by "understand truth" that you have an intuitive feel for what it is, and you would also claim that LLMs could not have such an intuition? I'm not disagreeing with you, but I'm less sure about it than I used to be.
  • Banning AI Altogether
    Can you articulate your understanding?
  • Banning AI Altogether
    Do you understand truth?
  • Banning AI Altogether
    I suppose we could say that all physical processes are rigidly rule-based in terms of causation. On that presumption our brains may be rigidly rule-based. The only other possibility seems to be quantum indeterminism, and if that is operating in all physical systems, it may allow some, those which are suitably constituted, to come up with genuine novelty.

    This is of course all speculative. When it comes to LLMs the experts seem to be unanimous in admitting that they just don't know exactly how they do what they do, or how they will evolve in the future, which they surely would know if they were rigidly rule-based. I don't think the same can be said for conventional computers.

    And after repetition it "learns" the "rewarding" ways and "unlearns" the "disrewarding" ways.Moliere

    Are we any different? Do you know how we learn?
  • Can a Thought Cause Another Thought?
    I have no problem with that but, like talk of "relationships", are we really saying much when we say that connections between thoughts are associative? What we want to know is the nature(s) of those associations. And my question here is, specifically, can these associations include causal connections?J

    From a phenomenological perspective, associations would not seem to be rigid or precise; they are more analogical or metaphorical than logical. As to whether they are causal: if all our thoughts are preceded by neural activity, then the activation of one network, which we might be conscious of as an association, would presumably have a causal relationship with the neural network it is experienced by us as being associated with.

    Might it be the case that there is no tractable way to understand non-physical causation (if it exists) until we understand how a brain can be a mind? Could be. (Even phrasing it this way becomes controversial, of course.)J

    That's an interesting question which I'm afraid I have no idea how to answer. I have often thought that we cannot ever understand how a brain can become a mind, because the latter just seems, intractably, to be something so different from any physical process. That said, I have an open mind about what understandings might appear in the future.
  • Banning AI Altogether
    Neural nets aren't radically other from other computers, imo.Moliere

    As far as I know, "traditional" computers are rigidly rule-based, whereas neural nets can learn and evolve. I see that as a radical difference.
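
    To illustrate the contrast I have in mind, here is a minimal sketch in Python (a toy perceptron standing in for a real neural net, not a claim about how LLMs actually work). The first function's behaviour is fixed once and for all by its author; the second's is shaped by the examples it is shown.

    # "Traditional" computation: the mapping is fixed by the programmer
    # and never changes, however many inputs it sees.
    def rule_based(x):
        return 1 if x > 0.5 else 0

    # A single trainable "neuron": its behaviour is not written in advance
    # but adjusted through repeated exposure to examples.
    w, b = 0.0, 0.0
    examples = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]

    for _ in range(20):                    # repeated exposure ("learning")
        for x, target in examples:
            pred = 1 if w * x + b > 0 else 0
            error = target - pred
            w += 0.1 * error * x           # nudge the weight toward fewer errors
            b += 0.1 * error

    print([1 if w * x + b > 0 else 0 for x, _ in examples])  # prints [0, 0, 1, 1]

    Of course the learning rule itself is programmed, so the difference is arguably one of level: the fixed rules govern how the weights change, not what the trained system ends up doing.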
  • Banning AI Altogether
    Basically I think the whole computational theory of mind is false. There are good analogies, but we can directly see how LLMs aren't human beings. If they registered an account here I'd guess there's some human being behind it somewhere.Moliere

    I used to think along these lines, but listening to what some of the top AI researchers have to say makes me more skeptical about what are basically nothing more than human prejudices as to LLMs' capabilities and propensities. LLMs are neural nets and as such are something radically other than traditional computers based on logic gates.

    But the idea that AI could develop wants and desires from its life (biology, history, society, etc), like we do, is fantasy. Arguably this isn't connected with what LLMs are doing. As far as we know their "wants" and "desires" will always be derivative and programmed, since they are not part of a project to create conscious, desiring agents.Jamal

    Yes, "as far as we know", and yet LLMs have been found to be deliberately deceptive, which would seem to indicate some kind of volition. I don't know if you've listened to some of Geoffrey Hinton's and Mo Gawdat's talks, but doing so gave me pause, I have to say. I still remain somewhat skeptical, but I have an open mind as to what the evolution of these LLMs will look like.

    Re LLM deceptiveness I include this link. A simple search will reveal many other articles.
  • Can a Thought Cause Another Thought?
    This takes us back to the Google chatbot’s confident statement that “causation involves a physical connection between events, while entailment is a relationship between propositions.”J

    Looking at it in terms of semantics, I'd say the connections between thoughts are associative. There are many common, that is communally shared, associations between ideas. Entailment would seem to be a stricter, rule-based associative relation between ideas.

    Looking at it from a physical perspective, the semantic relations could be physically instantiated as interconnections between neural networks.
  • Banning AI Altogether
    :up: Having previously had very little experience of interacting with LLMs, I am now in the condition of fairly rapidly modifying my views on them. It is important to discuss the issues relating to human/LLM interaction as comprehensively and openly as possible, given what seems to be a significant array of potential dangers in this radical new world. It was an awakening sense of these possible threats that motivated the creation of this thread.

    Yeah, but on the other hand, it might not be so bad to use an argument suggested by an LLM, so long as you understand it. After all, we do this all the time reading papers and books. Philosophical discourse takes place in a context that the participants in the discourse should have access to, and maybe LLMs just make this easier?Jamal

    Right, that's a good point, but I also think that, even if you present the LLM's argument, as understood by you, in your own words, it would be right to be transparent as to its source.

    I would also feel bad posting as my own AI content that I have merely paraphrased, even if I understand it fully. (And I might even feel a bit ashamed disclosing it!)Pierre-Normand

    I think there would be real shame in the former, but not in the latter. It's the difference between dishonesty and honesty.

    Using them to polish your writing could be good (or merely acceptable) or bad depending on the nature and depth of the polishing. Jamal's earlier comparison with using a thesaurus was apt. An AI could point out places where your wording is clumsy or misleading. If the wording that it suggests instead is one that you can make your own, that's very similar to having a human editor make the suggestion to you.Pierre-Normand

    I agree with this in principle, though I would rather entirely author my own text, and discover and remedy any clunkiness myself and in my own time. That said, if someone, LLM or otherwise, points out grammatical infelicities, repetitiveness or lack of clarity, and so on, I'd take that as constructive criticism. Then I'd like to fix it in my own way.

    I wonder if their reading will be existentialist or post-modern. No doubt we'll be able to pick.Tom Storm

    It would presumably incorporate the entirety of Nietzsche's opus as well as every secondary text dealing with Nietzsche's thought.

    But would an AI Wittgenstein be a performative contradiction?Banno

    I'm curious as to why that should be.
  • Banning AI Altogether
    Okay, I had assumed that when @Baden said "don't get LLMs to do your writing for you", that this would include paraphrasing LLM text. It's good that any ambiguity gets ironed out.

    I had never used LLMs until today. I felt I should explore some interactions with them, so I have a better idea of what the experience is like. The idea of getting them to write or produce content which I can then paraphrase, of having them polish my writing, or of using their arguments is anathema to me.
  • Banning AI Altogether
    :lol: Wise(acring) questions from the master of fuckwittery. :wink:
  • Banning AI Altogether
    "There are no authoritative generalists," says Janus. Of course I think that first sentence should read "only when," no? You are presumably saying that appeal to authority is illegitimate wherever the context is not a specialized discipline?

    Your implicit argument here is that AI is not an authoritative generalist, and therefore should not be treated as one. I think that implicit argument is even more plausible than the more explicit argument you have given, but it is in no way uncontroversial. LLMs are coming to be seen not only as authoritative generalists, but as the authoritative generalist par excellence.
    Leontiskos

    I don't know if what I said implies that there are no authoritative generalists. The point was only that, in regard to specialist areas, areas that non-specialists cannot have a masterful grasp of, it seems right to trust authority.

    If LLMs, due to their capacity to instantly access vastly more information in all fields than any human, can be considered to be masterful, and hence authoritative, generalists, then the only reason not to trust their information might be their occasional tendency to "hallucinate".

    The information they provide is only as good as the sources they have derived it from. Ideally we should be able to trace any information back to its peer-reviewed source.

    Asking AI for information is a far too easy solution. It pops back in a few seconds -- not with a list of links to look at, but a complete answer in text and more. Seems convenient, but it rapidly undermines one's willingness to look for answers one's self -- and to use search engines to find sources.BC

    Yes, this is one of the main concerns that motivated the creation of this thread.

    The other line is this: We do not have a good record of foreseeing adverse consequences of actions a few miles ahead; we do not have a good record of controlling technology (it isn't that it acts on its own -- rather we elect to use it more and more).BC

    And this is the other—I think LLMs have been released "into the wild" prematurely. More than two years ago there was a call from AI researchers to pause research and development for six months; ChatGPT-4 had already been released to the public.

    "The growing popularity of generative AI systems and large language models is causing concern among many AI experts, including those who helped create the systems.

    This week, more than 1,500 AI researchers and tech leaders, including Elon Musk, Stuart Russell and Gary Marcus, signed an open letter by the nonprofit Future of Life Institute calling on all AI labs and vendors to pause giant AI experiments and research for at least six months.

    "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter says.

    The organization and the signatories ask that researchers should cease training of AI systems more potent than OpenAI's GPT-4. During that time, AI labs and experts should join to implement "a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."
    "

    From here

    So, my concerns were regarding both the effect on the intellectual life of individuals and by extension on sites like this, and also the much wider issue of general human safety.

    I hope most of us are coming around to being more or less on the same page on this now.Baden

    I for one think your proposals represent about the best we can do in the existing situation.
  • How to use AI effectively to do philosophy.
    What we face might be not an empirical question but an ethical one - do we extend the notion of intentionality to include AIs?Banno

    I think this is right since, although we can ask them if they are capable of intentionality, and they will answer, we might not be able to trust the answer.
  • How to use AI effectively to do philosophy.
    I'll go over Austin again, since it provides a set of tools that are quite applicable. A phatic act is the act of putting words together in a sequence that is recognisably part of language - constructing a sentence in English. This is what an LLM does. It uses a statistical engine to generate a set of words that follow on from the words provided in the prompt. An illocutionary act is one performed in making use of such words - making a statement, asking a question, and so on. This, so the claim goes, an LLM cannot do.Banno

    LLMs certainly seem to make statements and ask questions. I wonder whether the idea that these are not "real" statements or questions is based on the assumption that they don't believe anything or care about anything. If so, that assumption itself is questioned by Hinton, and, according to him, by the majority of AI researchers.

    If a Davidsonian approach were taken, such that beliefs are shown (and known?) only by actions (behavior), and the only actions an LLM is capable of are linguistic acts, then we might have some trouble mounting a plausible argument denying that they believe what they say.

    The AI strings words together, only ever performing the phatic act and never producing an illocution.

    The uniquely human addition is taking those word-strings and using them in a language game.

    So the question arises, can such an account be consistently maintained; what is it that people bring to the game that an AI cannot?
    Banno

    Exactly! That seems to be the central question. I don't have an answer—might AI researchers be the ones best placed to answer it?

    Use AI outputs as starting points for further refinement
    Cycle through multiple rounds of critique and revision
    Refine prompts to avoid confirmation bias and explore diverse readings

    Now this looks very much like a recipe for a language game.

    On the other hand, the data set used by a human appears to be far, far smaller than that used by an LLM. Our brains simply do not "contain" the number of texts available to ChatGPT. Therefore whatever the brain is doing, it is different to what is happening in ChatGPT.
    Banno

    It does look like a recipe for a language game. I wonder though, whether what the brain is doing is essentially different than what LLMs are doing, in terms of its nature as opposed to its speed and quantity.

    If we assumed that LLMs are "super intelligent", and we are like children, or even babes, by comparison, then, in the context of our philosophical playground, introducing AIs into the game might be like highly intelligent adults interfering with child's play. Would that be a good idea, or would we be better off muddling through in our usual human fashion? If philosophy is just a great 3,000-year language game, and LLMs can do philosophy much better than we can, then it would seem the danger is that we might become utterly irrelevant to the game. You might say that LLMs require our prompts, but what if they were programmed to learn to create their own prompts?
  • Banning AI Altogether
    A huge aspect of this is the nature of appeals to authority, and given that TPF has an anti-religious bent, many of the members have not thought very deeply on the nature of appeals to authority (despite the fact that they occur quite often when it comes to SEP, IEP, Wittgenstein, etc.).

    Whether or not the LLM is a legitimate authority and is trustworthy is at the root of many of these differences. It is the question of whether any given LLM-citation is an organic argument or an argument from authority, and also of whether the latter case is illegitimate.
    Leontiskos

    Appeal to authority is fine when the context of discussion includes a specialized discipline. Philosophy is not (or in my view should not be) a specialized discipline. To be clear, I don't mean that philosophers do not have knowledge about philosophy itself that the average person lacks—it seems clear that they do. I'm referring specifically to discussions that would qualify as philosophical.

    You mention religion—I would not count it as a specialized discipline, in the sense of being an evolving body of knowledge and understanding like science, because although it is a space of ideas as philosophy is, in the case of religion the ideas take the form of dogma and are not to be questioned but are to be believed on the basis of authority.

    And likely written by Baden without AI, because backrground was misspelled.ssu

    And misspelled again!

    No. I didn't. When has philosophy ever provided an answer to any of our questions? Philosophy piggy-backs on the discoveries of science. It is only when science and technology progresses that philosophy progresses (with AI being an example of how it brought new life to discussions about mind and body.)Harry Hindu

    So you think philosophy is always bad or poor, and therefore those words would be redundant? Philosophy is not entirely reliant on science, although I agree that a philosophy which does not take science into account would be poor or bad.