Comments

  • Can a Thought Cause Another Thought?
    This takes us back to the Google chatbot’s confident statement that “causation involves a physical connection between events, while entailment is a relationship between propositions.”

    Looking at it in terms of semantics, I'd say the connections between thoughts are associative. There are many common, that is, communally shared, associations between ideas. Entailment would seem to be a stricter, rule-based associative relation between ideas.

    Looking at it from a physical perspective, the semantic relations could be physically instantiated as interconnections between neural networks.
  • Banning AI Altogether
    :up: Having previously had very little experience of interacting with LLMs, I now find myself fairly rapidly modifying my views on them. It is important to discuss the issues relating to human/LLM interaction as comprehensively and openly as possible, given what seems to be a significant array of potential dangers in this radical new world. It was an awakening sense of these possible threats that motivated the creation of this thread.

    Yeah, but on the other hand, it might not be so bad to use an argument suggested by an LLM, so long as you understand it. After all, we do this all the time reading papers and books. Philosophical discourse takes place in a context that the participants in the discourse should have access to, and maybe LLMs just make this easier?Jamal

    Right, that's a good point, but I also think that, even if you present the LLM's argument, as understood by you, in your own words, it would be right to be transparent about its source.

    I would also feel bad posting as my own AI content that I have merely paraphrased, even if I understand it fully. (And I might even feel a bit ashamed disclosing it!)Pierre-Normand

    I think there would be real shame in the former, but not in the latter. It's the difference between dishonesty and honesty.

    Using them to polish your writing could be good (or merely acceptable) or bad depending on the nature and depth of the polishing. Jamal's earlier comparison with using a thesaurus was apt. An AI could point out places where your wording is clumsy or misleading. If the wording that it suggests instead is one that you can make your own, that's very similar to having a human editor make the suggestion to you.Pierre-Normand

    I agree with this in principle, though I would rather entirely author my own text, and discover and remedy any clunkiness myself and in my own time. That said, if someone, LLM or otherwise, points out grammatical infelicities, repetitiveness or lack of clarity, and so on, I'd take that as constructive criticism. Then I'd like to fix it in my own way.

    I wonder if their reading will be existentialist or post-modern. No doubt we'll be able to pick.Tom Storm

    It would presumably incorporate the entirety of Nietzsche's opus as well as every secondary text dealing with Nietzsche's thought.

    But would an AI Wittgenstein be a performative contradiction?Banno

    I'm curious as to why that should be.
  • Banning AI Altogether
    Okay, I had assumed that when @Baden said "don't get LLMs to do your writing for you", that this would include paraphrasing LLM text. It's good that any ambiguity gets ironed out.

    I have never used LLMs until today. I felt I should explore some interactions with them, so I have a better idea of what the experience is like. The idea of getting them to write or produce content which I can then paraphrase, of having them polish my writing, or of using their arguments is anathema to me.
  • Banning AI Altogether
    :lol: Wise(acring) questions from the master of fuckwittery. :wink:
  • Banning AI Altogether
    "There are no authoritative generalists," says Janus. Of course I think that first sentence should read "only when," no? You are presumably saying that appeal to authority is illegitimate wherever the context is not a specialized discipline?

    Your implicit argument here is that AI is not an authoritative generalist, and therefore should not be treated as one. I think that implicit argument is even more plausible than the more explicit argument you have given, but it is in no way uncontroversial. LLMs are coming to be seen not only as authoritative generalists, but as the authoritative generalist par excellence.
    Leontiskos

    I don't know if what I said implies that there are no authoritative generalists. The point was only that, in regard to specialist areas, areas that non-specialists cannot have a masterful grasp of, it seems right to trust authority.

    If LLMs, due to their capacity to instantly access vastly more information in all fields than any human, can be considered to be masterful, and hence authoritative, generalists, then the only reason not to trust their information might be their occasional tendency to "hallucinate".

    The information they provide is only as good as the sources they have derived it from. Ideally we should be able to trace any information back to its peer-reviewed source.

    Asking AI for information is a far too easy solution. It pops back in a few seconds -- not with a list of links to look at, but a complete answer in text and more. Seems convenient, but it rapidly undermines one's willingness to look for answers one's self -- and to use search engines to find sources.BC

    Yes this is one of the main concerns that motivated the creation of this thread.

    The other line is this: We do not have a good record of foreseeing adverse consequences of actions a few miles ahead; we do not have a good record of controlling technology (it isn't that it acts on its own -- rather we elect to use it more and more).BC

    And this is the other—I think LLMs have been released "into the wild" prematurely. More than two years ago there was a call from AI researchers to pause research and development for six months. ChatGPT-4 had already been released to the public.

    "The growing popularity of generative AI systems and large language models is causing concern among many AI experts, including those who helped create the systems.

    This week, more than 1,500 AI researchers and tech leaders, including Elon Musk, Stuart Russell and Gary Marcus, signed an open letter by the nonprofit Future of Life Institute calling on all AI labs and vendors to pause giant AI experiments and research for at least six months.

    "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter says.

    The organization and the signatories ask that researchers should cease training of AI systems more potent than OpenAI's GPT-4. During that time, AI labs and experts should join to implement "a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."
    "

    From here

    So, my concerns were regarding both the effect on the intellectual life of individuals and by extension on sites like this, and also the much wider issue of general human safety.

    I hope most of us are coming around to being more or less on the same page on this now.Baden

    I for one think your proposals represent about the best we can do in the existing situation.
  • How to use AI effectively to do philosophy.
    What we face might be not an empirical question but an ethical one - do we extend the notion of intentionality to include AIs?Banno

    I think this is right since, although we can ask them if they are capable of intentionality, and they will answer, we might not be able to trust the answer.
  • How to use AI effectively to do philosophy.
    I'll go over Austin again, since it provides a set of tools that are quite applicable. A phatic act is the act of putting words together in a sequence that is recognisably part of language - constructing a sentence in English. This is what an LLM does. It uses a statistical engine to generate a set of words that follow on from the words provided in the prompt. An illocutionary act is one performed in making use of such words - making a statement, asking a question, and so on. This, so the claim goes, an LLM cannot do.Banno

    LLMs certainly seem to make statements and ask questions. I wonder whether the idea that these are not "real" statements or questions is based on the assumption that they don't believe anything or care about anything. If so, that assumption is itself questioned by Hinton, and, according to him, by the majority of AI researchers.

    If a Davidsonian approach were taken, such that beliefs are shown (and known?) only by actions (behavior), and the only actions an LLM is capable of are linguistic acts, then we might have some trouble mounting a plausible argument denying that they believe what they say.

    The AI strings words together, only ever performing the phatic act and never producing an illocution.

    The uniquely human addition is taking those word-strings and using them in a language game.

    So the question arises: can such an account be consistently maintained; what is it that people bring to the game that an AI cannot?
    Banno

    Exactly! That seems to be the central question. I don't have an answer—would it be that AI researchers are the ones best placed to answer it?

    Use AI outputs as starting points for further refinement
    Cycle through multiple rounds of critique and revision
    Refine prompts to avoid confirmation bias and explore diverse readings

    Now this looks very much like a recipe for a language game.

    On the other hand, the data set used by a human appears to be far, far smaller than that used by an LLM. Our brains simply do not "contain" the number of texts available to ChatGPT. Therefore whatever the brain is doing, it is different to what is happening in ChatGPT.
    Banno

    It does look like a recipe for a language game. I wonder though, whether what the brain is doing is essentially different than what LLMs are doing, in terms of its nature as opposed to its speed and quantity.

    If we assumed that LLMs are "super intelligent", and that we are like children, or even babes, by comparison, then, in the context of our philosophical playground, introducing AIs into the game might be like highly intelligent adults interfering with child's play. Would that be a good idea, or would we be better off muddling through in our usual human fashion? If philosophy is just a great 3,000-year language game, and LLMs can do philosophy much better than we can, the danger would seem to be that we might become utterly irrelevant to the game. You might say that LLMs require our prompts, but what if they were programmed to learn to create their own prompts?
  • Banning AI Altogether
    A huge aspect of this is the nature of appeals to authority, and given that TPF has an anti-religious bent, many of the members have not thought very deeply on the nature of appeals to authority (despite the fact that they occur quite often when it comes to SEP, IEP, Wittgenstein, etc.).

    Whether or not the LLM is a legitimate authority and is trustworthy is at the root of many of these differences. It is the question of whether any given LLM-citation is an organic argument or an argument from authority, and also of whether the latter case is illegitimate.
    Leontiskos

    Appeal to authority is fine when the context of discussion includes a specialized discipline. Philosophy is not (or in my view should not be) a specialized discipline. To make that clear: I don't mean that philosophers do not have knowledge about philosophy itself that the average person lacks—it seems clear that they do. I'm referring specifically to discussions that would qualify as philosophical.

    You mention religion—I would not count it as a specialized discipline, in the sense of an evolving body of knowledge and understanding like science, because, although it is a space of ideas as philosophy is, in religion the ideas take the form of dogma: they are not to be questioned but believed on the basis of authority.

    And likely written by Baden without AI, because backrground was misspelled.ssu

    And misspelled again!

    No. I didn't. When has philosophy ever provided an answer to any of our questions? Philosophy piggy-backs on the discoveries of science. It is only when science and technology progresses that philosophy progresses (with AI being an example of how it brought new life to discussions about mind and body.)Harry Hindu

    So you think philosophy is always bad or poor, and therefore those words would be redundant? Philosophy is not entirely reliant on science, although I agree that a philosophy which does not take science into account would be poor or bad.
  • Banning AI Altogether
    If I wanted to hold someone accountable for misappropriating an AI explanation, I would simply put it into the search engine, the same way the person posting from AI would get the information. It is a whole lot easier than searching books for a quote.Athena

    That might work for a quote from a published human author, but I don't see how it would with quotes from a unique, one-off interaction with an AI.

    I'm not seeing the connection between what you say in this post and what it is purporting to respond to. Perhaps you could explain?

    I don't necessarily mind if others post a quote as an argument.Harry Hindu

    I don't mind either, provided they are transparent about it being a quote and not their own words, and also provided what is quoted is actually an argument and not merely bare assertion, seemingly cited as the voice of authority.

    It's quite pointless to discuss the ethics of using AIs, because people will use them, just like they use drugs, and once it starts, it is impossible to rein it in. But what one can do is rethink whether one really wants to spend one's hard earned time with people who use AIs, or drugs, for that matter.baker

    Discussion of ethics re AI use (or anything else) seems reasonable in a context consisting of people who might be interested in such ethical arguments. Of course you are right that many don't care, and that now that it has been irresponsibly rolled out it is, effectively, a juggernaut.

    Maybe we use books, dictionaries, philosophical papers, editors, and scientific discoveries to make us look smarter than we are. You see this all the time in forums, even without AI, so it's nothing new. Besides do you really care about the psychology of someone who's writing about what they think?Sam26

    I don't respect people who use anything to make themselves look smarter than they are, because that is a kind of deception and posing. I also don't mind people using anything at all to make themselves become smarter. That's just my own view, of course, and I don't expect you to share it.

    Seems like philosophy itself could be labeled as mental masturbation.Harry Hindu

    You left out the words "bad" or "poor".

    Dood, the content from human beings trained in pseudo-science and other nonsense seen on this forum is available everyday for you to read, without any AI. If anything, posters should run their ideas through AI before wasting time posting their zany ideas to humans. which would eliminate wasting time reading nonsensical posts.Harry Hindu

    "Dood"? If you are going to use AI you should at least use it for spellcheck. I don't think running "zany ideas" through sycophantic AI will help much. I suppose the zany idea proponents could do what @Banno did and tell the AI it was written by someone else—but then that would not seem to be a likely motivation for a zany idea seller.

    I can't imagine how bad things are going to get in the coming years with how quickly it has already gotten to this state. Maybe it will be like some other rapid rise cultural phenomenons where it will reach saturation point fast and peter out and get pushback/revulsion before long. The bubble effect.unimportant

    I don't think the AI phenomenon is going to "peter out" unless there is some kind of global catastrophe—but only time will tell.
  • How to use AI effectively to do philosophy.
    There are those, Hinton being one of them, who claim that the lesson to be learned from the LLMs is that we are also just "arranging words as if it were saying something", that is that we don't have subjective experience any more than they do.
    — Janus
    Something has gone astray here, in that if this were so, it's not just that we have never said anything, but that the very notion of saying something could not be made coherent.
    Banno

    I don't think Hinton is saying that nothing can be said, whether by us or by LLMs. His point, rather, is this: we balk at attributing subjective experience to LLMs on the grounds that they merely predict the statistically likely next words so as to maximize consistency and cohesiveness, yet we arguably do exactly the same thing, in a much slower and looser way. For Hinton this shows that our notion of subjective experience, as an inner theatre stocked with qualia and ideas which pre-exist and are only later put into words, is an illusion.

    It is that inner theatre which we imagine we have and which we cannot imagine them having that is our idea of subjective consciousness.

    In other words, perhaps we imagine that we create the language games, when in reality we are always already immersed in evolving language games, always playing the game of what to say by selecting, from what is available to recall, whatever a (for us) loose process of 'weighing' judges most appropriate.
  • Banning AI Altogether
    Don't mistake the speculative misuse of ideas for the ideas themselves. AI is no longer in the realm of “mental masturbation,” it’s already reshaping science, mathematics, and even philosophy by generating proofs, modeling complex systems, and revealing previously inaccessible patterns of thought. To dismiss that as delusory is to confuse ignorance of a subject with the absence of rigor within it.Sam26

    You are misunderstanding. My comments re "mental masturbation" were specifically targeting text like the response ChatGPT made to @Number2018. I think the use of AIs in science and math is fine. In my view those are just the kinds of disciplines AIs should be trained on. Of course they have to be trained on basic pattern recognition initially. I don't know, and would need to look into, what they were specifically trained on before being released "into the wild". Now that they are out there, they are being trained on whatever content is to be found in their casual interactions with people.

    The irony is that the very kind of “rigorous analysis” you claim to prize is being accelerated by AI. The most forward-looking thinkers are not treating it as a toy but as a new instrument of inquiry, a tool that extends human reasoning rather than replacing it. Those who ignore this development are not guarding intellectual integrity; they’re opting out of the next phase of it.Sam26

    Can you name a few of those "forward-looking thinkers"? As I said in the OP, my main objections are that it was irresponsibly released before being properly understood, and that it is being used without acknowledgement to make posters on these forums look smarter than they are. It will also have a horrendous environmental impact. But I accept that its continued use and evolution is now inevitable, and, unfortunately, unpredictable. It is a case of playing with fire.

    Out of time now, I'll try to respond when I have more time.
  • How to use AI effectively to do philosophy.
    All a bit convolute. The idea is that the AI isn't really saying anything, but is arranging words as if it were saying something.Banno

    There are those, Hinton being one of them, who claim that the lesson to be learned from the LLMs is that we are also just "arranging words as if it were saying something", that is that we don't have subjective experience any more than they do. I remain skeptical, but I entertain the possibility that there might be something in that.
  • Banning AI Altogether
    But can even humans claim that? Let’s rehash the forum’s most hardy perennial one more time. :up:apokrisis

    "Real world"—that was perhaps a less than ideal choice of words—I intended to refer to the world as being what affects us pre-cognitively via the senses and is pre-cognitively modeled by the body/brain, as well as what shows up for us as "the world of common experience".
  • Banning AI Altogether
    That could be a hugely amplifying tool.apokrisis

    I guess it could be an exciting prospect for some folk.
  • Banning AI Altogether
    Are you saying that with PoMo philosophy, AI might have hit its particular sweet spot. :grin:apokrisis

    Well the LLMs have no experience of the real world do they?

    So, it is not a digital copy of existing books, but may become a situated co-production of knowledge.Number2018

    To what end? The production of more and more fancy looking word salad?
  • Banning AI Altogether
    I see the point that more brilliant minds might find novel theses in AI-generated texts. At its best you might end up with a Derrida or a Heidegger, but for me the thinking of such writers as Heidegger and Derrida is little more than highbrow "pouring from the empty into the void", and to me that is how the AI's responses to @Number2018 read. I've come to see anything that is not based on rigorous analysis or scientific understanding as intellectual wankery—mental masturbation—and I have no problem with people enjoying that, but the idea that it is of any real significance is, for me, merely delusory.

    Fiction and poetry (at their best) are for me better, richer, mediums for such flights of the intellectual imagination—I need the aesthetic element to be there as well.
  • How to use AI effectively to do philosophy.
    What we might do is to consider the strings of words the AI produces as if they were produced by an interlocutor. Given that pretence, we can pay some attention to the arguments they sometimes encode...Banno

    Geoffrey Hinton believes AIs are capable of reasoning, not yet as well as humans (although I wonder which humans he is referring to). I guess if they are capable of reasoning then they can be interlocutors, and ChatGPT is simply bullshitting in saying AIs cannot be interlocutors.

    If they are not capable of reasoning then all they are doing is presenting examples of human reasoning, albeit synthesized in novel ways and in their own words.
  • How to use AI effectively to do philosophy.
    Looks like they are bigger bullshit artists than we are, although certainly much more transparent.

    I don't mind at all you creating another thread on AI. The more we think about and discuss the issues with AI the better in my view.

    My view on using them is softening a little. Since their continued completely irresponsible rollout is inevitable, the stance that advocates not using them at all because you would be supporting the recklessness seems pointless—like pissing to put out a forest fire.

    It does irk me that people present stuff essentially written by AI as their own work, although I acknowledge that from the point of view of assessing the quality, relevance and soundness of the work itself, my response is irrelevant.
  • How to use AI effectively to do philosophy.
    The quagmire just expanded to infinity. Don't ask me what I mean—an AI said it...
  • Banning AI Altogether
    Did you find something useful in it?
  • Banning AI Altogether
    So if one did not write the post themselves, but merely copied and pasted a quote as the sole content of their post, then by your own words, it is not their post.Harry Hindu

    Well, yes, such quotes are no substitute for argument, and obviously they do not belong to the one who quotes them. It is all the more objectionable if the person presents the quoted passage as their own work. It's easy enough to find them out if the quote is from a prominent philosopher, whether alive or dead. Not so with copied and pasted AI-generated text.

    That's a poor analogy. It's obvious when people are wearing makeup or wearing clothes that enhance their appearances. Property rights might be one reason to object to plagiarism—there are others. Pretending to be something you are not is one.
    — Janus

    Poppycock, the only objection to plagiarizing that I remember is the posts objecting to someone trying to make us think s/he knows more than s/he does know.
    Athena

    :roll:
  • Banning AI Altogether
    So if one did not write the post themselves, but merely copied and pasted a quote as the sole content of their post, then by your own words, it is not their post.Harry Hindu

    Yes...so what? What do you think you are disagreeing with here?

    But may I humbly suggest to you that what resulted was rather more like an internal dialogue of you with yourself, than a dialogue with another philosopher. Which slots right into the discussion itself as a significant fact.unenlightened

    Spot on as spotted!
  • Banning AI Altogether
    Thanks for your generosity.
  • Banning AI Altogether
    Fascination is also in the eye of the beholder. So equally, you are the only one who cares how fascinated you are. What I meant was that I'd be more fascinated if the fascinating post was created by a human.

    For me, in the context of philosophy, a fascinating post would be one that embodied a creative, complex and coherent view of things. I can't think of any other kind of post that would fascinate me.
  • Banning AI Altogether
    Yes, and then we get all the hopeful, "pie in the sky " talk about it all being worth it since AIs will, on account of their super-intelligence, be able to solve the 'global warming' conundrum.

    As if cryptocurrency wasn't bad enough! Talk about "snake oil"!
  • Banning AI Altogether
    The key element in that scenario is that there is no interlocutor to engage with if you attempt a response. Light's on, nobody home.Paine

    I have seen interactions with LLMs on this site that certainly make it look like the lights are on—whether it looks like anyone is home, I don't know. I've watched interviews with tech experts like Mo Gawdat and Geoffrey Hinton, who believe that LLMs are not only intelligent, but capable of rationality and even self-awareness. Do they have an idea of themselves? A self-narrative? I can't answer that.

    I agree with you about sophism—however, LLMs are generally not, it seems, argumentative for its own sake; they are more obsequiously agreeable, presumably in order to hold our interest and influence us via flattery.

    So, there is the smaller problem that we don't know whether we are discussing stuff with a human or a bot—but the larger problem I see is that LLMs should never have been released into the wild so recklessly. One might say not recklessly, but with the primary motive of profit, but to my way of thinking that is itself reckless. They collect marketable data about you and me with every interaction, and that data will benefit only, or at least mostly, the tech moguls.

    A new peddler has indeed come into town with a new set of masks, selling not merely the masks, but snake oil.
  • Banning AI Altogether
    Imagine I could offer you a prototype chatbot small talk generator. Slip on these teleprompter glasses. Add AI to your conversational skills. Become the life of the party, the wittiest and silkiest version of yourself, the sweet talker that wins every girl. Never be afraid of social interaction again. Comes with free pair of heel lift shoes.apokrisis

    :rofl: :cry: That'd be a hilarious scenario, if it wasn't so sad.
  • Banning AI Altogether
    So filling PF with more nonsense might be a friction that drags the almighty LLM down into the same pit of confusion.apokrisis

    :lol: Right, they're only as good as what they are trained on, so I thought they were already down in that pit. Some commentators predict that, as AI content on the Net becomes predominant, and AIs inform, feed off and train each other with material increasingly their own, we will be confronted with an alien intelligence orders of magnitude smarter than we are. I think for that they will need to be embodied and endowed with analogues of biological sensory systems. I remain skeptical. You gotta laugh, I guess.
  • Banning AI Altogether
    I think TPF should continue what it's doing, which is put some guardrails on ai use, but not ban it.RogueAI

    I think banning would not be totally effective, but more effective than mere discouragement or partial allowance.

    The real world problem is that the AI bubble is debt driven hype that has already become too big to fail. Its development has to be recklessly pursued as otherwise we are in the world of hurt that is the next post-bubble bailout.

    Once again, capitalise the rewards and socialise the risks. The last bubble was mortgages. This one is tech.

    So you might as well use AI. You’ve already paid for it well in advance.
    apokrisis

    That's true I suppose. It's a bit like the global warming problem and the Prisoner's Dilemma—we can be almost certain that nothing significant will be done about it because that would be too inconvenient and bad for business. So, why should I not own a car, not use air-conditioning, stop flying overseas and so on, when almost everyone else will continue doing these things? My righteous efforts would amount to squat.

    That may be a good reason for you not to use AI, but it’s not a good reason to ban it from the forum.T Clark

    If it really is a good reason for me not to use AI, then surely it is a good reason for everyone not to use it, and thus a good reason to ban it everywhere. Of course I know it will not be done. Another concern is the erosion of human creativity.

    Maybe. If someone uses AI to create a fascinating post, could you engage with it?frank

    Sure, why not? I would be more impressed if someone created a fascinating post by themselves, though.


    Impractical. But, how about, its use should be discouraged altogether?

    I mean, its use in composition or editing of English text in a post.
    bongo fury

    As I said above I think banning would not be totally effective, but more effective than discouragement.

    Then you must also believe that using a long-dead philosopher's quote as the crux of your argument, or as the whole of your post, is also an issue.Harry Hindu

    It's not the case that I must think that at all. That said, I do generally refrain from quoting philosophers, whether dead or alive. I mostly prefer to discuss things using my own ideas and in my own words. I realize that my own ideas are mostly not original, but I have no idea what sources most of them came from, so I could not acknowledge the originators even if I wanted to. Add to that the fact that whatever originator we identify probably cannot claim true originality for their own ideas.

    So what? People also use makeup to look better. Who is being hurt?

    The reason for objecting to plagiarism is a matter of property rights.

    What is best for acquiring and spreading good information?
    Athena

    That's a poor analogy. It's obvious when people are wearing makeup or wearing clothes that enhance their appearances. Property rights might be one reason to object to plagiarism—there are others. Pretending to be something you are not is one.

    "What is best for acquiring and spreading good information?" First tell me what "good information" is.

    You can still submit your post as "s" to ChatGPT and ask it to expand on it.Pierre-Normand

    I did that and this post is the result. :razz:

    Ctrl+ZHarry Hindu


    Thanks...good tip.
  • Banning AI Altogether
    I spent the last hour composing a post responding to all my mentions, and had it nearly finished only to have it disappear leaving only the single letter "s" when I hit some key. I don't have the will to start over now, so I'll come back to it later.
  • Hume and legitimate beliefs
    So of course there are no 'well-documented occurrences of exceptions to nature's "laws"', as you say... because when they happen, it's good scientific practice to change the laws so as to make the exception disappear.Banno

    I'm thinking of laws as being descriptions of observed regularities. Then there are theories which purport to explain the ways in which those regularities function and their relation to other regularities. So we don't have well-documented cases of the most general natural invariances failing to obtain.

    You seem to be talking about the theory side. So, for example, we had the Newtonian understanding of gravity and then the Einsteinian understanding, but the observed effects of gravity didn't change or ever fail to obtain as far as we know.

    So are we to say that "the laws of nature are not merely codifications of natural invariances and their attributes, but are the invariances themselves", while also saying that we can change them to fit the evidence? How's that going to work? We change the very invariances of the universe to match the evidence?Banno

    So, in line with what I wrote above, you seem to be talking about what I'm not talking about. We can have two meanings of "the laws of nature"—one sees them as being conceptual codifications of the observed invariances and the other sees them as the invariances themselves. One doesn't have to be right and the other wrong—they are merely two different ways of thinking about it.

    Or is it just that what we say about stuff that happens is different to the stuff that happens, and it's better if we try to match what we say to what happens?Banno

    Right, what we say about things is not the things themselves, and we should try to match what we say with what happens.

    Indeed. And if laws are constraints, then the regularities can be statistical. Exceptions get to prove the general rule.apokrisis

    The regularities seem rigid on the macro-scale, and that rigidity is the macro-manifestation of the statistical averages operating on the micro-scale? That seems to make sense.

    We want to avoid arriving at some transcendent power that lays down arbitrary rules. Instead we want laws to emerge in terms of being the constraints that cannot help but become the case even when facing the most lawless situations.apokrisis

    Right, I'm more sympathetic to the idea that nature's regularities have evolved like habits than that they are given as eternal verities by some imagined lawgiver.
  • Hume and legitimate beliefs
    Isn't that simply because when we find such exceptions, we change the laws?Banno

    Perhaps that happens sometimes. If there were no regularities, there would be no laws. It doesn't follow from the fact that there are laws that our understanding of them is perfect. Anyway, what I have in mind are the most general regularities: fire burns, water flows downhill absent intervention, the Sun rises, organisms grow old and die, wind and water cause erosion, animals need oxygen and water and food to survive, the air is thinner at high altitudes, most objects cannot float in the air and the reasons why those which can do so are easily understandable, and so on. There are countless examples.
  • Banning AI Altogether
    I guess I’m naïve or maybe just not very perceptive, but I haven’t recognized any posts definitely written by AI.T Clark

    I can't say I know they were written by AI, merely that I have suspected it. The main reason I would discourage its use is that the rapid development of AI is dangerous, given the unpredictability of the ways in which it will evolve; that development is driven by profit and fueled mainly by consumer use. The best way to slow the development down, and hopefully make it much safer, would be for consumers to abstain from using it. I never have and never will knowingly use it. I see it as a very dangerous case of playing with fire.

    Interesting, I haven’t noticed particularly. But I avoid reading lengthy and didactic posts which are often poorly written. The AI stuff I’ve seen often seems peculiarly worded and difficult to read.Tom Storm

    I suspect AI use when I see a sudden vast improvement in writing clarity and structure and apparent erudition.

    But you're proposing something and instead of telling us why it's a good proposal you're saying "if you want reasons, go and find out yourself." This is not persuasive.Jamal

    That's a fair criticism, I guess. I don't really have the time to spare to take notes on lectures and produce a really comprehensive summary of the potential problems. It is very easy for anyone to find out for themselves if they are interested. I'll try to make the effort as soon as possible. (Maybe, in the interests of performative contradiction, I should ask an AI to produce a summary for me.)

    And it isn't clear precisely what you are proposing. What does it mean to ban the use of LLMs? If you mean the use of them to generate the content of your posts, that's already banned — although it's not always possible to detect LLM-generated text, and it will become increasingly impossible. If you mean using them to research or proof-read your posts, that's impossible to ban, not to mention misguided.Jamal

    It is obviously not practicable to enforce a complete ban. We would be, as we are now with a limited ban, actually relying on people's honesty. If by "proof-read" you only mean checking for spelling and grammatical errors, then no problem. That said, we already have spellcheckers for that. Asking AI to rewrite material would seem to be a different matter. It seems obvious to me that AIs pose a great threat to human creativity.

    I've been able to detect some of them because I know what ChatGPT's default style looks like (annoyingly, it uses a lot of em dashes, like I do myself). But it's trivially easy to make an LLM's generated output undetectable, by asking it to alter its style. So although I still want to enforce the ban on LLM-generated text, a lot of it will slip under the radar.Jamal

    I use a lot of em dashes myself, and I've never noticed it with AI-generated text. I agree that much will slip under the radar, but on the other hand I like to think that a majority of posters value honesty.

    It cannot be avoided, and it has great potential both for benefit and for harm. We need to reduce the harm by discussing and formulating good practice (and then producing a dedicated guide to the use of AI in the Help section).Jamal

    The problem I see is that if everyone uses AI, its development will be profit-driven, and it will thus not be judiciously developed.

    The source of one's post is irrelevant. All that matters is whether it is logically sound or not.Harry Hindu

    I don't agree. "One's post"? If one is not the source of the post, then it is not one's post.

    I see this from time to time. One I'm thinking of tries to baffle with bullshit. Best to walk away, right?frank

    Sure, but walking away does not solve, or even ameliorate, the problem.

    I think the crux is that whenever a new technology arises we just throw up our hands and give in. "It's inevitable - there's no point resisting!" This means that each small opportunity where resistance is possible is dismissed, and most every opportunity for resistance is small. But I have to give TPF its due. It has resisted by adding a rule against AI. It is not dismissing all of the small opportunities. Still, the temptation to give ourselves a pass when it comes to regulating these technologies is difficult to resist.Leontiskos

    We perhaps don't often agree, but it seems we do on this one.

    Anyway, there is an 8 hour power outage where I live, and I am running the generator, so I'll have to leave it for now.
  • Beyond the Pale
    The problem is that you don't think you are required to give a falsifiable reason for why the claim fails to demonstrate the presence of X.Leontiskos

    Give me an example of a racist claim that does demonstrate X (X being clear evidence, or even a compelling argument, that some race is, tout court, inferior to some other) if you think there are any.

    If someone gives an argument purporting to demonstrate that some race is inferior, I will give reasons for rejecting it if I assess that it fails to demonstrate what it claims to. I haven't even come across any argument which is not of the form "this race is, according to IQ tests, generally less intelligent than that race". Intelligence seems to be the one ubiquitous criterion in these kinds of arguments. Firstly, even if it were true that some race was IQ-inferior, that doesn't make them tout court inferior, just IQ-inferior. Do you think those people who have the highest IQs are necessarily the best people? Do you think IQ is even an adequate measure of intelligence? What about creativity, emotional intelligence or memory? What about the ability for sustained attention?

    Do you know of arguments that take any other form? How would you go about demonstrating general inferiority, as opposed to, say, inferiority in sport, academic achievement or some such, all of which could in any case be down to standards of training, funding and so on?

    Are we to assume that you think some races are all-in-all inferior? If so, why not present your argument for our perusal. If not, then why go on about it?
  • Self-Help and the Deflation of Philosophy
    If you look at traditional accounts of "enlightenment", "enlightenment" is not something one would normally desire, ever, because for all practical intents and purposes, "enlightenment" is a case of self-annihilation, self-abolishment.baker

    That's one interpretation.

    While it is said that if a lay person does attain "enlightenment", they have to ordain as a monastic within a few days or they die (!!), because an enlightened person is not able to live in this world, as they lack the drive and the ability to make a living.baker

    It depends on what is meant by "enlightenment". Ramana Maharshi reportedly became spontaneously enlightened as a schoolboy, then left home on a train to Arunachala (a sacred mountain, if I recall correctly), where he was found starving and covered with ants. People then fed him and treated him as a sage.

    The Shivapuri Baba is a very different case. He had heaps of motivation; after leaving his family and becoming enlightened in the forest, he reportedly walked from India to England.

    Han Shan lived by himself on Cold Mountain, and survived just fine for many years.

    I think there is much mythology in all this.

    Why call something "Buddhist" when it has nothing to do with Buddhism?baker

    What gets called "Buddhist" that has nothing to do with Buddhism? Do you fancy yourself to know what the essence of Buddhism is?

    Is the most important thing we can do in this life to deny its value in favour of an afterlife, an afterlife which can never be known to be more than a conjecture at best, and a fantasy at worst? There seems to be a certain snobbishness, a certain classism, at play in these kinds of attitudes.
    This sounds rather victim-ish.
    baker

    What are you talking about—why "victim-ish"? It seems more likely that you are projecting your own victimhood.

    One problem with that is that the watered down versions are being promoted as the real thing, and can eventually even replace it.
    — baker

    What you say assumes what is at issue—that there really is a "real thing" to be found.
    — Janus
    I said more later in the post you quoted.
    baker

    I couldn't find the "more" you said you said.

    In Buddhism, there is the theme that we are now living in an age in which the Dharma ends:baker

    So what? Who's to say that's true?
  • Do you think AI is going to be our downfall?
    Although we already live in a mediocre time regarding art, AI would be the last nail of our coffin. But it is not too late—we can stop it and believe in ourselves again.javi2541997

    I agree with what you write there except for the above. I don't think we live in a "mediocre time" regarding art, and I do believe it is probably too late to stop the AI juggernaut. As I said above, I refuse to use AI for either research or writing. It is only a juggernaut because people will not refrain from using it; the temptation to save time and/or make oneself look better and smarter is too great. I don't think people appreciate the possible dangers, which are far greater than the devaluing of human creativity.

    From my perspective, the biggest dangers from AI are the abilities to create new ways of killing people.EricH

    This is just one of the very serious possibilities. A lay biochemist with AI help might be able to create a lethal new virus, for example. It is not a matter of fearmongering—we should all be very afraid. The solution is simple—stop using AI, and the financial incentive to develop it will evaporate. The military incentive will unfortunately remain. I hold little hope that people will wake up and stop using it, in any case.

    I think what it comes down to is that it depends on how it's used. This is where it gets interesting.Jamal

    Nope. It just shouldn't be used, because it is evolving much faster than our ability to understand it and predict where its evolution will lead. For the first time we are confronted with the question of how to deal with an intelligence far greater than our own. I don't think it's going to end well.
  • Thoughts on Epistemology
    OK, thanks...good luck with your project.
  • Thoughts on Epistemology
    Cheers, some interesting things to think about there. I'm not very well-schooled in these kinds of things, but some of it seems to make intuitive sense. I don't have anything further to say right now.

    I'm wondering whether you use AIs to help you write.
  • Hume and legitimate beliefs
    Am I to understand that you are saying the laws of nature are not merely codifications of natural invariances and their attributes, but are the invariances themselves?

    For example, would you say the law of gravity is not merely a codification of the apparent spatiotemporal universality of gravitational effects, but the gravitational effects themselves, along with their mathematically quantifiable attributes?