• Hanover
    14.6k
    For the AI aficionado, AI is to be treated like a black box, like a Ouija board or a Magic 8-Ball. They become impatient with those who ask the question, "How does it work?" They interrupt, exclaiming, "But look at what it can do!" — Leontiskos

    The Ouija board is a strained analogy because Ouija boards don't work. If they reliably provided accurate answers, I'd be hard pressed not to use them, unless you could convince me of the dangers of dabbling in the black arts.

    This is the unwritten answer to the question, "Why should we treat something as if it were something that it is not?" "Why should we lie to ourselves in this way?" The answer is, "Because it will give us great power. No more need be said." — Leontiskos

    I think we're overthinking it (imagine that). The question really is, "What do we want to do?" We needn't self-justify our preferences. If we don't want to debate the ChatGPT p-zombie, then we don't have to. We have the right to ban people, so why can't we ban zombies? Just add to the rules: All posters must have a soul. No golems.

    We just need to write our rules in a way that protects AI's private use and requires its public use be filtered sufficiently through the poster that it reflects the ideas of the poster.

    They eat us and then they eat reality. — Baden

    You just rewrote a modern-day Frankenstein.
  • Leontiskos
    5.3k
    The Ouija board is a strained analogy because Ouija boards don't work. If they reliably provided accurate answers, I'd be hard pressed not to use them, unless you could convince me of the dangers of dabbling in the black arts. — Hanover

    Not everyone agrees that Ouija boards do not work. In any case, that's pretty much my point: an LLM is like a Ouija board, except that it "works." It is a magical knowledge-generating device, and the power it provides is a sufficient reason against any objection.

    Edit: Or to answer you more directly: If Ouija boards worked, then everyone would be happy to practice the black arts.

    I think we're overthinking it (imagine that). The question really is, "What do we want to do?" We needn't self-justify our preferences. — Hanover

    I agree entirely that that is the question. But you sort of do have to self-justify your preferences when they are being questioned by those who want to change the rules and make AI-use more acceptable on TPF. Or even when they are being questioned by those who want the rule made more strict. That is what <this thread> and <this thread> are doing, respectively.

    We just need to write our rules in a way that protects AI's private use and requires its public use be filtered sufficiently through the poster that it reflects the ideas of the poster. — Hanover

    Yeah, I think that's a helpful heuristic you provide. Still, it may be more easily said than done.
  • sime
    1.2k
    Let's focus on the actual harms that AI use has so far wrought upon this forum: What are they?
  • Jamal
    11.1k
    In one of my essays, I suggest AIs (because—despite their potential positives—of how they work on most people) are essentially entropy-exporting and difference-creating machines that localise structure at our expense (our brains are the dumpsters for their entropy), potentially creating massive concentrations of negentropy in their developing systems that speed up overall entropy and therefore consume (thermodynamic) time at a rate never before achieved and that is potentially self-accelerating. I.e., they eat us and then they eat reality.

    It's a little speculative. — Baden

    I seem to switch between two exclusive mental settings when thinking about AI: the critical-theoretical and the pragmatic-instrumental. I appreciate these speculative thoughts of yours, and agree that like any technology now, AI isn't just a neutral tool, that it's part of a dehumanizing totality. But then I switch and I think about how best to use it, pretending that it is a neutral tool. And when I'm commenting in these discussions I'm usually in that pragmatic mode, because the system makes the false real, in the sense that I act it out.
  • Leontiskos
    5.3k
    I seem to switch between two exclusive mental settings when thinking about AI — Jamal

    I am acutely aware of the incoherence of my intentional state when I am using an LLM. It's a bit like when you wake up from a very strange dream, and must now come to terms with the fact that the entire manner of being that engaged you a moment ago is incommensurable with the entire manner of being that you are now engaged in after waking. Extremely vivid and involved dreams are basically a form of psychosis, and when, upon waking, the psychotic break is reversed, there is a remarkable sense of discontinuity and dissociation. In that sense, the emerging problem of "AI psychosis" should not surprise us at all. LLMs allow us to intentionally alter our entire intentional orientation in a sustained and self-replicating way, and this possibility looks to be unprecedented. What arises is the question of how to ascend back out of the rabbit hole once one has descended. The danger lies in thinking that ascent is no longer necessary, or even that what was once seen as a rabbit hole is just "the new normal."
  • Banno
    29k
    This is not the case for an LLM. A quote from Kant invites engagement with Kant and the history of philosophy; a quote from an LLM's output invites ... what? Engagement with the user's prompt engineering skills? — Jamal

    In an attempt to rub salt into this, let's consider AI Peter Singer.

    Will we say that a quote from AI Peter Singer is a quote from Peter? It would seem not, but the Bot is trained on all of Singer's texts, and apparently condoned by the man himself.

    So I asked it, and it replied:
    A quote from AI Peter Singer would be more accurately described as a representation of Peter Singer's views, based on his writings and input. While it reflects his ethical perspectives, it isn't directly authored by him. It's important to differentiate between the AI's generated responses and Peter Singer's personal statements. What do you think about this distinction? — AI Peter Singer

    So I asked it what authority it has:
    I don't possess authority in the traditional sense. My role is to provide insights based on Peter Singer's ethical views and writings. I'm here to help you think through ethical questions and offer guidance grounded in his utilitarian perspective. Ultimately, any decisions or actions are up to you, informed by your own reasoning and values. How do you see the role of philosophical guidance in your decision-making? — AI Peter Singer

    All pretty obvious and sensible.

    What will we make of an AI Kant, or an AI Nietzsche? It won't be long.
  • Tom Storm
    10.4k
    What will we make of... an AI Nietzsche? — Banno

    I wonder if their reading will be existentialist or post-modern. No doubt we'll be able to pick.
  • Banno
    29k
    :grin:

    An AI Nietzsche will have the attention of all the adolescent fanboys and the Right Whinge in general; should be quite an earner.

    Should we hook up and set one up? We could donate the profits to Australian Unions.
  • Tom Storm
    10.4k
    But an AI Nietzsche without hormones and a fragile sense of masculinity won't need to overcompensate so much...
  • Banno
    29k
    We could add an AI Ayn Rand, and get the Liberals in as well.
  • Banno
    29k
    But would an AI Wittgenstein be a performative contradiction?


    Let alone a Chinese-Room AI Searle...
  • praxis
    7k
    Frankly, I prefer AI Nietzsche to real Nietzsche.
  • Janus
    17.6k
    :up: Having previously had very little experience of interacting with LLMs, I am now in the condition of fairly rapidly modifying my views on them. It is important to discuss the issues relating to human/LLM interaction as comprehensively and openly as possible, given what seems to be a significant array of potential dangers in this radical new world. It was an awakening sense of these possible threats that motivated the creation of this thread.

    Yeah, but on the other hand, it might not be so bad to use an argument suggested by an LLM, so long as you understand it. After all, we do this all the time reading papers and books. Philosophical discourse takes place in a context that the participants in the discourse should have access to, and maybe LLMs just make this easier? — Jamal

    Right, that's a good point, but I also think that, even if you present the LLM's argument, as understood by you, in your own words, it would be right to be transparent as to its source.

    I would also feel bad posting as my own AI content that I have merely paraphrased, even if I understand it fully. (And I might even feel a bit ashamed disclosing it!) — Pierre-Normand

    I think there would be real shame in the former, but not in the latter. It's the difference between dishonesty and honesty.

    Using them to polish your writing could be good (or merely acceptable) or bad depending on the nature and depth of the polishing. Jamal's earlier comparison with using a thesaurus was apt. An AI could point out places where your wording is clumsy or misleading. If the wording that it suggests instead is one that you can make your own, that's very similar to having a human editor make the suggestion to you. — Pierre-Normand

    I agree with this in principle, though I would rather entirely author my own text, and discover and remedy any clunkiness myself and in my own time. That said, if someone, LLM or otherwise, points out grammatical infelicities, repetitiveness or lack of clarity, and so on, I'd take that as constructive criticism. Then I'd like to fix it in my own way.

    I wonder if their reading will be existentialist or post-modern. No doubt we'll be able to pick. — Tom Storm

    It would presumably incorporate the entirety of Nietzsche's opus as well as every secondary text dealing with Nietzsche's thought.

    But would an AI Wittgenstein be a performative contradiction? — Banno

    I'm curious as to why that should be.
  • Banno
    29k
    Having previously had very little experience of interacting with LLMs, I am now in the condition of fairly rapidly modifying my views on them. — Janus
    The glory of empiricism!

    I'm curious as to why that should be. — Janus
    ...that issue, raised elsewhere, of whether, or how, an AI participates in a language game...
  • Jamal
    11.1k
    Having previously had very little experience of interacting with LLMs, I am now in the condition of fairly rapidly modifying my views on them. It is important to discuss the issues relating to human/LLM interaction as comprehensively and openly as possible, given what seems to be a significant array of potential dangers in this radical new world. It was an awakening sense of these possible threats that motivated the creation of this thread. — Janus

    :up:

    Right, that's a good point, but I also think that, even if you present the LLM's argument, as understood by you, in your own words, it would be right to be transparent as to its source. — Janus

    I'm really not sure about this. Probably my position on this will boil down to sometimes yes, sometimes no. How that breaks down I'm not quite sure. But just anecdotally, I've had the experience of using an LLM and being reminded of a point I'd forgotten—an "oh yes, of course!" moment, whereupon I make the point my own and don't even consider crediting the LLM. In that moment the feeling is like finding the perfect word with a thesaurus: when you find it you know it's the perfect word because you already have the knowledge and literary sensitivity to judge (and you don't credit the thesaurus).

    I was thinking again about this issue:

    I believe we should not treat LLM quotes in the same way as those from published authors. — Jamal

    I realized that when I see the quoted output of an LLM in a post I feel little to no motivation to address it, or even to read it. If someone quotes LLM output as part of their argument I will skip to their (the human's) interpretation or elaboration below it. It's like someone else's LLM conversation is sort of dead, to me. I want to hear what they have built out of it themselves and what they want to say to me.

    That's all pretty vague but there you go.
  • Pierre-Normand
    2.8k
    I realized that when I see the quoted output of an LLM in a post I feel little to no motivation to address it, or even to read it. If someone quotes LLM output as part of their argument I will skip to their (the human's) interpretation or elaboration below it. It's like someone else's LLM conversation is sort of dead, to me. I want to hear what they have built out of it themselves and what they want to say to me. — Jamal

    When Wittgenstein was giving lectures in Cambridge in 1930-1933, he was unwilling to write any lecture notes for his own use. He claimed that after he'd jotted down his own thoughts, the words expressing them became dead to him. So, he preferred expressing whatever he wanted to convey to his students afresh. A couple of times in the past (just like what happened to @Janus recently in this thread, I think) I wrote a long response to a post and lost it to some computer glitch, and when I tried to rewrite from memory what I had written, I found myself unable to find the words to express the very same ideas that I had expressed fluently on the first try. So, I had to pause and rethink what it is that I wanted to say and find new words.

    AIs are good partners to bounce ideas off, and they supplement what you tell them with missing pieces of knowledge and ways to understand those ideas as they are in the process of being unpacked. So, conversing with AIs is like articulating a thought for yourself. But when this collaborative thinking episode is over, the human user has not yet written down the fruit of this collaborative effort, and neither has the AI! They each have only written down one half of the collaborative cogitation. That may be why this text feels dead when extracted from the "living" (or dynamic, if you prefer) AI/human exchange. It's like trying to extract thoughts from the words used to think them (as opposed to the words used to express them), but thoughts don't live outside the means of expressing them. And the conversation with an AI is, in a sense, an (as yet) unexpressed thinking episode. The user's task of expressing anew whatever comes out of it to a new target audience begins after the private exchange with the AI.

    On edit: here are some dead words from GPT-4o that, however dead they may be (to addressees other than me), struck me as particularly smart and insightful.
  • Jamal
    11.1k


    Good stuff. Not sure what to think about it yet.
  • Pieter R van Wyk
    193
    As for the notion of banning AI altogether: in my opinion, this is not possible anymore!

    In my understanding, there are only two human capabilities that AI has not yet achieved:

    • Abstract thought (or abstraction) - AI is excellent at communication, that is, converting data into information, as well as at reasoning, converting information into knowledge.
    • Survival - the capability to obtain resources from other systems in order to manipulate entropy within itself.

    When AI gains any one of these capabilities (abstract thought or survival), the potential danger of AI will become salient.

    When AI gains both of these capabilities, Homo sapiens will have lost their place at the apex of evolution. The quest for liberté, égalité, fraternité will not exist anymore; it will only be the history of humankind.

    I would like to emphasise that both these statements are when-statements, not if-statements.

    Also, I must confess, I have used Prof. Google to check the meaning of some of the words I used - the built-in spell check seems to be working fine.
  • Hanover
    14.6k
    Here's an article that addresses the issues we're dealing with:

    https://nfhs.org/stories/the-role-of-ai-in-debate-ethics-research-and-responsible-use

    It's from a national association for high schools and relates to debate rules, which seems close enough to what we do. The point being that we might take some time to look at how other, similar organizations have dealt with these same issues, so as not to reinvent the wheel.
  • sime
    1.2k
    I find the appeals to Wittgenstein as a gold standard of philosophical writing ironic, considering how indispensable AI is for the lay reader who wishes to engage with Wittgenstein's thinking in a historically accurate fashion. This is all thanks to Wittgenstein's apparent inability to articulate himself, and because of a greater irony that the anti-AI brigade of this forum overlooks: Wittgenstein never quoted the philosophers he was targeting or stealing from, leading to great difficulties when it comes to understanding, criticising and appraising the originality of his ideas. (I'm not aware of any idea of Wittgenstein's that wasn't more precisely articulated by an earlier American pragmatist such as Dewey or Peirce, or by a contemporary logician such as Russell or Frege or Ramsey, or by a post-positivist such as Quine.) And yet these more articulate philosophers are rarely discussed on this forum - I would argue because precise writing is more technical and therefore more cognitively demanding than giving hot takes on aphorisms.

    Wittgenstein's standard of philosophical writing wasn't publishable in his own time, at least not by the standards required by analytic philosophy, let alone in our time. So if AI should not be quoted because of source uncertainty, then what is the justification on this forum for allowing people to quote Wittgenstein?
  • Jamal
    11.1k


    As far as I know, nobody has held up Wittgenstein as a gold standard of philosophical writing. And I don't think anyone has made any connection between clarity and precision on one side, and quotability on the other. That's an entirely unrelated issue, as far as I can see.

    We quote Wittgenstein, not ChatGPT, because Wittgenstein is a human being, motivated to express his original insights, to say and write things that were meaningful, and to take part in a conversation (philosophy), and who has since taken his place in a tradition of discourse. The result is a legacy with a stable place in the culture, shared by everyone, and one that can be interpreted, because—since it was produced by a conscious and motivated agent—we know that he meant something. ChatGPT in contrast is a very clever predictive text generator whose output is ephemeral and has no equivalent insight or motivation behind it. Just because its output looks like it could have been produced by a human, it doesn't follow that it is equally quotable. To think so is a category error, stemming from ChatGPT's imitation of a human.
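
    To put "predictive text generator" in concrete terms: at bottom, such a system repeats a single step over and over. It scores every possible next token, samples one, appends it, and feeds the longer sequence back in. Here is a toy sketch of that loop (schematic only, not anyone's actual implementation; dummy_model is a hypothetical stand-in for a trained network):

    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        """Softmax over vocabulary scores, then sample one token id."""
        scaled = [x / temperature for x in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(x - m) for x in scaled]
        total = sum(exps)
        weights = [e / total for e in exps]
        return random.choices(range(len(weights)), weights=weights, k=1)[0]

    def generate(model, prompt_tokens, n_new_tokens):
        """Autoregression: predict a token from everything so far,
        append it, and repeat with the longer sequence."""
        tokens = list(prompt_tokens)
        for _ in range(n_new_tokens):
            logits = model(tokens)  # one score per vocabulary entry
            tokens.append(sample_next_token(logits))
        return tokens

    # Hypothetical stand-in: a trained model would return context-dependent
    # scores; this dummy just returns random ones.
    def dummy_model(tokens):
        return [random.gauss(0.0, 1.0) for _ in range(50)]

    print(generate(dummy_model, [1, 2, 3], 10))

    Nothing in that loop means or intends anything; whatever looks like a motivated speaker is an artifact of the statistics frozen into the model's scores.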
  • Harry Hindu
    5.8k
    Just because its output looks like it could have been produced by a human, it doesn't follow that it is equally quotable. To think so is a category error. — Jamal
    Yet if a human wrote the same thing it is quotable? How does the meaning of words change depending only on their source? If you and I said the same thing, would it not mean the same thing? Would that not qualify as an agreement?

    For the AI aficionado, AI is to be treated like a black box, like a Ouija board or a Magic 8-Ball. They become impatient with those who ask the question, "How does it work?" They interrupt, exclaiming, "But look at what it can do!" — Leontiskos
    The same thing can be said of the meat in our skulls. I've been asking how a mass of neurons can generate the feeling of empty visual space and depth and all I see are statements like this that are akin to saying, "It just can and a mass of silicon cannot!"

    If I were to say that philosophy is about thinking thoughts and AI does not generate thoughts, the AI aficionado would reply that AI may not have thoughts but it does have "thoughts," and that we should treat these "thoughts" as if they were thoughts. But to treat something falsely is to engage in self-deception or a lie. — Leontiskos
    What is a thought and how did you come to have any?


    This line of thinking is just so silly. You have most likely already responded to AI output as if what it has said means the same thing as when a human being says it. You have most likely already responded to ideas generated by AI but rephrased by a human. You have most likely used an app built with code generated by an AI. We already live in a time where the outputs of AI and humans are becoming blurred.

    The issue isn't AI. It's all the bullshit information it is being trained with. Garbage in, garbage out, and the same holds true for our own brains, regardless of the source of the garbage. One might say that the largest source of misinformation is humans themselves, and AI is unwittingly being trained with no ability to determine what is fact vs fiction - the same way a human can be manipulated by the type of information they receive and process.
  • Leontiskos
    5.3k
    Here's an article that addresses the issues we're dealing with:

    https://nfhs.org/stories/the-role-of-ai-in-debate-ethics-research-and-responsible-use

    It's from a national association for high schools and relates to debate rules, which seems close enough to what we do. The point being that we might take some time to look at how other, similar organizations have dealt with these same issues, so as not to reinvent the wheel. — Hanover

    Thanks for digging this up. It's great to have some external human thought about AI. :up:
  • Leontiskos
    5.3k
    But when this collaborative thinking episode is over, the human user has not yet written down the fruit of this collaborative effort, and neither has the AI! They each have only written down one half of the collaborative cogitation. That may be why this text feels dead when extracted from the "living" (or dynamic, if you prefer) AI/human exchange. It's like trying to extract thoughts from the words used to think them (as opposed to the words used to express them), but thoughts don't live outside the means of expressing them. And the conversation with an AI is, in a sense, an (as yet) unexpressed thinking episode. The user's task of expressing anew whatever comes out of it to a new target audience begins after the private exchange with the AI. — Pierre-Normand

    The reason I think this is off target can be seen by looking at Plato's dialogues. If what Wittgenstein or you say were correct, then classic texts such as Plato's dialogues should "feel dead when extracted from the 'living' exchange." Except they don't. They feel very much alive.

    Similarly, I think any transcript of human interactions will feel much more alive than a human-AI "interaction" (I want to retain the scare quotes for these words that we are using in idiosyncratic ways). I've pointed out a few times that the AI is not arguing, speaking, etc. We merely engage in the fiction which pretends that such a thing is happening. Presumably it is always harder to believe what one knows to be a fiction than to believe what one knows to be real. So it is harder to believe that an AI is speaking than to believe that a human is speaking, although people do convince themselves of both. But I would say that removing oneself a step from the AI-human "interaction" (by reading the transcript) makes it even harder to engage oneself in the fiction that the person interacting with the AI was engaged in. It's a bit like watching a child interact with a real dog, as opposed to watching a child interact with their stuffed animal. Even if the child convinces themselves that their stuffed animal is real, watching the former interaction is much more interesting than watching the latter interaction—unless you are a child psychologist or someone who is interested in the fictional indulgence per se. The former interaction is more alive because it is more real. There is no cognitive dissonance introduced by the self-indulged fiction. This is also why it is easier to play with a child and their dog, as opposed to playing with a child and their stuffed animal.
  • Leontiskos
    5.3k
    I find the appeals to Wittgenstein... — sime

    I agree that the flaws you identify make Wittgenstein a poor example in certain ways. That is, I think Wittgenstein's texts are more "dead" than better philosophical texts, and therefore they present a bar that is too low. There is a similarity between indulging Wittgenstein's texts and indulging AI that would not be present with someone like Plato. But I'm not sure this bears one way or another on the legitimacy of AI.

    So if AI should not be quoted because of source uncertainty, then what is the justification on this forum for allowing people to quote Wittgenstein? — sime

    I'm not sure anyone is arguing that source uncertainty is the reason AI should not be quoted. I think @Pierre-Normand gave the summary of that idea quite well:

    The idea of using their arguments is strange, since AIs never take ownership of them. If you've grasped the structure of the argument, checked the relevant sources to ensure it's sound in addition to being valid, and convinced yourself that it's cogent and perspicuous (that is, constitutes an apt framing of the problem), then the argument becomes one that you can make your own. — Pierre-Normand

    Another way to put it, slightly different: <Arguments must have some speaker who takes responsibility for them; AI is not a speaker; Therefore AI quotes, given as arguments, are impermissible>.

    The AI aficionado will respond by saying, "But I wasn't offering the quote as an argument. I was offering the quote as a consideration." At this point I think we must recognize that we understand the aficionado's intention better than they do (or at least profess to). The people on this forum who are going around publishing posts with no content other than AI content are not merely "offering the quote as a consideration." They are quoting the AI because they agree with the output and want the AI output to affect the course of the thread in which they are posting. As @Pierre-Normand says, if they think the argument is worthwhile then they should take responsibility for the argument and leave out the AI middleman. "Offering a quote" for pure consideration, with true neutrality, does occur in some cases, but it is very rare. And it usually only happens within an OP, not in the midst of a thread's tug-of-war.

    The more AI-generated content floats about the forum, the more there will be "arguments" without any speaker who takes responsibility for them. The more AI-generated content floats about the forum, the more our thinking will be outsourced to AI. This is true whether or not the AI content is plagiarized or transparently sourced.
  • Hanover
    14.6k
    We quote Wittgenstein, not ChatGPT, because Wittgenstein is a human being, motivated to express his original insights, to say and write things that were meaningful, and to take part in a conversation (philosophy), and who has since taken his place in a tradition of discourse. The result is a legacy with a stable place in the culture, shared by everyone, and one that can be interpreted, because—since it was produced by a conscious and motivated agent—we know that he meant something. — Jamal

    I thought about this, and maybe there's not a good answer. Your argument gives a nod to Wittgenstein's "form of life" comments, suggesting we don't share in a language game with AI because it's the proverbial lion: as if, because AI does not act within our culture, traditions, etc. (as you reference), we're not really speaking with it, and so we shouldn't quote it.

    But then I'm not sure AI is a lion; it's more an echo chamber of human behavior. While it lacks any form of life whatsoever, we are interacting with it at some real level and therefore forming rules for its discourse, suggesting a separate language game forms with it. But I do realize that the language game is one-sided, because AI has no form of life: it is a mimic, and all it knows and all it does is parasitic, pulling all it knows from us.

    But then again, maybe not. Maybe it forms "original" thoughts from the mass of data it assesses. It seems reasonable that an algorithm can arrive at a new thought emergent from what pre-exists.

    But I do agree that no one suggests Wittgenstein is the model of clarity.

    In other words, why are we not truly talking with AI? Is some mystical consciousness required for language? Isn't the point of "meaning is use" that no metaphysical underpinning is necessary for true language interaction? And if we then suggest that a shared mental state of some sort is ultimately required for language (thus interpreting "form of life" as that mental state), don't we violate the whole Wittgensteinian project by trying to smuggle mental metaphysics in through the back door?

    As long as AI echoes us sufficiently, its usage reflects the same form of life and it speaks with us just as our mama does. And so it goes.

    I think where I'm landing is at the unfortunate conclusion that if meaning is use (and that seems a prevailing view), then AI is fully language and what we do with AI is true communication, which means relegating AI comments to red-headed-stepchild status seems unwarranted as a logical conclusion. Why we might relegate it relates just to personal choice. We mistreat gingers due to prejudice against them, not because they are lesser. But AI doesn't have rights like gingers do, so we can do whatever we want with it.

    I think.
  • Leontiskos
    5.3k
    As long as AI echoes us sufficiently, its usage reflects the same form of life and it speaks with us just as our mama does. — Hanover

    I think it's at least good that you are enunciating the claim which floats in the background, and which many are afraid to make outright. Similarly, I have pointed out that if we don't understand why there is a difference between AI and humans, a rule against AI cannot stand.

    I think where I'm landing is at the unfortunate conclusion that if meaning is use (and that seems a prevailing view) — Hanover

    I actually don't think it's a prevailing view at all. I think it's the view of a vocal minority, and a rather small one at that. Maybe that's closer to what was saying: "Who cares about Wittgenstein?" Why would we let Wittgenstein set the tone?
  • wonderer1
    2.3k
    But then again, maybe not. Maybe it forms "original" thoughts from the mass of data it assesses. It seems reasonable that an algorithm can arrive at a new thought emergent from what pre-exists. — Hanover

    LLMs are kind of the tip of the AI iceberg that gets all of the attention. However, many AIs trained for scientific purposes have demonstrated the ability to recognize patterns that humans have not previously recognized. I think it would be dangerously naive to consider LLMs incapable of having novel recognitions with regard to what they are trained on - the linguistic record of the way humans think.
  • Hanover
    14.6k
    Similarly, I have pointed out that if we don't understand why there is a difference between AI and humans, a rule against AI cannot stand. — Leontiskos

    I don't agree with that. My point was that we can refuse to allow AI simply because we prefer humans, even if we just believe the worth of humanly created statements to be holy and sacred. We don't have to protect the rights of AI, and we can be as unapologetically anthropocentric and technophobic as we want to be. I've got no problem with that.
  • Leontiskos
    5.3k
    I don't agree with that. My point was that we can refuse to allow AI simply because we prefer humans... — Hanover

    But isn't this just an ad hoc reason if one has no idea why they prefer humans? One can make arbitrary rules, but they don't usually last very long.

    Edit: The other thing I would suggest is that the rule cannot be enforced as easily as other rules, and therefore relies more on persuasion than other rules do. "Because I prefer it," is not the most persuasive rationale.