• Hanover
    14.5k
    For the AI aficionado, AI is to be treated like a black box, like a Ouija board or a Magic 8-Ball. They become impatient with those who ask the question, "How does it work?" They interrupt, exclaiming, "But look at what it can do!"
    Leontiskos

    The Ouija board is a strained analogy because Ouija boards don't work. If they reliably provided accurate answers, I'd be hard pressed not to use them, unless you could convince me of the dangers of dabbling in the black arts.

    This is the unwritten answer to the question, "Why should we treat something as if it were something that it is not?" "Why should we lie to ourselves in this way?" The answer is, "Because it will give us great power. No more need be said."
    Leontiskos

    I think we're overthinking it (imagine that). The question really is "What do we want to do?" We needn't self-justify our preferences. If we don't want to debate the ChatGPT p-zombie, then we don't have to. We have the right to ban people, so why can't we ban zombies? Just add to the rules: All posters must have a soul. No golems.

    We just need to write our rules in a way that protects AI's private use and requires its public use be filtered sufficiently through the poster that it reflects the ideas of the poster.

    They eat us and then they eat reality.
    Baden

    You just rewrote a modern-day Frankenstein.
  • Leontiskos
    5.3k
    The Ouija board is a strained analogy because Ouija boards don't work. If they reliably provided accurate answers, I'd be hard pressed not to use them, unless you could convince me of the dangers of dabbling in the black arts.
    Hanover

    Not everyone agrees that Ouija boards do not work. In any case, that's very much my point: an LLM is like a Ouija board, except that it "works." It is a magical knowledge-generating device, and the power it provides is a sufficient reason against any objection.

    Edit: Or to answer you more directly: If Ouija boards worked, then everyone would be happy to practice the black arts.

    I think we're overthinking it (imagine that). The question really is "What do we want to do?" We needn't self-justify our preferences.
    Hanover

    I agree entirely that that is the question. But you sort of do have to self-justify your preferences when they are being questioned by those who want to change the rules and make AI use more acceptable on TPF. Or even when they are being questioned by those who want the rule made more strict. That is what <this thread> and <this thread> are doing, respectively.

    We just need to write our rules in a way that protects AI's private use and requires its public use be filtered sufficiently through the poster that it reflects the ideas of the poster.
    Hanover

    Yeah, I think that's a helpful heuristic you provide. Still, it may be more easily said than done.
  • sime
    1.2k
    Let's focus on the actual harms that AI use has so far wrought upon this forum: What are they?
  • Jamal
    11.1k
    In one of my essays, I suggest AIs (because---despite their potential positives---of how they work on most people) are essentially entropy-exporting and difference-creating machines that localise structure at our expense (our brains are the dumpsters for their entropy), potentially creating massive concentrations of negentropy in their developing systems that speed up overall entropy and therefore consume (thermodynamic) time at a rate never before achieved and that is potentially self-accelerating. I.e., they eat us and then they eat reality.

    It's a little speculative.
    Baden

    I seem to switch between two exclusive mental settings when thinking about AI: the critical-theoretical and the pragmatic-instrumental. I appreciate these speculative thoughts of yours, and agree that like any technology now, AI isn't just a neutral tool, that it's part of a dehumanizing totality. But then I switch and I think about how best to use it, pretending that it is a neutral tool. And when I'm commenting in these discussions I'm usually in that pragmatic mode, because the system makes the false real, in the sense that I act it out.
  • Leontiskos
    5.3k
    I seem to switch between two exclusive mental settings when thinking about AI
    Jamal

    I am acutely aware of the incoherence of my intentional state when I am using an LLM. It's a bit like when you wake up from a very strange dream, and must now come to terms with the fact that the entire manner of being that engaged you a moment ago is incommensurable with the entire manner of being that you are now engaged in after waking. Extremely vivid and involved dreams are basically a form of psychosis, and when, upon waking, the psychotic break is reversed, there is a remarkable sense of discontinuity and dissociation. In that sense, the emerging problem of "AI psychosis" should not surprise us at all. LLMs allow us to intentionally alter our entire intentional orientation in a sustained and self-replicating way, and this possibility looks to be unprecedented. What arises is the question of how to ascend back out of the rabbit hole once one has descended. The danger lies in thinking that ascent is no longer necessary, or even that what was once seen as a rabbit hole is just "the new normal."