Comments

  • Is atheism illogical?
    But isn't there something "behind" the stories that a person cannot wimp out on even if she tried?Astrophel

    Like what?
  • Why The Simulation Argument is Wrong
    But eventually it has to get to a position that it hasn't seen in its training data, and then what?noAxioms

    And then it continues to make (usually) legal moves which are approximately as good as its general skill level predicts they should be.

    https://adamkarvonen.github.io/machine_learning/2024/01/03/chess-world-models.html

    I also checked if it was playing unique games not found in its training dataset. There are often allegations that LLMs just memorize such a wide swath of the internet that they appear to generalize. Because I had access to the training dataset, I could easily examine this question. In a random sample of 100 games, every game was unique and not found in the training dataset by the 10th turn (20 total moves). This should be unsurprising considering that there are more possible games of chess than atoms in the universe.
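
    The uniqueness check described there is simple to sketch. Here's a minimal version (my own illustration - the names and toy data are hypothetical, not the article's actual code) that compares move-prefixes against a set built from the training games:

```python
# Hash the opening of every training game, then flag sampled games
# whose opening never appears anywhere in the training set.

def prefix(moves, n):
    return tuple(moves[:n])

def unique_by_prefix(sampled_games, training_games, n=20):
    """Return the sampled games whose first n half-moves don't open
    any training game (n=20 matches the article's 10th-turn cutoff)."""
    seen = {prefix(game, n) for game in training_games}
    return [game for game in sampled_games if prefix(game, n) not in seen]

training = [["e4", "e5", "Nf3", "Nc6"], ["d4", "d5", "c4", "e6"]]
sampled = [["e4", "e5", "Nf3", "Nf6"], ["e4", "e5", "Nf3", "Nc6"]]
print(unique_by_prefix(sampled, training, n=4))
```

    With a real corpus you'd build the same prefix set at scale; the logic doesn't change.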
  • Philosophy of AI
    Also in passing I learned about linear probes, which I gather are simpler neural nets that can analyze the internals of other neural nets. So they are working on the "black box" problem, trying to understand the inner workings of neural nets. That's good to know.fishfry

    Yeah same, this was really intriguing to me too
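
    For anyone curious what a linear probe actually is mechanically: it's just a small linear classifier trained to read a property out of another network's hidden activations. A toy sketch, with synthetic data of my own invention (nothing here is from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "hidden activations": 200 samples of a 16-dim state in which
# dimension 3 secretly encodes a binary property of the input.
labels = rng.integers(0, 2, size=200)
acts = rng.normal(size=(200, 16))
acts[:, 3] += 4.0 * labels  # the buried signal a probe can find

# The probe: one linear layer + sigmoid, trained by gradient descent.
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(acts @ w + b)))
    w -= 0.5 * acts.T @ (p - labels) / len(labels)
    b -= 0.5 * np.mean(p - labels)

accuracy = np.mean((acts @ w + b > 0) == labels)
print(accuracy)  # high accuracy => the property is linearly readable
```

    If a probe this simple can classify well from the activations alone, the network must be representing that property internally - that's the logic behind the chess board-state probes.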

    And thanks so much for pointing me at that example. It's a revelation.fishfry

    Of course, I'm glad you think so. I've actually believed for quite some time that LLMs have internal models of stuff, but the strong evidence for that belief wasn't as available to me before - that's why that article is so big to me.

    I'm really pleased that other people see how big of a deal that is too - you could have just read a few paragraphs and called me an idiot instead; that's what I assumed would happen, and what normally happens in these circumstances. I applaud you for going further than that.
  • Why The Simulation Argument is Wrong
    By doing nothing more than auto-completing these games as text strings,fishfry

    For full clarity, and I'm probably being unnecessarily pedantic here, it's not necessarily fair to say that's all they did. That's all their goal was, that's all they were asked to do - BUT what all of this should tell you, in my opinion, is that when a neural net is asked to achieve a task, there's no telling HOW it's actually going to achieve that task.

    In order to achieve the task of auto-completing the chess text strings, it seemingly did something extra - it built an internal model of a board game, which it (apparently) reverse engineered from the strings. (I actually think that's more interesting than its relatively high chess rating - the fact that it can reverse engineer the rules of chess from nothing but chess notation.)

    So we have to distinguish, I think, between the goals it was given, and how it accomplished those goals.

    Apologies if I'm just repeating the obvious.
  • Philosophy of AI
    okay so I guess I'm confused why, after all that, you still said

    No internal model of any aspect of the actual game
  • The Argument There Is Determinism And Free Will
    Would you consider yourself a compatibilist?
  • The Argument There Is Determinism And Free Will
    So it's still possible that the "future" you changed to was the future that it was guaranteed to be all along, yeah?
  • The Argument There Is Determinism And Free Will
    but you haven't proven that it was possible for you to do other than what you've done, right?
  • The Argument There Is Determinism And Free Will
    To conclude, I have proven I can change the future indirectly by interrupting the flow of the presentBarkon

    Have you? Change the future from what?
  • Philosophy of AI
    This one developed a higher level of understanding than it was programmed for, if you look at it that way.fishfry

    I do. In fact I think that's really what neural nets are for, and what they've always (or at least frequently) done. They are programmed to exceed their programming in emergent ways.

    No internal model of any aspect of the actual game.fishfry

    I feel like you might have missed some important paragraphs in the article. Did you notice the heat map pictures? Did you read all the paragraphs around that? A huge part of the article is very much exploring the evidence that gpt really does model the game.
  • Philosophy of AI
    more akin to a form of explicit reasoning that relies on an ability to attend to internal representations?Pierre-Normand

    Did you read the article I posted that we're talking about?
  • Philosophy of AI
    I stand astonished. That's really amazing.fishfry

    I appreciate you taking the time to read it, and take it seriously.

    Ever since chat gpt gained huge popularity a year or two ago with 3.5, there have been people saying LLMs are "just this" or "just that", and I think most of those takes miss the mark a little bit. "It's just statistics", or "it's just compression".

    Perhaps learning itself has a lot in common with compression - and it apparently turns out the best way to "compress" the knowledge of how to calculate the next string of a chess game is to actually understand chess! And that kinda makes sense, doesn't it? To guess the next move, it's more efficient to actually understand chess than to just memorize strings.

    And one important extra data point from that write-up is the bit about unique games. Games become unique, on average, about 10 moves in, and even when a game is entirely unique and wasn't in chat gpt's training set, it STILL calculates legal and reasonable moves. I think that speaks volumes.
  • Imagining a world without the concept of ownership
    Ownership laws are taking the place of the chieftain when it comes to people who stray from the ideals.frank

    Something like that, I suppose
  • Imagining a world without the concept of ownership
    Is it possible? Could it last if it happened?frank

    One of the interesting aspects of any society is how it deals with the parasites, the people who take but don't give, especially through violence or theft.

    Game theory is a lot about this - about studying situations where people cooperate or compete, figuring out what strategies give the most gain.

    A society with no property might give unreasonably high rewards to the sociopaths, psychopaths and parasites. You can take anything without giving anything. Have what you want, give back nothing at all. Such a society would be stripped of its resources by leeches faster than you can say "maybe this wasn't such a great idea after all".
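
    A toy public-goods game (my own numbers, purely illustrative) shows the mechanics: contributors pay into a shared pot, the pot is multiplied and split among everyone, and a free-rider who can't be excluded always out-earns a contributor:

```python
def payoffs(contributors, freeriders, multiplier=1.5):
    """Each contributor pays 1 into a pot; the pot is multiplied and
    split equally among everyone, including the free-riders."""
    pot = contributors * 1.0 * multiplier
    share = pot / (contributors + freeriders)
    return share - 1.0, share  # (contributor net, free-rider net)

c_net, f_net = payoffs(contributors=9, freeriders=1)
print(c_net, f_net)  # the free-rider nets exactly 1.0 more per round
```

    Without some enforcement mechanism, defecting strictly dominates contributing - which is the game-theoretic version of the worry above.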
  • How did ‘concern’ semantically shift to mean ‘commercial enterprise ‘?
    An interesting parallel just occurred to me: the phrase "that doesn't concern you" compared with the phrase "that's none of your business"
  • How did ‘concern’ semantically shift to mean ‘commercial enterprise ‘?
    why are you asking this like it's a homework problem? And why did you ask it on multiple forums? Don't you already have an answer?
  • Philosophy of AI
    I'll start demonstrating that by informing you of something you apparently do not know: the "Chinese room" isn't a test to pass
  • Philosophy of AI
    With the way you're answering I don't think you are capable of understanding what I'm talking about. It's like you don't even understand the basics of this.Christoffer

    Judging by the way you repeatedly talk about "passing the Chinese room", I don't think you understand the basics. Seems more buzzword-focused than anything
  • Philosophy of AI
    "inspired by" is such a wild goal post move. The reason anything that can walk can walk is because of the processes and structures in it - that's why a person who has the exact same evolutionary history as you and I, but whose legs were ripped off, can't walk - their evolutionary history isn't the thing giving them the ability to walk, their legs and their control of their legs are.

    There's no justifiable reason to tie consciousness to evolution any more than there is to tie it to lactation. You're focussed too hard on the history of how we got consciousness rather than the proximate causes of consciousness.
  • Philosophy of AI
    No, the reason something can walk is because of evolutionary processes forming both the physical parts as well as the "operation" of those physical parts.Christoffer

    so robots can't walk?
  • Philosophy of AI
    Yes, but my argument was that the only possible path of logic that we have is through looking at the formation of our own consciousness and evolution, because that is a fact.Christoffer

    100 years ago, you could say "the only things that can walk are things that evolved." Someone who thinks like you might say, "that must mean evolution is required for locomotion".

    Someone like me might say, "Actually, even though evolution is in the causal history of why we can walk, it's not the IMMEDIATE reason why we can walk, it's not the proximate cause of our locomotive ability - the proximate cause is the bones and muscles in our legs and back."

    And then, when robotics started up, someone like you might say "well, robots won't be able to walk until they go through a process of natural evolution through tens of thousands of generations", and someone like me would say, "they'll make robots walk when they figure out how to make leg structures broadly similar to our own, with a joint and some way of powering the extension and contraction of that joint."

    And the dude like me would be right, because we currently have many robots that can walk, and they didn't go through a process of natural evolution.

    That's why I think your focus on "evolution" is kind of nonsensical, when instead you should focus more on proximate causes - what are the structures and processes that enable us to walk? Can we put structures like that in a robot? What are the structures and processes that enable us to be conscious? Can we put those in a computer?
  • Philosophy of AI
    Connecting it to evolution the way you're doing looks as absurd and arbitrary as connecting it to lactation.
  • Philosophy of AI
    The alternative is something like the vision of Process Philosophy - if we can simulate the same sorts of processes that make us conscious (presumably neural processes) in a computer, then perhaps it's in principle possible for that computer to be conscious too. Without evolution.
  • Philosophy of AI
    if the only conscious animals in existence were mammals, would you also say "lactation is a prerequisite for consciousness"?
  • Philosophy of AI
    "we know this is how it happened once, therefore we know this is exactly how it has to happen every time" - that doesn't look like science to me.

    Evolution seems like an incredibly arbitrary thing to latch on to.
  • Philosophy of AI
    I don't know what you mean by "respects science". The hard rule you invented - that all conscious beings had to evolve consciousness - didn't come from science. That's not a scientifically discovered fact, is it?

    The alternative is that it's in principle possible for some computer ai system to be conscious (regardless of whether any current ones are), and that it can do so without anything like the process of evolution that life went through.
  • Philosophy of AI
    Therefore we can deduce either that all matter has some form of subjectivity and qualia, or it emerges at some point of complex life in evolution.Christoffer

    No, you're making some crazy logical leaps there. There's no reason whatsoever to assume those are the only two options, and the logic you've provided doesn't prove that they are.
  • Philosophy of AI
    In order for a machine to have subjectivity, its consciousness require at least to develop over time in the same manner as a brain through evolution.Christoffer

    Why? That looks like an extremely arbitrary requirement to me. "Nothing can have the properties I have unless it got them in the exact same way I got them." I don't think this is it.
  • What do you reckon of Philosophy Stack Exchange ?
    it's not necessarily that well suited to philosophy. It's better suited to disciplines that have explicit, agreed-upon correct answers, and philosophy seems to have remarkably few of those.
  • Philosophy of AI
    Nothing "emerges" from neural nets. You train the net on a corpus of data, you tune the weightings of the nodes, and it spits out a likely response. There's no intelligence, let alone self-awareness being demonstrated.

    There's no emergence in chatbots and there's no emergence in LLMs. Neural nets in general can never get us to AGI because they only look backward at their training data. They can tell you what's happened, but they can never tell you what's happening.
    fishfry

    I don't think this is a take that's likely correct. This super interesting writeup on an LLM learning to model and understand and play chess convinces me of the exact opposite of what you've said here:

    https://www.lesswrong.com/posts/yzGDwpRBx6TEcdeA5/a-chess-gpt-linear-emergent-world-representation
  • Philosophy of AI
    inherent? No. It has value to me, and to every human, or almost every human. It's not the water that's valuable in itself, it's valuable in its relationship to me.

    Potable water on a planet without any life is not particularly valuable.
  • Philosophy of AI
    The end output is a bunch of symbols, which inherently is without valueNOS4A2

    I don't think this is true anyway. I don't think "inherent value" is even meaningful. Do things have inherent value? A pile of shit is valueless to me, but a farmer could use it.
  • Philosophy of AI
    and why should anyone accept that that was overvalued in the pre-LLM world? Are all services that cost big numbers overvalued?
  • Philosophy of AI
    AI has one good effect, I think, in that it reveals how much we overvalue many services, economically speaking. There was a South Park episode about this. I can get quicker, cheaper, and better legal advice from an AI. I can get AI to design and code me an entire website. So in that sense it serves as a great reminder that many linguistic and symbolic pursuits are highly overrated, so-much-so that a piece of code could do it.NOS4A2

    I don't think it follows that if an ai can do it, it was overvalued. Maybe the value of it is decreasing NOW, now that ai can do it, but you're making it sound like it was always overvalued, and that just doesn't follow.
  • Is atheism illogical?
    I did, see above.Lionino

    With no explanation, sure, that's not very compelling though
  • Are there any ideas that can't possibly be expressed using language.
    Language is Turing complete, so it's possible that every complete, coherent idea can be expressed in language
  • This hurts my head. Can it be rational for somebody to hold an irrational belief?
    in any case, the choice to leave a cult usually comes after a realization that the teachings aren't true. I'm not sure that realization is usually a "choice"
  • This hurts my head. Can it be rational for somebody to hold an irrational belief?
    I'm not sure what the "cult" thing is about. In any case, if you're not choosing your choice to change beliefs, then it's like that change just happened