Comments

  • Where is AI heading?
    Me: So that point is where the knowing starts, not where it ends? Isn't that open for interpretation?

    ChatGPT: You bring up a good point! The phrase "passed the point" can indeed be interpreted in different ways depending on context. It can suggest moving beyond a particular state, which could be understood as:

    1. Beyond knowing: They have moved past understanding, so now developers no longer know how it works or can predict it (as we initially interpreted).

    2. Into knowing: They have crossed into a phase where developers do know how it works and can predict its behavior.

    The ambiguity comes from how the sentence is phrased. Without further context, both interpretations could be valid, which does make it open to interpretation. If clarity is important, the sentence could be rewritten to more clearly convey whether it's suggesting that understanding has been lost or gained.
  • Where is AI heading?

Can you (Wayfarer or anybody else) please explain why the two sentences are opposites and not the same? noAxioms and I can't seem to resolve it together.

    1. AI has long since passed the point where its developers don't know how it works, where they cannot predict what it will do.
    2. Today, AI developers know how AI works and can predict what it will do

The problem is maybe related to this. The first sentence describes "the point where its developers don't know". The question then is: is this the end or the beginning of a time period? My natural interpretation is that it is the end, so from that point onwards developers do know. Is that a language-dependent interpretation? My native language is Dutch.

    ChatGPT says: The first sentence implies that AI has gone beyond the point where developers don't know how it works or predict its behavior, suggesting they’ve lost control or understanding.

    But I still don't get it... It conflicts with all my logical circuits
  • Limitations of the human mind
Yes, and I should reformulate that. My real question is: would it act in humanity's best interest, or would it go after its own interests? I do think it will do the former, but the consequences of building an 'evil' or even just 'disruptive' computer with super-human intelligence are something nobody wants. And we cannot just assume it will be nice to us.
  • Limitations of the human mind
    it still wouldn't be enough to predict the behavior of the particles with 100% certaintySkalidris

That is already true, because the uncertainty principle states that we cannot know both the position and momentum of a particle with perfect accuracy. This is not a limitation of technology; it is impossible "in principle". I can give you a source, but the internet gives plenty of details.
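For reference, the principle puts a hard lower bound on the product of the uncertainties in position and momentum:

```latex
\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}
```

So making the position uncertainty smaller necessarily makes the momentum uncertainty larger; no measurement technology can get around it.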

    To me, that scenario is possible because our mind is limited by its building blocks.Skalidris

That is true, and it is one of the core principles of my own little theory as well. My first article here is about fundamental versus conceptual reality. (I thought it was a good idea to start that article with Kant, but the discussion went sideways because everybody has his own interpretation/understanding of Kant and they all tried to teach me a lesson. I am planning a follow-up on this article. So I am happy with your 'bottom-up' approach to philosophy.)

    I think it's arrogant for humans to think that somehow, the universe follows the same logic as the logic in our mind.Skalidris

    True. 100% agree

    The basic wrong assumption here is that knowledge is information accumulation rather than information discard.apokrisis

I don't see Skalidris making that assumption; he is just making the opposite point in another way: that even IF you knew everything, you could not predict it.

    We have all the information we need to find the value of the nth digit of pi, and it can be discovered with complete certainty. But since it is an infinite series, there is no limit to the information we can extract from the running of the algorithm and tomorrow we will find more than we have today.unenlightened

    That's a good point also.
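To make unenlightened's pi example concrete, here is a sketch of an unbounded "spigot" generator (following Gibbons' streaming algorithm): every digit it produces is certain, and it will keep producing new ones tomorrow, and forever.

```python
def pi_digits():
    """Yield the decimal digits of pi one by one, without end."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            # The next digit is now certain: emit it and rescale the state.
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            # Not enough information yet: consume one more term of the series.
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

gen = pi_digits()
print([next(gen) for _ in range(10)])  # -> [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

Each digit comes out with complete certainty, yet there is no limit to how many the algorithm can extract.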

    ENOAH about AI:
    Are they not just an extension of mind, and therefore, within its limitations? If the AI communicates in anything other than a human language, then I think, we can start talking beyond the limitations of human mind.ENOAH

    What we can do, and we are already doing this on a limited basis in neural nets, is create the conditions for intelligence to "emerge". That is the key aspect of my approach: to "get out of the way" as humans, so to speak, and let the thing evolve/emerge by itself. Finding these conditions is not unthinkable, and my idea is to let computers develop their own "language" or "representation code".

    Having said all this, I do NOT believe an AI will understand the universe in its totality. But it might form its own concepts in a different way than we do. And it can have higher speed/bandwidth/memory. So I expect it to become more intelligent than us.

    Also, there is another limitation of the human mind: it is built on top of an animal brain that tries to survive, which causes a lot of trouble. There is a lot more to say on this, and a lot of open questions as well: will not having this animal brain make an AI naturally friendly, or not?
  • Where is AI heading?
    Strange. Here is my question: the "rephrase" I gave seems to me identical to what you say, literally a rephrasing. But my version leads to a contradiction with what you say after that. It seems to me you have one negation too many in your first sentence.

    AI has long since passed the point where its developers don't know how it works, where they cannot predict what it will do. It really wouldn't be AI at all if it were otherwise, but rather just an automaton doing very defined and predictable steps. Sure, they might program the ability to learn, but not what it will learn or what it will do with its training materials. And the best AI's I've seen, with limited applicability, did all the learning from scratch without training material at all.noAxioms

    (My original response: ) Normally your responses read like I could've said it (but yours are better written), but this one I don't understand. Too many negations. Rephrased "Today, AI developers know how AI works and can predict what it will do" "If they wouldn't know, it wouldn't be AI" - you are saying that it would no longer be artificial? But then: "automaton doing very defined and predictable steps." Here it breaks. The rest seems to be just a bit complaining. Go ahead, I have that sometimes.
  • Doubt, free decision, and mind
    ok, enough is enough. You are not discussing; you are just repeating, and not trying to understand things in context.
  • Doubt, free decision, and mind
    again, first read carefully before you comment. It is all there.
  • Doubt, free decision, and mind
    choice = "01", that is: doubt
    Then in this case the system selects a direction in a pseudo-random way, by picking left or right from a previously generated list.
  • Where is AI heading?
    Please also answer my question.
  • Doubt, free decision, and mind
    Again, you are not replying to what is written. Just repeating.

    a = measureDistance("left")
    b = measureDistance("right")
    if a > b then choice = "right"
    else if b > a then choice = "left"
    else choice = "01" // as in the earlier example: "01" == "doubt"

    if choice == "left" or choice == "right" then steerTo(choice)
    else steerTo(takeNextItemFromAPreviouslyGeneratedListOfDirections())
  • Doubt, free decision, and mind
    If you redefine doubt as a "situation of uncertainty", you can implement a software program that handles uncertainty, no problem, that is my "01". Then there is no mind needed, nor any of your reasoning.

    Put "experience" back in the definition and, yes, my program fails. And then you get a "mind" as a requirement. Since there are many opinions on mind and experience, you must make explicit what you mean by those terms in your case. And then you have to prove that the "mind" in the output is not simply caused by the "experience" you put in. As I said, this is a can of worms.
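To make the "01" idea concrete, here is a minimal sketch in Python (the function names are my own, just for illustration) of a fully deterministic program that treats uncertainty as a third state, with no mind needed:

```python
# Three-valued "truth" as in the earlier example: 00, 11, and 01.
FALSE, TRUE, DOUBT = "00", "11", "01"

def compare(a, b):
    """Deterministically classify a comparison, with '01' for the tie."""
    if a > b:
        return TRUE
    if a < b:
        return FALSE
    return DOUBT  # uncertainty handled as just another state

def decide(a, b, fallback):
    """Turn the comparison into a choice; doubt is resolved by a
    pre-generated fallback, as in the steering example."""
    state = compare(a, b)
    if state == DOUBT:
        return fallback
    return "right" if state == TRUE else "left"

print(decide(3, 5, "left"))   # -> left (b is larger)
print(decide(4, 4, "right"))  # -> right (doubt, fallback used)
```

Everything here is defined and predictable; the "doubt" state is just data, which is exactly the point.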
  • Where is AI heading?
    Let's see it in context. You started by saying:
    If we know how humans think, we can simulate thinking using a neural networkMoK

    noAxioms pointed out that we might be able to do that without:
    It can be simulated even if one doesn't know how it worksnoAxioms
    All he is saying is that there exists an exploratory approach to these kinds of problems. You start with an idea, try it out in simulation, and continue from there until you have something you like.

    In this mini-discussion about simulation, I think how you define the end result is off topic. It is not worth arguing about either. noAxioms is just saying: don't wait for science to tell you how thinking works before you start building a (super-)human AI.
  • Doubt, free decision, and mind
    All I said was that my argument is not what you are saying.MoK

    I did not try to repeat your argument; I said that on one end you shove "experience" in, and at the other end "mind" comes out. You cannot deny that. It is just not the full argument.

    That is what I mean by: try to read the other's arguments.
  • Doubt, free decision, and mind
    But you were not able to define a doubtful situation in which experience is not needed.MoK
    That is because experience is in your definition, and you do not accept my example. That is all fine, but it shows how thin the ice is that your theory is standing on.

    This medium is more difficult than talking face to face. I don't know you personally, and I can't see your face as you read this. What I do see is a repetition of arguments that make no sense to me. If I say "A" and you say "not A", that doesn't help. I see it in the other post too, where you comment on noAxioms: you do not really seem to understand what he is saying, while to me it makes perfect sense. Instead of asking for clarification, you start opposing him. At some point I can no longer contribute.
  • Where is AI heading?
    With an IQ test for humans, or one adapted for a computer. There are tons of ideas for that.
  • Doubt, free decision, and mind
    Again, you are reacting emotionally without really trying to understand what I am saying. I am not your enemy; I am trying to make your idea clearer and more precise.

    So according to you assigning a variable to be X which is arbitrary means that the computer has doubtMoK
    I am saying that without the need for "experience" your logic fails.
  • Where is AI heading?
    how the simulation should workMoK
    yes, and this does not have to be how the brain actually works.

    it starts thinkingMoK
    ok, that was too loosely defined. "If it shows intelligence (in the objective and pragmatic definition)"
  • Doubt, free decision, and mind
    Look, you shove a term "experience" into your definition of doubt, and end up with a proof of "mind" at the other end. And you do this without explicitly pointing out what these two terms mean and how they relate. That is not a clear line of logic; it is confusing.

    Instead, if you defined "doubt" without the need for "experience", you would end up with my example program, and there would be no need for a mind at all.

    That is what I call a can of worms. Don't feel offended, it is just that there is no end to the things you can discuss from here.
  • Where is AI heading?
    This means that you didn't simulate any system in your life. Did you?MoK

    Of course he knows, he's a computer scientist.

    He most likely means that we can try out different ideas until it starts thinking in a way that we like. Like the idea that I am proposing here. That doesn't imply that the brain works in the same way, in all its details. We might not even understand how the brain works. This is true, to some degree, for any simulation btw.
  • Doubt, free decision, and mind
    The existence of experience does not mean that there is a mind.MoK
    That opens a can of worms. Okay, let others continue this. I've done what I can.
  • Where is AI heading?
    That all is pretty much the opposite of what I saidnoAxioms

    Hope you can still find a second, because to me these two are the same; I just removed a pair of negations:

    AI has long since passed the point where its developers don't know how it works, where they cannot predict what it will do. It really wouldn't be AI at all if it were otherwise,noAxioms

    Today, AI developers know how AI works and can predict what it will do" "If they wouldn't know, it wouldn't be AI"Carlo Roosen

    That is to say, it passed a point where they don't know. To me it means that after that point they do know... @anybody???
  • Doubt, free decision, and mind
    By doubt I mean an experience of uncertainty in a situationMoK

    One of the things that you need to make clear is whether doubt requires consciousness. You use the word "experience" in your definition, so it seems, yes. You end up with the conclusion that, based on this definition, there must be a mind. If by "mind" you also understand consciousness, then the whole argument collapses.
  • Doubt, free decision, and mind
    One of the challenges on this forum is that people are not actually reading the responses of others closely enough to really try to understand them. You cannot just state ""01" does not represent a doubtful situation as I defined it in OP" when I just claimed the opposite.

    Discussion ends here until you provide a few logical steps that explain your point.
  • Doubt, free decision, and mind
    Look, it is not too difficult to write a computer program that implements doubt.

    00 = false
    11 = true
    01 = doubt
    Carlo Roosen
  • Doubt, free decision, and mind
    Using your definition.
  • Doubt, free decision, and mind
    Then our discussion ends here, because I also explained why this is not valid: you can perfectly well have a state 'doubt' defined in a deterministic system.
  • Doubt, free decision, and mind
    Only more modern programming languages have "undefined" as a possible value for variables. Machine code doesn't.
  • Doubt, free decision, and mind
    You need to be more careful with your argumentation. You cannot just say the opposite of what I said. I gave an explanation of what I meant, you didn't. "In my program there is a memory location reserved. It contains data. The interpreter or compiler has a check and generates an error if you want to print it before you define it."
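As a sketch of that interpreter check (in Python here; the exact mechanism differs per language, and the variable name is just for illustration), using a name before it is defined raises an error instead of printing stale data:

```python
# The name below is never given a value before use,
# so the interpreter's check fires instead of printing garbage.
try:
    print(undefined_name)
except NameError as err:
    msg = f"error: {err}"

print(msg)  # e.g. error: name 'undefined_name' is not defined

undefined_name = 42   # once the name is defined...
print(undefined_name) # ...printing it is allowed
```

This is the "check" in the quote: the reserved storage exists, but the language refuses to hand over its contents before a definition.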
  • Doubt, free decision, and mind
    By doubt I mean an experience of uncertainty in a situation.MoK
  • Doubt, free decision, and mind
    In my program there is a memory location reserved. It contains data. The interpreter or compiler has a check and generates an error if you want to print it before you define it. But undefined means the same as doubt.

    Look, it is not too difficult to write a computer program that implements doubt.

    00 = false
    11 = true
    01 = doubt
  • Doubt, free decision, and mind
    So, why can't it be part of a deterministic system? The code example I gave is deterministic.
  • Doubt, free decision, and mind

    define x
    y = 0
    for a = 1 to 1000000000
        y = y + 1 / a
    next a
    x = 2 + y

    x is "in doubt" while calculating y
  • Doubt, free decision, and mind
    I don't agree. "in doubt" can also be a state.
  • Post-Turing Processing
    My point is that you are not specific enough. You'll need to define more precisely what you are doing. Including some calculations of the processing time and memory demands.

    Now it sounds a bit like "could we use a generator to stop a truck instead of normal brakes, and reuse the energy?" - probably yes, but why aren't they doing it everywhere?

    LLMs, for instance, require a randomizer. In fact, after reading this remark I'll change "My point is that you are not specific enough" to "You seem to be dreaming".
  • Where is AI heading?
    My hypothesis is that language plays a key role in thinking. I have some debate with "I love sushi" about that; there are people without language abilities who still show intelligence. So many sides to the topic...

    I believe that if we let computers develop their own internal language, they will start to "think" independently of us. They will invent their own conceptual models of their surroundings, which may be different from ours. Given the low bandwidth of human language, a computer should be able to think faster and more broadly than us.
  • Doubt, free decision, and mind
    By doubt I mean an experience of uncertainty in a situation.MoK

    I went back to your definition in the OP, and based on that, of course, I have doubts. Right now, for example: Should I respond to your post and have my name appear two or three times on the homepage? Some people already say I post too often.

    What I do next is become still, stopping my thoughts. (Since you're interested in free will, my choice to become still is a learned behavior—I’ve learned that thinking doesn’t resolve these questions.) In this case, an answer comes to me quickly and clearly: yes, I should post this response. Only after that does the reasoning behind it come to me. It works like a logical process, but in reverse.

    Then, of course, your question is: where does that first 'yes' come from? Is there such a thing as a free mind?

    I believe even a deterministic system can have 'free will', at least in some sense. This is because our conceptual understanding of deterministic systems is limited. A deterministic system as complex as the brain can be understood at the component level (neurons in this case), but the emergent behavior that arises operates on a different level of understanding, with no direct logical connection between the two. Yes, the connection exists, but conceptually, we can’t fully grasp it. It’s where we have to say 'stop' to conceptual thinking, much like division by zero is not allowed in math.

    So, an answer comes, and I don’t know from where. Is it a free mind? Concepts play tricks on us here. For instance, is it possible to choose the opposite of what you actually chose? If not, how can it be free will? I don’t let those concepts fool me—that’s how I deal with it.

    Finally, to clarify why I said I don’t have doubts: for me, doubt comes with a feeling of unease. In what I just described, I didn’t feel uneasy, so personally, I wouldn’t call it doubt.
  • Where is AI heading?
    ChatGPT: The question of whether artificial intelligence (AI) is "actually intelligent" hinges on how we define "intelligence."

    That says it all
  • Where is AI heading?
    I think we're getting close to an agreement on the topic. I am talking about a pragmatic definition of intelligence, you are talking about an understanding that implies awareness.

    I am not even opposing you; I DO believe that with intelligence also comes consciousness. I just want to keep it outside the discussion here, because there is too much to say about it. I will address these broader implications later. My earlier post on conceptual versus fundamental reality is an important part of this discussion. However, if you can find a broader description of the topic that will not wander off into infinite complexity, I am open to that.

    Questioning the overall aim of such an AI is the whole purpose of me being here on the forum; I am as curious as you. We might come to the conclusion that we should never build the thing.
  • Where is AI heading?
    And can intelligence really be defined and measured? I suppose it can be in some respects, but there are different modes of intelligence. A subject may have high intelligence in a particular skill and be deficient in other areas.Wayfarer

    But you'll agree with me that intelligence is visible, whereas consciousness is not. Generally we will agree on the level of intelligence we observe. To make it truly defined and measurable, yes, there is a challenge, but I don't see why it would be impossible. We've done it for humans and animals.

    Consciousness, on the other hand, really asks for internal awareness. I cannot even prove my brother is conscious; I do not have direct access to his consciousness, I can only infer.