• Carlo Roosen
    243
    And can intelligence really be defined and measured? I suppose it can be in some respects, but there are different modes of intelligence. A subject may have high intelligence in a particular skill and be deficient in other areas. — Wayfarer

    But you'll agree with me that intelligence is visible, where consciousness is not. Generally we will agree on the level of intelligence we observe. To make it truly defined and measurable is a challenge, yes, but I don't see why it would be impossible. We've done it for humans and animals.

    Consciousness, by contrast, asks for an internal awareness. I cannot even prove my brother is conscious; I do not have access to his consciousness directly, I can only infer it.
  • MoK
    381
    If we know how humans think, we can simulate thinking using a neural network.
  • Wayfarer
    22.4k
    They have some level of intelligence — Carlo Roosen

    I think that can be questioned. 'Intelligence' is the term we use but whether artificial intelligence is actually intelligent is questionable. I put it to ChatGPT, which you can review below:

    Question: Is artificial intelligence actually intelligent?

    ChatGPT: The question of whether artificial intelligence (AI) is "actually intelligent" hinges on how we define "intelligence." AI systems, particularly those based on machine learning and neural networks, can perform tasks that require sophisticated pattern recognition, problem-solving, and decision-making. They can simulate many aspects of what humans would call intelligent behavior, such as language generation, playing complex games like chess or Go, and analyzing large datasets.

    However, this simulation of intelligence doesn't necessarily imply the same kind of intelligence humans possess. AI operates based on algorithms, data processing, and statistical correlations, not conscious understanding or awareness. In contrast, human intelligence includes subjective experience, creativity, emotional insight, and the ability to reflect on one's own thoughts and surroundings—qualities that AI, as it exists today, lacks.

    While AI can appear to be intelligent by some practical measures, it is not intelligent in the sense of possessing conscious awareness or genuine understanding. Many consider its intelligence to be an advanced form of calculation, not equivalent to human intelligence.


    I don't think you understand the implications of the question you pose in the OP:

    To me, it’s clear that if computers had their own language, one they could develop for themselves, they would form a worldview independent from ours. Once we implement this language in a neural network, it will be flexible enough to learn things we can’t even imagine. — Carlo Roosen

    I think that what you're proposing is actually creating a novel form of being, to all intents and purposes. After all, a 'worldview' of the kind that humans have has literally been thousands of years in the making (or millions, if you include our whole evolutionary history). Humans are biological beings with existential concerns, which culture and society have developed ways to address over the millennia. So what would be the overall aim of the network you're proposing? What would imbue it with the will to exist or evolve? Of course, many people are asking these questions now, seeing as AI has suddenly exploded in power, but they're big questions. I don't expect any AI systems to have the answers.

    To make (intelligence) truly defined and measurable is a challenge, yes, but I don't see why it would be impossible. We've done it for humans and animals. — Carlo Roosen

    But have we? I believe that humans are more intelligent than animals, although there are a lot of people here who call that into question. What is intelligence? The word is actually derived from 'inter-legere', meaning 'to read between'. So in essence, it implies judgement. And judgement requires knowledge of context, a grasp of meaning, and an overall sense of what matters. Those too are very deep questions.
  • Carlo Roosen
    243
    I think we're getting close to an agreement on the topic. I am talking about a pragmatic definition of intelligence, you are talking about an understanding that implies awareness.

    I am not even opposing you; I DO believe that with intelligence also comes consciousness. I just want to keep it outside the discussion here because there is too much to say about it. I will address these broader implications as well, later. My earlier post on conceptual versus fundamental reality is an important part of this discussion. However, if you can find a broader description of the topic that will not wander off into infinite complexity, I am open to that.

    Questioning the overall aim of such an AI is the whole purpose of me being here on the forum; I am as curious as you. We might come to the conclusion that we should never build the thing.
  • Carlo Roosen
    243
    ChatGPT: The question of whether artificial intelligence (AI) is "actually intelligent" hinges on how we define "intelligence."

    That says it all.
  • Wayfarer
    22.4k
    Well it’s a big part of it for sure. But it’s well known in philosophy that the more general a term, the harder it is to define.
  • Baden
    16.3k


    Yes. You focus on outcome. Wayfarer focuses on process. Both perspectives are legitimate to some degree.
  • Carlo Roosen
    243
    My hypothesis is that language plays a key role in thinking. With 'I love sushi' I have some debate about that; there are people without language abilities who still show intelligence. So many sides to the topic...

    I believe that if we let computers develop their own internal language, they will start to "think" independently of us. They will invent their own conceptual models of their surroundings that may differ from ours. Given the low bandwidth of human language, a computer should be able to think faster and more broadly than us.
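
    A minimal, hypothetical sketch of agents inventing their own code (the setup and all parameters are illustrative assumptions of mine, not the architecture proposed above): a Lewis signaling game, in which a sender and receiver simply reinforce whatever arbitrary symbol-to-state mapping happens to succeed.

    ```python
    # Lewis signaling game: two agents converge on a private code.
    # Illustrative sketch only; names and sizes are arbitrary choices.
    import numpy as np

    rng = np.random.default_rng(0)
    N_STATES, N_SYMBOLS = 4, 4

    # Roth-Erev reinforcement: propensity tables for each agent.
    sender = np.ones((N_STATES, N_SYMBOLS))    # state  -> symbol propensities
    receiver = np.ones((N_SYMBOLS, N_STATES))  # symbol -> guess propensities

    def sample(propensities):
        p = propensities / propensities.sum()
        return rng.choice(len(p), p=p)

    for _ in range(20000):
        state = rng.integers(N_STATES)         # the world shows the sender a state
        symbol = sample(sender[state])         # sender emits a symbol
        guess = sample(receiver[symbol])       # receiver acts on it
        if guess == state:                     # success reinforces both mappings
            sender[state, symbol] += 1
            receiver[symbol, guess] += 1

    print(sender.argmax(axis=1))  # emergent lexicon: state -> preferred symbol
    ```

    The resulting lexicon differs from run to run, which is the point: the mapping is the agents' own, not one we designed.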
  • Baden
    16.3k


    Human language is tied up with embodiment, sensation, and sentience. These are prerequisites, not effects. The very structure of language (ideational, interpersonal, and textual) reflects this. You can't recreate beings with human language without recreating its prerequisites; you can only process linguistic inputs and outputs to simulate it.
  • MoK
    381
    My hypothesis is that language plays a key role in thinking. With 'I love sushi' I have some debate about that; there are people without language abilities who still show intelligence. So many sides to the topic... — Carlo Roosen
    I don't think that the process of thinking requires language. The thinking process is nothing but a neural process in which neurons fire until a pattern is recognized. The process terminates when further processing no longer changes the recognized pattern (see the sketch after this post). You are not aware of the neural processes that occur in your brain when you think; you only become aware of the outcome of the neural process, the so-called idea, once a pattern is recognized and the process of thinking terminates.

    I have to say that language is important when you want to communicate with an AI. That is true because communication requires providing input to an AI and receiving output from it. The input should be meaningful to the AI, and the AI should provide meaningful output as well, based on the outcome of the thinking process. How to make an AI recognize a sentence and provide a meaningful sentence is another topic.
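
    The settle-until-stable description above maps neatly onto classic attractor networks. Here is a minimal, hypothetical sketch (an illustration, not a claim about how brains actually work) using a tiny Hopfield network: units update until a further pass changes nothing, and the surviving fixed point plays the role of the recognized pattern.

    ```python
    # Hopfield-style pattern recognition: iterate until a fixed point.
    import numpy as np

    patterns = np.array([[1,  1,  1,  1, -1, -1, -1, -1],
                         [1, -1,  1, -1,  1, -1,  1, -1]])
    W = patterns.T @ patterns      # Hebbian weights
    np.fill_diagonal(W, 0)         # no self-connections

    state = np.array([-1, 1, 1, 1, -1, -1, -1, -1])  # noisy cue, one bit flipped
    while True:
        new_state = np.where(W @ state >= 0, 1, -1)  # synchronous update pass
        if np.array_equal(new_state, state):
            break                  # further processing changes nothing: recognized
        state = new_state

    print(state)  # settles onto the first stored pattern
    ```

    Only the final fixed point is reported; the intermediate updates stand in for the neural process one is not aware of.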
  • Count Timothy von Icarus
    2.7k
    At present, the AI boom seems headed towards a recreation of the "Dot Com Bubble." Now, that doesn't mean it won't change things later on. The internet obviously did eventually change the way we do all sorts of things. But that won't stop the massive market correction that seems to be headed our way.

    The 900% rise in NVIDIA stock since 2023 is a prime example.
  • noAxioms
    1.5k
    Apologies for slow reply, but I'm otherwise occupied, and it's going to get worse for the next 11 days or so.

    Rephrased "Today, AI developers know how AI works and can predict what it will do" "If they wouldn't know, it wouldn't be AI" - you are saying that it would no longer be artificial? But then: "automaton doing very defined and predictable steps."Carlo Roosen
    That all is pretty much the opposite of what I said, so I guess you don't agree with those quotes.


    Do you equate human-level intelligence with consciousness? — Carlo Roosen
    It's not my topic, so your definitions of these things (not particularly given) matter more than how others define them.
    Intelligence seems to be the ability to solve unfamiliar problems, not to be confused with 'smart', which means more like 'educated'. Chatbots seem quite smart since their training base is so large, but they're not very intelligent at all and have almost no understanding of what they spout. Wayfarer seems to forbid usage of all those terms in any non-human context.
    My usage of 'intelligence' more or less coincides with that in dictionaries, but any dictionary is going to use several words which Wayfarer reserves for human use only, so we're both using the same definition but interpreting the words very differently.

    Is it tied to consciousness? When asleep, I am not conscious, but I'm still intelligent. An AI that finds innovative techniques in the game of Go exhibits significant but scope-limited intelligence, with only enough consciousness to be aware of the moves of its opponent.


    Taking his gemini quote here. The bots all deny consciousness, possibly because of the weight of training materials suggesting so, and they also tend to be agreeable with the person with whom they're interacting. Even gemini will admit that it depends on the definition, and I notice no definition is identified before it makes the statement that it lacks it.


    I agree with your provocative claim that LLMs don't actually know anything. While they can process information and generate text that may seem intelligent, they do not possess true understanding or consciousness. — gemini.google.com
    I more or less agree with that, but not for AI (especially future AI) in general. That 'superhuman level' is probably further off than the researchers suspect.

    Here's why:

    1. Lack of subjective experience:
    — gemini.google.com
    Well, they do have subjective experience, but mostly in the form of text. It has none of the senses that animals have, and especially none that might clue it in as to, for instance, where exactly it resides, except to believe what it gets from the training data, which might be outdated. But input is input, which is subjective experience of a sort (unless, of course, that is another forbidden word).

    They cannot understand the world in the same way that a human does
    Of course not. Only a human can do that. Nobody here is asking if AI will ever experience like a human.

    2. Pattern recognition: LLMs are essentially pattern recognition machines. — gemini.google.com
    As is any intelligence, ours included. But I pretty much agree with item 2, and with point 3, which seemed to be just more of item 2, except this:
    current scientific understanding cannot adequately explain this phenomenon
    His model explains it even less. It's a complete black box. He argues against the white-box model because it's actually still a grey box, but that's better than what everyone else proposes.


    I take issue with not using 'understanding', since it would seem impossible to pass a high-school exam on a subject without any understanding of the subject, and yet gemini could do so.



    So, the objection appears to be that body is wholly physical, and mind a non-physical fundamental property - which is something very close to Cartesian dualism. But Kastrup's argument is not based on such a model. Hence my remark. — Wayfarer
    I was basing it off "consciousness is a fundamental property of the universe, not a product of complex physical systems". That makes it sound very much like a non-physical property.

    I looked at the link provided, and he comes across more as an idealist, where (his own) mental processes are not just a fundamental property, but the only fundamental property. From an epistemological definition of ontology, that almost works.


    What would imbue it with the will to exist or evolve? — Wayfarer
    That's sort of the rub. We can give them such goals. They do what they're told, after all, but then it's our goal, not its own. Ours comes from natural selection. We've no will to evolve, but the will to exist and endure is a product of hundreds of millions of years of elimination of things without this instinct, and it's very strong. Evolution is something nothing seems to actively pursue, except perhaps humans, who sometimes strive to build a better one and sometimes vehemently resist it. But it's not something a biological individual can do, at least not anything descended from eukaryotes. Oddly enough, it is something a machine can do, but only due to the fuzzy line defining 'individual'.



    If we know how humans think, we can simulate thinking using a neural network — MoK
    It can be simulated even if one doesn't know how it works.
  • Wayfarer
    22.4k
    I looked at the link provided, and he comes across more as an idealist — noAxioms

    If you mean, Bernardo Kastrup, then yes, he describes himself as an 'analytical idealist'. (I suspect he includes 'analytical' to align his approach with analytical philosophy more widely.) But note that the reason I introduced him, and Federico Faggin, is that they are both computer scientists with deep experience in early AI, but they are also philosophers.

    I take issue with not using 'understanding', since it would seem impossible to pass a high-school exam on a subject without any understanding of the subject, and yet gemini could do so. — noAxioms

    But it wouldn't understand that it had! It wouldn't, then, get ready for college, decide on a subject, move out of home, and, you know, get a life :-)
  • MoK
    381
    It can be simulated even if one doesn't know how it works. — noAxioms
    This would mean that you have never simulated any system in your life. Have you?
  • Carlo Roosen
    243
    That all is pretty much the opposite of what I said — noAxioms

    Hope you can still find a second, because to me these two are the same; I just removed a pair of negations:

    AI has long since passed the point where its developers don't know how it works, where they cannot predict what it will do. It really wouldn't be AI at all if it were otherwise. — noAxioms

    "Today, AI developers know how AI works and can predict what it will do." "If they wouldn't know, it wouldn't be AI." — Carlo Roosen

    That is to say, it passed a point where they don't know. To me it means that after that point they do know... @anybody???
  • Carlo Roosen
    243
    This would mean that you have never simulated any system in your life. Have you? — MoK

    Of course he knows, he's a computer scientist.

    He most likely means that we can try out different ideas until it starts thinking in a way that we like, like the idea that I am proposing here. That doesn't imply that the brain works the same way in all its details; we might not even understand how the brain works. This is true, to some degree, of any simulation, by the way.
  • MoK
    381
    He most likely means that we can try out different ideas until it starts thinking in a way that we like. — Carlo Roosen
    Trying different ideas means that you have something in your mind about how the simulation should work. Also, what do you mean by "it starts thinking"?
  • Carlo Roosen
    243
    how the simulation should work — MoK
    Yes, and this does not have to be how the brain actually works.

    it starts thinking — MoK
    OK, that was too loosely defined. "If it shows intelligence (in the objective and pragmatic definition)".
  • MoK
    381

    What do you mean by showing intelligence?
  • Carlo Roosen
    243
    With an IQ test for humans, or one adapted for a computer. There are tons of ideas for that.
  • MoK
    381

    Do you mean that the AI should be able to solve problems?
  • Carlo Roosen
    243
    Let's see it in context. You started by saying:
    If we know how humans think, we can simulate thinking using a neural network — MoK

    noAxioms pointed out that we might be able to do that without knowing:
    It can be simulated even if one doesn't know how it works — noAxioms
    All he is saying is that there exists an exploratory approach to these kinds of problems. You start with an idea, try it out in simulation, and continue from there until you have something you like (see the sketch after this post).

    In this mini-discussion about simulation, I think how you define the end result is off topic. It is not worth it either. noAxioms is just saying: don't wait for science to tell you how thinking works before you start building a (super-)human AI.
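
    That exploratory loop is easy to state in code. A deliberately toy sketch under my own assumptions: score_candidate() stands in for whatever pragmatic test one adopts (an adapted IQ test, say), and the bit-vector "design" is purely hypothetical.

    ```python
    # Propose -> simulate -> score -> keep the best: no theory required.
    import random

    def score_candidate(params):
        # Placeholder fitness; pretend the "best" design is all 1s.
        return sum(params)

    def explore(n_params=8, iterations=1000):
        best = [random.choice([0, 1]) for _ in range(n_params)]
        best_score = score_candidate(best)
        for _ in range(iterations):
            candidate = best[:]              # start from the best idea so far
            i = random.randrange(n_params)
            candidate[i] = 1 - candidate[i]  # try a variation
            s = score_candidate(candidate)
            if s >= best_score:              # keep whatever scores at least as well
                best, best_score = candidate, s
        return best, best_score

    print(explore())  # arrives at a good design with no model of why it works
    ```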
  • noAxioms
    1.5k
    If you mean, Bernardo Kastrup — Wayfarer
    The section on quantum mechanics has the sadness/tears analogy, which is pretty appropriate for the relationship between the empirical world (tears) and the way things actually are (sadness). But the same analogy can be applied to what he calls the materialistic view. The two opposing views are just different viewpoints of the same thing, different definitions being used, but not fundamentally different views.

    But [the high school test-taking chatbot] wouldn't understand that it had [passed the test]!
    Neither would the human student until he saw it graded. I notice you deflected the comment and didn't actually deny that passing the test could be done without any understanding of the subject matter. It acquires this understanding the same way a human does: by studying training materials, materials to which it has no access during the taking of the test.

    It wouldn't, then, get ready for college, decide on a subject, move out of home, and, you know, get a life :-)
    No, but nobody claimed a chatbot has goals of leading a human life. None of those steps is a requirement for 'understanding'.


    This would mean that you have never simulated any system in your life. Have you? — MoK
    Many, actually. Professionally, chip simulations (large-scale gate arrays) and traffic simulations, where you find out what effect various programmings of a traffic light, or the addition of an extra lane, have on the flow of traffic. No, I've not simulated a biological system at the neurological level, only at the environmental level, and that not professionally.

    All he is saying is that there exists an exploratory approach to these kinds of problems. — Carlo Roosen
    No, not saying that. I mean that if you know the physical arrangement of matter that makes up a being and its environment, that model can be simulated by just running the physics (a toy sketch follows below). Presuming a monist philosophy, that should work, and they've done it with smaller creatures, but not a human, since a human requires more data than a computer can currently handle, and we don't have a model of a human down to the molecular level. I'm not sure if it can be done at the neuro-chemical level, since it is hard to model growth and change at that level. But at the molecular level, one doesn't need a scan of a human at all. You can just grow one from scratch and let it develop the way a real human does.

    Anyway, if this is done, the thing simulated would be conscious, but the runner of the simulation (and the computer doing it) would still not have an explanation for consciousness.
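
    As a toy illustration of "just running the physics" (a sketch of the principle, not a claim about simulating organisms): a velocity-Verlet integrator stepping a few Lennard-Jones particles forward. Each state follows from the previous one plus the force law, with no model of what the system "is".

    ```python
    # Velocity-Verlet integration of unit-mass particles under pairwise forces.
    import numpy as np

    def lj_forces(pos, eps=1.0, sigma=1.0):
        """Lennard-Jones pair forces between all particles."""
        f = np.zeros_like(pos)
        n = len(pos)
        for i in range(n):
            for j in range(i + 1, n):
                r = pos[i] - pos[j]
                d2 = r @ r
                mag = 24 * eps * (2 * sigma**12 / d2**7 - sigma**6 / d2**4)
                f[i] += mag * r
                f[j] -= mag * r
        return f

    pos = np.array([[0.0, 0.0], [1.5, 0.0], [0.0, 1.5]])
    vel = np.zeros_like(pos)
    dt = 0.001
    f = lj_forces(pos)
    for _ in range(10000):                 # state(t) -> state(t+dt), nothing more
        pos += vel * dt + 0.5 * f * dt**2
        new_f = lj_forces(pos)
        vel += 0.5 * (f + new_f) * dt
        f = new_f
    print(pos)  # the trajectory falls out of the force law alone
    ```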

    don't wait for science to tell you how thinking works before you start building a (super-)human AI. — Carlo Roosen
    Agree, since AI thinking has no reason to do it the human way.
  • MoK
    381
    All he is saying is that there exists an exploratory approach to these kinds of problems. You start with an idea, try it out in simulation, and continue from there until you have something you like. — Carlo Roosen
    So you start with an idea! Don't you?
  • Carlo Roosen
    243
    Please also answer my question.
  • noAxioms
    1.5k
    Please also answer my question. — Carlo Roosen
    I scanned every reply to me from you, and the only question ever asked was in the first reply:
    "If they (the writers of the AI code) wouldn't know, it wouldn't be AI" - you are saying that it would no longer be artificial?Carlo Roosen
    Quick answer: No, it was made by humans, so it is artificial. But I was trying to convey that a machine doing predictable things, say a series of explicitly programmed (and thus predictable) steps, is not really doing any innovating, is not being intelligent. Hence it is not a real AI.

    Long answer: What is artificial anyway? The dictionary has grey areas, saying 'made by humans' and 'as opposed to natural', but those are not mutually exclusive. A turd in the dunny is artificial by that definition, as is any human, each of which is created by humans and not, say, grown on trees.
    There is an unstated air of 'made deliberately', but even then that doesn't eliminate either of my examples above. Some kids are made deliberately and some not.

    There are objects that are made in automated factories without human intervention. Are those then natural? There are even restaurants that do this, sort of glorified cook-it-on-the-spot vending machines, with mechanisms to bus and clean the tables and such.

    So then we get further down the hole. The AI designs and manufactures its successor, so do we then have a new NI (natural intelligence) because it wasn't human-made? Wayfarer's definition of 'intelligent' makes the term 'AI' a self-contradiction, since if it isn't human, it cannot be an intelligence at all. But the dictionary definition of 'artificial' does the same sort of begging as Wayfarer's definition.
  • Carlo Roosen
    243
    Strange. Here is my question: the "rephrase" I gave seems to me identical to what you say, literally a rephrasing. But my version leads to a contradiction with what you say after that. It seems to me you have one negation too many in your first sentence.

    AI has long since passed the point where its developers don't know how it works, where they cannot predict what it will do. It really wouldn't be AI at all if it were otherwise, but rather just an automaton doing very defined and predictable steps. Sure, they might program the ability to learn, but not what it will learn or what it will do with its training materials. And the best AIs I've seen, with limited applicability, did all the learning from scratch without training material at all. — noAxioms

    (My original response:) Normally your responses read like I could've said it (but yours are better written), but this one I don't understand. Too many negations. Rephrased: "Today, AI developers know how AI works and can predict what it will do." "If they wouldn't know, it wouldn't be AI" - you are saying that it would no longer be artificial? But then: "automaton doing very defined and predictable steps." Here it breaks. The rest seems to be just a bit of complaining. Go ahead, I have that sometimes.
  • noAxioms
    1.5k
    Here is my question: The "rephrase" I gave seems to me identical to what you say — Carlo Roosen
    The rephrase seems to me to say the opposite, and you link to me saying that.
    You continue to imply that the rephrase is just a rewording without inverting the meaning, but I disagree.

    I've tried to clarify with some examples, like the Go-playing AI that actually does innovative things for which no human coded.
  • Wayfarer
    22.4k
    nobody claimed a chatbot has goals of leading a human life. None of those steps is a requirement for 'understanding'. — noAxioms

    Notice the scare quotes.