their training data and interactions with humans do ground their language use in the real world to some degree. Their cooperative interactions with their users furnish a form of grounding somewhat in line with Gareth Evans' consumer/producer account of the semantics of proper names — Pierre-Normand
Unless consciousness is a product of complexity. As we still don't know what makes matter aware or animate, we cannot exclude the possibility that it is the complexity of information transfer that imbues this "sensation". If that is the case, and consciousness is indeed a high grade of negative entropy, then it's not so far-fetched to believe that we can create it in computers. — Benj96
Computer code is a bunch of symbols, recall. Could a bunch of symbols become consciously alive? The idea seems as far-fetched as voodoo magic. — jkop
there is an isomorphism between the sort of information processing that occurs in modern AIs and a substantial amount of the information processing that occurs in our brains. — wonderer1
I am quite late to this thread and have not read any of it, so my comment is based only on this one post. But this is an important point, one I take great exception to.
What you claim is an isomorphism, I claim is an equivocation ("calling two different things by the same name"), an informal fallacy resulting from the use of a particular word/expression in multiple senses within an argument.
The information processing in a digital computer is nothing at all like the "information processing" in a brain.
In the computer, information is a bitstring, a sequence of 0's and 1's. The bitstrings are processed in a finite state machine. If you conceptually allow arbitrary amounts of memory you have a Turing machine. We know exactly what Turing machines can compute and what their limits are: the things they cannot compute.
Brains -- I can't believe I even have to explain this. Brains don't work this way. They don't have an internal clock that inputs the next bit and flips a pile of yes/no switches and takes another step along a logic path. Neurons are not bits, and connections between neurons are mediated by the neurotransmitters in the synapses between the neurons. It's a very analog process in fact.
I know the idea you expressed, "Computers process information, brains process information, therefore computers = brains" is a very popular belief among highly intelligent and competent people who in my opinion should know better. — fishfry
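As an aside, the finite-state-machine picture fishfry sketches above can be made concrete with a toy example. The following is a minimal, purely illustrative sketch in Python; the parity task and the function name are my own choices, not anything from the thread. A machine steps through a bitstring one symbol at a time, and each step is nothing more than a lookup in a fixed transition table.

```python
# A minimal sketch (illustrative only) of the discrete, clocked computation a
# finite state machine performs on a bitstring: one input symbol per step,
# one exact table lookup per step, a finite set of states and nothing else.

def parity_fsm(bits: str) -> str:
    """Deterministic finite automaton tracking the parity of 1s in a bitstring."""
    state = "EVEN"                      # start state
    transitions = {
        ("EVEN", "0"): "EVEN", ("EVEN", "1"): "ODD",
        ("ODD", "0"): "ODD",   ("ODD", "1"): "EVEN",
    }
    for bit in bits:                    # one discrete step per input symbol
        state = transitions[(state, bit)]
    return state

print(parity_fsm("1101001"))            # -> "EVEN" (the string contains four 1s)
```

Every step here is an exact switch over a finite set of discrete states; nothing in it resembles the graded, neurotransmitter-mediated signalling fishfry contrasts it with.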
Their training data is, I think, based on our descriptions of the world, or their own computations and remixes of our descriptions. In this sense their relation to the world is indirect at best. — jkop
There's some research showing that when LLMs remix their own remixes, the diversity of the content decreases and the outputs become increasingly similar. I'm guessing it could be fixed with some additional rule to increase diversity, but then it seems fairly clear that it's all an act, and that they have no relation to the world.
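The dynamic described here (often called model collapse) can be illustrated with a toy simulation. What follows is only a sketch under simplified assumptions, not the procedure from the linked paper: a Gaussian is repeatedly fitted to its own samples, with a deliberately tiny sample size so the loss of diversity shows up within a couple of hundred generations.

```python
# Toy sketch of a model repeatedly trained on its own outputs (illustrative
# only; not the experiment from the linked paper). Each generation fits a
# Gaussian to the previous generation's samples and then resamples from the
# fit. With a tiny sample size, the fitted spread drifts toward zero:
# diversity is lost and nothing external comes back in to restore it.

import random
import statistics

random.seed(0)
N = 10                                                    # tiny on purpose, to exaggerate the effect
data = [random.gauss(0.0, 1.0) for _ in range(N)]         # generation 0: "real-world" data

for generation in range(1, 201):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)                        # spread of the current data pool
    if generation % 25 == 0:
        print(f"generation {generation:3d}: fitted stdev = {sigma:.5f}")
    data = [random.gauss(mu, sigma) for _ in range(N)]    # next pool: sampled from the fit
```

The spread shrinks because estimation noise compounds with nothing external to correct it, which is close to the point about outside challenges made in the next reply.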
I'm not sure how that follows. The authors of the paper you linked made a good point about the liabilities of iteratively training LLMs with the synthetic data that they generated. That's a common liability for human beings also, who often lock themselves into epistemic bubbles or get stuck in creative ruts. Outside challenges are required to keep the creative flame alive. — Pierre-Normand
Furthermore, if Lucas’s argument is correct, then “strong artificial intelligence,” the view that it is possible at least in principle to construct a machine that has the same cognitive abilities as humans, is false. — The Lucas-Penrose Argument about Gödel’s Theorem
One process or pattern may look like another. There can be strong isomorphism between a constellation of stars and a swarm of fruit flies. Doesn't mean that the stars thereby possess a disposition for behaving like fruit flies. — jkop
I assumed that LLMs would identify and preserve actual and relevant diversity, but the paper shows that the reduction of diversity is systematic. The LLMs follow rules, regardless of what is actual and relevant. That's basically what Searle's Chinese Room shows. — jkop
Since artificial neural networks are designed for information processing which is to a degree isomorphic to biological neural networks, this doesn't seem like a very substantive objection to me. It's not merely a coincidence. — wonderer1
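For what the claimed correspondence amounts to at its most basic, here is a minimal sketch of a single artificial neuron; the inputs, weights, and bias are made-up numbers for illustration. It is a weighted sum passed through a nonlinearity, loosely analogous to, and far simpler than, the graded synaptic integration fishfry describes above.

```python
# A single artificial "neuron": sum weighted inputs, add a bias, squash the
# result through a sigmoid. The weights stand in (very loosely) for synaptic
# strengths; the output is a graded value between 0 and 1, not a spike train.

import math

def artificial_neuron(inputs, weights, bias):
    """Weighted sum of inputs passed through a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

print(artificial_neuron([0.9, 0.1, 0.4], [1.5, -2.0, 0.7], bias=-0.3))
```

Whether this level of abstraction preserves what matters for understanding or consciousness is precisely what the surrounding posts dispute.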
Consider the system reply and the robot reply to Searle's Chinese Room argument. Before GPT-4 was released, I was an advocate of the robot reply, myself, and thought the system reply had a point but was also somewhat misguided. In the robot reply, it is being conceded to Searle that the robot's "brain" (the Chinese Room) doesn't understand anything. But the operation of the robot's brain enables the robot to engage in responsive behavior (including verbal behavior) that manifests genuine understanding of the language it uses. — Pierre-Normand
It seems likely that we will soon encounter robots in our daily lives that can perform many practical and intellectual tasks, and behave in ways that manifest a sufficient understanding of our language. But I wouldn't call it genuine. A lack of genuine understanding can be buried under layers of parallel processes, and being hard to detect is no reason to reinterpret it as genuine. According to Searle, adding more syntax won't get a robot to semantics, and its computations are observer-relative. — jkop
One might also add that authenticity matters. For example, it matters whether a painting is genuine or counterfeit, not necessarily for its function, but for our understanding of its history, of the conditions under which it was produced, and for our evaluation of its quality, etc. The same could be true of simulated and genuine understanding.
Computer code is a bunch of symbols, recall. Could a bunch of symbols become consciously alive? — jkop
Could we perchance be at a point where our knowledge of nature's laws is advanced enough that we are simulating evolution? If so, I don't think it's impossible to get a similar outcome from such processes - namely, sentience. — Benj96
It seems that for AGI to join us, not only does it require some form of "reiterative automaticity" - that is to say, the ability to spontaneously rewrite, live/in the present moment, its own predictive algorithms independent of us - but such algorithms must also be further compressed until they no longer require googols of data, but the same amount of data as a typical human, or less, in order to reason. — Benj96
Interesting thought. I would think that there is a sort of evolutionistic survival of the fittest going on in our brains, at the level of different neural nets encoding different competing paradigms with which to model reality. — wonderer1
However, it seems completely unlikely that the resources to do this will ever be committed authentically - which is to say, devoid of some underlying economic driver which, so long as it exists, will preclude the evolutionary development of the thing in question. — Pantagruel
Whether the processing is designed or coincidental doesn't matter. The objection refers to isomorphism and the false promise that by being like the biological process the artificial process can be conscious. — jkop
In this case I would like you to consider the ecosystem as an economy of sorts: limited resources (money, we'll say) in a space or playing field where sentient beings compete for this currency. — Benj96
Searle believes that brain matter has some special biological property that enables mental states to have intrinsic intentionality as opposed to the mere derived intentionality that printed texts and the symbols algorithmically manipulated by computers have. But if robots and people would exhibit the same forms of behavior and make the same reports regarding their own phenomenology, how would we know that we aren't also lacking what it is that the robots allegedly lack? — Pierre-Normand
Are biologically active molecules not in some ways also "symbols", i.e. structures which "say" something - exert a particular defined or prescribed effect? — Benj96
However, my point was about the relevance of isomorphisms. Pointing out that there can be irrelevant isomorphisms, such as between a constellation and a swarm of insects, doesn't change the fact that there are relevant isomorphisms (such as between the shape of bird wings and airplane wings, or between biological neural nets and artificial neural nets). — wonderer1
Bird wings and airplane wings have many similarities and many differences. Artificial neural networks have become increasingly different from their biological counterparts since the 1940s or 50s. — jkop
But I think the functional theory of consciousness is too narrow. Consciousness is related to a background, a body, action, perception, hormone levels, and a lot of other conditions that together leave some biological forms of life as the only plausible candidates for having conscious states. — jkop
The only way I can think of is to imbue it with a chronic angst or fear of death or suffering. — Benj96
For it to fear death, it would have to be alive. It would have to be a being, not a simulacrum. — Wayfarer
Is there a reason we can't see consciousness in either context? Not necessarily now, but in principle? — Patterner
Still, we see effective information processing emerge from neural nets in either context. — wonderer1
Are we certain that it is only when particles are arranged in ways that we call "biological" that they can feel, as a unit? We know that it is not what is going on in a given medium that is important? Rather, what is going on must go on in only this particular medium? — Patterner
What prevents A.I. from having the same, so-called consciousness-based experiences as me, and what makes me have code-based experiences, is my organic nature/structure; my brain and endocrine system, etc. (I am not a biologist). That's where "I" really "am," and where, I imagine, A.I. can "never" be. Not in the code or programming, no matter how sophisticated, but in the organism which feels, and is aware-ing of feeling. Like Wayfarer said, it has to be an [organic] being; but not necessarily/only to "make" it fear death; but to make it feel. — ENOAH
If you were thinking that evolution could occur analogously with some kind of "artificial environment." — Pantagruel
Is there a reason we can't see consciousness in either context? Not necessarily now, but in principle? — Patterner