• JuanZu
    346
    Experience is a stream of information
    punos

    "So we have to differentiate between information and experience (Mary's room then). Because you're not seeing the experience, but rather a reconstruction in a monitor, in a flat screen. A few pixels, but the experience isn't made up of pixels. It is a translation from something to something totally different."


    The information is arranged on a substrate into which the experience cannot be broken down without losing what we call experience (when we see a glass of water, we do not see the neurons acting). It is like when we say that experience is nothing more than neural synapses. But methodologically we have a one-way path: the association from experience to neural processes, but no return path from processes to experience.

    In fact, this is confirmed in the video you brought: we FIRST have evidence of what experience is, and then we adjust the monitor so that the electrical signals resemble what we see in experience. But we can translate those signals into anything, not necessarily into an image on a monitor. This raises a question: could we reconstruct experience in a physical way without first knowing what experience is (seeing neither neurons nor electrical signals, just a glass of water) and what it resembles? The answer is no.
  • T Clark
    15.3k
    ↪T Clark That's an interesting Pinker quote, although I myself frequently think in English sentences - not that I regard that as typical or as something everyone would do. Others have said here that there are people who can read and speak perfectly well without ever being aware of a stream of thought in their minds. I think my 'bottom line' with respect to AI (with which I now interact every day) is that LLMs are not subjects of experience or thought. And if you ask any of them - Claude, Gemini, ChatGPT - they will affirm this. They are uncannily like real humans, right down to humour and double entendres, but they're reflecting back at us the distillation of billions of hours of human thought and speech.
    Wayfarer

    After this whole discussion started, I did a little research on Google and in the SEP. What I found is consistent with what you’re writing. There seem to have been two approaches to this question - one that uses a language-based approach and another that uses the kind of processes that are described in an LLM. I guess it is controversial which one is the proper one to use in this kind of situation.
  • Wayfarer
    25.4k
    I asked ChatGPT: ‘When an LLM ‘gets’ a joke and signals ‘ha ha’, it doesn’t actually feel amused so much as recognize it as a joke and respond accordingly, right?’

    ChatGPT: ‘Yes, when an LLM ‘gets’ a joke and says ‘ha ha,’ it isn’t actually amused — it’s just recognizing the pattern of a joke and producing the kind of response people usually give. It’s a simulation of amusement, not the feeling itself.

    So it’s just like brain-image reconstructions, which give us a modelled output rather than direct access to the brain’s “movie”.’
  • punos
    741
    But it’s important to see what’s really happening: the system has to be trained for hours on each subject, with researchers mapping brain activity against known images and then building statistical models to translate those signals back into visuals.
    Wayfarer

    Right. Those statistical models are needed to reproduce the information contained within the electromagnetic signals emitted by neural activity. The information at this electromagnetic level is an encoding of the spiking electrochemical propagating patterns within the brain tissue. It is a byproduct of neural communication that can be measured and tapped into. The brain itself does not use these electromagnetic emissions as its own encoding. Therefore, there is no direct transfer of information, but rather a translation into a new encoding compatible with our devices, which can then re-represent that information in yet another encoding for the video screen or monitor. Still the same information in a different encoding.
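    A toy numeric sketch of that point (a minimal illustration, not a real neuroimaging pipeline: the "brain" here is just an invented random linear encoder): the same information survives translation into a foreign encoding, provided a statistical model is fitted to translate it back.

    ```python
    # Toy sketch: "same information, different encoding". An invented linear
    # "brain" encodes images into signal channels; a least-squares decoder
    # (the statistical model) translates the signals back into pixels.
    import numpy as np

    rng = np.random.default_rng(0)
    n_pixels, n_channels, n_train = 64, 128, 500

    encode = rng.normal(size=(n_channels, n_pixels))  # unknown "neural" encoding

    train_images = rng.normal(size=(n_train, n_pixels))  # stimuli shown in training
    train_signals = train_images @ encode.T + 0.1 * rng.normal(size=(n_train, n_channels))

    # Fit the statistical model: signals -> pixels, by ordinary least squares.
    decoder, *_ = np.linalg.lstsq(train_signals, train_images, rcond=None)

    new_image = rng.normal(size=n_pixels)                 # never seen in training
    reconstruction = (new_image @ encode.T) @ decoder     # decode its "neural" signal
    print(np.corrcoef(new_image, reconstruction)[0, 1])   # close to 1.0
    ```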

    A single piece of information can exist in multiple places at once and be represented in multiple ways simultaneously. The information reconstructed from a brain scan is, in principle, the same information as in the brain if captured with perfect fidelity. It can be copied an infinite number of times, and each copy is identical to the original, provided the replication is perfectly accurate. The only limits to this process are practical constraints with current technology.

    So what we’re seeing isn’t the brain “projecting” a movie by itself, but a reconstruction produced through a pipeline of human design, training, and interpretation. Without that interpretive layer, the raw neural data wouldn’t 'look like' anything.
    Wayfarer

    Yes, this is because human expertise is required to build the system that performs the decoding and encoding. This makes it possible to extract information from the brain even if the specific image was not included in the training data for the statistical model. Without this step, there is no access to the information in the brain in order to copy it.

    They don’t show that the brain literally contains images — they’re model-based translations of neural activity, not direct readouts of images 'stored' in the neural data.
    Wayfarer

    That is exactly correct. It is not the image that is being read out, but the information about the image, which is then reconstructed into the image. Remember that the image in the brain is not stored in the format of an image. There is no little box of pictures in the brain with a little man looking at the picture when you see it. The information of the image is stored in the form of distributed neural weights, and we can only access that information when the brain itself activates it, which is why the stimulus and response phase of training is necessary.

    It is possible to take neural data intended for the visual center of the brain and route it into the auditory center. In that case, the experience of the image is no longer visual but auditory. It is the same information, but situated within a different neural architecture. This phenomenon is called synesthesia, as I am sure you know.
  • punos
    741
    The information is arranged on a substrate into which the experience cannot be broken down without losing what we call experience (when we see a glass of water, we do not see the neurons acting). It is like when we say that experience is nothing more than neural synapses. But methodologically we have a one-way path: the association from experience to neural processes, but no return path from processes to experience.
    JuanZu

    I answered this here:
    We would then need a machine capable of writing (not just reading) to your brain using your specific encoding. Now, when I look at an image, you would see and experience everything I see.
    punos

    This process would stimulate your brain using the information from my brain, after translating it from my encoding to yours, giving you an experience of what I am seeing. My encoding would be mapped and translated to your encoding.
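    A minimal sketch of that mapping idea, under heavy assumptions (both "subjects" are invented linear encoders, and the translation is learned from paired recordings of the two brains viewing the same stimuli):

    ```python
    # Toy sketch: translating one subject's encoding into another's. Paired
    # recordings of the same stimuli are enough to fit a linear map from
    # subject A's code to subject B's code.
    import numpy as np

    rng = np.random.default_rng(1)
    n_stim, n_feat, n_a, n_b = 400, 32, 64, 80

    enc_a = rng.normal(size=(n_feat, n_a))   # subject A's (unknown) encoding
    enc_b = rng.normal(size=(n_feat, n_b))   # subject B's (unknown) encoding

    stimuli = rng.normal(size=(n_stim, n_feat))    # images both subjects viewed
    code_a, code_b = stimuli @ enc_a, stimuli @ enc_b

    # Learn the translation A -> B directly from the paired codes.
    a_to_b, *_ = np.linalg.lstsq(code_a, code_b, rcond=None)

    new_stim = rng.normal(size=n_feat)             # something only A looks at
    predicted_b = (new_stim @ enc_a) @ a_to_b      # what to "write" into B's brain
    print(np.corrcoef(predicted_b, new_stim @ enc_b)[0, 1])  # close to 1.0
    ```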

    In fact, this is confirmed in the video you brought: we FIRST have evidence of what experience is, and then we adjust the monitor so that the electrical signals resemble what we see in experience. But we can translate those signals into anything, not necessarily into an image on a monitor.
    JuanZu

    The entire system can be automated to exclude the human from the loop, except of course for the subject being scanned. All that is needed is for the computer to control a monitor on which it can display images to the subject. As the subject views the images, the machine records the corresponding neural responses and independently develops a statistical model that identifies which parts of the brain are involved in processing what. This process alone can yield a viable statistical model capable of detecting arbitrary images from brain scans without human supervision.

    It's entirely possible to create a headset or helmet that constantly scans your brain throughout the day and compares images from a camera on the helmet to your neural activity; by the end of a week, perhaps, it will have built a robust model of the visual data in your brain.
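    To make that loop concrete, here is a minimal self-supervised sketch (the "brain" is again a stand-in random encoder, and show_and_record is a hypothetical helper, not a real BCI API): the machine picks its own stimuli, records the responses, and fits its own decoder with no human labeling anywhere.

    ```python
    # Toy sketch of the automated loop: display a stimulus, record the
    # (here, simulated) neural response, fit a decoder from the pairs.
    import numpy as np

    rng = np.random.default_rng(2)
    n_pixels, n_channels = 64, 128
    brain = rng.normal(size=(n_channels, n_pixels))  # stand-in for the subject

    def show_and_record(image):
        """Hypothetical: put `image` on the monitor, return the neural response."""
        return image @ brain.T + 0.1 * rng.normal(size=n_channels)

    # Unsupervised data collection: the computer chooses its own stimuli.
    stimuli = rng.normal(size=(500, n_pixels))
    responses = np.stack([show_and_record(img) for img in stimuli])

    # The machine fits its own statistical model: responses -> images.
    decoder, *_ = np.linalg.lstsq(responses, stimuli, rcond=None)

    probe = rng.normal(size=n_pixels)  # an arbitrary, never-shown image
    print(np.corrcoef(probe, show_and_record(probe) @ decoder)[0, 1])  # close to 1.0
    ```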

    This raises a question: could we reconstruct experience in a physical way without first knowing what experience is (seeing neither neurons nor electrical signals, just a glass of water) and what it resembles? The answer is no.
    JuanZu

    I don't know what you're asking here. Perhaps you can rephrase it?
  • JuanZu
    346
    We would then need a machine capable of writing (not just reading) to your brain using your specific encoding. Now, when I look at an image, you would see and experience everything I see.
    punos

    That's not a good answer. It doesn't address the issue of decomposition or methodology. A good answer would be: We can actually see neural processes first-person, and not only that, but methodologically we have discovered how to create consciousness without needing to be conscious ourselves as necessary evidence.

    I don't know what you're asking here. Perhaps you can rephrase it?
    punos

    In our experience, we do not see the neural processes that would compose the glass of water. This points to an irreducible qualitative difference. Because if we try to break down the glass of water, we do not obtain those neural processes.
  • Hanover
    14.3k


    Thanks to the association of particular images and recollections, a dog reacts in a similar manner to the similar particular impressions his eyes or his nose receive from this thing we call a piece of sugar or this thing we call an intruder; he does not know what is 'sugar' or what is 'intruder'. — The Cultural Impact of Empiricism

    What scientific study does he cite for this empirical claim? If my dog goes and gets a ball when I say "go get your ball," even new balls not previously seen, have I disproved his claim by showing the dog's understanding of categories? If not, what evidence disproves his claim?
  • 180 Proof
    16.1k
    The conscious mind is defined as a substance ...
    MoK
    Spinoza's 'conception of substance' refutes this Cartesian (Aristotelian) error; instead, we attribute "mind" only to entities which exhibit 'purposeful behaviors'.

    Thinking is defined as a process in which we work on known ideas with the aim of creating a new idea.
    A more useful definition of "thinking" is 'reflective inquiry, such as learning/creating from failure' (i.e. metacognition).

    An AI is a mindless thing, so it does not have access to ideas ... Therefore, an AI cannot create a new idea either.
    Circular reasoning fallacy. You conclude only what you assume.

    So, an AI cannot think, given the definition of thinking and considering the fact that it is mindless.
    "The definition" does not entail any "fact" – again, Mok, you're concluding what you assume.
  • punos
    741
    That's not a good answer. It doesn't address the issue of decomposition or methodology. A good answer would be: We can actually see neural processes first-person, and not only that, but methodologically we have discovered how to create consciousness without needing to be conscious ourselves.
    JuanZu

    I don't know what you mean, but I don't think you know what I mean either. You're being too vague or inconsistent about what we are talking about. I tried to show you how an image can be decoded from the brain and displayed on a non-conscious screen as pure information. I never claimed that the information has to be conscious (just the data). You wanted to know how to experience the image instead of just looking at it on a screen, so I gave you a way to do that. Now you're talking about creating consciousness, when I'm explaining how to experience the sensory data of another person with your own consciousness.

    In our experience, we do not see the neural processes that would compose the glass of water. This points to an irreducible qualitative difference. Because if we try to break down the glass of water, we do not obtain those neural processes.
    JuanZu

    We do not see the neural processes that encode a glass of water; we experience the process of reconstructing the information about a glass of water. When you observe neural activity from the outside, you naturally would not experience the glass of water. But if you place your perspective within the neural activity, becoming the neural activity itself (which you already are), then you would experience the glass of water through the activations responsible for its representation.

    When you look at a glass of water, your brain breaks down the neural signals from the light that hits your retinas and filters those signals through a dense maze of neural pathways, sorting out all the features of the image and storing the pieces all over the brain. The neural pathways that are activated every time you see a glass of water form the neural representation of the glass of water in your brain. You experience that neural pathway as a glass of water in your conscious mind when it is activated. No activation means no experience of the glass of water.
  • Wayfarer
    25.4k
    Thanks to the association of particular images and recollections, a dog reacts in a similar manner to the similar particular impressions his eyes or his nose receive from this thing we call a piece of sugar or this thing we call an intruder; he does not know what is 'sugar' or what is 'intruder'.
    — The Cultural Impact of Empiricism

    What scientific study does he cite for this empirical claim? If my dog goes and gets a ball when I say "go get your ball," even new balls not previously seen, have I disproved his claim by showing the dog's understanding of categories? If not, what evidence disproves his claim?
    Hanover

    Perhaps by scattering a range of balls of different sizes and saying 'fetch the large, white ball' or 'the ball nearest the lemon tree.' That might do the trick.
  • MoK
    1.9k
    A car ran over the neighbor's dog.

    Does the summary meaning of this sentence comprise an irreducible mental event? It (the idea via the sentence) happened; it isn't any more or less than what it means.
    Nils Loc
    Each sentence refers to at least one idea, such as a relation, a situation, etc. In your example, we are dealing with a situation.

    Compare:

    A 2024 Rapid Red Mustang Mach E ran over our neighbor's 15-year-old Chiweenie.

    Does the summary meaning of this sentence comprise an irreducible mental event?
    Nils Loc
    We are dealing with a situation again, no matter how much detail you provide.
  • MoK
    1.9k
    AI simply simulates thinking.
    I like sushi
    They don't know what thinking is, so they cannot design an AI that simulates thinking.

    It is built for pattern recognition and has no apparent nascent components to it.
    I like sushi
    Are you saying that thinking is pattern recognition? I don't think so.
  • MoK
    1.9k
    What do you mean by "think"? What is your definition of "think"?
    Corvus
    I already defined thinking in the OP.
  • Outlander
    2.6k
    They don't know what thinking is, so they cannot design an AI that simulates thinking.
    MoK

    So, what is thinking? You've, from what I've seen, yet to delineate a clear and concise formula (and resulting definition) for such.

    Are you saying that thinking is pattern recognition? I don't think so.
    MoK

    Well, I mean, take the following sentence.

    Ahaj scenap conopul seretif seyesen

    I thought very hard to make that sentence. But it means nothing until it hits the pattern-recognition part of your brain, the part that realizes "wait a minute, that's gibberish" versus this sentence you're reading now. I mean, come on. Let's be honest. The onus is now on you to explain your claims properly. Something that at least two or more intelligent people participating in this thread feel you've so far been unable to do.

    Love your avatar BTW. Reminds me of my mood most of the time sober.
  • I like sushi
    5.2k
    They don't know what thinking is, so they cannot design an AI that simulates thinking.
    MoK

    Well, that it appears to be 'thinking' was my point. It cannot think. It would have been better of me to state that AI models fool humans into thinking they can think.

    It simulates speech very effectively now. I certainly do not equate speech with thought, though. I want to be explicit about that!

    Are you saying that thinking is pattern recognition? I don't think so.
    MoK

    I was not saying any such thing. I was stating that AI is far more capable of pattern recognition than us. It can sift through masses of data and find patterns it would take us a long, long time to come close to noticing. It is likely these kinds of features of AI are what people mistake for 'thinking', as it seriously outperforms us when it comes to this kind of process.
  • MoK
    1.9k
    I’m OK with that as edited.
    T Clark
    Given the definition you suggested, you either don't understand what 'objectively exists' means, or you don't know what emergence is. I don't understand why you removed substance from my definition, but something that objectively exists is a substance, as opposed to something that subjectively exists, such as an experience. A neural process cannot give rise to the emergence of a substance, or something that objectively exists.

    Moreover, the brain is subject to constant change due to the existence of the mind. So, the brain cannot produce the mind and be affected by the mind at the same time. That is true, since the neural processes are subject to change once the mind affects the brain. There is, however, no mind once neural processes change. So, you cannot have both changes in neural processes and the mind at the same time.

    Of course it can. Life emerges out of chemistry. Chemistry emerges out of physics. Mind emerges out of neurology. Looks like your understanding of emergence is different from mine.
    T Clark
    Biology, chemistry, etc., are reducible to physics. That means we are dealing with weak emergence in these cases. Emergence of the mind, if it is possible, would be strong emergence, which I strongly disagree is possible, for the reasons mentioned in the previous comment.

    But that’s what it means. As I’ve said before, if you want to make up definitions for words, it’s not really philosophy. You’re just playing a little game with yourself.
    T Clark
    To me, abstraction and imagination are examples of thinking. Remembering, free association, etc. are not.
  • I like sushi
    5.2k
    @MoK What did you think of my hypothetical where something like a 'thought' could be said to manifest in the prolonged manner I mentioned?
  • MoK
    1.9k
    Spinoza's 'conception of substance' refutes this Cartesian (Aristotelian) error; instead, we attribute "mind" only to entities which exhibit 'purposeful behaviors'.
    180 Proof
    He is definitely wrong. Purposeful behaviors are attributes of living creatures. Living creatures have at least a body and a mind.

    Circular reasoning fallacy. You conclude only what you assume.
    180 Proof
    No. You need to read things in order to see what I said follows, and it is not circular.

    P1) AI is mindless.
    P2) The mind is needed for the creation of an idea.
    C1) Therefore, AI cannot create an idea (from P1 and P2).
    P3) Thinking is defined as a process in which we work on known ideas with the aim of creating a new idea.
    C2) Therefore, AI cannot think (from C1 and P3).
    C3) Therefore, AI cannot create a new idea (from P3 and C2).
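    The inferential skeleton can even be checked mechanically. Here is a toy rendering in Lean (the proposition names are illustrative; the premises are assumed, not derived):

    ```lean
    -- Toy propositional rendering of the argument above.
    variable (Mindless CreatesIdea Thinks : Prop)

    example
        (p1 : Mindless)                  -- P1: AI is mindless
        (p2 : Mindless → ¬CreatesIdea)   -- P2: no mind, no new idea
        (p3 : Thinks → CreatesIdea)      -- P3: thinking aims at a new idea
        : ¬Thinks :=                     -- C2: AI cannot think
      fun t => p2 p1 (p3 t)
    ```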
  • MoK
    1.9k
    So, what is thinking? You've, from what I've seen, yet to delineate a clear and concise formula (and resulting definition) for such.
    Outlander
    I define thinking as a process in which we work on known ideas with the aim of creating a new idea. This definition is inclined toward processes such as abstraction and imagination.

    Well, I mean, take the following sentence.

    Ahaj scenap conopul seretif seyesen
    Outlander
    You are talking about language here. Of course, this sentence does not mean anything to me, since I cannot relate any of the words you used to something that I know. Language is used to communicate new ideas, which are the result of thinking. We work with known ideas when it comes to thinking, so there is no such miscommunication between the conscious and subconscious mind.
  • MoK
    1.9k
    Well, that it appears to be 'thinking' was my point. It cannot think. It would have been better of me to state that AI models fool humans into thinking they can think.
    I like sushi
    Correct!

    It simulates speech very effectively now. I certainly do not equate speech with thought, though. I want to be explicit about that!
    I like sushi
    Correct again! An AI produces meaningful sentences only based on its database and infrastructure.

    I was not saying any such thing. I was stating that AI is far more capable of pattern recognition than us. It can sift through masses of data and find patterns it would take us a long, long time to come close to noticing. It is likely these kinds of features of AI are what people mistake for 'thinking', as it seriously outperforms us when it comes to this kind of process.
    I like sushi
    Correct again! :wink: An AI is just much faster than us at pattern recognition since it is silicon-based. It is specialized in certain tasks, though. Our brains are, however, huge compared to any neural net used in any AI, and they are multitasking. A neuron is just very slow.
  • MoK
    1.9k
    Love your avatar BTW. Reminds me of my mood most of the time sober.
    Outlander
    I am glad you like my avatar! :wink:
  • Outlander
    2.6k
    I define thinking as a process in which we work on known ideas with the aim of creating a new idea.
    MoK

    Finally, the (metaphorical) tender and ignorant flesh is exposed. Now it can be graded properly. Ah, except I note one flaw. And I'm no professional by any means. There is no "we" in this abstract concept. A man can be born alone in the world and he will still think. But perhaps this is a simple habit of speech, a human flaw like we all have, to be ignored, so I shall. Just to give you the benefit of the doubt. :smile:

    But! Ah, yes, there's a but. Even still. One cannot "know an idea" without the auspices and foreprocesses of thought itself. So, this is defining a concept without explaining its forebear. Your so-called "thinking" is created by the process of involvement with "known ideas". Yet how can an idea exist and be known unless thought of? This amounts to yet another non-answer.

    We would have evolution going in reverse, if one were to believe your so-called findings and beliefs. This is a problem. You must find a solution.
  • MoK
    1.9k
    Ok. So we have to differentiate between information and experience (Mary's room then). Because you're not seeing the experience, but rather a reconstruction in a monitor, in a flat screen. A few pixels, but the experience isn't made up of pixels. It is a translation from something to something totally different.
    JuanZu
    Very accurate!