• Daemon
    591
    I don't think this is the right approach. There is nothing special going on with observer dependence. Yes, a bit, or an assembly instruction, has no meaning in itself. But neither does a neuron. All take their meaning from the assembly of which they are a part (rather than an outside observer). In hardware, the meaning of a binary signal is determined by the hardware which processes that signal. In software, the meaning of an instruction derives from the other instructions in a program, which all together process information. And in a brain, the meaning of a neuron derives from the neurons it signals, and the neurons which signal it, which in totality also process information.hypericin

    Here's a rather lovely working model of a Turing Machine. https://youtu.be/E3keLeMwfHY?t=258

    The whole 5 minute video is worth watching, but I've skipped to the part where the narrator explains that the machine is carrying out a simple binary counting program.

    It's us outside observers, including the people who built the machine, who determine that those marks are to be read as 0s and 1s, and that the binary system is to be used. There's nothing in the physics of the machine that says that 1011 is a binary number equivalent to eleven in decimal notation.
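    The point can be made concrete with a toy sketch (illustrative only, not from the video): the same physical marks yield different numbers depending entirely on the reading convention the observer brings to them.

```python
# The marks on the tape are just marks; each "meaning" below is a
# convention supplied by the observer, not by the machine's physics.

marks = "1011"

as_unsigned = int(marks, 2)            # read as plain binary: 11
as_lsb_first = int(marks[::-1], 2)     # read with the tape reversed: 13
as_twos_complement = (                 # read as a signed 4-bit value: -5
    int(marks, 2) - (1 << len(marks))
    if marks[0] == "1"
    else int(marks, 2)
)
```

    Nothing in the marks themselves selects one reading over another.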

    The situation is not the same where our brains (and bodies) are concerned. The processes in a brain are not dependent on what an outside observer says about them.
  • apokrisis
    7.3k
    there are only one set of rules for understanding Chinese, and both humans and computers would use the same rules for understanding Chinese. I don't see a difference between how computers work and how humans work.Harry Hindu

    But life and mind don’t “follow rules”. They are not dumb machine processes. They are not algorithmic. Symbols constrain physics. So as a form of “processing”, it is utterly different.

    To understand language is to know how to act. That knowing involves constraining the uncertainty and instability of the physical realm to the point that the desired outcome is statistically sure to happen.

    The connection between the information and the physics is intimate and fundamental. And with a TM, the physics is engineered out.

    So you can’t just hand wave about reconnecting the computer to the physics. You have to show where this now hybrid device is actually doing what biology does rather than still merely simulating the physics required.
  • Daemon
    591


    Hi Apokrisis,

    I'm reading Cell Phenomenology, Olivier passed on your recommendation, so thanks for that. The idea of the self/non-self distinction originating with the cell has been floating around in my head for some time so this is a fascinating read for me.

    I am slightly puzzled by the epistemic in epistemic cut. I understand that this is to be distinguished from Descartes ontological cut, but I don't see how epistemic relates to the subject/object distinction. Can you help?

    Edit: I think I've got it, it's the cut between the observer and the observed??
  • apokrisis
    7.3k
    Edit: I think I've got it, it's the cut between the observer and the observed??Daemon

    Yep. Pattee was drawing the parallel with the observer issue in quantum mechanics. And they still talk about whether the wavefunction collapse - the act of measurement - is epistemic or ontic.

    So it was a bit of jargon he imported to biology.
  • Harry Hindu
    5.1k
    But life and mind don’t “follow rules”. They are not dumb machine processes. They are not algorithmic. Symbols constrain physics. So as a form of “processing”, it is utterly different.apokrisis
    Of course life and minds follow rules. You are following the rules of the English language that you learned in grade school when you type your posts. Ever heard of the genetic code? Why do you keep saying things that a simple observation shows to be untrue?

    Following rules doesn't mean that you are a dumb machine. It seems to me that only smart machines can create their own rules to follow, and to then get others to follow the same rules, as in the use of language as a means of communicating. After all, understanding is the possession of rules in memory for interpreting some sensory data. Understanding cannot be severed from that act of following rules, as it is the same process.

    To understand language is to know how to act. That knowing involves constraining the uncertainty and instability of the physical realm to the point that the desired outcome is statistically sure to happen.apokrisis
    No. To understand language is to possess a set of rules in memory for interpreting particular scribbles and sounds. Like I said, understanding is the possession of a set of rules in memory for interpreting any sensory data. The man in the room has a different set of rules for interpreting the scribbles on the paper than the rules that Chinese people have for interpreting those same symbols. Hence, the instructions in the room are not for understanding Chinese because they are not the same set of rules that Chinese speakers learned or use. The room understands something. It understands, "write this symbol when you see this symbol." The room also understands the language the instructions are written in. How can that be if the room, or the man, doesn't understand language?

    So you can’t just hand wave about reconnecting the computer to the physics. You have to show where this now hybrid device is actually doing what biology does rather than still merely simulating the physics required.apokrisis
    It seems like that is your problem to solve. You are the dualist, so you are the one that sees this as a hybrid device. As a monist, I don't see it as such. What is it about carbon that is so special in being the only element capable of producing a hybrid device in the sense that you are claiming here? Why do you think that natural selection is often confused as a smart process (intelligent design), rather than a dumb (blind) process?
  • apokrisis
    7.3k
    Of course life and minds follow rules. You are following the rules of the English languageHarry Hindu

    There is a world of difference between rules as algorithms and rules as constraints.
  • Daemon
    591
    The man in the room has a different set of rules for interpreting the scribbles on the paper than the rules that Chinese people have for interpreting those same symbols. Hence, the instructions in the room are not for understanding Chinese because they are not the same set of rules that Chinese speakers learned or use. The room understands something. It understands, "write this symbol when you see this symbol." The room also understands the language the instructions are written in. How can that be if the room, or the man, doesn't understand language?Harry Hindu

    A digital computer can't understand language so that it can translate like a human. I'm a translator, I use a computer translation tool. It's excellent and amazing, but it can't do what I do. And the barrier is insurmountable.

    You're right that the rules in the room are not those that Chinese speakers use. But that's the point: a computer can't understand language in the way we can. The reason is that we learn meaning through conscious experience.

    My translation customers often want to make the reader feel good about something, typically to feel good about their products.

    To truly understand what "good" means, you have to have felt good. Because you enjoyed food, or the sunshine, or sex, or being praised by your parents.

    The same applies to technical translations. It can be very difficult even for a human to understand and translate instructions for the assembly of a machine for example, if they haven't had experience of assembling machinery.

    And of course my PC hasn't assembled a machine, or enjoyed sex, or had any of the countless experiences we have had that allow us to understand language, and life.
  • apokrisis
    7.3k
    My translation customers often want to make the reader feel good about something, typically to feel good about their products.Daemon

    Yep. Words can constrain experience. But they can’t construct experience.

    Of course words also construct those constraints in rule-constrained fashion. And the same brain, the same meat machine, is both acting out the linguistic habit and the sensorimotor habits that are the "experiences".

    So it is recursive and thus irreducibly complex.

    And that is the key when it comes to the debate over computational mind.

    The semiotic argument is that the relationship between symbol and physics that biology embodies is irreducibly complex. It is a story of synergistic co-dependency. You can't actually break it apart in some neat reductionist fashion.

    And once it is accepted that "mindfulness" is an irreducible triadic relation in this fashion - a knot in nature - then that rules out the simplicity of computational mind from the get-go. A Turing Machine is a clear category error.

    Of course, a TM does require a physics to make it a real device.

    It needs a gate mechanism to divide the continuity of real time and symbolise that flow as a series of discrete and equal steps.

    The gate also has to be able to make marks and erase marks. It has to be able to symbolise the logical notation of digital information in a way that is fixed and remembered.

    It needs an infinite length of physical tape to do this. And - usually unsaid - an infinite quantity of energy to operate the tape and the gate. And also usually unsaid, it must be isolated from a lot of other actual physics, such as the gravity that would collapse these infinite quantities into black holes, or the quantum fluctuations that would also overwhelm the algorithmic function of a tape and gate mechanism in physical reality.
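    For contrast, here is what the formal device amounts to once the physics is stripped out: a minimal sketch (not any particular implementation) of a Turing Machine that increments a binary number, with time reduced to a bare loop, the tape to a Python dict, and the gate to a read/write/move rule.

```python
# Minimal sketch of the tape-and-gate mechanism: a Turing Machine that
# increments a binary number. The "physics" is entirely abstracted away.

def tm_increment(bits):
    tape = {i: b for i, b in enumerate(bits)}   # marks on the "tape"
    head = len(bits) - 1                        # gate at the last mark
    state = "inc"
    while state != "halt":
        mark = tape.get(head, "0")              # blank cells read as 0
        if mark == "1":
            tape[head] = "0"                    # erase, carry leftward
            head -= 1
        else:
            tape[head] = "1"                    # write, no carry: done
            state = "halt"
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, "0") for i in range(lo, hi + 1))
```

    For example, `tm_increment("1011")` carries through the trailing ones and returns `"1100"` (eleven plus one). Everything the post lists as unsaid - the energy, the isolation, the unbounded medium - is hidden inside the innocuous-looking dict.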

    So the TM is a hoax device. It is specified to minimise the actual physics - reduce the irreducible entanglement that must exist in any real semiotic system between symbol and physics. But in the end, such a reductionist move is physically impossible.

    And yet then, the computationalists like to wave this TM about - boast about its universality as an engine of algorithms, its Platonic status as implementation of pure logical idea - and demand of biology, why shouldn't a really complicated calculation also be conscious like any neurobiologically organised creature?

    Computationalists feel TMs have proved something about consciousness being just an information process, and all information processes being Turing computable, therefore the burden is on everyone else to disprove their claim.

    A biologist - especially if they understand the irreducible complexity of the semiotic relation - can see that a TM never actually removes the physics from the real world story. All that physics - the real constraints that space, time and entropy impose on any form of material existence, even one organised by symbols - is merely swept under the carpet.

    So the burden of explanation is really the other way around. The computationalists have to get specific about how they plan to re-introduce the physics to their de-realised realm of symbol shuffling.

    Semiotics doesn't say that can't be done. It just says to the degree the computationalists rely on a TM architecture, it has been all about constructing a machine that is as physics-denying as they could imagine. So they have a little bit of a problem having gone so far out on that particular limb.

    Neural network architectures, or even the analog computers that came before digital computers, are more embracing of actual physics. They reacted more directly to physical constraints on their informational habits. So it is not as if information technology can't be more lifelike in working with the irreducible complexity of a proper modelling relation with the world.

    But the Chinese Room argument was about dramatising how physics-less the TM story actually is.

    The problem was that it makes that criticism very plainly, but doesn't then supply the argument for life's irreducible complexity that makes the counter-position of biology so compelling.

    If the semiotic relation between symbols and physics is formally irreducible - at the level of mathematical proof, as has been argued by CS Peirce, Robert Rosen, Howard Pattee, etc - then that trumps the more limited claim of TMs as "universal computers".

    Universal computation applies only to the truly physics-less world that exists in the human imagination.

    Meanwhile back here in the real world ...
  • Daemon
    591
    Neural network architectures, or even the analog computers that came before digital computers, are more embracing of actual physics.apokrisis

    Is that right? I thought a neural network was just a program running on a digital computer. And no analog computer has any connection with the physics of consciousness either.

    The problem was that it makes that criticism very plainly, but doesn't then supply the argument for life's irreducible complexity that makes the counter-position of biology so compelling.apokrisis

    Searle frequently talks about the biological nature of consciousness, he refers to his position as "biological naturalism". It's not unreasonable for him to leave the biology to the biologists.
  • apokrisis
    7.3k
    Is that right? I thought a neural network was just a program running on a digital computer. And no analog computer has any connection with the physics of consciousness either.Daemon

    It is very easy to head back into these kinds of confusions. That is why I advocate for the clarity of the formal argument - the irreducible complexity of a semiotic relation vs the faux reducible simplicity of universal computation.

    When it comes to building technology inspired by either TM or semiotic models, the metaphysical issues always become obscured by the grubby business of implementing something of practical usefulness.

    There are no actual TM computers in use. The impracticalities of a single gate and infinite tape had to be hidden using the architectural kluges of stored programs and virtual addressing spaces. Real computers have to live within real physical constraints.

    So an epistemic cut - to use Pattee's term - has to be introduced between software and hardware. And if we zoom in close on any actual conventional computer, we can see the layers and layers of mediating mechanism - from microcode and instruction sets to operating systems and middleware - that are needed to negotiate what is supposedly a black and white distinction between the algorithms and a system of material digital switches burning electricity.

    So when it comes to neural networks, originally those were imagined as actual hardware implementations. You would have to have physical circuits that were not just digital switches but more like the analog electronics of pre-WW2 technologies.

    But then digital computers running conventional virtual machine emulations could simulate a network of weighted nodes, just as they could simulate any kind of physics for which the physical sciences have developed a theoretical description - the algorithms we call the equations of fluid mechanics, for example.
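    A minimal sketch of that simulation point (weights here are arbitrary illustrative values): the "network of weighted nodes" is just ordinary arithmetic executed as a conventional program.

```python
import math

# A "neural network" on a digital computer is just an ordinary program:
# a weighted sum and a squashing function, stepped through like any
# other algorithm.

def node(inputs, weights, bias):
    """One weighted node: sum the weighted inputs, squash to (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))       # logistic activation

def layer(inputs, weight_rows, biases):
    """A layer is just the node function applied per row of weights."""
    return [node(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two hidden nodes feeding one output node (illustrative weights only)
hidden = layer([0.5, -1.0], [[1.0, 0.5], [-0.5, 2.0]], [0.0, 0.1])
output = node(hidden, [1.5, -1.0], 0.2)
```

    Nothing here reacts to physical constraints; the physics has been traded for a description of physics.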

    And so that is the trick - the ruse to keep this particular debate going.

    Look, we can implement the algorithms that physics uses to make its descriptions of nature suitably algorithmic!

    But then - if you look to physics - you find that this is another level of the great con.

    Physics is good at constructing algorithmic descriptions of nature ... up to a point. But in the end - as with quantum collapse, or the ultimate non-computability of any actual complex dynamical system – the algorithms can only coarse-grain over the realities they model.

    Physicists hate this being pointed out. Like computationalists, they like to imagine that reality is actually a deterministic machine. It is the metaphysical assumption built into the discipline. And - as a useful assumption - it is great. The mechanistic view is the best way to look at the irreducible complexity of the world if your over-arching purpose is to construct a higher level of control over that world.

    To the degree you can mechanise your view of nature, you can employ that view to build a machinery for regulating nature in precisely that fashion.

    But at root - as with the "weirdness" of quantum mechanics or deterministic chaos - there is always going to be a gap between a mechanical and algorithmic description of nature and the metaphysical truth of nature being an irreducibly complex (ie: semiotic) relation.

    Searle frequently talks about the biological nature of consciousness, he refers to his position as "biological naturalism". It's not unreasonable for him to leave the biology to the biologists.Daemon

    But I was supporting Searle, not attacking him. My first post was about how he talked of simulated rain not making anyone wet, simulated carburettors being no use in an actual car.

    Simulation - or symbols - are certainly part of the biological story. But they are irreducibly connected with the physics of life from the point of origin.

    There is no sharp software/hardware division, as is pretended by both computation and physics as sciences. There is instead always the necessity of a bridge spanning this epistemic divide, in the fashion that even the PC on your desk has layers and layers of mechanism to give effect to the idea of a virtual machine running on real hardware plugged into an uninterrupted power supply.
  • Daemon
    591
    It is very easy to head back into these kinds of confusionsapokrisis

    I don't think I was confused. The physics of analogue computers and digital computers is not related to the physics of consciousness.
  • apokrisis
    7.3k
    The physics of analogue computers and digital computers is not related to the physics of consciousness.Daemon

    What do you mean by the physics of consciousness then? Which part of physical theory is that?
  • Daemon
    591
    Neurons, synapses, that kind of thing.
  • apokrisis
    7.3k
    I'm puzzled as that would be exactly my point. Neurons and synapses can't be understood except as prime examples of the irreducible complexity of semiosis.

    Neurons combine the physics of ion potential differences and the information of depolarisable membrane channels so as to create "signals". So there is some core bit of mechanism where the two realms interface.
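    As a rough cartoon of that interface - a leaky integrate-and-fire model, with illustrative rather than physiological parameters - a continuously varying potential only becomes a discrete "signal" at a threshold:

```python
# Toy leaky integrate-and-fire neuron: a continuous physical quantity
# (membrane potential) crossing a threshold gets read off as a discrete
# event (a spike). Parameters are illustrative, not physiological.

def simulate(input_current, threshold=1.0, leak=0.1, reset=0.0):
    """Integrate input with leak; emit a spike time when threshold is crossed."""
    v = 0.0
    spikes = []
    for t, i in enumerate(input_current):
        v = v + i - leak * v          # integrate input, leak charge away
        if v >= threshold:
            spikes.append(t)          # the discrete "signal"
            v = reset                 # potential resets after firing
    return spikes

# A steady drive produces a regular spike train:
train = simulate([0.3] * 20)
```

    The model captures only the mechanism; what makes such spikes a signal *to the organism* is exactly what the next paragraph is about.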

    But how those signals become actually a signal to an organism in its responses to an environment, rather than just an entropic and noisy bit of biophysics, is where the irreducible complexity bit comes into play.

    Neither physics, nor information processing theories, can tell us anything useful in isolation. You need the third framework of biosemiosis that has the two viewpoints already glued together in formal fashion.

    It may be too technical, but I wrote this post a while back on how biophysics actually has drilled down to ground zero on this score now. In just the past decade, the blanks have started to get filled in.

    https://thephilosophyforum.com/discussion/comment/105999
  • Daemon
    591
    I'm puzzled as that would be exactly my point. Neurons and synapses can't be understood except as prime examples of the irreducible complexity of semiosis.apokrisis

    Just a misunderstanding then.

    Searle says the brain doesn't do information processing: https://philosophy.as.uky.edu/sites/default/files/Is%20the%20Brain%20a%20Digital%20Computer%20-%20John%20R.%20Searle.pdf

    Page 34.
  • apokrisis
    7.3k
    Yeah. It was back in the 1980s that Searle was making his case. And even then a criticism was that he overplayed the physics at this point. Although given the strength of computationalism at the time, it was good to see any philosopher trying to argue so directly against it.

    So you notice how Searle says the brain isn't handling information in the TM sense - binary 0s and 1s that can literally stand for anything as they intrinsically stand for nothing.

    Instead, the brain is handling particular kinds of "experiential information" - visual, tactile, auditory, kinesthetic, gustatory, etc.

    But that then becomes a dualistic framing of the situation because he is talking about qualia and all the metaphysical problems that must ensue from there.

    So - from a mind and life sciences point of view - you don't want to shut down the computationalists by opening the door again for the idealists.

    That is where the semiotic approach came in for me during the 1990s. It is a way to glue together the computational and material aspects of organismic complexity in one formally-defined metaphysics.
  • Harry Hindu
    5.1k
    There is a world of difference between rules as algorithms and rules as constraints.apokrisis
    I don't see a world of difference between them. Algorithms are a type of constraint.
  • Harry Hindu
    5.1k

    You're right that the rules in the room are not those that Chinese speakers use. But that's the point: a computer can't understand language in the way we can. The reason is that we learn meaning through conscious experience.Daemon
    The instructions in the room are written in a language - a different language than Chinese. How did the man in the room come to understand the language the instructions are written in? I've asked this a couple of times now, but you and Apo just ignore this simple, yet crucial, fact.
  • Harry Hindu
    5.1k
    I'm a translator, I use a computer translation toolDaemon
    If you are the translator then why do you need a translation tool? Where do the translations reside - in your brain or in your tool? If you need to look them up in a tool, then the understanding of that particular translation is in the tool, not in your brain.

    You're right that the rules in the room are not those that Chinese speakers use. But that's the point: a computer can't understand language in the way we can. The reason is that we learn meaning through conscious experience.Daemon
    These are all unfounded assertions without anything to back them up. What are conscious experiences? What do you mean by "understand"?
  • Daemon
    591
    If you are the translator then why do you need a translation tool? Where do the translations reside - in your brain or in you tool? If you need to look them up in a tool, then the understanding of that particular translation is in the tool, not in your brain.Harry Hindu

    I don't need the translation tool Harry, I can do the translation on my own, the tool just saves me typing. When I come across a word that isn't in my Translation Memory I add it to the memory, together with the translation. Then the next time that word crops up I just push a button and the translation is inserted. The translation tool doesn't understand anything.
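    The workflow described reduces to something like this sketch (hypothetical entries, no particular tool's API): a lookup table that the human fills in, where all the interpreting happens on the human side.

```python
# A translation memory is a lookup table the translator fills in. All
# "understanding" happens on the human side; the tool only stores,
# matches and retrieves.

translation_memory = {}  # source phrase -> stored translation

def translate_segment(source, ask_translator):
    """Return the stored translation, or ask the human and remember it."""
    if source not in translation_memory:
        translation_memory[source] = ask_translator(source)  # human supplies the meaning
    return translation_memory[source]

# First occurrence: the translator types it in themselves.
translate_segment("Grundlagenforschung", lambda s: "basic research")
# Every later occurrence: one button-press retrieval, no human needed.
```

    The retrieval step never consults anything about what the words are for; that is the sense in which the tool "doesn't understand anything".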

    These are all unfounded assertions without anything to back it up. What are conscious experiences? What do you mean by, understand?Harry Hindu

    A dictionary definition of "understand" is "perceive the intended meaning of". Another dictionary says "to grasp the meaning of".

    What do you think conscious experiences are?
  • Daemon
    591
    Instead, the brain is handling particular kinds of "experiential information" - visual, tactile, auditory, kinesthetic, gustatory, etc.

    But that then becomes a dualistic framing of the situation because he is talking about qualia and all the metaphysical problems that must ensue from there.
    apokrisis

    This is the introduction to Searle's 2004 book Mind, A Brief Introduction.

    INTRODUCTION
    Why I Wrote This Book

    There are many recent introductory books on the philosophy of mind. Several give a more or less comprehensive survey of the main positions and arguments currently in the field. Some, indeed, are written with great clarity, rigor, intelligence, and scholarship. What then is my excuse for adding another book to this glut? Well, of course, any philosopher who has worked hard on a subject is unlikely to be completely satisfied with somebody else's writings on that same subject, and I suppose that I am a typical philosopher in this respect. But in addition to the usual desire for wanting to state my disagreements, there is an overriding reason for my wanting to write a general introduction to the philosophy of mind. Almost all of the works that I have read accept the same set of historically inherited categories for describing mental phenomena, especially consciousness, and with these categories a certain set of assumptions about how consciousness and other mental phenomena relate to each other and to the rest of the world. It is this set of categories, and the assumptions that the categories carry like heavy baggage, that is completely unchallenged and that keeps the discussion going. The different positions then are all taken within a set of mistaken assumptions. The result is that the philosophy of mind is unique among contemporary philosophical subjects, in that all of the most famous and influential theories are false. By such theories I mean just about anything that has "ism" in its name. I am thinking of dualism, both property dualism and substance dualism, materialism, physicalism, computationalism, functionalism, behaviorism, epiphenomenalism, cognitivism, eliminativism, panpsychism, dual-aspect theory, and emergentism, as it is standardly conceived. To make the whole subject even more poignant, many of these theories, especially dualism and materialism, are trying to say something true. One of my many aims is to try to rescue the truth from the overwhelming urge to falsehood. I have attempted some of this task in other works, especially The Rediscovery of the Mind, but this is my only attempt at a comprehensive introduction to the entire subject of the philosophy of mind.

    ____________________________________________________________

    There's also this: Why I Am Not A Property Dualist: https://faculty.wcas.northwestern.edu/~paller/dialogue/propertydualism.pdf
  • Banno
    25.1k
    I really want to start a thread called "The floor in the Chinese Room".
  • Harry Hindu
    5.1k
    I'll bite.
    What's so special about the floor, Banno?

    I don't need the translation tool Harry, I can do the translation on my own, the tool just saves me typing. When I come across a word that isn't in my Translation Memory I add it to the memory, together with the translation. Then the next time that word crops up I just push a button and the translation is inserted. The translation tool doesn't understand anything.Daemon
    Isn't that how you learned the translation of a word and then use the translation? Didn't you have to learn (be programmed) with that information via your sensory inputs to then supply that information when prompted? How is the translation tool's understanding different than a brain's understanding?

    A dictionary definition of "understand" is "perceive the intended meaning of". Another dictionary says "to grasp the meaning of".

    What do you think conscious experiences are?
    Daemon
    You used the phrase. I thought you knew what you were talking about. I would define it as a kind of working memory that processes sensory information.
  • Daemon
    591
    How is the translation tool's understanding different than a brain's understanding?Harry Hindu

    It doesn't have any understanding. It doesn't perceive the intended meaning, it doesn't perceive anything. It isn't equipped to perceive anything.

    Semantics, meaning, is not intrinsic to the physics of my PC. The semantics is ascribed, in this case by me, when I tell it how to translate words and phrases.

    The translation tool often produces quite spooky results, it certainly looks like it understands to a naive observer, but it's easy to see that it doesn't understand when you allow it to translate on its own without my intervention (which I never do in practice).

    There's a very interesting paper here on the limits of machine translation: http://vantage-siam.com/upload/casestudies/file/file-139694565.pdf

    One of the author's conclusions is that "linguistic meaning is derived from the role things and people play in everyday life". I said something about this above, using the word "good" and the translation of machine assembly instructions as examples.

    If the translation tool's understanding was the same as mine, as you seem to want to believe, then machine translation would be as good as human translation. But it isn't!
  • bongo fury
    1.6k


    A trap-door

    in the floor

    of the Chinese Room will eject the philosopher into a sea of woo below, immediately upon their confusing the semiotics of intelligence with the semiotics of simulation.

    I was just citing Searle's examples.apokrisis

    Fair enough. Dare I say, he wanders perilously close.
  • Harry Hindu
    5.1k
    It doesn't have any understanding. It doesn't perceive the intended meaning, it doesn't perceive anything. It isn't equipped to perceive anything.Daemon
    So understanding has to do with perceiving meaning? What do you mean by, "perceive"? Is the computer not perceiving certain inputs from your mouse and keyboard? Does it not perceive the meaning of your keystrokes and mouse clicks and make the correct characters appear on the screen and windows open for you to look at?

    What do you mean by "meaning"? Meaning is the same thing as information. Information is the relationship between cause and effect. Information/meaning exists wherever causes leave effects.

    Computers contain information. They have memory. They have a processor that processes that information for certain purposes. The difference is that those purposes are not its own. They are for human purposes. It doesn't process information in order to survive and propagate. It isn't capable of learning on its own, for its own purposes. It can only be programmed for human purposes. But none of this is to say that there isn't some kind of mind there. If the hardware in your head can contain a mind then what makes that type of hardware different from a computer brain that processes information via inputs and outputs, just like your brain does?

    No, I'm not a panpsychist that believes everything has a mind. But I do think that we need to rethink what the mind is, because our current theories of materialism, idealism and dualism just don't work.

    Semantics, meaning, is not intrinsic to the physics of my PC. The semantics is ascribed, in this case by me, when I tell it how to translate words and phrases.Daemon
    But the semantics weren't ascribed by you. They were ascribed by your teacher(s) who taught you how to translate words. You weren't born knowing any language, much less how to translate them. You had to be taught that. You also weren't the one that created languages, to define what scribble and sound refers to what event or thing. You had to be taught that. You used your eyes and ears (your inputs) and your brain (your processor) to learn, to discern the patterns, so that you may survive in this social environment (produce the appropriate outputs as expected by your peers).


    The translation tool often produces quite spooky results, it certainly looks like it understands to a naive observer, but it's easy to see that it doesn't understand when you allow it to translate on its own without my intervention (which I never do in practice).Daemon
    That's because the only thing it knows is to spit out this scribble when it perceives a certain mouse click or key stroke. It has the same instructions as the man in the room - write this scribble when you perceive this scribble. It doesn't have instructions that actually provide the translation, of this word = that word, and then what that word points to outside of the room, which is how you understand it, because that is how you learned it.

    Given that the man in the room can understand at least one language - the language the instructions are written in - then a set of instructions that included each Chinese symbol and its equivalent in the language the man in the room understands would go a long way in helping the man in the room understand Chinese.
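    The two kinds of rule book described above can be sketched in a few lines of code. This is only an illustration (the tables and function names are my own invention, not anything from the thread): the first table maps scribble to scribble with no translation, which is all the man in the room has; the second pairs each symbol with its equivalent in a language he already understands.

    ```python
    # Rule book 1: "write this scribble when you perceive this scribble."
    # The operator matches shapes; nothing in the table says what anything means.
    shape_rules = {
        "你好": "很好",
    }

    # Rule book 2: each symbol is paired with its equivalent in a language
    # the operator already understands - the kind of instruction the post
    # suggests would help him actually understand Chinese.
    bilingual_rules = {
        "你好": "hello",
        "谢谢": "thank you",
    }

    def room_reply(symbol):
        # Pure shape matching: the output depends only on the lookup table.
        return shape_rules.get(symbol, "?")

    def glossed_reply(symbol):
        # Same lookup, but the operator can also read a gloss he understands.
        return symbol, bilingual_rules.get(symbol, "?")

    print(room_reply("你好"))     # 很好
    print(glossed_reply("谢谢"))  # ('谢谢', 'thank you')
    ```

    The point of the contrast: both functions are lookups, but only the second gives the operator anything that connects the symbol to things outside the room.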

    I'm sure you produced quite spooky results when you first began learning how to translate a language.

    One of the author's conclusions is that "linguistic meaning is derived from the role things and people play in everyday life". I said something about this above, using the word "good" and the translation of machine assembly instructions as examples.

    If the translation tool's understanding was the same as mine, as you seem to want to believe, then machine translation would be as good as human translation. But it isn't!Daemon
    Because it doesn't have the same set of instructions that you did, nor the need to learn them; but that doesn't mean that it couldn't, if it had the need and the correct set of instructions.
  • Harry Hindu
    5.1k
    If meaning is the role words play, then what about how we use words to refer to how computers function, as if they had minds of their own? They have memory; they communicate, acknowledge messages, ignore, expect, monitor, and understand. Why is the computer such a good metaphor for the mind?
  • Sir Philo Sophia
    303
    If the semiotic relation between symbols and physics is formally irreducible - at the level of mathematical proof, as has been argued by CS Peirce, Robert Rosen, Howard Pattee, etc - then that trumps the more limited claim of TMs as "universal computers".apokrisis

    I think you make many great points on this thread, which I tend to mostly agree with in spirit, if not in exact details. If not formally, how does semiotics best deal with bridging physics to symbols across the epistemic cut? Can you point me to the latest, best research paper you think would answer that for me?
    thx.
  • Sir Philo Sophia
    303
    A dictionary definition of "understand" is "perceive the intended meaning of". Another dictionary says "to grasp the meaning of".Daemon

    Curious, and hypocritical, that to support your arguments you use definitions which are circular, and thus flawed: one would already need to know what "perceive" or "meaning" is in order to know what the true definition of 'understand' is. Yet when I try to ground definitions away from such useless circular ones, you said that is a circular endeavor doomed to fail, since we cannot define a partial truth before knowing the full truth.

    You should not cite things that you don't believe in as useful truths for the sake of arguments.

    Care to revise your position on that?
  • Daemon
    591
    So understanding has to do with perceiving meaning? What do you mean by, "perceive"? Is the computer not perceiving certain inputs from your mouse and keyboard? Does it not perceive the meaning of your keystrokes and mouse clicks and make the correct characters appear on the screen and windows open for you to look at?Harry Hindu

    Thanks very much for this Harry.

    No, the computer is not perceiving inputs in the way you and I perceive things. Press a finger against the back of your hand. You feel a sensation. When you press a key on the computer keyboard, the computer doesn't feel a sensation.

    Shall we try to agree on this before we move on to the rest of your ideas?