Comments

  • Evolution, music and math
    What question, pray tell? :chin:Pattern-chaser

    The OP. Google gives a more mundane explanation of the name. :sad: nvm

    Still, it is a common (and to me reasonable) conjecture that evolution has endowed us with a general thirst for pattern.

    And after all what right have you, the mere author of your name, to dispute that interpretation?

    :joke:
  • Evolution, music and math
    Evolution, music and math3017amen

    (... and poetry, science etc.)

    I always assumed @Pattern-chaser was named in answer to this question.

    :chin:
  • Does anyone have a Degree in All or None? Yes/No?
    Thanks for reincarnating this thread. :wink:

    Translation (or mapping or commensuration) from analog to digital (continuous to discrete, spectrum to alphabet) doesn't have to be all-or-none (AON), thankfully. Usually it is all-or-nearly-all vs none-or-hardly-any... with a no-man's-land in between. E.g. from light intensity to white vs black, separated by grey.

    E.g., white and black then have borderline cases of light intensity that will sometimes (e.g. for particular users of the language, or for the same user on different occasions) read as white and at other times as grey (or sometimes as black and at other times as grey). But no intensity taken (within the system) for black is ever (persistently, i.e. without correction) taken for white.

    And only practical considerations limit the number of shades reliably discriminated in this way, i.e. the number of characters in the digital alphabet.

    We have semantic alphabets as well as syntactic. Sometimes. Often we can't agree on the mutual exclusivity just described, and fall into a slippery slope dispute in which each camp accuses the other of extremism, and teases the other that yes they are quite extreme... and would, for example, call snow black or coal white.

    It might end in tears (or thread closure), but the motivation is often positive... to restore mutual respect for the neutral zone, and hence the feasibility of consistent discourse about (pattern-making with) the non-neutral 'characters'.
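
    The all-or-nearly-all vs none-or-hardly-any mapping described above can be sketched in code. A minimal illustration, with a two-character alphabet and a neutral band; the thresholds 0.4 and 0.6 are my own illustrative assumptions, not anything fixed by the discussion:

    ```python
    # Sketch of digitisation with a no-man's-land: a continuous light
    # intensity in [0, 1] maps to the alphabet {black, white}, with a
    # neutral "grey" zone in between. Thresholds are illustrative.

    BLACK_MAX = 0.4   # intensities at or below this read reliably as black
    WHITE_MIN = 0.6   # intensities at or above this read reliably as white

    def read_character(intensity: float) -> str:
        """Map a continuous intensity to a discrete character.

        Values between the thresholds are reported as 'grey': borderline
        cases that different readers (or the same reader on different
        occasions) may resolve differently.
        """
        if intensity <= BLACK_MAX:
            return "black"
        if intensity >= WHITE_MIN:
            return "white"
        return "grey"

    # The point of the neutral band: no intensity ever read as black is
    # ever read as white, because the two reliable zones don't touch.
    assert read_character(0.1) == "black"
    assert read_character(0.5) == "grey"
    assert read_character(0.9) == "white"
    ```

    Adding more thresholds yields more reliably discriminated shades, i.e. a larger digital alphabet, limited only by practical considerations as the comment says.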
  • Hume on why we use induction
    He proposes habit/custom as one but, from your post, never claims that it's the only explanation for it.TheMadFool

    By employing that word, we pretend not to have given the ultimate reason of such a propensity.Hume, 36.

    My emphasis.
  • Hume on why we use induction
    Indeed, but giving it another name does not give it a justification. Do you think you have done that? Or that someone else has?unenlightened

    I merely think I understand Hume to have pointed out that justification (or reason or logic or derivation or inference) is sometimes deductive but just as often inductive (habitual or associative).

    https://thephilosophyforum.com/discussion/comment/331201
  • Hume on why we use induction
    Or as we call it these days "conditioning."unenlightened

    ... or induction, of course.

    Pavlov's dog does not reasonunenlightened

    ... except by induction.

    It is the foundation of learning, but has no logical basisunenlightened

    ... if logic means deduction (as it usually does).

    You cannot derive a 'will be' from a 'has been'.unenlightened

    ... by deduction.

    For some reason people who are happy to assent to the former often have difficulty with the latter.unenlightened

    Because they assume logic and reason are deductive?
  • Emphasizing the Connection Perspective
    "Useful"? - yes.

    But no one here was arguing about 'useful'.

    The claim was that neuroscience could not fully investigate consciousness, at all.
    Isaac

    I think you are simply confusing my posts with other people's? Not sure whose.

    Not that neuroscience is using one definition but other definitions might prove equally useful. That is a claim I would entirely agree with.Isaac

    Good. I think they should use mine. :wink:

    Are they using yours? Links welcome.

    But this is begging the question.Isaac

    Or just checking agreement of premises / what we reckon.

    you're saying we trust our (clearly disputed) instincts as to what does and does not belong in that categoryIsaac

    No, I'm just hoping some cases are clear and undisputed, as premises / what we reckon.

    Is it ethical that we invent a sub-category of sufferingIsaac

    I don't know, but I don't think we invent it. We find it delineated (vaguely, but with clear cases) in common usage.

    For me, consciousness is simply a specific type of self awareness, the logging to memory of mental events for future use, the identification of a single processing unit with a history, and properties which apply to it rather than it's parts.Isaac

    I know, and I'm interested. And if "awareness" and "mental" don't beg the question but cover unconscious as well as conscious processing, then I can imagine you turning out to be right. One test would be whether your recipe can produce processing that we think could easily be unconscious. If so, then more work to do.

    You see it differently, I appreciate that.
  • Emphasizing the Connection Perspective
    Yes, there must be something to distinguish, otherwise we'd have to argue that all that is the case was completely homogeneous and I can't reconcile that with the consistent role symmetry breaking seems to have in physical process. The point is two-fold. Firstly, the thing we actually do distinguish is not thereby any more real than alternative options we've chosen to overlook. Secondly, saying something is not the same as having a referrant for that something. We could both agree now to include the word 'Jabberwocky' in numerous conversations. We'd both be using the same term but it would be without an agreed referrant.Isaac

    Yes. And yet, couldn't someone have understood all of that perfectly well, and still wanted to ask whether you saw any use in the "conscious/unconscious" distinction: a division, however vague and provisional, among all of the potential (but actual, factual) referents of our discourse that are to be found moving about on the surface of our planet?

    I suppose it's clear to me now (but do correct me) that your answer to that person would be no, unless the supposed distinction were reformed by smearing it out into a spectrum, a gradual scale of increasingly vivid consciousness, going by degrees from barely conscious at all at one end of it, along and up to (at least) the full consciousness of, say, a young adult human after morning coffee at the other end. My slight disappointment (though not total surprise) is that you would have the 'lower' end of the spectrum reach so close to my thermostat circuit as to virtually include it, and thereby undermine any clear intuition of complete unconsciousness, or zombie-ness, or nobody-at-home-ness. There would be no clear cases of such a state, as is indicated by your cheerfully feeble assurance about the thermostat:

    None of these things are attributable to a thermostat, but if they were [...]Isaac

    Well I think I could persuade you that they are. Don't you think I could? (The circuit anticipates and conveys pain in the sense of being 'triggered' to send signals about damage and the cause of it, doesn't it?)

    Or perhaps I couldn't, and your intuition of complete unconsciousness is firm after all. By the same token, your intuition of where consciousness begins, or what kinds of things (e.g. what kinds of feedback circuits or logging circuits) to call conscious in a minimal degree, will then also be relatively clear and informative.

    What is the use of any such clarification, though? As you point out, things are looking circular...

    if we allow a definition of consciousness to be so embedded in human forms of life, then we cannot imbue with any awe the revelation that it is unique to [in this case, feedback loops (or similar circuits)]. After all, we have just defined it thus.Isaac

    So I won't be surprised if your assurance about the thermostat was disingenuous, and you soon admit that you don't really care whether we call it conscious or not.

    I, on the other hand, don't see the clarification as arbitrary, such that it might as well show consciousness beginning anywhere, or indeed nowhere and be just an all-inclusive spectrum. I share with many ordinary folk and dualists too the assumption that ordinary usage of "conscious" correlates with other important distinctions, one such being the question where to and where not to strive to prevent suffering - the answer being, usually, where the suffering would be conscious suffering, and not where it wouldn't. Obviously a car in a crusher suffers catastrophic damage, and quite possibly processes "pain" signals about this; but just as obviously (to some of us) it doesn't suffer consciously (nobody is home), and so it isn't a cause for ethical concern.

    Since your intuition of nobody-at-home-ness is so fragile, you may want to question my carelessness about the car's plight. On the precautionary principle I may concede. If I resist, though, and get involved in a tug-of-war about whereabouts on a rough scale of processing-complexity we can surmise that consciousness begins, it won't be for lack of sympathy towards lower creatures but because, unlike you, I take ordinary usage of "conscious", aided and abetted by near-synonyms, to be capable of marking important distinctions in human psychology: so that defining consciousness isn't an arbitrary matter.

    Searle's Chinese Room, for example. For you (but correct me?), it's an arbitrary matter, merely one of definition, whether the Room is conscious, depending simply on whether or not consciousness is so defined as to apply in that case. For me, we learn from the example that language use can be conscious, as for us, or unconscious as for the Room (despite Searle's role as syntactic clerk). So the example serves by requiring a refinement of the supposed model of conscious processing. (To have it include a genuine semantic component.)

    I generally expect to find unconscious as well as conscious examples of all manner of cognitive and behavioural tasks. And I assume the contrast will point in the direction of useful theoretical revision. I don't think I could have any such expectation if, as you apparently do, I found the very idea of a sophisticated but completely unconscious machine to be problematic.
  • Evolution, music and math
    If the positioning of the holes was not random, it was measured. And it couldn't have been random or the sound wouldn't be musical. Don't you agree?Metaphysician Undercover

    You could (and people probably still do) have limitless fun with melodies on scales of randomly spaced pitches, the melodies nonetheless identifiable: as e.g. low-high (a 2-note scale) or low-medium-high (3 notes). (You could use randomly spaced holes in a flute, or randomly sized bongos, etc.)

    Don't be in too much of a hurry for society to learn to identify performances of melodies on one particular spacing... which would require access to a particular flute or bongo-set, or the production of new instruments of the same size and with the same spacing... or, if the new identification of melody by spacing is to be indifferent to choice of starting pitch, access to new instruments with the same spacing relative to (scaled in proportion with) the size of the whole instrument.

    Even then, when spacing is scaled in proportion for each instrument, don't expect many of the proportions to have gravitated to producing arithmetically nice frequency ratios. More likely they combine one or two arbitrary (and arithmetically non-nice) pitch intervals (frequency ratios).

    Yes, the sequence of proportions (step-intervals) might eventually repeat at the octave on the same instrument, but even then there is no reason for the musician or instrument maker or musicologist to assume that any arithmetically nice ratios are crucial to their art, in any obvious way.

    A lot of them have assumed exactly that, of course, ever since Pythagoras. With the result that we are taught to assume the octave to be aesthetically more fundamental than other strikingly consonant intervals. Or that perception of consonance depends on approximation to nice ratios - a notion somewhat challenged by equal temperament, to say nothing of folk traditions.

    So I would guess the 40,000 year-old flute was crafted in careful imitation of previous models, with a keen sense of proportion but also in enviable ignorance of theories of arithmetically nice ratios.
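
    The point about nice ratios and equal temperament is easy to check numerically. A small sketch (the interval names and semitone counts are standard music theory, not anything from the thread): 12-tone equal temperament makes every interval a power of the semitone 2^(1/12), so only the octave lands exactly on a small-integer ratio.

    ```python
    # Compare just-intonation ratios with their 12-tone equal-temperament
    # (ET) approximations. In ET every interval is a power of the semitone
    # 2**(1/12); only the octave (12 semitones) is an exact integer ratio.

    SEMITONE = 2 ** (1 / 12)

    intervals = {
        # name: (ET semitone steps, "nice" just-intonation ratio)
        "octave": (12, 2 / 1),
        "perfect fifth": (7, 3 / 2),
        "major third": (4, 5 / 4),
    }

    for name, (steps, just) in intervals.items():
        et = SEMITONE ** steps
        print(f"{name}: ET {et:.4f} vs just {just:.4f}")

    # octave: ET 2.0000 vs just 2.0000   (exact)
    # perfect fifth: ET 1.4983 vs just 1.5000
    # major third: ET 1.2599 vs just 1.2500
    ```

    The equal-tempered fifth and third we happily hear as consonant are audibly not the "nice" 3:2 and 5:4 ratios, which is the challenge to the Pythagorean story mentioned above.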
  • Emphasizing the Connection Perspective
    That people can distinguish it does not mean it is distinguished in realityIsaac

    No, sure. Do you think there is anything to be distinguished, however vaguely? I might have lost track and missed that you are a zombie-denier / pan-psychist? So that you think that the "suffering" of an overheating thermostat circuit deserves some (presumably tiny but non-zero) degree of human sympathy?

    Please excuse the incredulity and name-calling, but I guess my suggested glossary is intended to establish common ground by excluding zombie-denial as well as consciousness-denial. If we (or anyone) can agree some clear cases of zombies as well as of consciousness then our discussion of how to characterise the transition is less likely to polarise and end in mutual incredulity. Is always my hope.

    I had assumed we had that common ground, but maybe not. So... do you see any clear cases at all of nobody-at-home?

    I think "somebody-at-home-ness" is an entirely fabricated story we tell ourselves post hoc to string together our disparate desires and actions into a coherent whole, and people are (perhaps quite rightly) frightened that neuroscience will find this out.Isaac

    Yep, and the danger is that dualists would sense mockery in this glossary. But maybe the eventual scientific story (e.g., dare we suppose, yours about logging of logging, or mine about pointing at pointing) needn't simply disappoint, and 'find us out' to be zombies. It could explain our conscious states so that we understand our experiences more exactly.

    Not in terms of homunculi, obviously. And I guess most people have always sensed the potential absurdity (as well as the genuine puzzle) of the somebody-at-home talk, anyway. So they wouldn't be in as much danger of disappointment as you (perhaps) suggest. I.e., we aren't necessarily beholden to a persistent error or illusion. That (alienating and polarising) assumption is unnecessary. Haha, sorry if that is holier than thou. I can't help spreading peace and goodwill. :Saint Homer of Hippo:
  • Emphasizing the Connection Perspective
    Where this system goes wrong, the problems of philosophy Wittgenstein was trying to dissolve, is when people reify words. They make a word (like consciousness) and then say because we have that word, there must be an accompanying concept. They search for the pure concept attached to the word, but there is none, the word was just doing a job, and a different job in different contexts. There's no sublime concept attached to it.Isaac

    You have to hand it to "consciousness", though... it keeps getting up and distinguishing itself from near-synonyms.

    How about glossing it as "somebody-at-home-ness"? And unconsciousness as "nobody-at-home-ness"?

    As a way of reassuring the dualists (who are legion) that we (if you can excuse the presumption) do at least share their intuition of something going on, something deserving of proper description and explanation. And of that thing not going on, crucially, with rocks and calculators.

    (Even if we can't yet define what is going on precisely and uncontroversially. And even though we shall decline the invitation to dualism which is implicit in these glosses.)

    I appreciate that users of "self-aware" might complain they already had this idea. But I tend to think that version fails, since I can easily enough imagine calling a thermostat aware, and a larger system containing it self-regulating or (at a pinch) self-aware, even though I also see both as unquestionably unconscious (i.e. clear cases of nobody-at-home).

    As regards attempting to define the something-going-on more precisely... I love this,

    the logging is of the fact that some logging of sensory data has occurred. Ie logging the logging event. If a computer did that, then, yes, I would say it was self-aware. If it could make use of those logs in its computation I would say it was conscious.Isaac

    ... but mainly because of this,

    Logging and storing are two different things. Memory is not like a hard drive. A lot of the confusion around consciousness, I think, arises from this.Isaac

    Me too. (And ideas aren't inner words or pictures...)

    Still, you set the bar too low, for me. I can easily believe that nobody is at home in any state of the art neural network. I'm waiting for them to start playing the social game of pointing (actual) words and pictures at things in the real world, and I assume that will be a long time coming, e.g. well after they've started playing at pointing sticks and balls at things in the real world.
  • Are our minds souls?
    You were comfortable with denying that consciousness is a thing, or things. Great. Gloss 'mind' as 'mentalness'.
  • Are our minds souls?
    So a MIND is an objectBartricks

    "So" in the current idiom, or as an inference? From what?
  • Are our minds souls?
    No, to everyone.Bartricks

    What do you mean?

    So you think wetness can just exist? Wetness is a property of liquids.Bartricks

    Exactly.

    But you can't just have wet.

    Likewise, you can't just have conscious states.
    Bartricks

    Exactly.

    They are states - the clue is in the name - of a thing. What thing?Bartricks

    Why, a person, of course.
  • Are our minds souls?
    Conscious states are states of an object called a....wait for it....MIND.Bartricks

    To a dualist, of course. Don't expect a physicalist to agree with this premise. They can be quite happy pointing "conscious" directly at people, just as they point "wet" directly at tea.
  • Are our minds souls?
    Now that you know that I'm not reifying consciousness,Bartricks

    Then why do you refer to your mind as though it were an entity?
  • Are our minds souls?
    Yes, you can mean them metaphorically - and that's how a charitable person would interpret you if you said "how heavy is Beethoven's fifth" or "what does the pizza think like?"Bartricks

    Haha, yes but I meant I agree that the dish is literally unconscious (non-conscious if you prefer), while the music is metaphorically heavy.

    Anyway, human animals (brains, at the cost of distorting the situation) are among the things we can rightly describe as literally conscious. 'Mind' is best dropped in careful discourse, or glossed as 'conscious object' (e.g. person).

    the point is that sensible objects cannot literally think anything, just as Beethoven's fifth cannot literally weigh anything.Bartricks

    What about human animals i.e. persons? Are they not sensible objects? I can't gloss 'sensible objects' as 'things'?

    I'm not a zombie-denier, by the way. I'm not calling any dishes (or even e.g. insects) conscious.

    I'm just inviting you to refrain from turning a property (or class of objects) into a thing. I.e. turning consciousness into a thing, or things.
  • Are our minds souls?
    A premise I find unattractive and unnecessary is that a 'mind' is a thing or substance at all. 'Mindful', 'mental' or (more to my taste) 'conscious' is a property of things, most obviously human animals.

    What things to call conscious is the problem.

    I came here to be insulted, so feel free. :wink:

    You may wonder what the dish thinks like, but your reason - or at least, the reason of most of us - declares loud and clear that such wonderings make as little sense as wondering how heavy Beethoven's fifth symphony is. That is, they reflect category errors.
    Bartricks

    I disagree, I think they are analogous. 'Heavy' is of course applied to music only metaphorically, but that doesn't matter here. The point is that in both cases (dish and piece of music) we are trying to classify correctly. It's pretty clear that the dish is unconscious, while the music is (in this particular case) heavy.
  • Perception Of thoughts
    I cannot be sure my body exists.Andrew4Handel

    Fine... if this means you want to ergo less, after all.

    But of course you want to ergo more. Unquestionably a mind and questionably a body.

    I don't see the ergo.
  • Perception Of thoughts
    I accept Descartes's cogito ergo sum.Andrew4Handel

    I accept cogito ergo something, just not ergo the whole Cartesian theatricals.

    I know for certain that I existAndrew4Handel

    Agreed, if "I" refers to your bodily person. Seems to me that a zombie robot could well make the same inference from detection of its own unconscious processing. (Insisting that cogito or pense implies specifically conscious processing would only beg the question.) But probably "I" gives a free pass to all manner of "subject" woo?

    but I can doubt the content of my experiences.Andrew4Handel

    That's beside the point if we are trying to understand consciousness. Obviously the content can be real or imaginary. The problem is what sense can be made of calling it "content".

    I don't think homunculi or mental images are a problemAndrew4Handel

    Neither do most people/homunculi. :wink:

    Theories often say nothing about homunculi but you know they are required for the theory to be coherent.Andrew4Handel

    Yes, sounds plausible... examples?

    when you people mental representations.Andrew4Handel

    Clarification? People as a verb, is that?

    the requirement for a perceiver or homunculiAndrew4Handel

    Not forgetting entirely the option of glossing 'perceiver' as 'perceiving bodily person'?

    Reading your link to your previous post that appears to be a form of behaviorism. I think strictly mental content like dreams and concepts are inexplicable that way.Andrew4Handel

    I'm an (amateur) behaviouristic consciousness-explainer, not a behaviouristic consciousness-denier. I want to understand what makes some of my cogitations - dreams included - conscious. Why behaviourist? Only in reaction to the age-old assumptions about inner words and pictures. When what really accounts for consciousness may be better understood as social skills with actual words and pictures.
  • Emphasizing the Connection Perspective


    Yes, but you're encouraging a fair deal of witting and unwitting dualistic woo.
  • Emphasizing the Connection Perspective
    Mental processes are different in kind from biological/neurological processes in the same way biological processes are different from chemical processes.T Clark

    At the risk of splitting hairs, but in aid of countering all of the witting and unwitting dualistic woo flying about...

    Mental processes are different in kind from information-technology processes (and will be re-conceived as, I dunno, social-semiotic processes) in the same way that vital life force processes are different in kind from chemical processes (and have been re-conceived as bio-chemical processes).

    Late edit:

    I mean that "mental" processes need re-conceiving in (something like) social-semiotic terms, so that we don't have to regard them as fundamentally different in kind from IT processes, even though we should beware of underestimating their complexity relative to ordinary (and of course non-mental) IT processes.

    In the same way, vital life force processes were eventually re-conceived as bio-chemical processes, so that we don't have to regard them as fundamentally different in kind from chemical processes, even though we are well aware of their complexity relative to non-biological chemistry.
  • Perception Of thoughts
    I am beginning to sympathise with the idea that perceiver might be the soul and some form of dualism.Andrew4Handel

    There is a lot of that about. Even skepticism about a soul or homunculus munching popcorn in the Cartesian theatre doesn't seem to imply skepticism about the images on the screen in the theatre (or wherever, but it looks like the same theatre to me). Here's me continuing in that vein... I'll try and find some better literature to save your non-soul...

    This and this.
  • Perception Of thoughts
    However the story seems much more problematic when we talk about retrieving memories, accessing word meanings, dreaming and having ideas. Who is accessing this mental content and from where?Andrew4Handel

    Do you mean that this is no more problematic as regards the "homunculi problem", just that it also comes with a "where is it all coming from" problem, to boot?
  • Multiculturalism and Religious Fundamentalism
    the evolutionary trait of 'tribalism' which humans have in common with other primates.fresco

    Then why is exotic erotic?
  • Does ontological eliminative materialism ascribe awareness to everything or nothing?
    To me - those are not logically unacceptable consequences.simeonz

    Maybe not, but see the quagmire up ahead?

    I suggest the choice, eventually, is between a physical binary distinction of conscious vs unconscious on the one hand, or a metaphysical binary distinction of mind vs matter on the other...

    Which of these seems to you potentially the more enlightening?
  • Does ontological eliminative materialism ascribe awareness to everything or nothing?
    The analogy was, "when does a vehicle become truly automotive i.e. a true automobile?".
    — bongo fury
    I see now. First, let's agree that a vehicle and a vessel have some similarities, such as that they carry cargo and passengers. Of course, their method of transportation differs. Let's say that this aspect is fundamental for the purposes of the analogy.
    simeonz

    Ah, thanks for trying to get on board with my rickety analogy. But no, that difference is a red herring, or misunderstanding. I did say (although mention of horse-drawn in that sentence may have muddied things) sail-powered vehicles, not vessels. I appreciate sail-powered vehicles never were a common sight on the road, but in my story they are the nearest that the society has come to building their own cars - which they have inherited, ready-built, in plenty. So my point is the same as yours when you suggest,

    Then, for me at least, a human brain is to an insect brain, or to a plant's perception, more like a ship is to a boat, or a raft.simeonz

    Yes!... if you mean motor-ship. Then that's parallel, because I was equating the human/insect comparison to the automobile/sail-powered go-cart comparison. But there was no vehicle/vessel comparison for me.

    One could re-tell it as being about both (or either) vehicles and vessels, except there isn't a ready-made extension of "automobile" for that purpose (that I can think of, although there could have been).

    That said, I must agree to some extent. The spectrum of sentient qualities may have a sharp slope at some point. Even with a lot of structural complexity. I do not consider this likely - sophisticated information processing structure suddenly being vastly less aware when compared to a somewhat more complex different one. But I cannot fully disregard the possibility.simeonz

    Yes, a tempting compromise! My sharp slope, parallel to the progression from top-notch sailing to motorisation, is the journey from chimp or dog to human: from ability to follow the pointing of sticks or balls at targets to the ability to follow the (usually not actual) pointing of words or pictures at targets.

    Any interest shown in this positive matter and I'll happily roll over and tolerate what strike me as more or less unacceptable consequences of an unbounded spectrum... e.g. conscious phones, insects etc. at one end, and literal talk of mental pictures, concepts, beliefs etc. at the other.
  • Does ontological eliminative materialism ascribe awareness to everything or nothing?
    The twist in the Chinese room, I guess, is to reveal a human (Searle) who is then revealed to be, in relation to the outer behaviour of the creature, a mere machine himself.
    — bongo fury

    I’m not really understanding how this twist is relevant.
    TheHedoMinimalist

    It brings out how processing of meaningful symbols by a machine may be no more meaningful for the machine than processing of any other materials. Even a component of the machine obviously capable of attaching meaning to certain (e.g. English) symbols might be oblivious as to any meaning attaching (for others present) to other (e.g. Chinese) symbols in its possession. It brings out how symbolic processing may be merely syntactic and not semantic. Devoid of understanding. Unconscious.

    The Chinese AI would have to be programmed to know how to learn Chinese instead through interactions with Chinese speakers because it’s impossible to simply hard code the knowledge of Chinese into the AI.TheHedoMinimalist

    Yes, putting the thought experiment on a more realistic footing could suggest conditions under which we might expect genuine semantic processing to occur. Searle would insist on interactions with Chinese speakers and the environment spoken of... so that the alleged added semantics needn't turn out to be just more syntax.

    But I actually think that being able to follow very complicated instructions would also require consciousness.TheHedoMinimalist

    But my PC fits that requirement?! Ah, but I forget, you are happy to attribute consciousness in such a case. :gasp:

    Just as the human in the thought experiment cannot follow his instructions without mentally understanding them,TheHedoMinimalist

    But remember that a premise (not necessarily realistic) of the thought experiment is that his understanding is purely of the syntax, so any mental aspect to it is surplus to requirements.

    Well, I actually don’t consider cars to be autonomous or having consciousness as a whole.TheHedoMinimalist

    Nor did I, nor did the post-apocalypse society. We (I and they) consider them to be "automobiles"... whatever that means; and what it means was meant to be the problem analogous to that of "consciousness". But maybe that word is too dated to work, and too easily confused with the more up-to-date problem of the consciousness (or otherwise) of self-driving cars. Which is just the problem of the consciousness of any AI. So I may need a different analogy.

    If the post-apocalyptic world had self-driving cars, how would the reductionist sages of that world explain them in terms of simpler mechanical processes?TheHedoMinimalist

    If you're talking about AI and consciousness directly and not my analogy of "automotivity" then I guess my answer would be the same as previously: their explanation is too bland and uninformative.

    I believe that you are claiming some ontological basis for placing human beings (or at least human organisms) in a distinct category here.simeonz

    Again, apparently my analogy mis-fired. No metaphysics intended. Only trying to save "conscious/non-conscious" as a vague binary.

    The analogy was, "when does a vehicle become truly automotive i.e. a true automobile?".
  • Does ontological eliminative materialism ascribe awareness to everything or nothing?
    Similarly, my past experience of having behavioral patterns and seeing that they are influenced by my mental activity provides evidence for the hypothesis that insects are more likely to be conscious than zombies. Why do you think they are more likely to be zombies?TheHedoMinimalist

    Because of my past observations of robots whose behavior, while suggestive of mental influence, was soon explained by a revelation: either that there was no robot after all, because a human actor operated from inside; or that there were mere mechanics and software inside, whose operation I recognised as obviously non-conscious. The twist in the Chinese room, I guess, is to reveal a human (Searle) who then turns out to be, in relation to the outer behaviour of the creature, a mere machine himself.

    None of this can impress you if you have lost all intuition of the non-consciousness of even simple machines. I'm not sure how to remedy that, although...

    ... see the motor car analogy, below, and...

    ... also, at least bear in mind that arguments for "other minds" (arguing by analogy with one's own behaviour and private experience) were (I'm betting... will check later) designed to counter the very healthiest intuition of zombie-ness, which might otherwise have inclined us to doubt consciousness in even the most sophisticated (e.g. biological) kinds of machines (our friends and family).

    So you might at least see that your intuition of zombie-ness is likely to have depleted drastically from a previous level, before your interest in AI perhaps? Not that that justifies the intuition. Perhaps zombie denial is to be embraced.

    I always suspect that (replacement of heap/non-heap by as many different grades of heap as we can possibly distinguish) is a step backwards.
    — bongo fury
    I understand, but what is the alternative?
    simeonz

    Short answer: my pet theory here.

    More generally, how about this silly allegory... Post-apocalypse, human society is left with no knowledge or science but plenty of perfectly formed motor vehicles, called "automobiles", and a culture that disseminates driving skills through a mythology about the spiritual power of "automotivity" or some such.

    Predictably, a primitive science attempts to understand and build machines with true "automotivity". The fruits of this research are limited to sail-powered and horse-powered vehicles, and there is much debate as to whether true automotivity reduces ultimately to mere sail-power, so that car engines will eventually be properly understood as complicated sail-systems. And even now the philosophers remark sagely that engines may appear to be automotive, but the appearance of automotivity is, in reality, the sum of millions of sailing processes.

    Do we hope that this society replaces its vague binary (automotive/non-automotive) with an unbounded spectrum, and stops worrying about whether automotivity is achieved in any particular vehicle that it builds, because everything is guaranteed automotive in some degree?

    I told you it was silly.
  • Does ontological eliminative materialism ascribe awareness to everything or nothing?
    In particular, does materialism deny awareness and self-awareness as a continuous spectrum for systems of different complexity?simeonz

    They do not deny that it is a spectrum but they don’t have to think that it begins on a molecular level or that all objects are part of the spectrum.TheHedoMinimalist

    This is still the question, for me. I think the OP is quite right that consciousness denial and zombie denial will both tend to lead to replacement of the vague binary (conscious/non-conscious) with an unbounded spectrum/continuum of umpteen grades (of consciousness by whatever name).

    I always suspect that (replacement of heap/non-heap by as many different grades of heap as we can possibly distinguish) is a step backwards.

    In this case my complaint against the unbounded spectrum is,

    • you build lots of AI, confident that all of it is conscious in some degree or other, but you could well be wrong. Hence my aversion to zombie denial.

    • we don't get to understand what consciousness is / how it works. We remark sagely that it is all an illusion... but by the way quite true, and why worry about it...
      But I want to understand my conscious states. Hence my behaviourism: my skepticism about the folk psychology of consciousness, the inner words and pictures.


    This seems to me to suggest that John Searle wanted to reject machine sentience in general.simeonz

    Apparently not, at least not by way of the Chinese room. He does say he suspects consciousness is inherently biological, but for other reasons.

    I mostly suspect that insects are conscious because they are capable of moving. They also appear afraid when I try to squash them.TheHedoMinimalist

    But wouldn't they appear that way if they were zombie robot insects?... if you can imagine such a thing... could zombie actors help? :lol:
    https://www.imdb.com/title/tt0088024/
  • Does ontological eliminative materialism ascribe awareness to everything or nothing?
    I would start by mentioning that bongo fury corrected meTheHedoMinimalist

    No correction intended, I was just trying to orient myself on the wikipedia map of positions. (I think I'm happy here, but in some respects also here and here.)

    I would say that my view is more properly called functionalism rather than eliminative materialism.TheHedoMinimalist

    Yes, point taken. You are more likely to ascribe consciousness to non-fleshy as well as fleshy brains. (?)

    Thanks to you and @simeonz for the Chinese room arguments. I will be pleased to respond later, for what it's worth.

    Hi.

    The classical Turing test is outdated, because it limits the scope of the observations to static behavior.simeonz

    Wasn't that Searle's point? That the test was useless already, because an obvious zombie (an old-style symbolic computer) would potentially pass it?

    Not that everyone then or now finds it obvious that an old-style computer would be a zombie, but the man-in-the-room was meant to pump the intuition of obvious zombie-ness. That was my understanding, anyway, and I suppose I tend to raise the Chinese room just to gauge whether the intuition has any buoyancy. Lately I gauge that it doesn't, much.

    In particular, does materialism deny awareness and self-awareness as a continuous spectrum for systems of different complexity?
    — simeonz

    They do not deny that it is a spectrum but they don’t have to think that it begins on a molecular level or that all objects are part of the spectrum.
    TheHedoMinimalist

    I agree, and I don't know that I could agree without a rather clear intuition that all current machines are complete zombies.

    How many neurons (or similar structures) would we need to create an organism whose behavior can be considered minimally sentient - five, five hundred, five million, etc?
    — simeonz

    This is difficult to precisely answer but I would make an educated guess and say enough to form a microscopic insect.
    TheHedoMinimalist

    I must say, I find it easy to intuit that all insects are complete zombies, largely by comparing them with state of the art robots, which I likewise assume are unconscious (non-conscious if you prefer). I admit there is an element of slippery slope logic here - probably affecting both "sides" and turning them into extremists: the "consciousness deniers" (if such the strong eliminativists be) and the "zombie-deniers" (panpsychists if they deserve the label).

    I agree it is interesting to poll our educated guesses (or to dispute) as to where the consciousness "spectrum" begins (and zombie-ness or complete and indisputable non-consciousness ends). I vote mammals.

    Related to that, it might be useful to poll our educated guesses (or to dispute) as to where the zombie "spectrum" ends (and consciousness or complete and indisputable non-zombie-ness begins). I vote humans at 6 months.
  • Does ontological eliminative materialism ascribe awareness to everything or nothing?
    At this point, would the people collectively manifest the consciousness of the original brain, as a whole, the way it would have manifested inside the person?simeonz

    Do you here allude to, or have you just re-invented, the China brain?

    Also relevant, this speculative theory of composition of consciousnesses. It also attempts to quantify the kind of complexity of processing with which you (likewise) appear to be proposing to correlate a spectrum of increasingly vivid consciousness.
  • Does ontological eliminative materialism ascribe awareness to everything or nothing?
    Thanks to you and others for supplying informed clarification as requested in the OP.

    Does ontological eliminative materialism ascribe awareness to everything or nothing?simeonz

    On the wikipedia page is Quine's question:

    Is physicalism a repudiation of mental objects after all, or a theory of them?

    This neatly distinguishes a strong eliminativism (ascribing consciousness to nothing) from mere identity-ism (ascribing consciousness to some things, some brain states). The former would be what causes horrified reactions from many (see above), and the latter is accepted by @Terrapin (I think), and @TheHedoMinimalist (I think).

    Note we are also given an intermediate option of ascribing some modified notion of consciousness to some things, some brain states. That would be my choice, and the likely scale of modification is enough to make me often want to side with the strong camp, especially when identity-ists embrace the folk-psychology of consciousness (e.g. mental words and pictures) with so little care or modification as to suggest Cartesian dualism.

    Then there is, as the OP says, the further option of ascribing consciousness to everything, and the subsequent question whether this could tell us anything that ascribing it to nothing wouldn't have told us anyway. (Or not, as argued here.)

    My question:

    Doesn't ascribing consciousness to any machines with "software" set the bar a bit low? Are you at all impressed by Searle's Chinese Room objection?
  • Does ontological eliminative materialism ascribe awareness to everything or nothing?
    Eliminative materialism sounds like a regurgitation of behaviorism
    — T Clark

    :up: That’s exactly what it is.
    Wayfarer

    ... hold noses... access handkerchiefs, gas masks...
  • Metaphysics
    I mean, if number (etc), is real but not physical, then it's a defeater for physicalism, right?Wayfarer

    I said, "at least for the non-Platonist"... who won't accept numbers as the values of bound variables, or whatever. And probably regards them as on a par with fictional characters. Or is a formalist. But isn't, either way, tempted to look beyond physics for their cast of actual entities, on redeeming the IOU.

    Funny thing is, I thought I gathered from reading some of @Terrapin's stuff that he is quite willing to posit fictional characters as mental entities... which could complicate things here.

    But no - no "revolutionary implications" for physical ontology in fairy or other fictional stories.
  • Metaphysics
    In terms of definition or reference (which is the y in question), all terms have the ambiguity you refer to re "eternal" for example.Terrapin Station

    Do you mean, as it appears, that metaphysical discourse is in no worse a condition than any other kind of discourse when it comes to definition or reference? And probably also that that limitation shouldn't in itself be assumed to damn the enterprise? As we notice with fdrake's examples of broken leg diagnosis and war trials conduct? Where, if we are sensible, we get our priorities right and manage not to split the wrong hairs?

    If so, I would be interested to know whether you (or fdrake) regard any metaphysical conundrums as comparable to these examples, either in urgency or in the feasibility of reasonably meaningful thought and discussion about them.

    Actually, I expect you won't say they compare in urgency, but what about feasibility?

    I think the nature of the existence of numbers - the ontology of number, if you like - is actually a clue to the meaning of metaphysics. And I bet when you try and conceive of 'the abstract realm', your mind instinctively tries to imagine where such a realm could be. But 'where' is the 'domain of natural numbers?' Obviously nowhere, and the use of the word 'domain' is in some sense metaphorical in this context; but nevertheless, there is such a domain, because some numbers are 'in' it, and others are 'outside' it.Wayfarer

    But if this is a clue, as you say, won't it make the subject of metaphysics non-urgent in the extreme, at least for the non-Platonist? You don't want to call the cast of characters in a story the "ontology" of the story, except as an IOU for that ontology. I'm not doubting we often get by without redeeming the IOU for the real thing, and even enjoy or use the story better on that basis. But the interesting philosophy would be in how to unpick the metaphor, not how to take it literally?
  • Metaphysics
    The world is full of obscurantists.Magnus Anderson

    What really lies beyond the constraints of my mind?
    Could it be the sea... or fate, mooning back at me?

    On a graph of urgency against feasibility, where do you plot the bard's question (or that of the nature of the world beyond the physics), relative to broken leg diagnosis and war trials conduct?
  • Does consciousness = Awareness/Attention?
    The "invisible real me" projects the shadow puppet because it is just very useful to have a business rep out front which can deal with other business reps, which are also 'out front'.Bitter Crank

    So,

    - Does consciousness = shadow-business-rep-puppetry?
    - Does consciousness = any combination of this with any of the many other suggestions?

    But then, do you mean any shadow-business-rep-puppetry, including the clearly unconscious kind which I presume is implemented in my PC as operating system "shells" and the like?

    And if not, how do you narrow it down?

    Sorry to butt in.
  • Does consciousness = Awareness/Attention?
    it makes no sense whatsoever to talk about any meaningful connection/association/correlation for a thermostat.creativesoul

    Well, exactly. That's why I'm calling it mere syntax. We agree on much, as I keep saying.

    Does consciousness equal:

    • Awareness?
    • Attention?
    • Experiencing?
    • Thoughts/mental ongoings?
    • Belief?
    • Reasoning?
    • Meaning?
    • Thoughts about thoughts?
    • Mental correlations?
    • Mental associations?
    • Mental connections?
    • Expectation?
    • Fear?

    Do any of these do the trick? Or can all of them be unconscious?

    I suspect they all can, on any definitions plausibly grounded in common usage. (Fear, maybe not yet, but soon, when we start attributing it to some gratuitously cute robot.)

    So, taking the bull by the horns, what distinguishes, for example, conscious meaning from unconscious meaning: genuine from fake? That was my reason to invoke Searle and his Chinese Room, and you got that, totally. As here:

    When we're discussing consciousness, the discourse needs to include not only the candidate(creature), but also what *exactly* the candidate is conscious/aware of, and/or attentive towards?creativesoul

    That would be Searle's message, too, I think.

    It leads, of course, to the question how or roughly when a creature achieves consciousness of the (external or internal) objects of its awareness, attention, thought, belief, correlation, expectation, etc.

    Thanks for sketching your approach to that question, and thanks for looking at mine!
  • Does consciousness = Awareness/Attention?
    You writing all this down?creativesoul

    Can you doubt it?

    All meaning is attributed solely by virtue of drawing mental correlations, associations, and/or connections between different things.creativesoul

    Agreed. But then, the same old problem. Conscious (mental correlations), or unconscious?

    I'm suggesting, conscious where the meaning is genuine, in the sense of not reducing, like the light-heat connection for the thermostat, or the salivation-bell connection for Pavlov's dog, to syntax. (You don't like widening linguistic terminology to symbolic functioning in general; I do. That difference between us is negotiable, I expect.)

    Genuine, though, in what more positive sense? (You might ask.)

    Here I have to contradict your cherished separation of meaning from its study - from "semantics". Genuine meaning, worthy of associating (roughly at least) with consciousness, is for me a semantical understanding, exercise of the high level social skill of agreeing which words or pictures or other symbols we are to suppose are pointed at (directed at, thrown at, landed on, applied to, attached to) which things.

    The dog plays fetch with sticks. We play fetch with words.

    The dog understands where the stick was thrown towards, and where it landed, out of a range of possibilities. But it doesn't, as we do, understand what a word was pointed at, or even that anything got pointed, at all; even though it might be trained or innately disposed to respond a certain way (which we of course may interpret semantically). Understanding and recognising the semantic relation is a much harder game, which, by contrast, the human infant very soon delights in.

    The dog is conscious, I conjecture, roughly to the (rather limited) extent that it can join in the harder game.

    Thank you for looking.