Comments

  • Looking for suggestions on a particular approach to the Hard Problem
    The hard question goes beyond this and asks "How are the physical components equivalent to mental components?" How is what you are saying addressing that?schopenhauer1

    By saying that mental components are a fiction which we get into the habit of entertaining as a convenient aid to successful cognition. "What was my previous brain-shiver?... Oh yes, the one selecting picture A or picture B." Obvious how that abbreviates...
  • Looking for suggestions on a particular approach to the Hard Problem
    To say "consciousness is an illusion" is to not explain the illusion itselfschopenhauer1

    It might be. I'll have a go.

    I'm looking at the back of my front door, conscious of my consciousness of the colours and patterns: edges, curves, corners, textures, gradients. My dog (if I had one) is looking at roughly the same thing, and I know (haha, might need correcting) from psychology class that neurons in my visual system that are sensitive to certain kinds of edges, gradients etc. have rough counterparts in hers.

    Of course, I don't know from class whether she is conscious too, but my crude theory of consciousness would say not. To put it another way, she isn't subject to the illusion, because she, not having linguistic or other symbolic skills, isn't skilled in reading a scene as a picture, and in reading a picture as an array of features, identifiable as kinds (of pattern or object) in a linguistic scheme - verbal or pictorial or both. So she isn't likely to make a habit of confusing, say, the door handle, still less her internal response to the door handle, with pictures of door handles. I don't mean confusing in the obviously pathological way of being likely to mistake any of these for each other, but in the sense of readying a plethora of appropriate responses to, say, movement of the handle, that depend on skill in differentiating and interpreting symbols as representing door handles, as much as they depend on manipulating actual ones. When the physical skill is so soaked through with symbolic and intellectual correlations, we might well - and harmlessly - think of our internal processes in readying to deal with the handle as being composed of pictorial components, like parts of an actual picture.

    Whenever you think you have a "mental picture" of something presently or previously perceived, or imagined, and the sense that this creates a hard problem, consider an alternative interpretation to the effect that you have just determined a relatively narrow preference among appropriate actual pictures. How might a zombie assess its own thought process, supposing that the process was one of narrowing its preference (as to the appropriate selection in some symbolic context) among a range of pictures? It would need to associate the process with the narrowed range of actual pictures, of course. And if it were indeed able to so shiver its neurons as to repeat the determination of readiness to select the range of appropriate pictures, it might form the habit of associating, even confusing, the thoughts with the pictures.

    In that scenario the creature has reason to recognise its own experience in our descriptions of consciousness. Especially if those descriptions acknowledge, as I think they should, the habitual confusion or at least correlation of thoughts (brain-shivers) with actual pictures.

    Disclaimer: these ruminations are inspired by Nelson Goodman's far more careful analyses here and here. However, not only does Goodman expressly warn against reading them as dealing with consciousness (rather than merely "thought"), but I should mention he was also an ardent dog lover, and sponsor of animal welfare.
  • 50th year since Ludwig Wittgenstein’s death
    But this is not ignorance. We do not know the boundaries because none have been drawn". [Wittgenstein on games]Banno

    The annoying thing, without which there would be no threat of paradox, and everything would merely be (in the current idiom) "a spectrum", is that clear enough examples of non-game are plentiful. (Relative to a discourse or language game, as rightly noted by @StreetlightX.)

    With clear enough counter-examples, we continually imply a line, however fuzzy, even though we should admit in those cases that we are some distance from it.

    Trying to approach closer to it little by little is what creates the heap paradox. Trying to define it by a formula (apart from technical contexts) is what W rightly criticizes. But acknowledging it (implicitly, behaviourally) from a distance is, I would argue, an important aspect of any game of using "game" (or other noun or adjective): an aspect which, I dare to suggest W would agree, "never troubled you before when you used the word" (ibid), but is characteristic of that trouble-free usage.
  • Collaborative Criticism
    A chair, or not a chair? That was HG Wells' question:

    In co-operation with an intelligent joiner I would undertake to defeat any definition of chair or chairishness that you gave me. — First and Last Things

    Running with the idea, Max Black imagined an even more ambitious project:

    ... an exhibition in some unlikely museum of applied logic of a series of "chairs" differing in quality by least noticeable amounts. At one end of a long line, containing perhaps thousands of exhibits, might be a Chippendale chair: at the other, a small nondescript lump of wood. — Vagueness: an exercise in logical analysis

    Such is the now familiar approach of fuzzy logic: instead of the either-or question, ask, "whereabouts is this or that object located on the chair spectrum?" And this seems in much the same spirit as when we say, "there is no black and white, only shades of grey".

    There is an opposite current of thought: we hear about the dangers of slippery slopes and relativistic thinking, and about the desirability of "zero tolerance" in many areas. But the reality of borderline cases, when faced up to intellectually rather than swept aside dogmatically, tends to leave Black and White looking very much the less well-funded party in its propaganda skirmishes against the Shades of Grey.

    I want to support the underdog, and argue that the absolutist intuition that seems, quite often, to separate black from white, in some way that resists deflation of their status and territory to that of extreme greys, is essential to properly understanding human language. The challenge is to be able to look at fuzzy borders head on, but in some way that doesn't result, as is more usual, in us losing our sense of absolutism, and allowing the fuzziness to create a slippery slope from one category (say, black) to another (white). I fancy the way to achieve this is through a technical feature of Nelson Goodman's analysis of 'notationality': a notion closely related to the property known more widely as that of being 'digital'.

    "Chair" has no immediate antonym or 'anti-chair': whereas, for example, black has white. Indeed, one suspects that Wells may have chosen it as a case-study precisely for that reason. An adventure of successive expansions for the extension of "chair" doesn't seem headed for any natural denouement. We could perhaps invent a plausible concept of anti-chair: even by that very name, and exemplified by any (distinctly unhelpful) device designed to stop people sitting down. However, to explain my proposed adaptation of Goodman's principle, it will be just as feasible for me to square up to Wells' teasing example of,

    chairs that pass into benches, chairs that cross the boundary and become settees

    Wells is quite right that he and his joiner might realistically hope to so influence usage that any sense of mutual repulsion between the extensions of "chair" and "bench", or even between those of "chair" and "settee", were significantly reduced. Not that there wouldn't remain enough underlying tension to distinguish the extensions: there might well be examples of each category that were certainly not examples of the other; just that there would be an overlap. Objects that were both.

    But we can equally well imagine a usage becoming entrenched, even if only or mostly within the furniture trade, according to which there is reliably no overlap, and being able to call something a chair is sufficient to imply that it isn't a settee, and vice versa. Specifically, and adapting Goodman's notation-based principle, calling something a chair (within the limited specified discourse) then indicates zero probability of it ever (within the discourse) being called a settee. To someone who protests, like Humpty Dumpty, that they can point a word at whatever they like, we simply insist that they are not speaking the specified language: where 'language' is to be glossed as 'discourse' or 'interpreted language' or 'language in use', to clarify that competence with meaning as well as syntax is assumed. In the present example the discourse is relatively circumscribed, and particular to the furniture trade, but the principle scales up: as where we can for example comfortably deny that someone may, within the larger English language as spoken and interpreted literally, succeed in pointing the word "black" at white. (Or point the word "chair" at a device for preventing sitting.)

    This way, the borderline examples of "chair" that we, as speakers, actually dispute and agonise over are far from the similarly borderline cases of "settee" but are our present best data about the whereabouts, on a gradual scale like Black's, of the edge of the possible extension of "settee". This is because the borderline cases that we dispute and negotiate are ones that are on or near the border of current data or samples of use, not the border of the background population or theoretical 'support'. However, with antonyms or with discrete categories in a conceptual scheme, as also with any two distinct characters in a syntactic alphabet, being in one means definitely not being in the other. So 'data' about the one limits the theoretical reach of the other. So 'chair' means 'definite non-settee' and 'settee' means 'definite non-chair'.

    Assuming that reference (what I've called 'pointing') isn't a matter of fact (is 'inscrutable'), then neither is the background population nor the foreground sample of acts of reference (pointings). But agonising over borderline cases is how we maintain the fiction in such a way that it keeps discrete categories discrete. Agonising and allowing disputes over borderline cases of, say, "chair" (=> "definite non-settee") and of "settee" (=> "definite non-chair") causes the 'actual' extension of each - its 'observed' incidence of usage - to thin out to nothing well clear of that of the other. The fuzzy border where an object may be variously judged "chair" and "non-chair" is kept well away from the fuzzy border where objects are judged both "settee" and "non-settee".
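    A toy model of that thinning-out, in code (the 'spectrum' positions and cutoffs are invented by me, purely for illustration): each term's observed extension tapers to nothing well clear of the other's, so the two discrete categories stay discrete.

```python
# Furniture 'spectrum' positions from 0.0 (paradigm chair) to 1.0
# (paradigm settee); the 0.4 and 0.6 cutoffs are invented.

def called_chair(x):
    # disputed borderline uses of "chair" peter out by 0.4
    return x < 0.4

def called_settee(x):
    # disputed borderline uses of "settee" only begin above 0.6
    return x > 0.6

spectrum = [i / 100 for i in range(101)]
overlap = [x for x in spectrum if called_chair(x) and called_settee(x)]
gap = [x for x in spectrum if not called_chair(x) and not called_settee(x)]

print(overlap)       # []: "chair" implies "definite non-settee"
print(len(gap) > 0)  # True: a buffer of objects judged neither
```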

    Example.
  • Is 'information' a thing?
    the context of a thread on this subject on this or some other forum.Wayfarer

    So the OP is... what's that phrase, inauthentic narrative?
  • Is 'information' a thing?


    Fascinating though it is to see various notions of information displayed in their varyingly mystical colours, I'm surprised no one has questioned the interpretation of Dennett and Wiener in the OP?

    Wiener said

    The mechanical brain does not secrete thought "as the liver does bile," as the earlier materialists claimed, nor does it put it out in the form of energy, as the muscle puts out its activity. Information is information, not matter or energy. No materialism which does not admit this can survive at the present day.
    — Computing Machines and the Nervous System. p. 132.
    Wayfarer

    Could you share a link or pdf of the context, here? I'm struggling to see this (or even Dennett's "adding information to the list" [of fundamentals]) as problematising or refashioning materialism. Of adding a "meta-physical simple", as you put it. It sounds to me more like the opposite. As contesting the notion of thought as an additional kind of stuff. Like he was referring to a tradition of psychology that wittingly or unwittingly encouraged such an assumption.

    the materialist chestnut that 'the brain secretes thought as the liver does bile'Wayfarer

    Materialist chestnut, are you sure??

    Information is information, not matter or energy. — Wiener

    That could mean we need a third kind of physical quantity, or it could mean we don't, and information is merely patterning, or form. (Whatever that is, sure.)

    Information, he's saying, is irreducible.Wayfarer

    Irreducible to patterns in physical stuff? Why shouldn't he be reminding us that that is exactly what it (e.g. DNA transfer) reduces to?

    I'm genuinely puzzled, and couldn't find the source, so, grateful for more if you have it.
  • 50th year since Ludwig Wittgenstein’s death
    However, that words can carry different meanings depending on how we use it doesn't imply that referents don't exist, does it?TheMadFool

    :ok:

    I see that you then got (and seemed all too willing to get) sidetracked, into questions of definition, or fixity or primacy or essence.
  • 50th year since Ludwig Wittgenstein’s death
    None of those words - the ones you pointed out - have referents either though.StreetlightX

    Oh, ok.
  • 50th year since Ludwig Wittgenstein’s death
    "Give" us "an" "example" "of" "a" "word" "being" "used" "without" "a" "referent".
    — StreetlightX

    To be fair, "example", "word", "used" and "referent" also want scratching, here.
    bongo fury

    To be even fairer, scratch "give", too. Admittedly, how (if) relation-words like verbs refer is where things get tricky, and potentially mixed up in how (if) sentences refer, or correspond or picture or what have you.

    Tricky isn't always interesting or worthwhile, so,

    even if [reference] were the main game we might well choose to make it not the main game...Banno

    Sure,

    to retreat and regroup, or even to give up in the medium term and ask different questions,bongo fury

    Btw, I can be (happily) "in thrall" to reference while at the same time not so obliged to sentences.
  • 50th year since Ludwig Wittgenstein’s death
    "Give" us "an" "example" "of" "a" "word" "being" "used" "without" "a" "referent".StreetlightX

    To be fair, "example", "word", "used" and "referent" also want scratching, here.

    I take your point that examples abound.
  • 50th year since Ludwig Wittgenstein’s death
    Give us an example of a word being used without a referent?TheMadFool

    There are of course plenty of examples in most sentences. That doesn't mean reference isn't the main game.
  • 50th year since Ludwig Wittgenstein’s death
    Sounds like natural language :wink:
  • 50th year since Ludwig Wittgenstein’s death
    Usually that's a good way to construe it if you are going for a literal construal.
  • 50th year since Ludwig Wittgenstein’s death


    If you mean you are more interested in reference than Wittgensteinians think is cool, then hooray.
  • 50th year since Ludwig Wittgenstein’s death
    Wittgensteinian meaning is an act of referring no?TheMadFool

    Sadly, they think that is a ridiculously narrow approach.

    Many people think Wittgenstein repudiated this idea, but I think he merely was saying that language does more than this.Sam26

    I wish people would stop accepting this notion (usually justified with a nod towards PI) of "move along now, nothing to see". How to understand how words and pictures point at things (even pixels) might be the important question. The fact that using our pointing skills to answer it invariably results in pointing-havoc is an excuse to retreat and regroup, or even to give up in the medium term and ask different questions, but not to teach that the question is trivial or narrow.

    The notion that "red" refers to something leads to a metaphysics of perceptions, tying one's thinking in knots of phenomenology.Banno

    Not if "something" means "one or more red things", no, it needn't.
  • "1" does not refer to anything.
    Numbers represent potentials, not actuals. Why does dividing things by three, into thirds, create an "infinite" number of threes after the decimal point, as if we can never get to an actual third of something?Harry Hindu

    I've never read much of Harry's stuff (on the suspicion that more is less) but, for the second time this weekend, I do applaud him for going against the flow, and I must say I can't understand how people could so miss the point, and take the above rhetorical question as anything but a defence of mathematical practice against philosophical over-thinking. He was just saying: see how the fact that we can divide one by 3 despite the potentially infinite recurring decimal (Achilles can catch up) means we don't have to (in this case anyway) take infinity as a thing.

    Wasn't he?
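    Lest that seem hand-wavy, here is the same thought in rational arithmetic (a sketch of my own, not Harry's): an exact third needs no completed infinity of decimal digits.

```python
from fractions import Fraction

# An exact third: rational arithmetic treats 1/3 as a finished
# object, not as the limit of 0.3, 0.33, 0.333, ...
third = Fraction(1, 3)
print(third * 3 == 1)  # True: Achilles catches up

# The recurring decimal only appears when we insist on base 10:
print(float(third))    # an approximation, not the number itself
```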
  • Emotions Are Concepts
    Imagine training machines instead of trying to program them.praxis

    :ok:

    The price of the neural network revolution was giving up (or at least severely compromising) the model of the brain (or computer) as a processor of stored symbols - internal words and pictures representing external objects. Ironically, it had to revert to Skinner's behaviourist model, a "black box". Training, without necessarily understanding the learning.bongo fury

    (Admittedly off-topic, except possibly as regards the question of internal (or external) words and pictures representing internal processes.)
  • "1" does not refer to anything.
    So... you are asking what I think Pneumenon meant that Wittgenstein meant atBanno

    No! Only whether the word-string "then we can get extensions for it" was a misprint of "then we can get infinite extensions for it"?

    A different reading of it (as not a misprint) seemed plausible, so I thought I should check.

    And ...?
  • "1" does not refer to anything.
    Did "then we can get extensions from it" mean "then we can get infinite extensions from it"?
  • "1" does not refer to anything.
    ↪bongo fury So you want to argue thatBanno

    One thing at a time?

    If the rule allows to construct a finite extension, then we can get extensions from it, too.
    — Pneumenon

    This is the bit that I've been unable to find clearly articulated.
    — Banno

    Just to be clear, are you both dropping (or taking as read) an "infinite"?
    bongo fury

    To which,

    ↪bongo fury There are infinities.Banno

    Ought that to have clarified for the competent reader that @Pneumenon meant "then we can get infinite extensions from it, too"?

    Just hoping not to misunderstand either one of you.
  • "1" does not refer to anything.
    I wouldn't say that any of these expressions point to or are about anything (in the sense of reference). They may indicate something, but that's not quite the same thing –Michael

    But if we reflect that what a word refers to or is pointed at is never a matter of fact anyway, but is one rather of interpretation, theoretical parsimony then strongly argues against the easy option of distinguishing as many varieties of meaning as we might have different words for. Obviously no two of these kinds will ever be quite the same thing.

    The argument isn't just about theoretical desiderata and separate from the subject-matter: the behavioural interactions we are discussing depend on agents' anticipations of each other's interpretations, so we are theorising about theorising (about...).

    And so I applaud @Harry Hindu's objection here to the habitual distinction of expression and exhortation from description. My attempt here.

    To expand a little: since no bolt of energy (nor any more subtle physics) connects uttered word to object, we (interlocutor or foreign linguist or even utterer) are perhaps entitled and perhaps required to interpret the utterance as pointing, in various degrees of plausibility, not only a presently uttered token but also the "word as a whole" at (not only a present object but) some kind as a whole, and then by implication as also pointing not-presently-uttered but semantically related words at related objects and kinds. In other words, any speech act offers a potential adjustment (or entrenchment) of the language in use, so that the extensions of related words are shifted in related ways.

    Hence utterances that vent frustration can also offer (directly or indirectly) potential adjustments to the extensions of words ("patient", "skilful" etc.) that might or might not point at Michael.

    And hence also Harry's and my other examples as linked above.
  • "1" does not refer to anything.
    If the rule allows to construct a finite extension, then we can get extensions from it, too.
    — Pneumenon

    This is the bit that I've been unable to find clearly articulated.
    Banno

    Just to be clear, are you both dropping (or taking as read) an "infinite"?
  • "1" does not refer to anything.
    we pretend that integers are real things, and this leads us on to more complex ways of talking about integers, and so a sort of recursion allows us to build mathematics up from... nothing.Banno

    Sure, maths as fiction with a super-coherent plot.

    And with illustrations, too. Kind of, Alice in Wonderland.
  • Lack of belief vs active disbelief
    What is the probability of the invisible miniature dolphin's existence?Pneumenon

    Please tick one:

    • Zero or negligible
    • Non-negligible but doubtful
    • Certain, or negligible degree of doubt
  • "1" does not refer to anything.
    Rather "1" is to be understood through its role inBanno

    ... in whatever the discourse. Charity and world-domineering ambition alike require translation between discourses, and agreement re ontological commitment (re, e.g., what "1" refers to). Pedagogy and practicalities require, instead, toleration of alternative systems.
  • "1" does not refer to anything.
    One cannot physically list the integers. But in understanding the intension of "integer" we understand how to construct the extension... and in so doing it seems to me that we understand the extension to be infinite.Banno

    If in fact the concrete world is finite, acceptance of any theory that presupposes infinity would require us to assume that in addition to the concrete objects, finite in number, there are also abstract entities. [...]

    Apart from those predicates of concrete objects which are permitted by the terms of the given problem to appear in the definiens, nothing may be used but individual variables, quantification with respect to such variables, and truth-functions. Devices like recursive definition and the notion of ancestral must be excluded until they themselves have been satisfactorily explained.
    — Goodman and Quine, Steps Toward a Constructive Nominalism
  • "1" does not refer to anything.
    I needed a mathematician who might disagree with a constructivist approach to mathematics;Banno

    What, to explain,

    why does Wittgenstein's constructivism lead to finitism?Banno

    ? :chin:
  • Of Vagueness, Mind & Body
    Sure, and many people here in this thread have made the obvious but important point that...

    1. Given the brain has a digital structure (on/off neurons)TheMadFool

    is a contestable premise. But also...

    Yes, let's agree on that, for the sake of argument; or we could (equivalently) discuss the "perceptions" and "thoughts" of the computer.bongo fury

    Because then,

    how is it that it generates vague concepts?TheMadFool

    is a good question, requiring but potentially also stimulating clarification of the terms in play. One can but hope.

    The sorites paradox then [...] reveals, quite clearly in my opinion, that if the brain operates in discrete digital brain states [...] then, vagueness should be impossibleTheMadFool

    ... or will look curious and paradoxical, sure. I recommend it (the paradox) as an anvil on which to refashion your ideas about this topic. Equally,

    How can a digitally produced image show a foggy day?Tim3003

    Either way, try to learn to bash the ideas one at a time.
  • Of Vagueness, Mind & Body


    The surprising thing about a sorites puzzle, or indeed any discussion of vagueness, is that you don't need anything analog. Analog is sufficient but not necessary for vagueness. You only require a digitally defined series (of discrete but plausibly imperceptibly different values, like your heights to 3 d.p.) and two or three vague adjectives like tall, medium and short.
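    To make that concrete, a toy sketch (the 1 mm step and the 1.80 m cutoff are my own arbitrary choices): a discrete series of heights and a crisp stand-in for "tall". Adjacent heights are practically indiscernible, yet somewhere the verdict has to flip.

```python
# Toy sorites on a digitally defined series: heights to 3 d.p.
# The cutoff is an arbitrary crisp stand-in for the vague "tall".

def tall(height_m, cutoff=1.80):
    return height_m >= cutoff

# Heights from 1.500 m to 2.000 m in imperceptible 1 mm steps.
heights = [round(1.500 + i * 0.001, 3) for i in range(501)]

# The adjacent pair where the verdict flips: the sorites jump.
flips = [(a, b) for a, b in zip(heights, heights[1:])
         if tall(a) != tall(b)]
print(flips)  # one 1 mm step separates "not tall" from "tall"
```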
  • Definitions
    As part of a balanced regimen of expository etiquette, including regular glossing, defining of terms has been shown to visibly reduce misunderstandings, and underlying spiritual growth. Haha. And now, the science bit:

    Definitions can be beneficial when used to join up separate discourses of any kind, large or small: large, e.g. languages in use, conceptual schemes, paradigms, ideologies, disciplines; and small, e.g. beliefs, dialogues, theories etc.

    (Ouch, I might have glossed a bit hard there).

    Where they meet most resistance is probably where they are perceived as the possible Trojan horse of an untrustworthy power?
  • Of Vagueness, Mind & Body
    1. If the brain is digital then each perception and each thought corresponds to a specific combination of off/on neurons which I will call brain state.TheMadFool

    Yes, let's agree on that, for the sake of argument; or we could (equivalently) discuss the "perceptions" and "thoughts" of the computer.

    A dwarf evokes a brain state and a giant, as of necessity, mustTheMadFool

    , for the sake of argument, might

    elicit a different brain state for they're two different perceptions and also different thoughts; different perceptions, different brain states and different thoughts, different brain states.

    Tallness is a vague term - it applies not to a specific height but to a range of possible values height can assume.
    TheMadFool

    That merely makes it a general term, no?

    That means heights of 6.1 ft, 6.5 ft, 7 ft are all tall for this person. What is to be noted here is that each of these heights are distinct perceptions and should evoke distinct brain states and each of these brain states should be different thoughts but this isn't the case: all of the heights 6.1 ft, 6.5 ft and 7 ft are matched to not different but the same brain state, the same thought, the thought tall.TheMadFool

    Ditto. A brain event might point its "6.1 ft" state at several different persons, and the same event might instantiate also a "tall" state which it points at these and other persons. (Fans of brain state talk may prefer "correlate with" to "point at".) Another instantiated state, "John", might be functioning as a singular term, and "6.1 ft" and "tall" as both general. Or "6.1 ft" and "6.5 ft" might be singular terms each pointing at a singular thing or value while "tall" points at both values and doubtless others.

    This shouldn't be possible if each brain state is a different thought, no?TheMadFool

    Even if you insisted (as some might but I certainly wouldn't) that no word token has the same reference as another token of the same word, that still wouldn't prevent each of them from referring generally to a host of things or values. (Likewise for states and unique brain events as for words and unique tokens.)

    In other words, a digital brain with thoughts being discrete brain states shouldn't be able to generate/handle vague concepts because if it could do that it implies different brain states are not different but the same thought.TheMadFool

    Again, would you substitute "general" for "vague", here? And if not, why not? Either way, this is a point worth debating, but I think it is about generality not vagueness.

    2. Imagine a digital and an analog voltmeter (measures voltage). The analog voltmeter has a dial and is able to read any continuous voltage but the digital voltmeter reads only discrete values such as 0, 1, 2, and so on. Now, the digital voltmeter's measuring involves rounding off voltages and so anything less than 0.5 volts it reads as 0 and anything between 0.5 and 1.5 volts it reads as 1 volt and anything between 1.5 volts and 2.5 volts it reads as 2 volts and so on. The digital voltmeter assigns a range of continuous voltages to a discrete reading that it can display. This is vagueness.TheMadFool

    Yes, it entails vagueness if the discrete values are assumed to represent all possible voltages, but usually no, because margins of error are implied in the digitizing which make it feasible to prevent 0 from overlapping with 1, etc. Hence the inherent fidelity of digital reproduction, which amounts to spell checking.
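    That last claim can be sketched in a few lines (the voltages and noise level are invented for illustration): so long as analog noise stays inside the half-volt margin, re-digitizing "spell-checks" every copy back to the exact value, and errors never accumulate.

```python
import random

def digitize(voltage):
    # Read a continuous voltage as the nearest whole volt.
    return round(voltage)

# Copy a message of discrete volt-levels through a noisy analog
# channel 1000 times, re-digitizing at each step. The noise
# amplitude (0.3 V) stays inside the 0.5 V margin of error.
random.seed(0)
message = [0, 1, 2, 1, 0, 2]
copy = message
for _ in range(1000):
    copy = [digitize(v + random.uniform(-0.3, 0.3)) for v in copy]

print(copy == message)  # True: digital fidelity via the margins
```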

    This seems to suggest that vagueness is an aspect of digital systemsTheMadFool

    It's always an issue lurking at the borders (margins if you like) of the error-margins, and when considering the whole of the digitizing (or reverse) process.

    and so, the brain, understood as functioning in discrete brain states (digitally), should generate vague concepts.TheMadFool

    Probably. Depends how you clarify that. Read 's question, and I recommend also this answer.

    1 & 2 seem to contradict each other.TheMadFool

    With appropriate revisions we hope not. :smile:
  • Of Vagueness, Mind & Body
    How can a digitally produced image show a foggy day?Tim3003

    :ok:
  • Of Vagueness, Mind & Body
    1. Given the brain has a digital structure (on/off neurons) how is it that it generates vague concepts?TheMadFool

    Leaving my better :razz: question aside, and going down your road of assuming brain states to be elements of a digital syntax, which I must admit is a respectable road for many people... are you asking whether: the evident vagueness of the (ahem) reference by (I can't believe I'm saying this) those states to external stimuli shows that the repertoire of states and their correlation with stimuli must be encompassed by some larger, inclusive analog syntax and/or semantics?
  • Of Vagueness, Mind & Body
    My main concern is simply if a digital system can handle analog data.TheMadFool

    Ok, but data are only digital or analog relative to a system. They are symbols or values that are digital when interpreted as elements of a discrete syntax or semantics, and analog when interpreted as elements of a continuous syntax or semantics.

    E.g. Goodman's example (above): any letter 'A' inscription is digital when read as a token of the letter-type 'A', but analog when read as one of an unlimited repertoire of distinct 'A' types, so that vagueness is entailed, at least eventually, by the uncapped expectation of finer and finer discrimination between one element and another.

    So no, a digital system is one that handles data digitally, while a different, 'larger' system may encompass the first one and treat the same data as analog.
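    The relativity can be put in code (the slant measurement and the 45-degree cutoff are a toy model of mine, not Goodman's): one and the same inscription is digital under one reading scheme and analog under another.

```python
# One inscription, represented (crudely) by a single measured
# feature: say, its slant in degrees. A second inscription differs
# by a minute amount.
slant_1 = 12.7183
slant_2 = 12.7184

# Digital reading: a token of one of finitely many letter-types;
# the 45-degree cutoff is an invented classification rule.
def read_as_letter(slant):
    return 'A' if slant < 45 else 'H'

# Analog reading: one of an unlimited repertoire of distinct types,
# one per exact value; ever finer discrimination is expected.
def read_as_analog_type(slant):
    return slant

print(read_as_letter(slant_1), read_as_letter(slant_2))  # A A
print(read_as_analog_type(slant_1) == read_as_analog_type(slant_2))  # False
```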
  • Of Vagueness, Mind & Body
    let's venture into the digital, a worldTheMadFool

    Better, a symbol system...

    lacking the qualities of a continuumTheMadFool

    Although it might be an excision from a 'larger' analog system. (Here, page 126, though it may not show. Can't find a pdf.)

    This basic signalling architecture (on/off) suggests the brain is like a computer, digital.TheMadFool

    Yes (in the important respect you specify), but probably not a symbolic computer, processing symbols stored in a memory. Rather, a machine that can be trained to respond to stimuli in a systematic way. http://www.alanturing.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI10.html

    However, the mind has created what is probably a huge cache of vague conceptsTheMadFool

    Do you mean a cache of symbols stored in a memory, to be accessed and read, as in a symbolic computer? I think the connectionist model largely contradicts that (still prevalent) notion.

    1. Given the brain has a digital structure (on/off neurons) how is it that it generates vague concepts?TheMadFool

    So, better to ask, how is it that it learns to participate in language games of pointing actual (not mental) words and pictures at things (in vague ways).

    2. Does the existence of vague concepts imply that the analog mind is not the same asTheMadFool

    So, imv no, it doesn't imply anything about concepts contained in a mind or a brain, because I cash out "concepts" in terms of symbols, and I don't see any need to locate them in the head.

    Perhaps you could clarify how you meant us to read "concepts"?... If not as internal symbols, which I took to be implied by "cache".