• Gettier Problem Contradiction

    Thanks! It occurred to me just now that a simplified definition along these lines might be that a justification has to be a valid argument that is more likely sound than not.

    On the other hand, having a quantitative concept of justification seems to fit with the jury instruction to return a verdict of guilty if the premise that the defendant committed the crime is true 'beyond reasonable doubt', and also with the phrase that 'extraordinary claims require extraordinary evidence'. As probability is constrained between 0 and 1, the odds that the justification is sound might be a better measure, in order to encompass both extraordinary evidence and unreasonable doubt (Update: maybe the log of the odds? The odds asymptotically approach the probability when the latter is small, and both approach zero, which means that a metric using either probability or odds would not discriminate much between extraordinary claims and claims that are merely improbable.)
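
    To see the point numerically, here is a quick sketch (the probabilities are arbitrary, chosen only to illustrate how the three candidate measures behave):

        import math

        for p in (0.5, 0.1, 0.01, 0.001):
            odds = p / (1 - p)
            log_odds = math.log10(odds)
            print(f"p={p:<6} odds={odds:<8.4f} log-odds={log_odds:+.2f}")

        # For small p, the odds are nearly equal to p and both head to zero,
        # while the log-odds keeps spreading (about -1, -2, -3, ...), so it
        # still discriminates the merely improbable from the extraordinary.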

    In either variant of this view, Gettier's second case still seems problematical, but perhaps we can dispose of it by noting that proposition h (the one that Gettier regards as problematical for JTB) is a disjunction of two beliefs, only one of which (Jones owns a Ford) could be knowledge by either of the definitions I gave above. If the other belief (that Brown is in Barcelona) happens to be true, then this would establish that the proposition h states a fact, but not that Smith has a justification for believing it, as the only justifications he has for h concern only Jones' car, and they turn out to be false. Here we have the disjunction of two propositions, neither of which is part of Smith's knowledge, so it seems reasonable to say that their disjunction is not known by Smith, either.
  • Gettier Problem Contradiction

    There are close parallels with the soundness of an argument here. The 'no false premises' response, which I agree is a partial solution, treats the justification as if it were an argument for the belief, which suggests that something like validity should also be a requirement: it is not enough that the premises of the justification are true; they must also be relevant in some sense.

    The problem here seems to be that if you apply 'no false premises' recursively to the premises, and also require that they make a valid argument for the belief, you end up with the requirement that only proofs are acceptable as justifications.

    I agree with you that this would not be very useful, but I am beginning to wonder whether, unless we accept either that view, or the view that knowledge is a matter of believing a fact simpliciter, we must recognize that knowledge is a conditional thing.

    A Bayesian might propose that a justification should increase the likelihood that the belief is of a fact (which is a weakened form of validity and soundness; those two, in combination, raise the probability that the argument is correct to 1), and that the resulting conditional probability is a measure of the strength of that knowledge. In the clock example, the argument behind the justification at step 4 presumably includes the premise that the clock normally runs reliably, a claim that has a certain probability attached to it, which is used to update the prior (which, in some circumstances, might simply be that there is a 1/24 probability that it is 3-something pm.) The new fact introduced by Carol at 6 reverts the probability that it is actually 3pm to whatever the prior was before Bob mentioned looking at the clock (such as the aforementioned 1/24.) When Carol checks on the internet at step 8, they get a different justification to substitute for the original.
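
    To make that updating concrete, here is a minimal Bayesian sketch of the clock example; the reliability figure, and the assumption that a stopped clock shows a uniformly random hour, are mine, purely for illustration:

        prior = 1 / 24       # prior that it is 3-something pm
        p_reliable = 0.95    # assumed probability that the clock runs reliably
        p_lucky = 1 / 24     # a stopped clock shows the right hour by chance

        # Likelihoods of the clock reading 3, given that it is / is not 3 pm:
        p_reads3_if_3 = p_reliable + (1 - p_reliable) * p_lucky
        p_reads3_if_not3 = (1 - p_reliable) * p_lucky

        posterior = (p_reads3_if_3 * prior) / (
            p_reads3_if_3 * prior + p_reads3_if_not3 * (1 - prior))
        print(round(posterior, 3))  # ~0.952: the justification lifts 1/24 to ~0.95

        # Carol's new fact at step 6 amounts to setting p_reliable = 0, which
        # collapses the posterior back to the 1/24 prior; her internet check
        # at step 8 substitutes a new, independent justification.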

    A similar observation can be made about Gettier's first case: when it is revealed that Smith, not Jones, got the job, the conditional probability behind Smith's belief crashes to the probability that Smith's unexamined pocket contains ten coins. The discovery that it does constitutes a new justification.

    This is just off the top of my head; I would be surprised if there are not paradoxes arising from this view.

    Another aspect of Gettier's examples is that there is no causal connection: whether someone has 10 coins in their pocket, and who gets chosen for the job, have no causal relationship, and neither do Jones' car ownership and Brown's location in the second case (note that this causal disconnect is absent in the clock example.)

    As these issues lie at the foundations of epistemology, they are not trivial.
  • The Turing P-Zombie

    No, I don't see a negative self-reference that has any relevance to whether subjective experience can be explained. Nor do I see any subjectivity. Nor do I see a conclusion, and nor do I see a coherent argument. Other than that, everything is clear.
  • The Turing P-Zombie

    Not every, but true but unprovable mathematical objects could be useful. At least in explaining what the problem we face is.ssu

    Well, we'll see - or not.

    With regard to your example, it is not clear to me what your argument here is - in fact, I cannot even see what your conclusion is. Can you state your conclusion, and the steps by which you reach it? And where is the subjectivity that you mention?
  • The Turing P-Zombie

    It's not entirely straightforward to come up with a definition of what's alive and what's dead; there is some disagreement over whether viruses are truly living, and defining the exact point of death of a complex organism is not a simple matter.

    Definitions are not proofs, and they are not generally provable, even though some of the arguments made for favoring one definition over another may be provable or disprovable. We don't have a clear, generally agreed-upon definition of consciousness because we don't know enough about it, and gaining sufficient knowledge will be an exercise in science, not logic.

    Even if we accept that mathematics models reality extremely well, it does not follow that every mathematical entity models some aspect of reality. I think it is true to say that all unprovables require infinities, and it seems unlikely that modeling any finite aspect of reality, such as the human mind or the whole of the visible universe, requires infinities (for example, the singularities that appear in relativistic models of black holes are taken as evidence that the models are not complete, and the expectation is that they would be resolved in a more complete theory.)

    I am not convinced that you simply cannot make an objective model of something that is inherently subjective. Qualia, for example, are widely regarded as subjective, yet it has been posited that they can be explained as a set of abilities.
  • The Turing P-Zombie

    I think you are alluding to the Lucas-Penrose argument against the possibility of there being algorithms that produce minds. If so, that is a separate argument from Chalmers' p-zombie argument. Chalmers is attempting to refute metaphysical physicalism, but Penrose is a physicalist.

    I am not sure what you mean by 'It would be perhaps provable that consciousness is unprovable.' Specifically, I am not sure what it would mean to say that consciousness is provable - what is the proposition that one would be proving?
  • The Turing P-Zombie

    Yes, I think you are on the right track here, though it may be leading in a surprising direction.

    Firstly, there is no difficulty with 'it is possible/conceivable that the Collatz conjecture is true (X)OR it is possible/conceivable that the Collatz conjecture is false', which is the correct way to express the fact of our incomplete knowledge (at least if it is decidable.)

    Secondly, your assertion that 'it is inconceivable that a mathematical conjecture is both true AND false' depends on whether false statements are conceivable. You may find it inconceivable that they are, but others may logically disagree.

    There is an alternative view here that is... conceivable? It says that it is conceivable that a mathematical conjecture is both true AND false, it is just that we can immediately refute it (i.e. immediately see that it is not possible.) In this view, we had to conceive of it first (to form the thought in our minds), in order to prove that it is not possible.

    You might wonder if there is anything that is not conceivable in this view. For one thing, I can conceive of there being inconceivable ideas by virtue of them requiring too much physical information for a brain to contain (and, as information can grow exponentially with the medium in which it is expressed, I do not think this can be avoided by positing an AI larger than a brain.) Maybe the true Theory of Everything is like this. Also, as Eliezer Yudkowsky pointed out, two millennia of philosophizing over epistemology and metaphysics never conceived of the sort of non-local reality that the experimental violations of Bell's inequality now demand; it only became conceivable in the light of new knowledge.

    In my previous post, I wrote "Given that we do not know whether [the Collatz conjecture] is true, we might want to say it is possible that it is true; at another time, we might want to say that it is possible that it is false." I have emphasized "at another time" here because in my first draft, I wrote 'and' instead, but changed it, as the original statement would be begging the question I was trying to address. The point I wanted to make in that post is that, under possible-world semantics, and regardless of any difficulties with conjunctions, one cannot even know[1] that 'it is possible that the conjecture is true', because if it is actually false, there are no possible worlds in which it is true.

    This is mostly moot, however, because what matters here is not how you or I see it, but how Chalmers is using it. Chalmers, and apparently most philosophers, seem to take the view that obviously false ideas are not conceivable, but obviousness is in the mind of the beholder, and is dependent on what they believe, yet if we take 'obviously' out of the definition, then 'conceivable' is simply a synonym for 'possible'. Likewise, if we rule out concepts that are false by definition (such as Chalmers' example 'male vixen'), they are also dependent on what we know, and often on which definition we accept. This may not matter, as things that are true by definition are usually uninteresting (the vixen case, for example, is just a consequence of the contingent fact that the English language happens to have different words for the male Vulpes vulpes and the female Vulpes vulpes. There are no profound metaphysical truths to be found in this.)

    I think you are trying to show that 'possible' and 'conceivable' are synonyms. If so, then fair enough, but you should realize that, as Chalmers' argument depends on a distinction between 'conceivable' and 'possible', you would be disputing Chalmers' p-zombie argument (and, furthermore, over the same issue that many other people dispute it.)

    [1] I originally wrote 'assert' instead of 'know', but then realized that one can, of course, assert a counterfactual.
  • The Turing P-Zombie

    I had some more thoughts on conceivability vs. possibility. Most philosophers accept possible-world semantics for dealing with questions of possibility and necessity, in which to say something is possible is to assert that there is a possible world in which it is true, regardless of whether it is true in the actual world (that looks somewhat self-referential, but logicians seem to agree it's OK.)

    It is also generally held that mathematical truths are necessary truths, and necessary a priori at that. A mathematical fact is true in all possible worlds, and always has been.

    Putting these two things together gets tricky when we consider a mathematical conjecture. Given that we do not know whether it is true, we might want to say it is possible that it is true; at another time, we might want to say that it is possible that it is false. Under possible-world semantics, however, if mathematical truths are necessary truths, then one or the other of those statements must be false: if the conjecture is true, it is necessarily true, so there are no possible worlds in which it is false, and vice-versa.

    What we want is a way of saying that something might be true, without invoking all the implications that come with possible-world semantics. Saying that it is conceivable is a way of doing that. (Note that even though, in everyday usage, 'might be true' and 'possibly true' usually mean more-or-less the same thing, they are different when 'possible' is being used in the context of possible-world semantics.)
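
    A toy model may make the machinery explicit (the worlds and propositions here are invented for illustration; full modal logic adds accessibility relations, which this point does not need):

        worlds = {
            "w1": {"it_rains": True,  "two_plus_two_is_four": True},
            "w2": {"it_rains": False, "two_plus_two_is_four": True},
        }

        def possible(prop):   # true in at least one possible world
            return any(w[prop] for w in worlds.values())

        def necessary(prop):  # true in every possible world
            return all(w[prop] for w in worlds.values())

        print(possible("it_rains"), necessary("it_rains"))  # True False
        print(possible("two_plus_two_is_four"),
              necessary("two_plus_two_is_four"))             # True True

        # A mathematical truth, being necessary, holds in every world, so no
        # world is left in which its negation is 'possible' - which is just
        # the difficulty with undecided conjectures described above.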

    Chalmers also says p-zombies are 'logically possible', which looks like a strong statement, but it really just says that he is unaware of any facts that could disprove them. Given that he has defined p-zombies in a way that makes them immune from being ruled out by any sort of scientific investigation or discovery whatsoever, this is not saying much.
  • The Turing P-Zombie

    Thanks for saying that. It is easy to get carried away when defending a point of view. I do, and I used to do it a lot more; I have to make a conscious effort to back away.
  • The Turing P-Zombie

    I think we have reached the point that we can agree to differ over whether or not there is something special about understanding, because we are approaching the question from different perspectives.

    With regard to conceivability: It would be a contradiction to say that 'the Collatz Conjecture is true AND the Collatz Conjecture is false', but 'It is conceivable that the Collatz Conjecture is true AND it is conceivable that the Collatz Conjecture is false' is not the same, at least formally.

    To see this, consider sentences of the form 'It is P that the Collatz Conjecture is true AND it is P that the Collatz Conjecture is false'. Substituting 'true' for P leads to a contradiction, but substituting 'uncertain' does not. Without more precise semantics for 'conceivable', we cannot say that we get a contradiction when we substitute it for P (in the Collatz conjecture argument, I avoided giving a full definition of 'conceivable' by saying that everything that has been conceived of is a subset of everything conceivable. You can read 'conceived of' as 'thought of'.)

    Chalmers' p-zombie argument is entirely dependent on taking the step from p-zombies being conceivable to being possible, so what he intends these two words to mean, and the relationship between them, is of critical importance. If they have the same meaning, then he is simply asserting that p-zombies are possible, without offering any argument for that claim; to put it another way, he would merely be inviting us to share his belief, without there being any risk of us falling into a contradiction if we decline to do so.

    Chalmers, therefore, is walking a narrow path: his definition of 'conceivable' has to be distinct from 'possible', but not so distinct that he needs additional assumptions to get from the former to the latter, and especially not any contingent assumptions, which could be false as a matter of fact.

    I know one thing you are thinking right now: So what is Chalmers' definition of 'conceivable'? I am not certain, and I don't think there is an easy answer; the first place to look would be his paper Does Conceivability Entail Possibility?, though it is not an easy read. For what it is worth, my impression of the paper was that he only says it does so in those cases where there are other, independent, reasons for saying that the conceivable thing is possible - which amounts to saying "no, conceivability by itself does not entail possibility", and therefore his 'argument' for the possibility of p-zombies is merely an unargued-for belief.
  • The Turing P-Zombie

    I never claimed understanding as simple...TheMadFool

    I don't know why people make such a big deal of understanding - it's very simple.TheMadFool
    To cut to the chase, understanding the words "trees" and "water" is simply a process of connecting a specific set of sensory and mental data to these words.TheMadFool
    ...and so on. These are not 'gotcha' quotes taken out of context; the alleged simplicity of understanding is a big part of your claim that there is nothing special about it.

    ...I said it's computable.TheMadFool
    But the issue is not whether it is computable, as I have repeatedly had to remind you. Do you not remember this?
    In other words, it's implied, you feel understanding is uncomputable i.e. there is "something special" about it and for that reason is beyond a computer's ability.
    — TheMadFool

    Absolutely not. As you are all for rigor where you think it helps your case, show us your argument from "there's something special about understanding" to "understanding is uncomputable."
    A Raybould

    I am quite willing to believe that initially, you may have merely misunderstood what I meant, by drawing an unjustified conclusion such as the one above, but to continue as if this is the issue in contention, after having been repeatedly corrected on the matter, is another example of trollish behavior, and I will continue to call you on it wherever I see it, whether it is in response to me or someone else (in fact, if it were not for this aspect of your replies, I would drop the issue as being merely a misunderstanding and a difference of opinion.)

    1)word-referent matching and 2) pattern recognition, as far as I can tell, aren't vague at all.TheMadFool
    For one thing, they are vague, when considered as an explanation of understanding, in that they lack the specificity needed for it to be clear that anything having just those two capabilities would necessarily understand, say, common-sense physics or Winograd schemas. I am willing to believe that a machine capable of understanding these things could be described as having these capabilities, but I am also pretty sure that many machines, including extant AIs such as GPT-3, could also be so described, while lacking this understanding. If so, then this description lacks the specificity to explain the difference between machines that could and those that cannot understand these things.

      
    Are you implying the meanings of conceivable and possible are based off of the Collatz conjecture?TheMadFool

    No - I should have made it clear that the Collatz conjecture is just something for which neither a proof nor a refutation has been found so far; any other formal conjecture would do as well in its place. The essence is that there are two conceivable things here, and we know that only one of them is possible (even though we don't know which one), so the other (whichever one it is) is conceivable but not possible.
  • The Turing P-Zombie

    I have noticed a pattern here: you will post a claim, I will respond, then you will raise a different issue as if you had no counter-argument. A post or two later, however, the first issue will rise again, zombie-like, as if it had never been discussed before.

    For convenience, I will list these recurring arguments and my responses. That way, you can make a comment that merely states the number of the argument du jour, and I can reply by picking a corresponding response ID and just posting that. It will make things so much easier!


    A1: Understanding is simple, because understanding is computable.

    R1: Being computable does not necessarily entail simplicity. If this were the case, the whole of AI would be simple, the ABC conjecture would have a simple proof (if there really is one), etc.


    A2: Understanding is simple, because understanding is just a matter of connecting symbols to referents and recognizing patterns.

    R2.1: This is too vague to establish simplicity. It is so vague that you could make the same claim for any aspect of AI, or AI as a whole (like the people who dismiss current AI as "just database lookup"), but if it is that simple, how come there are still outstanding problems?

    R2.2: Regardless of what definition you put forward, the claim that it is simple to implement is inconsistent with the fact that current AI has ongoing difficulties with, for example, understanding common-sense physics.


    A3: Understanding is simple, as shown by this simple example of what I understand from the word 'water'.

    R3.1: You cannot establish simplicity through only simple examples, unless there are only simple examples. As it happens, there are difficult examples here, such as the aforementioned difficulty with understanding common-sense physics.

    R3.2: If you could establish simplicity through simple examples, then the whole of mathematics would be simple, as established by the fact that you can teach five-year-olds to add two numbers.


    Did I miss any? Your help in completing the list would be appreciated!

    So, according to this list, your latest post is A2 A3, to which I reply R2.2 R3.1 (I have some flexibility here.)


    As for the definitions of "conceivable" and "possible", I'd like to see them in a familiar format please, like in a dictionary.TheMadFool

    We have already been around this define / what's-the-difference? / define loop once before, and as I made clear, I have no intention of going round again until you address the what's-the-difference example.

    If you want to get into definitions, it's your turn to offer some, so how about if you proffer definitions which make the Collatz conjecture example/argument invalid or unsound? (Or, if you find that infeasible, you could simply say which premise or conclusion you first disagree with, and we can proceed from there.)
  • The Turing P-Zombie
    In other words, it's implied, you feel understanding is uncomputable i.e. there is "something special" about it and for that reason is beyond a computer's ability.TheMadFool

    Absolutely not. As you are all for rigor where you think it helps your case, show us your argument from "there's something special about understanding" to "understanding is uncomputable."

    Hence, I maintain that thinking/understanding is, for sure, computable.TheMadFool

    As I have pointed out multiple times, that is not the issue in question. Here, you are just making another attempt to change the subject, perhaps because you have belatedly realized that you cannot sustain your original position? Until you have completed the above task, stop attempting to attribute to me straw-man views that I do not hold and have not advocated.

    obscurum per obscuriusTheMadFool

    Now you are just trolling, and using Latin does not alter that fact. Here we have a straightforward argument that you apparently don't agree with, but for which you cannot find a response.

    Define the words "conceivable" and "possible" like a dictionary does.TheMadFool

    I see you are reverting to bot-like behavior, as outlined in my previous post. We have been round this loop before. I see, from other conversations, that you frequently use demands for definitions to browbeat other people when you have run out of arguments, to take the discussion in a different direction... Well, it won't work here: I am not going to follow you in another run around the rabbit-warren until you have addressed the specific argument here.
  • The Turing P-Zombie

    First, let me make one thing clear (once again): The issue is not whether understanding is uncomputable, and if you think I have said so, you are either misunderstanding something I wrote, or drawing an unwarranted conclusion. The issue here is your insistence that there is nothing special about understanding and that it is a simple problem for AI.

    I have already given you a working definition that you chose to ignore. Ignoring me is one thing, but if, instead, you were to look at what real philosophers are thinking about the matter, you would see that, though it is a work in progress, at least one thing is clear: there is much more to it than you suppose.

    We can, however, discuss this matter in a way that does not depend on a precise definition. If, as you say, having AIs understand things is simple, then how come the creators of one of the most advanced AI programs currently written acknowledge that understanding common-sense physics, for one thing, is still a problem? Here we have a simple empirical fact that really needs to be explained away before we can accept that understanding (regardless of how you choose to define it) actually is simple - yet many posts have gone by without you doing so.

    Of course, anyone reading your 'explanation' of how to do machine understanding will have a problem implementing it, because it is so utterly vague. It most reminds me of many of the dismissive posts and letters-to-the-editor written after IBM's Watson's success in the Jeopardy contest: "it's just database lookup" was typical of comments by ignoramuses who had no idea of how it worked and by how much it transcended "just" looking up things in a database.


    If they're different then it's possible that conceivable but not possible and possible but not conceivable. Please provide examples of both scenarios for my benefit. Thanks.TheMadFool

    When I read this, I got the distinct feeling that I was dealing with a bot, which would be quite embarrassing for me, given the original topic of this thread! Things that tend to give away a bot include blatant non-sequiturs, a lack of substance, a tendency to lose the thread, and repetition of errors. You asked essentially the same question as this one here (complete with the same basic error in propositional logic) a few posts back, but when I provided just such an example (the same one as I had given more than once before) you ignored it and went off in a different direction, only to return to the same question now.

    I am tempted to just quote my reply from then, but I will spell it out more formally, so you can reference the first part you don't agree with:

    • P1 Anything that has been conceived of is conceivable.
    • P2 I have conceived of the proposition 'The Collatz conjecture is true.'
    • L1 'The Collatz conjecture is true' is conceivable. (P1, P2)
    • P3 I have conceived of the proposition 'The Collatz conjecture is false.'
    • L2  'The Collatz conjecture is false' is conceivable. (P1, P3)
    • P4 Either the Collatz conjecture is true, or it is false; it cannot be both, and there are no other alternatives.
    • L3 If the Collatz conjecture is true, then the conceivable proposition 'The Collatz conjecture is false' does not state a possibility. (L2, P4)
    • L4 If the Collatz conjecture is false, then the conceivable proposition 'The Collatz conjecture is true' does not state a possibility. (L1, P4)
    • C1 There is something that is conceivable but not possible. (L3, L4)
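
    For what it is worth, the case analysis above can also be checked mechanically. This adds nothing to the argument; it merely makes the bookkeeping of P4 through C1 explicit:

        conceivable = {"Collatz is true", "Collatz is false"}  # per L1 and L2

        for collatz_is_true in (True, False):  # P4: exactly two exclusive cases
            possible = ({"Collatz is true"} if collatz_is_true
                        else {"Collatz is false"})
            not_possible = conceivable - possible  # L3 / L4
            assert len(not_possible) == 1  # in each case, exactly one conceivable
                                           # proposition is not possible (C1)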
  • The Turing P-Zombie

    People who know me on this board know that I'm perfectly capable of getting into the mud.fishfry

    I will treat that comment with all the respect it deserves.

    A TM is not a physical device. It's an abstract mathematical construction...fishfry

    Regardless, the question I asked a couple of posts ago applies either way.

    ...A computationfishfry
    ...but it is not the computation that the abstract machine is computing. I covered that in yesterday's post.
    Do you think that in Bostrom's simulated universes, it's TMs all the way down? Clearly not, as his premises don't work in such a scenario - there's a physical grounding to whatever stack of simulations he is envisioning.
  • The Turing P-Zombie
    I couldn't have been more clear.fishfry

    I rather suspect that's true, unfortunately.
  • The Turing P-Zombie

    What part of 'a computation is what a Turing machine does, not what it is' do you not understand? At least until we sort that out, I am not going to read any more of this jumble.
  • The Turing P-Zombie

    Even in Bostrom's simulation argument, neither brains nor minds are TMs: in that argument, I (or, rather, what I perceive as myself) is a process (a computation being performed), and what I perceive as being the rest of you is just data in that process. To confuse a process (in either the computational sense here, or more generally) with the medium performing the process is like saying "a flight to Miami is an airplane." A computation is distinct from the entity doing the computation (even if the latter is a simulation - i.e. is itself a computation - they are different computations (and even when a computation is a simulation of itself, they proceed at different rates in unending recursion.))

    I recognize that this locution is fairly common - for example, we find Searle writing "The question is, 'Is the brain a digital computer?' And for the purposes of this discussion I am taking that question as equivalent to 'Are brain processes computational?'" - but, as this quote clearly shows, this is just a manner of speaking, and IMHO it is best avoided, as it tends to lead to confusion (as demonstrated in this thread) and can prime the mind to overlook certain issues in the underlying question (for example, if you assume that the brain is a TM, it is unlikely that you will see what Chalmers is trying to say about p-zombies.) To me, Searle's first version of his question is little more than what we now call click-bait.
  • The Turing P-Zombie

    There is no point in discussing your own private definition of 'understanding' - no-one can seriously doubt that computers are capable of performing dictionary look-up or navigating a predefined network of lexical relationships; even current AI can do much more than just that.

    We can make no judgment, however, of whether AI is performing at human-like levels by only looking at simple examples, and the fact remains that AI currently has problems with certain more demanding cognitive tasks, such as with "common-sense physics" (as I mentioned previously, that is not just my opinion, it is a quote from those who are actually doing the work.) You have given no plausible explanation for how your concept of understanding, and of how it can easily be achieved, solves this problem, and in your only attempt to explain away why, if it is so easy, it remains an acknowledged problem in actual AI research, you implied that the whole AI community has consistently failed to see what is obvious to you.

    There is no reasonable doubt that AI currently has a problem with something here; I just don't know what it is called in your personal lexicon.

    Personal lexicons come up again in the issue of 'conceivable' vs. 'possible', where the definition I attempted of 'conceivable' apparently doesn't match yours. There is no point in getting into a "you say, I say" argument, but we don't have to: it is a straightforward fact that the distinction between 'conceivable' and 'possible' is widely accepted among philosophers and is central to Chalmers' p-zombie argument. I will grant that it is conceivable, and even possible, that you are right and they are all wrong, but I don't think it is probable.

    You would be more convincing if you could explain where the example I gave earlier, using the current status of the Collatz conjecture, goes wrong.
  • The Turing P-Zombie

    I am replacing my original reply because I do not think the nitpicking style that this conversation has fallen into is helpful.

    From your explanations and 'water' question in your latest reply, it seems increasingly clear to me that we have very different ideas of what understanding is. For you, it seems to be something such that, if a person memorized a dictionary, they would understand everything that is defined in it. For me, it is partly an ability to find the significant, implicit connections between the things you know, and there is also a counterfactual aspect to it: seeing the consequences if things were different, and seeing what needs to change in order to get a desired result.

    Given these different conceptions, it is not surprising that you might think it is an easy problem, while I see significant difficulties. I will not repeat those difficulties here, as I have already covered them in previous posts.

    As for what I believe, no extant computer(+program) can perform human-like understanding, but I expect some future computer could do so.

    With regard to conceivability versus possibility, I gave my working definitions in my previous post, though not spelled out in 'dictionary style.' For completeness, here are the stripped-down versions:

    Conceivable: Anything that can be stated as a proposition is conceivable.

    Possible: In the context of modal logic, which is the context of Chalmers' argument, something is possible if and only if it can be stated as a proposition that is true in some possible world.

    I do not think reducing them to bare definitions is very helpful, and perhaps, by having done so, I can persuade you of that. I urge you to take another look at the Collatz conjecture example from before.
  • The Turing P-Zombie
    Computers are symbol manipulators and that means whatever can be symbolized, is within the reach of a computer.TheMadFool

    "Within the reach" avoids precision where precision is needed. What do you mean, here?


    If you believe there's more to understanding than symbol manipulation...TheMadFool

    Whether it is symbol manipulation is beside the point. What's at issue here is my statement that "[Turing-like] tests could be further improved by focusing on what a machine understands, rather than what it knows" and your reply that you don't see anything special in understanding. Being symbol manipulation does not automatically make it simple, and your explanations, which invoke symbol manipulation without showing what sort of manipulation, are just part of the reason for thinking that it is not.


    If you believe there's more to understanding than symbol manipulation...TheMadFool

    That is a view I have not stated and do not hold. My position has consistently been that understanding is not a simple issue and that it remains a significant obstacle for AI to overcome. I have also taken the position that current logic solvers are not sufficient to give a machine the ability to understand the world, which should not be mistaken for a claim that no form of symbol manipulation could work. To be clear, my position on consciousness is that I suppose that a digital computer could simulate a brain, and if it did, it would have a mind like that of the brain being simulated.


    Understanding is simply a match-the-following exercise, something a computer can easily accomplish.TheMadFool

    Please expand on 'match-the-following', as I cannot imagine any interpretation of that phrase that would lead to a computer being able to understand anything to the point where it would perform reasonably well on "common-sense physics" problems (in fact, perhaps you could work through the "if you put cheese in a refrigerator, will it melt?" example?)


    How do you think machine translations work?TheMadFool

    You have taken hold of the wrong end of the stick here. I was replying to your question "Is there a word with a referent that's impossible to be translated into computer-speak?" by pointing out that it is irrelevant to the issue because translation does not create or add meaning. In turn, it is also irrelevant to this point whether the translation is done by humans or machines: neither of them create or add meaning, which is delivered in the original text.


    Ask yourself the question: what exactly does understanding semantics mean if a machine, allegedly incapable of semantics, can do as good a job as a translator of languages?TheMadFool

    You are barking up the wrong tree here, precisely because translation does not modify the semantics of its input. To address the matter at hand, you would need an example that does demand understanding. We have that, at least to a small extent, in the common-sense physics questions discussed in the paper I linked to, and even here the performance of current AI is weak (note that this is not my assessment or that of critics; it is from the team which developed the program.) You have avoided addressing this empirical evidence against your claim that machine understanding is simple, until...

    I've heard that even the very best expert can and do make mistakes.TheMadFool

    Really? Do you understand that, for this excuse to work, it would not take just one or two experts to make a few mistakes; it would require the entire community to be mistaken all the time, never seeing the simple solution that you claim to have but have not yet explained?


    Kindly furnish the definitions of "conceivable" and "possible". I'd like to see how they differ, if you don't mind.TheMadFool

    At last! Back to the main issue. I will start by quoting what I wrote previously:

    In everyday usage, the words might sometimes be considered synonymous, but in the contexts of metaphysics and modal logic, which are the contexts in which the p-zombie argument is made, 'possible' has a specific and distinct meaning. As the example shows, there's an aspect of 'so far as we know' to conceptions, which is supposed to be resolved in moving to metaphysical possibility - when we say, in the context of modal logic, that there is a possible world in which X is true, we are not just saying that we suppose there might be such a possible world. We should either assert it as an axiom or deduce it from our axioms, and if Chalmers had done the former, physicalists would simply have said that the burden was on him to justify that belief (it gets a bit more complicated when we make a counterfactual premise in order to prove by contradiction, but that is not an issue here.)

    If the two words were synonymous, Chalmers would not have spent any time in making the distinction and in attempting to get from the former to the latter, and his opponents would not be challenging that step.
    A Raybould

    To expand on that, one could hold that any sentence in propositional form is conceivable, as it is conceived of merely by being expressed (some people might exclude propositions that are false a priori, but a difficulty with that is that we don't always (or even often) know whether that is the case.)

    In the context of modal arguments, of which the p-zombie argument is one, for the sentence to be possible, it must be true in a possible world. In modal logic, if you want a claim that something is possible to be accepted by other people, you either have to get them to accept it as an axiom, or you must derive it from axioms they have accepted.

    I am not sure if the above is going to help, because the debate over whether Chalmers can go from conceivability to possibility is, in part, a debate over what, exactly, people have accepted when they accept the conceivability of p-zombies. What seems clear, however, is that neither side is prepared to say that they are the same.

    By the way, your positions seem to be generally physicalist, except that you are troubled by p-zombies, which are intended to be anti-physicalist. AFAIK, this is quite an unusual combination of views.
  • The Turing P-Zombie


    To me, understanding is just a semantics game that's structured with syntax.TheMadFool

    I have no idea what that means. I hope that it means more than "understanding is semantics with syntax", which is, at best, a trite observation that does not explain anything.

    Searle says that syntax cannot give rise to semantics, and claims this to be the lesson of his "Chinese Room" paper. I don't agree, but I don't see the relationship as simple, either.


    Is there a word with a referent that's impossible to be translated into computer-speak?TheMadFool

    This is beside the point, as translation does not produce meaning, whether it is into "computer-speak" or anything else. Translation sometimes requires understanding, and it is specifically those cases where current machine translation tends to break down.

    You have not, so far, addressed a point that is relevant here: the problem current AI has with basic, common-sense understanding is not in solving logic problems, but in formulating the right problem in the first place.

    If you really think you have solved the problem of what it takes to understand something, you should publish (preferably in a peer-refereed journal), as this would be quite a significant advance in the study of the mind. At the very least, perhaps you could address an issue that I have raised twice now: if, as you say, there's nothing special to understanding, and semantics is just associating words to their referents in a computer's memory, how come AI is having a problem with understanding, as is stated in the paper I linked to? Do you think all AI researchers are incompetent?


    You talk of "combinatorial complexity" and the way you speak of Einstein's work suggests to me that you think E=mc^2 to be nothing short of a miracle.TheMadFool

    Well I don't, so I think I can skip your arguments against it - though not without skimming them to see if you made any point that stands on its own. In doing so, I see that you put quotes around combinatorial complexity, as if you thought it was beside the point, but it is very much to the point that humans achieve results that would be utterly infeasible if the mind worked like current theorem solving programs.


    If conceivability and possibility are different then the following are possible and I'd like some examples of each:

    1. There's something conceivable that's impossible

    2. There's something possible that's inconceivable
    TheMadFool

    They may be possible, but it is certainly not necessary that there must be something possible that's inconceivable - and if there is, then neither I, you, nor anyone else is going to be able to say what it is. On the other hand, in mathematics, there are non-constructive proofs that show something is so without being able to give any examples, and it seems conceivable to me that in some of these cases, no example ever could be found. If this is so, then whether these things should be regarded as inconceivable or not strikes me as a rather subtle semantic issue.

    I have twice given you an example of the former: If the Collatz conjecture is true, then that it is false is conceivable (at least until a proof is found) but not possible, and vice-versa. It has to be either one or the other.

    By the way, this example is a pretty straightforward combination of syntax, semantics and a little logic, so how do you account for your difficulty in understanding it?
  • Materialism and consciousness

    This is because we're dealing with a philosophical argument, not a scientific hypothesis per se...Wayfarer

    We may disagree over whether minds have purely physical causes, but it would be hard, I think, to deny that they have physical effects, and this alone puts them in the domain of scientific study - not that I think these ontological / "proper domain" arguments are useful in cases like this, anyway: ontology bends to fit knowledge, not the other way round.

    ...I’m not saying that ‘reason is circular’ in any general sense. I’m saying there is circular reasoning implied in materialist theories of mind, in particular, which claim that mind (reasoning, thinking) can be understood in physico-chemical terms (as per Armstrong).Wayfarer

    Firstly, note that if this circularity of minds studying minds dooms such study to failure, then, despite your assertion to the contrary, the problem applies to all study of the mind - scientific, philosophical, whatever (actually, even more so in the case of philosophy: neuroscience is minds studying brains, while the philosophy of mind is nominally minds studying minds.) If it is a problem - and I have seen no other argument that it is, beyond the claim that this circularity somehow means that it must be - then the study of the mind will either run into insurmountable problems, or run on interminably without delivering results. This may be so, but your statements do not show that it must be so.

    Self-referentiality in logic is indeed tricky, but it can be dealt with when handled with care - or even used creatively to expand our understanding, as Gödel did. Russell's barber paradox was a warning shot across Frege's bows, and what your circularity claim is missing is that sort of argument, showing the problem that this circularity causes.
  • The Turing P-Zombie

    How do you think a human processes this question?TheMadFool

    A person who does not just know the answer might begin by asking herself questions like "what does it mean for cheese to melt?" "what causes it to do so?" "what does a refrigerator do?" and come to realize that the key to answering the question posed may be reached through the answers to two subsidiary questions: what is the likely state of the cheese initially, and how is its temperature likely to change after being put into a refrigerator?

    At this point, I can imagine you thinking something like "that's just a deductive logic problem", and certainly, if you formalized it as such, any basic solver program would find the answer easily. The part that is difficult for AI, however, is coming up with the problem to be solved in the first place. Judging by the performance of GPT-3, it would likely give good answers to questions like "what causes melting" and "what is a refrigerator?", but it is unable to put it all together to reach an answer to the original question.
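
    To show just how trivial the deduction is once the problem has been posed, here is a toy forward-chaining sketch (the facts and rules are my own encoding; producing them from the English question is the part that defeats current AI):

        facts = {"cheese is in the fridge", "the fridge is cold",
                 "things melt only when hot"}
        rules = [
            ({"cheese is in the fridge", "the fridge is cold"},
             "the cheese is cold"),
            ({"the cheese is cold", "things melt only when hot"},
             "the cheese will not melt"),
        ]

        # Naive forward chaining: apply rules until no new facts appear.
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True

        print("the cheese will not melt" in facts)  # True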

    It gets more interesting when we consider a slightly more difficult problem: for "cheese", substitute the name of a cheese that the subject has never heard of (there are some candidates here). There is a good chance that she will still come up with the right answer, even if she does not suspect that the object is a form of cheese, by applying suitable general principles and some inductive thinking. Current AI, on the other hand, will likely be flummoxed.


    Is it possible to get to E = mc^2 without logic?TheMadFool

    That is beside the point. To think that the use of logic in getting to E = mc^2 somehow implies that, once you can get a machine to do logic, there's "nothing special" in getting it to understand things, is, ironically, a failure to understand the role (and limits) of logic in understanding things.

    Ultimately, you are arguing against the straightforward empirical fact that current AI has trouble understanding the information it has.


    Do you mean there's a method to insight? Or are insights just lucky guesses - random in nature and thus something computers are fully capable of?TheMadFool

    Neither of the above. There is a method to solving certain problems in formal logic, that does a breadth-first search through the tree of all possible derivations from the given axioms, but that is nothing like insight: for one thing, there is no semantic content to the formulae themselves. (One of the first successes in AI research, Logic Theorist, proved many of the early theorems from Principia Mathematica, and as doing so is considered a sign of intelligence in people, some thought that AI was close to being a solved problem. They were mistaken.)

    What I was thinking is this: if you formalized the whole of classical physics, and started a program such as the above on discovering what it could deduce, the chances that it would come up with E=mc^2 before the world comes to an end are beyond-astronomically small (even more importantly, such a program would not understand the importance of that particular derivation, but that is a separate issue.) The reason for this is the combinatorial complexity of the problem - the sheer number of possible derivations and how fast they grow at each step (even 3-SAT, a restricted form of the Boolean satisfiability problem, is NP-complete.)
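
    A back-of-the-envelope illustration of that growth (the branching factor is an invented, and quite conservative, number):

        branching = 10  # assumed derivation steps applicable at each point
        for depth in (10, 20, 50):
            print(f"depth {depth}: ~{branching ** depth:.0e} candidate derivations")

        # depth 50: ~1e+50 - a breadth-first search over derivations is
        # hopeless long before it reaches anything as deep as E=mc^2.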

    Actually, I have since realized that even this would not be successful in getting to E = mc^2: to get there, Einstein had to break some 'laws' of physics, treat them as approximations, and substitute more accurate alternatives that were still consistent with everything that had been empirically determined. That's not just logic at work.

    Lucky guessing has the same problem, and anyone dismissing Einstein's work as a lucky guess just does not understand what he did. There is something more to understanding than any of this, and the fact that we haven't nailed it down yet is precisely the point that I am making on this tangential issue of whether understanding remains a tough problem for AI.


    What's the difference between conceivable and possible?TheMadFool

    Consider the example I gave earlier: I can conceive of the Collatz conjecture being true and of it being false, but only one of these is possible. This situation exists because it is either true or it is false, but so far, no-one has found a proof either way.
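
    For anyone unfamiliar with the conjecture, this is the iteration it concerns; the sketch also shows why finite checking, however extensive, cannot settle the matter either way:

        def reaches_one(n, max_steps=10_000):
            for _ in range(max_steps):
                if n == 1:
                    return True
                n = n // 2 if n % 2 == 0 else 3 * n + 1
            return False  # inconclusive within the step budget

        # Every n tried to date complies, but no finite check can prove the
        # conjecture, and no proof or counterexample has been found.
        print(all(reaches_one(n) for n in range(1, 100_000)))  # True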

    In everyday usage, the words might sometimes be considered synonymous, but in the contexts of metaphysics and modal logic, which are the contexts in which the p-zombie argument is made, 'possible' has a specific and distinct meaning. As the example shows, there's an aspect of 'so far as we know' to conceptions, which is supposed to be resolved in moving to metaphysical possibility - when we say, in the context of modal logic, that there is a possible world in which X is true, we are not just saying that we suppose there might be such a possible world. We should either assert it as an axiom or deduce it from our axioms, and if Chalmers had done the former, physicalists would simply have said that the burden was on him to justify that belief (it gets a bit more complicated when we make a counterfactual premise in order to prove by contradiction, but that is not an issue here.)

    If the two words were synonymous, Chalmers would not have spent any time in making the distinction and in attempting to get from the former to the latter, and his opponents would not be challenging that step.
  • The Turing P-Zombie

    Indeed, that article is not by Chalmers; is that a problem? Is reading Archimedes' words the only way to understand his principle?

    If you want to read Chalmers' own words, he has written a book and a series of papers on the issue. As you did not bother to read my original link, I will not take the time to look up these references; you can find them yourself easily enough if you want to (and they may well be found in that linked article). I will warn you that you will find the papers easier to follow if you start by first reading the reference I gave you.

    I'd still like a simple answer to how a p-zombie differs from "a thing that is indistinguishable from a human but lacks self-awareness," such as a TM in a nice suit.fishfry

    That is a different question than the one you asked, and I replied to, earlier. The answer to this one is that a TM is always distinguishable from a human, because neither a human, nor just its brain, nor any other part of it, is a TM. A human mind can implement a TM, to a degree, by simulation (thinking through the steps and remembering the state), but this is beside the point here.

    If you had actually intended to ask "...indistinguishable from a human when interrogated over a teletype" (or by texting), that would be missing the point that p-zombies are supposed to be physically indistinguishable from humans (see the first paragraph in their Wikipedia entry), even when examined in the most thorough and intrusive way possible. This is a key element in Chalmers' argument against metaphysical physicalism.

    As a p-zombie is physically identical to a human (or a human brain, if we agree that no other organ is relevant), then it is made of cells that work in a very non-Turing, non-digital way. Chalmers believes he can show that there is a possible world identical to ours other than it being inhabited by p-zombies rather than humans, and therefore that the metaphysical doctrine of physicalism - that everything must necessarily be a manifestation of something physical - is false.

    Notice that there is no mention of AI or Turing machines here. P-zombies only enter the AI debate through additional speculation: If p-zombies are possible, then it is also possible that any machine (Turing or otherwise), no matter how much it might seem to be emulating a human, is at most emulating a p-zombie. As the concept of p-zombies is carefully constructed so as to be beyond scientific examination, such a claim may be impossible to disprove, but it is as vulnerable to Occam's razor as is any hypothesis invoking magic or the supernatural.
  • The Turing P-Zombie
    That solves the mystery of who or what we consider to be more "intelligent"?TheMadFool

    No, it is intended to be what you asked for, an alternative to the Turing test, and the purpose of that test is to figure out if a given computer+program is intelligent.

    Even you, presumably a person in the know about the truth of computer "intelligence", half-thought they were suited to a task humans have difficulty with.TheMadFool

    I don't see where you got that from. I am writing about a hypothetical future computer that at least looks like it might be intelligent, just as Turing was when he presented his test.

    I see nothing special in understanding. For the most part it involves formal logic, something computers can do much faster and much better.TheMadFool

    If that is so, then how come the most powerful and advanced language-learning program has a problem with "common-sense physics", such as "If I put cheese in a refrigerator, will it melt?"

    Consider Einstein's equation E = mc^2. A great many people know it, but only a tiny fraction of those understand how it arises from what was known of physics at the beginning of the 20th century. Einstein did not get there merely (or even mostly) by applying formal logic; he did so through a deep understanding of what that physics implied. A computer program, if lacking such insight, could not find its way to that result: for one thing, the problem is vastly too combinatorially complex to solve by exhaustive search, and for another, it would not understand that this formula, out of the huge number it had generated, was a particularly significant one.

    Nevertheless, this association between consciousness experienced in first person and some set of behaviors is not that of necessity (no deductive proof of it exists) but is that of probability (an inductive inference made from observation). Ergo, in my humble opinion, p-zombies are conceivable and possible to boot.TheMadFool

    For one thing, you seem to be making an argument that they are conceivable, but the controversial leap from conceivable to possible is not really argued for here; it is just asserted as if it followed automatically: "...and possible to boot."

    More interestingly, if I am following you here, you do consider it possible that other people are p-zombies. That is very interesting, because Chalmers hangs his argument against physicalism on the assumption that they are not, and I know of no counter-argument that challenges this view (even when Dennett says, apparently somewhat tongue-in-cheek, that "we are all p-zombies", I think his point is that he thinks Chalmers' distinction between p-zombies and us (the non-physical element that they lack) is illusory.)

    Having said that, I have three follow-up questions: firstly, if other people could be p-zombies, do you think that you are different, and if so, why? Secondly, if it is possible that other people are p-zombies, why would it matter that it would be possible for a p-zombie to pass the Turing test? Thirdly, if it is possible that other people are p-zombies, why did we evolve a highly-complex, physical state machine called the human brain? After all, if the p-zombie hypothesis is correct, our minds are independent of the physical brain. The most parsimonious hypothesis here seems to be that the p-zombie hypothesis is false, and our minds actually are a result of what our physical brains do.
  • The Turing P-Zombie
    I see p-zombies and computer programs as being very closely related. Perhaps you can educate me as to what I'm missing about p-zombies.fishfry

    Chalmers' canonical p-zombie argument is a metaphysical one that is not much concerned with computers or programs, even though they are often dragged into discussions of AI, often under the misapprehension that chatbots and such are examples of p-zombies. The argument is subtle and lengthy, but I think this is a good introduction.
  • The Turing P-Zombie
    What test do you propose? Any ideas?TheMadFool

    I once half-jokingly suggested that devising a test that we find convincing should be posed as an exercise for the AI. The only reason I said 'half-jokingly' is that it would have a high false-negative rate, as no human has yet completed that task to everyone's satisfaction!

    I do not think Turing, or anyone else until much later, anticipated how superficially convincing a chatbot could be, and how effectively a machine could fake the appearance of consciousness by correlating the syntactical aspects of a vast corpus of human communication. By limiting his original test to a restricted domain - gender roles and mores - Turing made his test unnecessarily defeasible by these means, and subsequent variations have extended the scope of questioning. These tests could be further improved by focusing on what a machine understands, rather than what it knows.

    While there is a methodological difficulty in coming up with a test that defeats all faking, this is not the same problem as p-zombies allegedly pose, as that takes the form of an unexplained metaphysical prohibition on AI ever being 'truly' conscious (by 'unexplained', I mean that, in Chalmers' argument, we merely have to conceive of p-zombies, without giving any thought to how they might be so.)

    What, according to you, is an "accurate" concept of consciousness?TheMadFool

    I don't know, any better than the next person, what consciousness is, and if anyone had come up with a generally-accepted, predictive, falsifiable explanation, we would no longer be interested in the sort of discussion we are having here! For what it's worth, I strongly suspect that, for example, theories linking consciousness to quantum effects in microtubules are inaccurate. More generally, I think that any argument insisting that physicalism must require a purely reductive explanation of conscious states in terms of brain states, without considering the possibility that the former may be emergent phenomena arising from the latter, is also inaccurate.

    Why is it incoherent?TheMadFool

    I am not saying that p-zombies are definitely an incoherent concept, though I suspect they are - that it will turn out that it is impossible to have something that appears to be as conscious as a human without it having internal states analogous to those of humans.

    Chalmers defends p-zombies as being "logically conceivable", but that is a very low bar - it merely means that the concept is not self-contradictory (as 'male vixen' is, to quote one of Chalmers' examples), and that we don't know of any fact that refutes it - but that is always at risk of being overturned by new evidence, as has happened to many other concepts that were once seen as logically conceivable, such as phlogiston (actually, some form of metaphysical phlogiston theory might still be logically conceivable, but no-one would take seriously an argument based on that.)

    How and where is Occam's razor applicable?TheMadFool

    Chalmers is looking forward to a time when neuroscience has a thorough understanding of how brains work, and is trying to say that no such explanation can be complete - that there is something non-physical or magical going on as well. He cannot say what that is or how it works, or offer any way for us to answer those questions, but he insists that it must be there. It is for exactly this sort of unfalsifiable claim that Occam's razor was invented (even though the concept of falsifiability was not explicitly recognized until centuries later!)
  • The Turing P-Zombie

    The difficulty with employing a method to detect consciousness is that such a method is completely behavior-dependent and that raises the specter of p-zombies, non-conscious beings that can't be distinguished from conscious beings behaviorally.TheMadFool

    Firstly, just to be clear, and as you say in your original question, p-zombies are imagined as not merely behaviorally indistinguishable from humans, but entirely physically indistinguishable (and therefore beyond scientific investigation - and if they could be studied philosophically, I don't think anyone, not even Chalmers, has explained how.)

    Secondly, I don't think the Turing test should be considered as the only or ultimate test for consciousness - it was merely Turing's first shot at such a test (and, given that, it has endured surprisingly well.) For the purpose of this discussion, we can use Turing's version to stand in for any such test, so long as we don't get hung up on details specific to its particular formulation.

    I assume that you think other people are conscious, but on what is your belief grounded? Is it because they behave in a way that appears conscious? Or, perhaps, is there an element of "they are human, like me, and I am conscious?"

    If you are going to throw out all behavioral evidence, in the case of AI, on account of p-zombies, then you would be inconsistent if you did not also throw it out in the case of other people. If you make an exception for other people because they are human, then you would literally be begging the question posed by Chalmers' 'hard problem of consciousness'. What does that leave? If you have another basis for believing other people are conscious, what is it and why would that not work for AIs? Suppose we find an enclave of Neanderthals, Homo Erectus, or space aliens - how would you judge if they are p-zombies?

    This, I suspect, is what Dennett is alluding to when he says "we are all p-zombies" - he sees no reason to believe that we have these extra-physical attributes that p-zombies would lack.

    Returning to your original question, you raise an interesting point: why should p-zombies not be considered conscious? After all, they were conceived explicitly to be indistinguishable from conscious entities. That they allegedly lack something that a philosopher says an entity must have in order to be conscious is not much of an argument; the philosopher might simply have an inaccurate concept of what consciousness is and requires.

    Putting that aside, there is a third option to be considered: that p-zombies are ultimately an incoherent concept. When we look at how strange, ineffable, unique and evidence-free a concept Chalmers had to come up with in order to defeat physicalism, how selective he had to be in what he chose to bless as conceivable in order to get there, and the highly doubtful leap he makes from conceivability to possibility (I can conceive of the Collatz conjecture being true and of it being false, but only one of these is possible), I am simply going to apply Occam's razor to p-zombies, at least until a better argument for their possibility comes along.
  • The Turing P-Zombie
    how would one know whether a computer is conscious in the sense we are?

    Do you know that I am conscious in the same way that you are? (or that any other person is, for that matter.) If so, then apply whatever method you used to come to that conclusion to a computer - and if that method depends on me being human and is not applicable to computers, then you would be begging the question.
  • The Turing P-Zombie
    There are a great many things that are unobservable, yet widely regarded as plausible, such as electrons, viruses, and, apparently, jealousy itself. One can, of course, take the position that it is possible that none of them are real, but that road, if taken consistently, leads only to solipsism. To invoke this line of argument over just some unobservables is not necessarily wrong (skepticism is an important trait) but it can also be tendentious, or an excuse for avoiding the issue. In particular, I regard it as tendentious, if not an outright inconsistency, to invoke zombie-conceivability arguments in the case of putative future AI but not in the case of people (or people other than oneself, if you are certain about your own case.)

    With regard to your specific claim that there can be behavior without internal state: certainly, but once you have observed more than a few behaviors that do seem to be dependent on internal state (e.g. learned behavior, or any behavior apparently using memories), then the possibility that none of the observed behaviors were actually state-dependent becomes beyond-astronomically improbable.
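
    To put a toy number on 'beyond-astronomically improbable' (the per-behavior chance and the observation count below are purely hypothetical, chosen only to show how fast the product shrinks):

    ```python
    # If each apparently state-dependent behaviour had, independently, some
    # chance q of being state-free mimicry, then the chance that *every* one
    # of n observed behaviours was mere mimicry is q**n, which collapses
    # geometrically as observations accumulate.

    q = 0.5    # hypothetical, generous chance any single behaviour is mimicry
    n = 200    # hypothetical number of state-dependent behaviours observed

    print(q ** n)  # ~6.2e-61 -- far beyond any reasonable threshold of doubt
    ```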
  • Can Consciousness be Simulated?
    Real philosophizing takes a lot of careful words!Douglas Alan

    Amen to that!

    In what I have read of the philosophy of mind, there does not seem to have been much consideration of fundamental physics, and that may well be appropriate, if the brain operates within the classical approximation. Even the 'quantum microtubule' stuff is mostly the work of a physician and a mathematical physicist. At least we know Tegmark's position on that, which is that decoherence ensures that microtubules function within the classical approximation. None of the participants in this debate appear to be dualists.

    Both Chalmers and Tegmark have appeared at at least one of Hameroff's 'Towards a Science of Consciousness' conferences; there may be transcripts or videos.

    Other than that, I do not know if there has been much consideration of biology, at least since it became apparent that there is no simple mapping between cognitive concepts and biological structure.
  • Can Consciousness be Simulated?

    Thanks for your extensive reply. I can understand your reluctance to have anything more to do with the knowledge argument!

    I have made one pass through the paper of Tegmark's that you linked to, but my math is not up to following much of it. Despite that, I have a few questions that I would pose to any proponent of it:

    • Could you make the same argument for music - that all you need is a notation and a score? But that would leave out the central role of performance, which is not just baggage.
    • In fig. 1, the top of the hierarchy is a hypothetical theory of everything, marked with a '?'. But what if, instead of turtles all the way down, it's '?' all the way up?
    • Tegmark makes much use of the set of real numbers in discussing the emergence of symmetry, but the Banach-Tarski paradox makes me wonder if the reals are a perfect match to physical reality.
    • In section 4B, he gets into the issue of initial conditions, which you need in addition to theories to explain the universe. He points out that physics has been pushing back the initial conditions, but again, what if it is initial conditions all the way up? From what little I understand, string theory is very dependent on initial conditions in order to explain our particular universe. If math can posit vastly more universes than obtain, is it perhaps the case that the "unreasonable effectiveness of mathematics" is because the universe 'picks out' just the bits that obtain, from an infinite smorgasbord?

    Not all physicalists are persuaded by Tegmark: Scott Aaronson, who appears to be a physicalist (see my first post in this thread), is one such.

    Ultimately, however, if Tegmark turns out to be right, my attitude will be "that's wild!" rather than "that can't be!"
  • Can Consciousness be Simulated?
    IF materialism is correct, AND brain states are the same as mental states THEN knowledge of brain states should entail knowledge of mental states.RogueAI

    I believe that there were some early theories claiming that brain states are the same as mental states (type identity theory, perhaps?) but I think they have been supplanted by the view that minds are emergent phenomena arising from the low-level activity of the brain. This sort of emergence is not a controversial or speculative idea, as we have plenty of examples of this sort of thing -- for example, in a neural network that picks out images containing cats, you will not find, in its individual hardware and software components (transistors and bytes, respectively), anything that recognizes cats. More simply, if you look at a sorting algorithm, you will not find, in its steps taken individually, anything that has a sorting property -- only the complete algorithm has that.
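
    As a minimal sketch of that last point (ordinary bubble sort, written out only to show where the sorting property lives - nothing here is specific to minds):

    ```python
    # No single step sorts anything: compare_and_swap only locally orders two
    # neighbours, and knows nothing about the list as a whole.

    def compare_and_swap(xs, i):
        if xs[i] > xs[i + 1]:
            xs[i], xs[i + 1] = xs[i + 1], xs[i]

    # The sorting property appears only at the level of the whole procedure.
    def bubble_sort(xs):
        for _ in range(len(xs)):
            for i in range(len(xs) - 1):
                compare_and_swap(xs, i)
        return xs

    print(bubble_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
    ```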
  • Can Consciousness be Simulated?

    The equivocation reply to the knowledge argument effectively begins with Churchland's "Knowing Qualia: a Reply to Jackson" in 1989, though both Horgan and Churchland himself raised the issue earlier. In this short paper (which is unaccountably overlooked, IMHO), Churchland straightforwardly demonstrates that the KA equivocates over the phrase "knows about": everything Mary learns while isolated must be propositional, while there are good biological reasons, he argues, for thinking that what she learns from seeing color is anything but that (part of that argument is that our trichromatic vision scheme is widely distributed across mammalia, including in animals that have no language ability; per Nagel, they presumably have a sense of what it is like to see colors, but evidently not by believing in certain propositions.)

    The most vigorous dualist response was Stanley and Williamson, "Knowing How" (2001), in which they used a linguistic argument to claim that all knowledge is propositional (actually, they had to admit a category of innate knowledge or know-how in order to avoid infinite regress: to have propositional knowledge of a fact, you need to know the corresponding proposition, but to know that...)

    FWIW, I think they make a good case that we often talk about know-how as if it were propositional (at least in English), but to me, such an approach is incapable of determining whether that is actually the case: it is as if Dawkins' phrase "the selfish gene" proved that genes are aware of how evolution works.

    That is moot, however, as we can go with the feature of Mary's pre-release studies that prompted Churchland to call them propositional: At this stage of the experiment, she can only learn those things that can be learned from reading a book, or following a lecture on her monochrome TV. The first place I saw this called 'discursively learnable' was in Torin Alter's "A Limited Defense of the Knowledge Argument" (1996 - yes, it predates S&W.) Alter's position was then (and I think is still now) that the KA makes a point, but it is purely epistemic, and has no metaphysical implications. Tim Crane, in "The Knowledge Argument is an Argument About Knowledge" (2019) comes to a similar conclusion.

    Two useful properties of this formulation of the equivocation reply are, firstly, that it is effective against vague "in a sense"-type claims about what Mary does and does not know, and, secondly, that it makes the dilemma faced by proponents of the knowledge argument very clear.

    I am much amused by the way Zombie Mary pits one of dualism's darlings against the other, but I think you have to pay for that fun by accepting p-zombies. I don't have time to go through the case against p-zombies right now, except to say that the most common objection, that Chalmers unjustifiably jumps from conceivability to possibility, is a sufficient reply, though probably not all that can be said against p-zombies.

    Finally, it is not clear to me how physicalism leads us inevitably to Max Tegmark's MUH - may I ask you to expand on that?
  • Can Consciousness be Simulated?

    The knowledge argument also has the problem of equivocation over the sort of knowledge that Mary gains: she can only gain discursively-learnable knowledge while she is isolated. If what she learns when she is released is not discursively learnable, then physicalism is not challenged; but if it is discursively learnable, then how can one explain Mary not already knowing it, without begging the question by requiring, as a premise, that it is non-physical discursively-learnable knowledge?
  • Can Consciousness be Simulated?

    And I have an opinion about your belief system: I think it's logically inconsistent.RogueAI

    Then it is surprising that, instead of offering a logical refutation of anything I have written, you have simply repeated, at great length, unsubstantiated claims that this or that is absurd.

    Actually, that is not entirely fair - you have also mastered the fundamentalists' trick of changing the subject when things get difficult, even to the point of stating some positions that you immediately disassociated yourself from when I challenged them.

    Do you have some good links supporting your position?RogueAI

    My position is not any less supported by external links than yours is. Actually, I am not sure what, in my posts, needs external support, as, after the first one, they have mostly been concerned with inconsistencies and non-sequiturs within your replies to me. You have already agreed that my three premises are reasonable, and I am not claiming anything more than that.
  • Can Consciousness be Simulated?
    It's impossible to know if other minds exist, of course. But I assume they do...RogueAI

    So, in your earlier post, you were arguing your position from a belief that not even you hold...
  • Can Consciousness be Simulated?
    Why, then, would you have any opinion at all about what rocks can and cannot do?

    I don't need to be a materialist to have an opinion about an absurdity contained within it.
    RogueAI

    But in your previous post, you wrote "I don't believe there are physical devices", so you have been expressing strong opinions about the capabilities of something that apparently does not exist in your universe. It is like having opinions about what Republicans believe, without actually believing that there are Republicans.
  • Can Consciousness be Simulated?

    The existence of conscious minds is the most surprising thing about this universe, I think.RogueAI

    According to what you wrote in reply to me an hour ago, you apparently think it likely that your conscious mind is the only thing in this universe...