How do you think a human processes this question? — TheMadFool
A person who does not just know the answer might begin by asking herself questions like "what does it mean for cheese to melt?", "what causes it to do so?", and "what does a refrigerator do?", and come to realize that the key to answering the question posed may be reached through the answers to two subsidiary questions: what is the likely state of the cheese initially, and how is its temperature likely to change after it is put into a refrigerator?
At this point, I can imagine you thinking something like "that's just a deductive logic problem", and certainly, if you formalized it as such, any basic solver program would find the answer easily. The part that is difficult for AI, however, is coming up with the problem to be solved in the first place. Judging by its performance so far, GPT-3 would likely give good answers to questions like "what causes melting?" and "what is a refrigerator?", but it seems unable to put the pieces together to reach an answer to the original question.
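To make "formalized it as such" concrete, here is a toy encoding in Python (the rule and fact names, and the melted-cheese-hardening reading of the question, are my own assumptions for illustration, not anything produced by GPT-3). The forward-chaining solver itself is trivial; all the intelligence went into writing the rules down:

```python
# Toy forward-chaining solver for the refrigerator question.
# The hard part for AI is producing this formalization, not solving it.
rules = [
    ({'cheese_is_melted', 'cheese_put_in_fridge'}, 'cheese_cools'),
    ({'cheese_is_melted', 'cheese_cools'}, 'cheese_solidifies'),
]
facts = {'cheese_is_melted', 'cheese_put_in_fridge'}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print('cheese_solidifies' in facts)  # True: the deduction itself is easy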
It gets more interesting when we consider a slightly more difficult problem: for "cheese", substitute the name of a cheese that the subject has never heard of (there are some candidates here). There is a good chance that she will still come up with the right answer, even if she does not suspect that the object is a form of cheese, by applying suitable general principles and some inductive thinking. Current AI, on the other hand, would likely be flummoxed.
Is it possible to get to E = mc^2 without logic? — TheMadFool
That is beside the point. To think that the use of logic in getting to E = mc^2 somehow implies that, once you can get a machine to do logic, there's "nothing special" in getting it to understand things, is, ironically, a failure to understand the role (and limits) of logic in understanding things.
Ultimately, you are arguing against the straightforward empirical fact that current AI has trouble understanding the information it has.
Do you mean there's a method to insight? Or are insights just lucky guesses - random in nature and thus something computers are fully capable of? — TheMadFool
Neither of the above. There is a method for solving certain problems in formal logic that does a breadth-first search through the tree of all possible derivations from the given axioms, but that is nothing like insight: for one thing, there is no semantic content to the formulae themselves. (One of the first successes in AI research, Logic Theorist, proved many of the early theorems of Principia Mathematica, and as doing so is considered a sign of intelligence in people, some thought that AI was close to being a solved problem. They were mistaken.)
What I was thinking is this: if you formalized the whole of classical physics and set a program such as the above to discovering what it could deduce, the chances that it would come up with E = mc^2 before the world comes to an end are beyond-astronomically small. (Even more importantly, such a program would not understand the importance of that particular derivation, but that is a separate issue.) The reason for this is the combinatorial complexity of the problem - the sheer number of possible derivations and how fast their number grows at each step (even 3-SAT, a restricted form of the Boolean satisfiability problem, is NP-complete).
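To give a feel for that growth, here is a small sketch (my own illustration, not a real theorem prover): just counting the distinct formulas one can build from three atoms with a single connective, the totals roughly square at each depth, and a blind derivation search has to contend with a space at least this badly behaved:

```python
# How fast the space of candidate formulas grows: three atoms, one
# connective ('->'), counted level by level.
atoms = {'p', 'q', 'r'}
levels = [set(atoms)]
for depth in range(1, 4):
    prev = set().union(*levels)
    new = {('->', a, b) for a in prev for b in prev} - prev
    levels.append(new)
    total = sum(len(level) for level in levels)
    print(f"depth {depth}: {len(new):>6} new formulas, {total:>6} total")
```

Three levels in, we are already past twenty thousand formulas (12, then 147, then 21612), and derivations multiply faster still, since each derivation is a sequence of such formulas.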
Actually, I have since realized that even this would not be successful in getting to E = mc^2: to get there, Einstein had to break some 'laws' of physics, treat them as approximations, and substitute more accurate alternatives that were still consistent with everything that had been empirically determined. That's not just logic at work.
Lucky guessing has the same problem, and anyone dismissing Einstein's work as a lucky guess just does not understand what he did. There is something more to understanding than any of this, and the fact that we haven't nailed it down yet is precisely the point that I am making on this tangential issue of whether understanding remains a tough problem for AI.
What's the difference between conceivable and possible? — TheMadFool
Consider the example I gave earlier: I can conceive of the Collatz conjecture being true and I can conceive of it being false, but only one of these is possible. This situation arises because the conjecture is either true or false, but so far no one has found a proof either way.
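For concreteness (a minimal sketch; the function name is mine), the conjecture says that iterating "halve if even, triple and add one if odd" always reaches 1. Checking any particular number is mechanical, but no amount of checking settles the general claim, which is why both truth values remain conceivable:

```python
def collatz_steps(n: int) -> int:
    """Steps for n to reach 1 under the Collatz map (assumes n >= 1)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Every number ever tried reaches 1 - evidence, but not a proof:
print({n: collatz_steps(n) for n in range(1, 11)})
```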
In everyday usage, the words might sometimes be considered synonymous, but in the contexts of metaphysics and modal logic, which are the contexts in which the p-zombie argument is made, 'possible' has a specific and distinct meaning. As the example shows, there is an aspect of 'so far as we know' to conceptions, which is supposed to be resolved in moving to metaphysical possibility: when we say, in the context of modal logic, that there is a possible world in which X is true, we are not just saying that we suppose there might be such a possible world. We must either assert it as an axiom or deduce it from our axioms, and if Chalmers had done the former, physicalists would simply have said that the burden was on him to justify that belief (it gets a bit more complicated when we make a counterfactual premise in order to prove by contradiction, but that is not an issue here).
If the two words were synonymous, Chalmers would not have spent any time making the distinction and attempting to get from the former to the latter, and his opponents would not be challenging that step.
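For what it is worth, the disputed step can be laid out schematically (my own rendering, not Chalmers' exact formulation):

(1) Zombies are conceivable. [premise, from the thought experiment]
(2) Whatever is conceivable is metaphysically possible. [the bridge principle]
(3) Zombies are possible. [from 1 and 2]
(4) If zombies are possible, physicalism is false.
(5) Physicalism is false. [from 3 and 4]

Premise (2) is exactly where 'conceivable' and 'possible' come apart, and it is the step his opponents attack.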