Conscious being = True AI = P-Zombie — TheMadFool

One possible conclusion from this equation is that p-zombies, as defined, cannot exist. — Olivier5
As for the definitions of "conceivable" and "possible", I'd like to see them in a familiar format, please, like in a dictionary. — TheMadFool
I have noticed a pattern here: you will post a claim, I will respond, then you will raise a different issue as if you had no counter-argument. A post or two later, however, the first issue will rise again, zombie-like, as if it had never been discussed before. — A Raybould
Being computable does not necessarily entail simplicity — A Raybould
This is too vague to establish simplicity — A Raybould
We have already been around this define / what's-the-difference? / define loop once before, and as I made clear, I have no intention of going round again until you address the what's-the-difference example.
If you want to get into definitions, it's your turn to offer some, so how about if you proffer definitions which make the Collatz conjecture example/argument invalid or unsound? (Or, if you find that infeasible, you could simply say which premise or conclusion you first disagree with, and we can proceed from there.) — A Raybould
I never claimed understanding as simple... — TheMadFool
I don't know why people make such a big deal of understanding - it's very simple. — TheMadFool
...and so on. These are not 'gotcha' quotes taken out of context; the alleged simplicity of understanding is a big part of your claim that there is nothing special about it. — A Raybould

To cut to the chase, understanding the words "trees" and "water" is simply a process of connecting a specific set of sensory and mental data to these words. — TheMadFool
But the issue is not whether it is computable, as I have repeatedly had to remind you. Do you not remember this?

...I said it's computable. — TheMadFool
In other words, it's implied that you feel understanding is uncomputable, i.e. there is "something special" about it, and for that reason it is beyond a computer's ability.
— TheMadFool
Absolutely not. As you are all for rigor where you think it helps your case, show us your argument from "there's something special about understanding" to "understanding is uncomputable." — A Raybould
For one thing, they are vague, when considered as an explanation of understanding, in that they lack the specificity needed for it to be clear that anything having just those two capabilities would necessarily understand, say, common-sense physics or Winograd schema. I am willing to believe that a machine capable of understanding these things could be described as having these capabilities, but I am also pretty sure that many machines, including extant AIs such as GPT-3, could also be so described, while lacking this understanding. If so, then this description lacks the specificity to explain the difference between machines that could and those that cannot understand these things. — A Raybould

1) word-referent matching and 2) pattern recognition, as far as I can tell, aren't vague at all. — TheMadFool
Are you implying the meanings of conceivable and possible are based on the Collatz conjecture? — TheMadFool
I never claimed understanding as simple...
— TheMadFool
I don't know why people make such a big deal of understanding - it's very simple.
— TheMadFool
To cut to the chase, understanding the words "trees" and "water" is simply a process of connecting a specific set of sensory and mental data to these words.
— TheMadFool
...and so on. These are not 'gotcha' quotes taken out of context; the alleged simplicity of understanding is a big part of your claim that there is nothing special about it. — A Raybould
For one thing, they are vague, when considered as an explanation of understanding, in that they lack the specificity needed for it to be clear that anything having just those two capabilities would necessarily understand, say, common-sense physics or Winograd schema. I am willing to believe that a machine capable of understanding these things could be described as having these capabilities, but I am also pretty sure that many machines, including extant AIs such as GPT-3, could also be so described, while lacking this understanding. If so, then this description lacks the specificity to explain the difference between machines that could and those that cannot understand these things. — A Raybould
No - I should have made it clear that the Collatz conjecture is just something for which neither a proof nor a refutation has been found so far; any other formal conjecture would do as well in its place. The essence is that there are two conceivable things here, and we know that only one of them is possible (even though we don't know which one), so the other (whichever one it is) is conceivable but not possible. — A Raybould
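To make the example concrete: the Collatz iteration is trivially computable for any particular starting number, yet the universal claim remains open. A minimal Python sketch (illustrative only; the function name is mine, not from the discussion):

```python
def collatz_steps(n: int) -> int:
    """Count Collatz steps until n reaches 1.

    The map: n -> n/2 if n is even, else n -> 3n + 1.
    The (unproven) conjecture is that this loop terminates
    for every starting integer n >= 1.
    """
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(6))   # 8 steps: 6 -> 3 -> 10 -> 5 -> 16 -> 8 -> 4 -> 2 -> 1
print(collatz_steps(27))  # 111 steps, despite the small start
```

Any single run of this function is a routine computation; it is only the claim "it terminates for all n" that is conceivable-but-not-known, which is the distinction being drawn above.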
Computers as we know them are not aware of the world around them, and that means they cannot really understand anything, because they don't know that there exist referents out there for words like "trees" or "water".

Understanding, as it appears to be, is probably a complex phenomenon, nevertheless computable. That's what I mean when I said "I don't know why people make such a big deal of understanding - it's very simple." Very simple in the sense of being reducible to logic, something computers are capable of. — TheMadFool
And hence, it would perhaps be provable that consciousness is unprovable. — ssu
No, just the basic mathematics in Turing's answer to the Entscheidungsproblem with his Turing Machine argument. The above was talking about mathematical conjectures.

I think you are alluding to the Lucas-Penrose argument against the possibility of there being algorithms that produce minds. — A Raybould
Let me try to explain.

Specifically, I am not sure what it would mean to say that consciousness is provable - what is the proposition that one would be proving? — A Raybould
Wow.

As far as I can tell, Nagel made a big deal of consciousness being just too subjective to be objectivity-friendly. Since proofs are objective in character, it appears that consciousness can't be proven to another for that reason. Nonetheless, to a person, privately, consciousness is as real as real can get. — TheMadFool
Sure, but it's easier than defining consciousness and what is conscious and what isn't. Doctors have some kind of definition that they apply to the issue.

It's not entirely straightforward to come up with a definition of what's alive and what's dead; there is some disagreement over whether viruses are truly living, and defining the exact point of death of a complex organism is not a simple matter. — A Raybould
Not every, but true-but-unprovable mathematical objects could be useful, at least in explaining what the problem we face is.

Even if we accept that mathematics models reality extremely well, it does not follow that every mathematical entity models some aspect of reality. — A Raybould
Ok, I'll use negative self-reference and Cantor's diagonalization from math/set theory to make an example.

I am not convinced that you simply cannot make an objective model about something that is inherently subjective. — A Raybould
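The diagonalization mentioned here can be sketched concretely. Below is my own finite toy version in Python (Cantor's actual argument concerns infinite sequences): flipping the diagonal of any square listing of 0/1 rows produces a row that differs from every entry in the list.

```python
def diagonal_complement(rows):
    """Return a 0/1 row that differs from rows[i] at position i,
    so it cannot equal any row in the (square) list."""
    return [1 - rows[i][i] for i in range(len(rows))]

# A toy "enumeration" of binary sequences, truncated to length 4.
rows = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
d = diagonal_complement(rows)
print(d)  # [1, 0, 1, 0]
assert all(d != r for r in rows)  # the diagonal row escapes the list
```

However the enumeration is chosen, the constructed row disagrees with the i-th row in the i-th place, which is the negative self-reference at the heart of the argument.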
Not every, but true-but-unprovable mathematical objects could be useful, at least in explaining what the problem we face is. — ssu
I think you are trying to show that 'possible' and 'conceivable' are synonyms. If so, then fair enough, but you should realize that, as Chalmers' argument depends on a distinction between 'conceivable' and 'possible', you would be disputing Chalmers' p-zombie argument (and, furthermore, on the same grounds as many other people who dispute it). — A Raybould
Wow.
Well, he's right on that thing. Because it is genuinely a problem about subjectivity and objectivity. Or to put it another way: the limitations of objectivity. And proving something has to be objective. You simply cannot make an objective model about something that is inherently subjective.
Can you give a reference for where Nagel said that? It would be interesting to know. — ssu
Obviously it's so when studying consciousness.

I can't assist you as much as I'd like. I can't quite get it right when it comes to who said what, but I can tell you, with some confidence, that there's a subjectivity-objectivity issue people seem mighty concerned about regarding consciousness. — TheMadFool
I'll try to explain again.

Can you state your conclusion, and the steps by which you reach it? And where is the subjectivity that you mention? — A Raybould
It seems you've got the wires crossed.

One well-known test for Artificial Intelligence (AI) is the Turing Test, in which a test computer qualifies as true AI if it manages to fool a human interlocutor into believing that s/he is having a conversation with another human. No mention of such an AI being conscious is made.
A p-zombie is a being that's physically indistinguishable from a human but lacks consciousness.
It seems almost impossible to not believe that a true AI is just a p-zombie in that both succeed in making an observer think they're conscious. — TheMadFool
Perhaps the correction I've provided above is enough to persuade you that the Turing-AI and p-zombies are not the same sort of thing.

The following equality based on the Turing test holds:
Conscious being = True AI = P-Zombie — TheMadFool