are those who believe in free will and consciousness (as uniquely human traits) willing to stake the future of humanity on it? — Pseudonym
The tone of your OP was clear that you thought it was unwise to make the stake; so I made it. — Noble Dust
I don't like these OPs where the questions are leading me somewhere. — Noble Dust
What? Because I indicated I thought it unwise, you decided to go for it. I'm touched that my opinion is so influential in your decision-making, even if only to oppose it. — Pseudonym
How does the question lead you somewhere? — Pseudonym
philosophers have idly speculated — Pseudonym
Only in the last few hundred years has it become polarized into the debate we recognise today — Pseudonym
the debate was almost entirely academic — Pseudonym
We now, however, face the problem of increasingly intelligent AI and the question of whether it needs to be controlled in some way. — Pseudonym
If free will is an illusion and consciousness is simply something available to any sufficiently complex computational system, then absolutely nothing will distinguish us from AI apart from the fact that they are several thousand times more intelligent than us. — Pseudonym
If consciousness and free will are something unique to humans then there's no threat from AI. But is it safe to pin the future of humanity on such fragile metaphysical constructions? Are those who believe in free will and consciousness (as uniquely human traits) willing to stake the future of humanity on it? — Pseudonym
Do you believe in the uniqueness of consciousness and free will enough to stake the future of humanity on it? Do you think we should proceed under a presumption that is safer? Or do you think there is even more risk in presuming these traits are not unique? — Pseudonym
Where did I say you're not allowed to have an opinion about this? — Noble Dust
And I'm just saying I don't like those sorts of threads. — Noble Dust
I think AI has a chance of doing evil, but most likely it won't be because of free will; it will be because of accidents or the will of the humans controlling it. — René Descartes
I don't see AI truly thinking for themselves — René Descartes
So are we saying that human beings do not have the possibility of functioning as autonomous, self-regulating, self-directional (free) moving beings (I am not talking about physical biology here)? — Pneuma
For at least the last 2000 years, philosophers have idly speculated on the question of what consciousness is and whether we have free will. — Pseudonym
is it safe to pin the future of humanity on some fragile metaphysical constructions? — Pseudonym
Your suggested alternative being....? — Wayfarer
We now, however, face the problem of increasingly intelligent AI and the question of whether it needs to be controlled in some way. If free will is an illusion and consciousness is simply something available to any sufficiently complex computational system, then absolutely nothing will distinguish us from AI apart from the fact that they are several thousand times more intelligent than us.
If consciousness and free will are something unique to humans then there's no threat from AI. But is it safe to pin the future of humanity on such fragile metaphysical constructions? Are those who believe in free will and consciousness (as uniquely human traits) willing to stake the future of humanity on it? — Pseudonym
Well, according to Jaron Lanier, we don't have any working definition of what a thought or an idea is on a neurological level. — Ying
So talking about hard AI as if it's a thing is like talking about nuclear fusion in the living room. Maybe it's possible in the future after some huge breakthrough or another, but banking on such things is science fiction at this point in time. — Ying