• Pseudonym
    1.2k
    For at least the last 2000 years, philosophers have idly speculated on what consciousness is and whether we have free will. Only in the last few hundred has it become polarized into the debate we recognise today, but even with Kant, Schopenhauer, Hobbes and Hume the debate was almost entirely academic, entering public policy only with regard to crime and punishment.

    We now, however, face the problem of increasingly intelligent AI and the question of whether it needs to be controlled in some way. If free will is an illusion and consciousness is simply something available to any sufficiently complex computational system, then absolutely nothing will distinguish us from AI apart from the fact that they are several thousand times more intelligent than us.

    If consciousness and free will are something unique to humans, then there's no threat from AI. But is it safe to pin the future of humanity on some fragile metaphysical constructions? Are those who believe in free will and consciousness (as a uniquely human trait) willing to stake the future of humanity on it?
  • Noble Dust
    7.9k
    are those who believe in free will and consciousness (as a uniquely human trait) willing to stake the future of humanity on it? (Pseudonym)

    1 vote for "yes".
  • Pseudonym
    1.2k


    Really? Care to explain why? I don't mean why you think free will and consciousness are real and unique to humans (or perhaps biological life), I mean why you think it is sensible to be so sure about that belief that it's worth risking the future of humanity on. Just to make a point? Or is there some greater threat you see from admitting that it might just be an illusion and we ought to proceed under that assumption for now?
  • Pseudonym
    1.2k


    But that's exactly the question. With AI something of an unknown, do you believe humans (alone) have free will strongly enough to just let the computer scientists get on with it?
  • Noble Dust
    7.9k


    The tone of your OP was clear that you thought it was unwise to make the stake; so I made it.

    I don't like these OPs where the questions are leading me somewhere.

    And because I believe in free will being something uniquely human, I have no fear of AI overcoming that free will. AI could lead to catastrophic pain and death, but if it does, then that catastrophic pain and death is the responsibility of humans with free will because humans with free will created AI, and so it will always be the fault of those humans, no matter how out of hand it might get.
  • Pseudonym
    1.2k
    The tone of your OP was clear that you thought it was unwise to make the stake; so I made it. (Noble Dust)

    What? Because I indicated I thought it unwise, you decided to go for it? I'm touched that my opinion is so influential in your decision making, even if only to oppose it.

    I don't like these OPs where the questions are leading me somewhere. (Noble Dust)

    How does the question lead you somewhere? It's a simple enough question. Do you believe in the uniqueness of consciousness and free will enough to stake the future of humanity on it? Do you think we should proceed under a presumption that is safer? Or do you think there is even more risk in presuming these traits are not unique?
  • Noble Dust
    7.9k
    What? Because I indicated I thought it unwise, you decided to go for it? I'm touched that my opinion is so influential in your decision making, even if only to oppose it. (Pseudonym)

    No, I just like a good fight every now and then.

    How does the question lead you somewhere? (Pseudonym)

    The thread title is suggestive, as is:

    philosophers have idly speculated (Pseudonym)

    Only in the last few hundred has it become polarized into the debate we recognise today (Pseudonym)

    the debate was almost entirely academic (Pseudonym)

    We now, however, face the problem of increasingly intelligent AI and the question of whether it needs to be controlled in some way. (Pseudonym)

    If free will is an illusion and consciousness is simply something available to any sufficiently complex computational system, then absolutely nothing will distinguish us from AI apart from the fact that they are several thousand times more intelligent than us. (Pseudonym)

    If consciousness and free will are something unique to humans, then there's no threat from AI. But is it safe to pin the future of humanity on some fragile metaphysical constructions? Are those who believe in free will and consciousness (as a uniquely human trait) willing to stake the future of humanity on it? (Pseudonym)

    Basically the entire post, in other words.
  • Noble Dust
    7.9k
    Do you believe in the uniqueness of consciousness and free will enough to stake the future of humanity on it? Do you think we should proceed under a presumption that is safer? Or do you think there is even more risk in presuming these traits are not unique? (Pseudonym)

    Again, the assumption on your part here is so obvious as to not even warrant a more detailed response.
  • Pseudonym
    1.2k


    I haven't the faintest idea what you're going on about. Am I not allowed to have an opinion about this in order to open a discussion? Yes I think free will and consciousness are illusions. I think any sufficiently complex system designed with self-analysis would exhibit the same traits. Does that somehow mean I'm not allowed to ask others what they think would be a pragmatic assumption with regards to AI?
  • Noble Dust
    7.9k


    Where did I say you're not allowed to have an opinion about this? Or that you are "somehow not allowed to ask others what they think"? And clearly I pegged your positions correctly.
  • Pseudonym
    1.2k
    Where did I say you're not allowed to have an opinion about this? (Noble Dust)

    You said "I don't like these OPs where the questions are leading me somewhere."

    Then I asked "How does the question lead you somewhere?"

    Then you replied with a list of quotes in which I imply my opinion.

    Ergo, you "don't like" the fact that I implied my opinion (your definition of "questions [which are] leading me somewhere").

    Take A to be 'questions [which are] leading me somewhere'

    You state you don't like A.

    I asked you to define A.

    You give a set of paragraphs in which I express my opinion.

    Hence my conclusion that you don't like me expressing my opinion in the OP.

    Where have I gone wrong?
  • Noble Dust
    7.9k


    No no, I'm fine with you expressing your opinion. I just don't like threads where the questions are supposed to lead me somewhere, i.e. where the question is being begged. But beg away; I'm not questioning your right to beg the question.
  • Noble Dust
    7.9k


    And I'm just saying I don't like those sorts of threads. I was expressing an emotion. Of course you can keep making threads that annoy me; I shouldn't even have to say that.
  • Noble Dust
    7.9k


    And, more importantly, again: nowhere did I state that I think you're "not allowed" to have an opinion. You're conflating me not liking your approach with you not being allowed to have an opinion.
  • Pneuma
    3
    So are we saying that human beings do not have the possibility of functioning as autonomous, self-regulating, self-directing (free) beings (I am not talking about physical biology here)? Why is it that we always need another “authority”: the Church, the State or Science? Is the answer to the problems of the human condition really just another external authority? Why does the thought of human autonomy seem to be “some greater threat”?
  • Pseudonym
    1.2k
    And I'm just saying I don't like those sorts of threads. (Noble Dust)

    Yeah I get that, I'm trying to find out what it is you don't like about them. Not that it's particularly related to the thread topic, but your reaction intrigued me. I still don't feel like I've understood your position.

    If a question is up for debate (i.e. not entirely factual), it's not 'question begging' to already have an opinion on the answer. 'Question begging' is when the answer is actually implied in the wording of the question, in such a way that you can't even ask the question without assuming the answer as a matter of fact. It's not the same as asking a question whilst holding an opinion on the answer. I think that's way too high a standard to hold anyone to: to come at each question of moral or political import without holding any view whatsoever as to the answer. I have an opinion on the answer already, but I don't know what everyone else's opinion is; that's why I'm asking.

    On a wider note, it's something I've found endemic on this site so far, and it stifles proper discussion. When a question is asked or a statement made, there seems to be a knee-jerk reaction for people to write their opinion on the topic, not the question. I've asked here a very specific question about people's views on the pragmatism of holding the view that free will and consciousness are unique to humans (or biological life). Already, all I've got are people stating their opinions on free will and consciousness (the topic), not whether those views are actually a pragmatic/safe way to approach the question of AI (the actual question). What I asked was what arguments they have for considering those positions pragmatic, or the least-risk approach, in the light of advances in AI.

    If I might join in the 'types of thread that annoy me' discussion you've opened, then that would be my number one: threads where people just state their views at each other on wide and vague topics with no attempt to actually relate them to anything practical or engage with the philosophical sticking points.
  • Pseudonym
    1.2k
    I think AI has a chance of doing evil but most likely it won't be because of free will but because of accidents or because of the will of humans controlling them. (René Descartes)

    OK, so why do you think this? What line of thinking has led you to this conclusion?

    I don't see AI truly thinking for themselves (René Descartes)

    Again, I'd be intrigued to hear your reasoning, but more pertinent to the question is why you have weighed it the way you have against the harm that could be caused if you are wrong. Do you see some greater harm in presuming free will is obtainable by an AI, such that you think we need not take this approach? Or perhaps is your faith so important to you that you feel it needs to be expressed regardless of the risk?
  • Pseudonym
    1.2k
    So are we saying that human beings do not have the possibility of functioning as autonomous, self-regulating, self-directing (free) beings (I am not talking about physical biology here)? (Pneuma)

    Yes, that's certainly what I'd say about it, but that's not the question I asked. The question is: do you think it is important to maintain a belief in free will as unique to humans in the light of advances in AI, or do you think the risk from potentially being wrong is great enough to warrant more caution?
  • Wayfarer
    22.5k
    For at least the last 2000 years, philosophers have idly speculated on what consciousness is and whether we have free will. (Pseudonym)

    I think that this has really only been the case for the last several centuries. The term 'consciousness' was coined in about the mid-1700s (from memory, by one of the Cambridge Platonists). And the modern 'mind-body' problem that you're referring to really harks back only to the early modern period, as a consequence of the 'mind-body' dualism of René Descartes and the way philosophy of mind developed after that. The ancients used to speak in terms of the soul, or of nous, but there is no obvious synonym for 'consciousness' in their lexicon that I'm aware of.

    As for 'what will distinguish us from AI': for the foreseeable future, AI technologies will continue to reside in computer networks and their connected devices. I suppose you might be in a situation where you don't know whether you're talking to a bot at a call centre, but there is no real prospect of AI looking realistically human anytime soon. And since it does reside in devices, we exercise at least the power of being able to disconnect it.

    is it safe to pin the future of humanity on some fragile metaphysical constructions? (Pseudonym)

    Your suggested alternative being...?
  • Pseudonym
    1.2k
    Your suggested alternative being...? (Wayfarer)

    Presuming a fragile metaphysical construction that is less risky.

    That's basically the argument I'm making. I'm asking if anyone sees any real dangers in presuming (when it comes to AI) that free will and consciousness may well be properties entirely emergent from complex systems. It seems to me the safest option to act as if free will were not unique to humans, just in case it isn't, and to treat the progress of AI under that assumption.

    As to the fact that we will remain in control, that's exactly one of the safety measures that might be put in place under a presumption that it is possible for a machine to obtain free will. But without that presumption, or at least if we do not take the possibility seriously, I can easily see it seeming like an attractive option to leave the machine in charge of its own power supply, solar harvesting or internal fission, for example.
  • Ying
    397
    We now, however, face the problem of increasingly intelligent AI and the question of whether it needs to be controlled in some way. If free will is an illusion and consciousness is simply something available to any sufficiently complex computational system, then absolutely nothing will distinguish us from AI apart from the fact that they are several thousand times more intelligent than us.

    If consciousness and free will are something unique to humans, then there's no threat from AI. But is it safe to pin the future of humanity on some fragile metaphysical constructions? Are those who believe in free will and consciousness (as a uniquely human trait) willing to stake the future of humanity on it? (Pseudonym)

    Well, we don't have any working definition of what a thought is or what an idea is on a neurological level, according to Jaron Lanier.



    So talking about hard AI as if it's a real thing is like talking about nuclear fusion in the living room. Maybe possible in the future after some huge breakthrough or other, but banking on such things is science fiction at this point in time.

    [edit]
    Great. No timestamp on the youtube vid. Here's the link with timestamp:
    https://www.youtube.com/watch?v=L-3scivGxMI&feature=youtu.be&t=1991
    [/edit]
  • Pseudonym
    1.2k
    Well, we don't have any working definition of what a thought is or what an idea is on a neurological level, according to Jaron Lanier. (Ying)

    So talking about hard AI as if it's a real thing is like talking about nuclear fusion in the living room. Maybe possible in the future after some huge breakthrough or other, but banking on such things is science fiction at this point in time. (Ying)

    You've made quite a leap there from what a single, rather fringe, computer scientist thinks to two very firm statements about what is the case. It's this kind of presumption that I'm questioning here. 20 years ago it would have been fine for you to simply go along with whatever your preferred philosopher said on the matter without having to justify that belief. What I'm asking, or suggesting I suppose, is that we no longer have that luxury, because even the possibility of advanced AI means that we need to consider worst-case scenarios rather than simply what we 'reckon' is right.
  • Ying
    397
    You've made quite a leap there from what a single, rather fringe, computer scientist thinks to two very firm statements about what is the case. (Pseudonym)

    Yeah I'm sure some internet rando like yourself knows more about these matters than someone actually active in the field. Bye now.
  • Rich
    3.2k
    The real problem is with those who believe we are just like robots and not only act like such, but allow the government/industrial complex (particularly the Medical/Big Pharma industries) to treat us like such. Gradual loss of freedom, privacy, and rights is the biggest problem we face, and the education system is designed to encourage this with the idiotic idea that we are just computers, fodder for the super-rich. It is no different than when, in the past, people were taught that certain races were sub-human and could be used as slaves.

    It is the promulgation of the idea that we are just computers that is the biggest danger people face.

Welcome to The Philosophy Forum!
