If that's what you are doing too, as I expect, you are in the respectable company of nearly everybody. It's a catastrophically tempting confusion.
The Chinese room argument holds that a digital computer executing a program cannot be shown to have a "mind", "understanding" or "consciousness", regardless of how intelligently or human-like the program may make the computer behave.
And speaking of Descartes, he anticipates such arguments:
Here is Descartes's direct quote. Note that he's anticipated your idea by almost 400 years.
Who or what is it that's having the experience?
What if the computer tells you it is aware of itself and not simply aware to the extent it can answer questions? What would be your test for self-awareness?
This OP is entirely nonsensical: it doesn't convey anything about the original argument, nor any insight into what might be wrong with it.
But, computation can give rise to virtual realities
— Zelebg
What does that even mean?
Humans write the programs to create the virtual realities out of meaningless bit patterns.
Computation does nothing but flip bits.
Sounds like hypothetical mumbo-jumbo, not evidence.
Not true. Your computer program will still have pre-determined behaviors even if its functionality and/or goals can be updated ex post facto. That is completely unlike humans, whose behavior is not deterministic in any way or at any moment in time because, unlike the computer program, humans (and rats et al.) have an "I" which is self-determined sufficiently apart from their genetic code/programming to have true 'free will'.
A program that can redefine its set of defined functions and goals.
— Zelebg
Bad example. Just because the program changed its 'goals', it is still a deterministic program constrained to a (narrow) set of pre-determined behaviors and functions. So, it seems your example fails my request.
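For concreteness, here is a minimal sketch of that point (my own illustration, in Python; the goal names and update rule are hypothetical): a program that "redefines" its own goals at runtime, yet whose every possible goal change is already enumerated in its code, so it remains fully deterministic.

```python
# Minimal sketch: an agent that "redefines" its goals at runtime.
# Deterministic: the same observation history always produces the
# same sequence of goal changes and actions.

class SelfModifyingAgent:
    def __init__(self):
        self.goals = ["seek_light"]  # initial, programmer-supplied goal

    def update_goals(self, observation):
        # The agent "redefines" its goals, but only by a fixed,
        # pre-written rule; every adoptable goal is listed here.
        if observation == "battery_low":
            self.goals = ["seek_charger"]
        elif observation == "battery_full":
            self.goals = ["seek_light"]

    def act(self):
        return f"pursuing: {', '.join(self.goals)}"

agent = SelfModifyingAgent()
for obs in ["battery_low", "battery_full", "battery_low"]:
    agent.update_goals(obs)
    print(obs, "->", agent.act())
```

Whether this counts against the example is exactly the disagreement here: the goals change, but the set of reachable goals and behaviors is fixed in advance.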
At least because it is an objectified and unified state which happens apart from time and apart from its hardware or embodiment (including any programming), and which considers/'feels' all constraints at a single moment. State-machine programs or processes cannot achieve that third-party state of entwined being.
Can you give evidence that they can?
Again, I'm just asking you for your definition of 'free will', on which your statements and examples are based. If you continue to resist, then I will just assume you have no definition and that you are 'testing the waters' (like a sophist) with some ridiculous idea that all computer programs have human-style 'free will'; but you really don't believe that, which is why you will not provide a definition that supports that degenerate (case) view. I guess I'm trying to "squeeze water from a rock" with you on this 'free will' topic, and I'm not interested in rhetorical banter over things you do not really believe to be well reasoned and true.
Those are obviously leading questions, which do hint at/point to my answers. My leading questions there are effectively begging you for your definition of 'free will'; that gets to the point of where you are coming from. So, why are you avoiding taking a stab at that?
Experience is qualia, in that experience consists of one or more different and simultaneous qualities.
— Zelebg
I disagree.
I'm just asking you for a clear definition of 'free will' and for you to compare/contrast it with the 'common definition' you apparently alluded to. If you do not care to do that, then I dare say all your opinions on 'free will' are not meant to be taken seriously, as they are not open for debate in pursuit of truth but exist just to state/spread your position.
Please give us an example of a stable deterministic program which is not constrained to a set of pre-determined behaviors and functions, yet achieves goals and/or has utility.
how is determinism relevant to "knowing itself"?
— Zelebg
Would you agree that all true independent agents acting in the world have a goal? Would you agree that a more meaningful decision is made by an agent when it "knows" that its decision(s) is/are best for its overall goals? Would you agree that the best overall decisions can only be achieved if the agent has a state of awareness (e.g., is conscious) of the totality of its needs, and if the agent has the ability to realign its behaviors and/or beliefs and/or goals according to the experienced/predicted consequences of those behaviors, beliefs, and goals? If so, then would you not agree that a conscious agent is exercising more meaningful 'free will' than the automaton programmed agent, and that a self-conscious agent exhibits still more meaningful 'free will' than a merely conscious agent?
Sure, what do you say is the 'common wisdom' meaning of "free will", and exactly where/how do you reason that it is not accurate/true?
What is it you were expecting to be able to do with it?
An agent can certainly learn and make choices without sentience or consciousness, and you seem to be contradicting yourself by saying that the purpose of sentience or consciousness (which the robot doesn't have) is so we can have "free will" (which you said the robot does have).
Your robot operates 100% deterministically on its program. So, your robot cannot represent or know itself.
Yes, that is free will, program does what program wants
— Zelebg
So, how can you have a 'will' without a sentient agent?
Standard definitions seem to require that the "I" be present in the agent. So, my robot example won't cut it, especially since it can never face the cogito ergo sum dilemma. That is, how can one say it made a willful choice when it does not have the self-consciousness to know it is choosing anything? Thus, there is no free will there, because you don't have a sentient free agent.
So, if you actually consider imagination to be a virtual experience, then must it have qualia to render the experience part?
Colour exists potentially in the brain as conceptual or five-dimensional relations, developed through prediction error to be relatively accurate in relation to our experiences so far. The information we refer to as ‘colour’ is irreducible in this sense.
If better vision has a greater cost (e.g., in energy) than its survival benefit, then Darwin would say that the better-performing vision is actually worse than the poorer vision.
Colours are five-dimensional conceptual structures of chemical and energy relations.
They exist potentially as values - any reference to the ‘actuality’ of a colour is a reduction of information using particular value structures: light wave frequencies, chemical ‘signatures’, computer ‘code’, etc.
So in the above metaphor, I would say that the colour yellow exists potentially in the program, not actually in the computer.
Can you answer any of those questions?
I ask again: How do you implement 2 and 5?
You said you offered suggestions on how to implement 2 and 5. I didn't see them.
OK, so I make a robot that evolves its own personality, goals, and decision making by way of a genetic algorithm, and then have it make its own final action decisions based on its personal/unique personality and goals and, in part, on a random number generator to help bias it to action when split decisions are experienced (likely not too different from what most humans do). So, according to your definition, have I not invented/created a robot which has 'free will'?
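A rough sketch of that robot (my own illustration in Python, assuming a toy one-trait "personality", a hypothetical two-action repertoire, and a stand-in fitness function) might look like this: evolve the personality with a simple genetic algorithm, then let a random number generator break near-ties, as described above.

```python
import random

ACTIONS = ["explore", "rest"]  # hypothetical action repertoire

def fitness(personality):
    # Toy stand-in for "how well this personality serves the robot";
    # here it simply prefers a mild bias toward exploring.
    return -abs(personality["explore"] - 0.6)

def evolve(generations=50, pop_size=20, seed=42):
    # Simple genetic algorithm: keep the fitter half, mutate to refill.
    rng = random.Random(seed)
    pop = [{"explore": rng.random()} for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [
            {"explore": min(1.0, max(0.0, p["explore"] + rng.gauss(0, 0.05)))}
            for p in survivors
        ]
        pop = survivors + children
    return max(pop, key=fitness)

def decide(personality, rng=random):
    # Score each action from the evolved personality; on a near "split
    # decision", let the random number generator bias the robot to act.
    scores = {"explore": personality["explore"],
              "rest": 1.0 - personality["explore"]}
    best, second = sorted(ACTIONS, key=scores.get, reverse=True)
    if abs(scores[best] - scores[second]) < 0.05:
        return rng.choice([best, second])
    return best

personality = evolve()
print("evolved personality:", personality)
print("decision:", decide(personality))
```

The random tie-break implements the "bias it to action when split decisions are experienced" part of the description; everything else is evolved rather than hand-coded.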
In a previous reply, to discount my saying that we cannot imagine infinity, you said that was because "Well as we are talking about what can be imagined, not experienced, it seems you can imagine infinity." But now you say imagination is almost synonymous with virtual experience, in re 'To learn by imagining, that is mental / virtual experience in advance'. I strongly suspect you are confounding distinctly separate and different human faculties into one, which has you flip-flopping on definitions.
Ok. How do you implement 2 and 5?