Tarskian         
         Ok. Oracle gives a final spoken prediction, but secretly writes down what it knows thwarter will do at that point. — Patterner
Patterner         
         This was your idea. I didn't know you were looking for a purpose for Oracle. What did you have in mind? Off the top of my head, I'd say there's money to be made at the roulette wheel. Yes, of course, Oracle can perfectly know what is truly going to happen. However, his knowledge of the truth is not actionable. What else is he going to do with it? — Tarskian
ssu         
         Yes, but notice that the Oracle staying silent can also be viewed as an input. So when the Oracle is silent and doesn't make a prediction, the Thwarter can do something (perhaps mock the Oracle's limited ability to make predictions), which should be easily predictable. Thwarter needs a prediction as input. Otherwise it does not run. — Tarskian
Oracle can perfectly know what is going to happen if your Thwarter app is a Turing Machine that runs on a program that tells exactly how Thwarter will act on the Oracle's prediction. Yes, of course, Oracle can perfectly know what is truly going to happen. However, his knowledge of the truth is not actionable. — Tarskian
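To make the setup concrete, here is a minimal sketch (my own illustration, not anyone's actual program; the names and the action list are hypothetical) of the Oracle/Thwarter game being described: a deterministic Thwarter that always does something other than what is announced to it, and an Oracle that, knowing Thwarter's program, can compute the true outcome but can only record it privately, since announcing it would change Thwarter's input.

```python
# Hypothetical sketch of the Oracle/Thwarter thought experiment.
ACTIONS = ["A1", "A2", "A3"]

def thwarter(spoken_prediction):
    """Deterministic Thwarter: whatever is announced, do the next action
    in the list instead, i.e. always 'do something else'."""
    i = ACTIONS.index(spoken_prediction)
    return ACTIONS[(i + 1) % len(ACTIONS)]

def oracle(spoken_prediction):
    """The Oracle knows Thwarter's program, so it can compute what Thwarter
    will really do given the spoken prediction, but it cannot announce that
    result without thereby changing Thwarter's input."""
    return thwarter(spoken_prediction)

spoken = "A1"                  # what the Oracle says out loud
secret = oracle(spoken)        # what it secretly writes down: "A2"
actual = thwarter(spoken)      # what Thwarter actually does: "A2"
print(spoken, secret, actual)  # the spoken prediction fails, the written one does not
```

In this toy version the written-down prediction is always right and the spoken one always wrong, which is exactly the sense in which the Oracle's knowledge is true but not actionable.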
Tarskian         
         It should be understood here that computers cannot follow an order of "do something else". — ssu
Lionino         
         Free will is a property of a process making choices. If it is impossible to predict what choices this process will make, then it has free will.
— Tarskian
Oh for gosh sake. That's not true. A coin doesn't have free will when you flip it. And if you say that deep down coin flips are deterministic, so are programs. — fishfry
ssu         
         Randomly picking some action from [A1, A2, ..., A(k-1), A(k+1), ..., An] as long as it is not Ak is surely not "do something else". It is an exact order that is in the program, which the Oracle can surely know. Just like "If Ak, then take A(k+1)". A computer or Turing Machine cannot do something not described in its program. If a program knows a list of things it can do [A1, A2, A3, ..., An], and it receives the instruction "do something else but not Ak", then it can randomly pick any action from [A1, A2, ..., A(k-1), A(k+1), ..., An] as long as it is not Ak. — Tarskian
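For what it's worth, the rule described here is easy to state as an ordinary program. The following is a small hypothetical sketch (the list and names are mine) of "randomly pick any action as long as it is not Ak":

```python
import random

def do_something_else(actions, ak):
    """Not an open-ended 'do something else' but an exact instruction:
    choose uniformly at random among the listed actions, excluding ak."""
    candidates = [a for a in actions if a != ak]
    return random.choice(candidates)

actions = ["A1", "A2", "A3", "A4"]
print(do_something_else(actions, "A3"))  # never prints "A3"
```

As the post says, this is still a fully specified order that the Oracle can read off the program; only the pseudorandom draw is left open.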
Tarskian         
         It's not systems that are "incomplete" (the idea makes no sense at all), but our understanding of systems. — Janus
Janus         
         
Tarskian         
         I was referring to real physical systems which are not conceptual, not to mathematical systems, which are conceptual. — Janus
It makes no sense to say that the Universe, a real physical system, is incomplete, but of course our understanding of the universe is incomplete, and always will be. — Janus
So, the future is not comprehensively predictable, but it does not follow that it is incomplete or in possession of free will. — Janus
Tarskian         
         I was referring to real physical systems which are not conceptual — Janus
Tarskian         
         Btw, have you read Yanofsky's A Universal Approach to Self-Referential Paradoxes, Incompleteness and Fixed Points that we discussed on another thread? It should be important to this too. — ssu
https://en.wikipedia.org/wiki/Cantor%27s_theorem
Theorem (Cantor) — Let f be a map from set A to its power set P(A). Then f : A → P(A) is not surjective. As a consequence, card(A) < card(P(A)) holds for any set A.
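To connect the statement to its proof, here is a small finite illustration (my own sketch, with an arbitrary example map) of the diagonal set that witnesses non-surjectivity: for any f : A → P(A), the set D = {a ∈ A : a ∉ f(a)} cannot be in the image of f, because a ∈ D if and only if a ∉ f(a).

```python
# Finite illustration of Cantor's diagonal set for some f : A -> P(A).
A = {0, 1, 2}
f = {0: {1, 2}, 1: {1}, 2: set()}     # an example attempt at a surjection

D = {a for a in A if a not in f[a]}   # diagonal set, here {0, 2}
print(D)
print(any(f[a] == D for a in A))      # False: D is missed, so f is not surjective
```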
https://arxiv.org/pdf/math/0305282
Theorem 1 (Cantor's Theorem). If Y is a set and there exists a function α : Y → Y without a fixed point (for all y ∈ Y, α(y) ≠ y), then for all sets T and for all functions f : T × T → Y there exists a function g : T → Y that is not representable by f, i.e. such that for all t ∈ T: g(−) ≠ f(−, t).
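A sketch of the argument, in the theorem's own notation (my summary of the standard diagonal construction, not a quote from the paper): define g(t) = α(f(t, t)). If g were representable, say g(−) = f(−, t0) for some t0 ∈ T, then f(t0, t0) = g(t0) = α(f(t0, t0)), so f(t0, t0) would be a fixed point of α, contradicting the hypothesis. Cantor's theorem above is the special case Y = {true, false} with α = negation (which has no fixed point), T = A, and f(a, b) = "a ∈ h(b)" for a candidate map h : A → P(A); the non-representable g is then the characteristic function of the diagonal set D.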
ssu         
         Ok, this is very important and seemingly easy, but a really difficult issue altogether. So I'll give my 5 cents, but if anyone finds a mistake, please correct me. While Cantor says something simple, i.e. any onto mapping of a set onto its power set will fail, Yanofsky says something much more general that I do not fully grasp. — Tarskian
flannel jesus         
         Just a nitpick. Not every f(x) function is bijective. — Lionino
fishfry         
         Deep down humans could also be deterministic. — Tarskian
As long as the theory of humans is incomplete, humans would still have free will. — Tarskian
fishfry         
         Chaos theory has already been brought up twice, which he ignored, like he does every time his incorrigible nonsense is challenged. — Lionino
Lionino         
         a bit eccentric — fishfry
Tarskian         
         Yet this is also the general issue that Yanofsky is talking about, as this is found in all of these theorems. — ssu
Tarskian         
         But given that, my original point stands: that programs can't have free will. And I hope you agree that humans being deterministic would not contradict that point. — fishfry
Tarskian         
         I try to keep an open mind and take the good with the bad of all, say, a bit eccentric posters. I hope that is not too uncharitable to Tarskian. Am I being fair? — fishfry
According to formalism, the truths expressed in logic and mathematics are not about numbers, sets, or triangles or any other coextensive subject matter — in fact, they aren't "about" anything at all.
ssu         
         
Bylaw         
         Thwarter needs a prediction as input. Otherwise it does not run. — Tarskian
fishfry         
         I guess so.
As you have probably noticed, Lionino does not talk about metaphysics or about mathematics but about me. That is apparently his obsession. He incessantly talks about me, very much like I incessantly talk about Godel. I don't know if I should feel flattered. — Tarskian
But then again, the metaphysical implications of the foundational crisis in mathematics are truly fascinating. — Tarskian
How can something that "isn't about anything at all" suddenly become about the fundamental nature of everything? — Tarskian
Tarskian         
         IOW the owners of Oracle could just tell it to lie to Thwarter. — Bylaw
https://en.m.wikipedia.org/wiki/Lambda_calculus
There is no algorithm that takes as input any two lambda expressions and outputs TRUE or FALSE depending on whether one expression reduces to the other. More precisely, no computable function can decide the question. This was historically the first problem for which undecidability could be proven.
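As a toy illustration (my own sketch, not part of the quoted article) of why equivalence of lambda terms cannot be settled simply by "reduce both sides and compare": some terms never reach a normal form, so any fixed step budget is arbitrary. The term encoding, the naive capture-unsafe substitution, and the 50-step cutoff below are assumptions of the sketch, adequate only for these capture-free examples.

```python
# Toy normal-order beta-reducer.
# Terms: ('var', x), ('lam', x, body), ('app', f, a).

def subst(term, name, value):
    kind = term[0]
    if kind == 'var':
        return value if term[1] == name else term
    if kind == 'lam':
        return term if term[1] == name else ('lam', term[1], subst(term[2], name, value))
    return ('app', subst(term[1], name, value), subst(term[2], name, value))

def step(term):
    """One normal-order beta step, or None if the term is in normal form."""
    if term[0] == 'app':
        f, a = term[1], term[2]
        if f[0] == 'lam':                      # beta-redex: (λx.body) a
            return subst(f[2], f[1], a)
        for i, sub in ((1, f), (2, a)):        # otherwise reduce leftmost subterm
            reduced = step(sub)
            if reduced is not None:
                return term[:i] + (reduced,) + term[i + 1:]
    elif term[0] == 'lam':
        body = step(term[2])
        if body is not None:
            return ('lam', term[1], body)
    return None

def normalize(term, max_steps=50):
    for _ in range(max_steps):
        nxt = step(term)
        if nxt is None:
            return term                        # normal form reached
        term = nxt
    return None                                # gave up: may loop forever

identity = ('lam', 'x', ('var', 'x'))
omega = ('app', ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))),
                ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))))

print(normalize(('app', identity, ('var', 'y'))))  # ('var', 'y')
print(normalize(omega))                            # None: Ω reduces to itself forever
```

For terms that do normalize, comparing normal forms works; the quoted undecidability result says, informally, that there is no general way to know in advance whether normalization will ever terminate, so no step bound can be right for all inputs.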
fishfry         
         I think that having "free will" versus having a "soul" are not the same thing. — Tarskian
As I see it, the soul is an object in religion while free will is an object in mathematics. — Tarskian
I see free will and incompleteness as equivalent. I don't see why they wouldn't be. — Tarskian
Tarskian         
         "Penrose argues that human consciousness is non-algorithmic, and thus is not capable of being modeled by a conventional Turing machine, which includes a digital computer." — fishfry
fishfry         
         I believe that the soul is non-algorithmic. — Tarskian
Concerning "human consciousness", I don't know how much of it is just mechanical. The term is too vague for that purpose. A good part of the brain can only be deemed to be a machine, i.e. a biotechnological device, albeit a complex one, of which we do not understand the technology, if only because we did not design it ourselves. — Tarskian
But then again, even if the brain were entirely mechanical, its theory would undoubtedly be incomplete, which ensures that most of its truth is unpredictable. — Tarskian
Even things without a soul can have an incomplete theory and therefore be fundamentally unpredictable. — Tarskian