This is something that's been discussed a lot here on the forum. — T Clark
This is gobbledygook. But I would not be surprised were you unable to see that.
A circle is a very close approximation of Pi which is infinity itself. — invicta
Where do concepts exist? I'm not sure. Or if it's even quite right to say they exist, or if this is a reification. — Moliere
The second premise is true if the definition doesn't contain a contradiction, which I think is an easy condition to satisfy. — Michael
If it’s not infinity, why haven’t we been able to calculate its finite value? — invicta
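Since the thread repeatedly claims Pi cannot be calculated, a minimal Python sketch may clarify the point at issue: Pi can be approximated to any desired precision by a convergent series, even though its decimal expansion never terminates. The function name `leibniz_pi` is mine, chosen for illustration.

```python
import math

def leibniz_pi(n_terms: int) -> float:
    """Approximate Pi with the Leibniz series: pi/4 = 1 - 1/3 + 1/5 - ..."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

# Each additional term tightens the approximation, but no finite number
# of terms ever equals Pi exactly: that is what irrationality rules out,
# not calculability.
approx = leibniz_pi(1_000_000)
print(abs(math.pi - approx))
```

With a million terms the error is already below 1e-5; the series converges, it just never "finishes" in finitely many steps.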
I don’t know how accurate that website is at parsing modal logics — Michael
You’re saying I’m being irrational, just like Pi. — invicta
It’s an example of circular logic in action. — invicta
What does your claim that “well-informed and rational are normative” mean to you? I can make no sense of it. — Mark S
But from your response it seems to have fallen flat. I don't think I'll try explaining it.
Are well-informed rational people better than ill-informed irrational people? — unenlightened
If you only knew Pi, which you obviously can’t as it’s irrational and infinite — invicta
"There are those" seems to be covertly pointing at yours truly. — Gnomon
I find his dissatisfaction with infinite regression unsatisfactory, for if infinite causes are the chain of sequences ad infinitum, does such a chain not imply a closed loop, like that primordial snake Ouroboros eating its own tail? — invicta
Russell (1912: 1) famously denied that there are any causal relations at all, quipping that causation is “a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm” — SEP: The Metaphysics of Causation
I applaud your perspicacity. — jgill
So how is it, then, that something like ChatGPT can get as far as it does with language? The basic answer, I think, is that language is at a fundamental level somehow simpler than it seems. And this means that ChatGPT—even with its ultimately straightforward neural net structure—is successfully able to “capture the essence” of human language and the thinking behind it. And moreover, in its training, ChatGPT has somehow “implicitly discovered” whatever regularities in language (and thinking) make this possible.
The success of ChatGPT is, I think, giving us evidence of a fundamental and important piece of science: it’s suggesting that we can expect there to be major new “laws of language”—and effectively “laws of thought”—out there to discover. In ChatGPT—built as it is as a neural net—those laws are at best implicit. But if we could somehow make the laws explicit, there’s the potential to do the kinds of things ChatGPT does in vastly more direct, efficient—and transparent—ways.
This: What exactly are you saying, I guess? — schopenhauer1
You express some agreement with the phenomenological approach to defining consciousness. I have pointed out that it's a useless definition. It cannot help us to decide if ChatGPT, creativesoul, or your air conditioner is conscious. — Banno
a) Chomsky's view on analyticity as described in your OP article is just that. — schopenhauer1
