If you already know the answer to the question you ask AI, then you can vet it. If AI is to be useful, then you mustn't know the answer ahead of time — Leontiskos
We're asking a viewpoint question instead of a truth question, and that's part and parcel of the whole nature of AI. — Leontiskos
If every action originates from the actor’s internal state, then no act can be wholly “selfless.” Even apparent self-sacrifice — the soldier dying for his country, the mother starving for her child, the philanthropist donating wealth — finds its roots in personal satisfaction, emotional fulfillment, or existential meaning — Copernicus
AI is providing you with a cultural opinion, not an expert opinion. AI is reliable wherever the cultural opinion tracks the expert opinion. — Leontiskos
As I understand it, the insight is what you’re supposed to provide in your post. I don’t really care where you get it from, but the insight should be in your own words based on your own understanding and experience and expressed in a defensible way. The documentation you get from the AI response can be used to document what you have to say, but then you’re still responsible for verifying it and understanding it yourself. — T Clark
That being said, a listing or summary of a bunch of smart guys' ideas is not the same as insight. That requires a connection between things that are not normally thought of as connected. Something unexpected, surprising. The truth is always a surprise. — T Clark
For me, it’s not a question of clear and flowing, it’s a question of insightful. My experience with using AI myself is that it produces lots of surface without much depth. Lots of rounded corners with nothing that will cut. — T Clark
The lie that one is interacting with an intelligent oracle is too good to resist. It's worth asking whether it is even possible to regularly use an LLM without falling into the false belief that one is interacting with an intelligent and extremely knowledgeable person. — Leontiskos
I disagree with this. I was toying around with a bunch of disparate ideas that seemed related to me. I used ChatGPT to help me figure out what they had in common. That seems like a legitimate use to me. I use a thesaurus when I can't think of the right word for a particular idea. I use quotes when I want to add legitimacy or clarity. AI feels like the same kind of tool. — T Clark
There have always been enough overblown and oratorical but poorly thought out OPs and posts here on the forum even without AI that I don't know how easy it is to tell. Perhaps it would be helpful if people called them out when they see them. — T Clark
I am looking for a falsification specifically of physical monism, hard to do without any competing theories. — noAxioms
I mean the quote doesn't seem distinctly enactivist to me, but more focused on the inability to explain qualia. At the same time, I can clarify that I didn't mean anything about qualia or experience in the previous post, I only meant behavior, as I mentioned in the first sentence. — Apustimelogist
The point was that I don't believe there is anything in the field of neuroscience or A.I. that casts doubt on the idea that we will be able to continue seeing what brains do as instantiated entirely in physical interactions of components, as opposed to requiring some additional mental woo we don't yet understand.
— Apustimelogist
:100:
I'm curious as to whether Joshs recognizes this. — wonderer1
I don't believe there is anything in the field of neuroscience or A.I. that casts doubt on the idea that we will be able to continue seeing what brains do as instantiated entirely in physical interactions of components, as opposed to requiring some additional mental woo we don't yet understand — Apustimelogist
"I follow the trajectory that arises in the later Husserl and continues in Merleau-Ponty, and that calls for a rethinking of the concept of “nature” in a post-physicalist way—one that doesn't conceive of fundamental nature or physical being in a way that builds in the objectivist idea that such being is intrinsically or essentially non-experiential. But, again, this point doesn't entail that nature is intrinsically or essentially experiential (this is the line that pan-psychists and Whiteheadians take). (Maybe it is, but I don't think we're now in a position to know that.) All I want to say for now (or think I have grounds for saying now) is that we can see historically how the concept of nature as physical being got constructed in an objectivist way, while at the same time we can begin to conceive of the possibility of a different kind of construction that would be post-physicalist and post-dualist–that is, beyond the divide between the “mental” (understood as not conceptually involving the physical) and the “physical” (understood as not conceptually involving the mental)."
“Many philosophers have argued that there seems to be a gap between the objective, naturalistic facts of the world and the subjective facts of conscious experience. The hard problem is the conceptual and metaphysical problem of how to bridge this apparent gap. There are many critical things that can be said about the hard problem, but what I wish to point out here is that it depends for its very formulation on the premise that the embodied mind as a natural entity exists ‘out there' independently of how we configure or constitute it as an object of knowledge through our reciprocal empathic understanding of one another as experiencing subjects. One way of formulating the hard problem is to ask: if we had a complete, canonical, objective, physicalist account of the natural world, including all the physical facts of the brain and the organism, would it conceptually or logically entail the subjective facts of consciousness? If this account would not entail these facts, then consciousness must be an additional, non-natural property of the world.
One problem with this whole way of setting up the issue, however, is that it presupposes we can make sense of the very notion of a single, canonical, physicalist description of the world, which is highly doubtful, and that in arriving at (or at any rate approaching) such a description, we are attaining a viewpoint that does not in any way presuppose our own cognition and lived experience. In other words, the hard problem seems to depend for its very formulation on the philosophical position known as transcendental or metaphysical realism. From the phenomenological perspective explored here, however — but also from the perspective of pragmatism à la Charles Sanders Peirce, William James, and John Dewey, as well as its contemporary inheritors such as Hilary Putnam (1999) — this transcendental or metaphysical realist position is the paradigm of a nonsensical or incoherent metaphysical viewpoint, for (among other problems) it fails to acknowledge its own reflexive dependence on the intersubjectivity and reciprocal empathy of the human life-world.
I don't think there is any fundamental difference here between what I think about what humans and animals do, I think the disagreement is about relevance. — Apustimelogist
Given that the thinking of our best engineers doesn’t even represent the leading edge of thinking of our era, it’s kind of hard to imagine how their slightly moldy concepts instantiated in a self-learning a.i., will lead to the singularity.
— Joshs
I don't understand this sentiment. It's not a refutation of the possibilities of what can be created, neither is it a realistic sentiment about how the world works. Things change, ideas advance, ideas bleed over between different fields. I doubt anyone in 1950 could tangibly envision technology that does what A.I. are doing now.
The fact that the architectures of our most self-organizing machines depend on locking in certain grounding concepts to define the parameters and properties of self-organization ( properties which will change along with the concepts in a few years as the engineers come up with better machines) means that these concepts are in fact forms of hardcoding.
— Joshs
And this would apply to all living organisms: jellyfish, plants, the many kinds of brains, neural webs, etc, etc. — Apustimelogist
My position is simply that nobody has ever demonstrated the simpler model wrong. Plenty (yourself included) reject that simplicity, which is your choice. But the physical view hasn't been falsified, and there is no current alternative theory of physics that allows what you're proposing. You'd think somebody would have come up with one if such a view was actually being taken seriously by the scientific community. — noAxioms
Certainly a brain would not be operational. It needs a being to be in, and that being needs an environment, hence my suggestion of a simulation of <a person in a small room> — noAxioms
You’re missing the point. Even taking into account all of the biological lineages which become extinct, what it means to be a living system is to be self-organizing, and this self-organization is dynamic.
— Joshs
Yea, which is why mechanical devices are not yet living things. It can happen. Whether it will or not is an open question at this point. A device being living is not a requirement for it to think or to have a point of view — noAxioms
This means that to continue existing as that creature from moment to moment is to make changes in itself that maintain the normative self-consistency of its functioning in its environment while at the same time adapting and accommodating itself to the always new features of its environment.
You mean like putting on a coat when winter comes? What does this have to do with the topic again? The definition of 'life' comes up only because you're asserting that life seems to have access to a kind of physics that the same matter not currently part of a lifeform does not. — noAxioms
Pre-programmed, in contrast to self-organization. It's not some technical concept. For instance, you could say pain or hunger is in some sense hard-coded into us. — Apustimelogist
You have then seemed to base the rest of the post on latching onto this use of the word "hardcoded" even though I initially brought that word up in the post to say that "hardcode" is exactly not what characterizes self-organization or what A.I. do. — Apustimelogist
What is being hardcoded, effectively, is the ability for a system to learn to do things by itself without explicit supervision, which is self-organization. — Apustimelogist
↪Joshs
I really don't understand what you are going on about. A brain is a physical object. In principle, you can build a brain that does all the things brains do from scratch if you had the technological capabilities — Apustimelogist
No matter how apparently flexible its behavior, that flexibility will always be framed by and limited to the model of thinking that dates to the time that the product is released to the market.
— Joshs
Not so for devices that find their own models of thinking.
As soon as it is released, it already is on the way to obsolescence
So similar to almost every creature. Name a multicelled creature known from the fossil record that still exists today. I can't think of one. They're all obsolete. A rare fossil might have some living descendants today (I can think of no examples), but the descendant is a newer model, not the same species. — noAxioms
I will say bluntly that no machine we invent will do what we do, which is to think.
— Joshs
Noted. How very well justified. Your quote is about LLMs, which are mildly pimped out search engines. Compare that to devices which actually appear to think and to innovate. What do you call it if you refuse to apply the term 'think' to what it's doing?
The quote goes on to label the devices as tools. True now, but not true for long. I am arguably a tool since I spent years as a tool to make money for my employer. Am I just a model then? — noAxioms
↪Joshs
I think we’re converging on a similar point. I would agree that “truth” does not wear a single face. Its criteria shift depending on the language-game: in a courtroom truth is tied to testimony and records, in science it is tied to predictive success but also to the testimony and documentation that communicate, test, and replicate those predictions, and in mathematics it is tied to logical necessity. To borrow Wittgenstein’s term, these are family resemblances rather than a unitary essence.
Where I’d want to add a note of caution is that the factivity of truth still matters across those contexts. However we construe it, “p” being true always implies that things are as “p” says they are. Otherwise we lose the very grammar that distinguishes knowledge from conviction. — Sam26
↪Sam26
You’re right that to say “truth is a maintenance project of cognition” is itself an epistemic claim. But that doesn’t undermine my point... it reinforces it. The fact that I can’t step outside the framework of justification to make my claim is precisely what I mean when I call truth a “maintenance project.” To describe truth is always to participate in it, never to stand above — DifferentiatingEgg
↪Joshs
I mean, none of this has any relevance to any points I am making. Obviously, to artificially recreate a human brain to acceptable approximation, you need to construct this computational system with the kinds of inputs, kinds of architectures, capabilities, whatever, that a human does. I am not making any arguments based on specific assumptions about specific computing systems, just on what is in principle possible. — Apustimelogist
LLMs do not perform any tasks of their own, they perform our tasks. It would be better to say that they do not really do anything at all. Thus, we should not treat LLMs as agents. And since LLMs are not agents, let alone epistemic ones, they are in no position to do or know anything.
The map does not know the way home, and the abacus is not clever at arithmetic. It takes knowledge to devise and use such models, but the models themselves have no knowledge. Not because they are ignorant, but because they are models: that is to say, tools. They do not navigate or calculate, and neither do they have destinations to reach or debts to pay. Humans use them for these epistemic purposes. LLMs have more in common with the map or abacus than with the people who design and use them as instruments. It is the tool creator and user, not the tool, who has knowledge.
There is no indication that in principle we cannot someday model all our own behaviors and reports through computer models. I think even just looking at AI now indicates that there isn't really a conceivable limit on what they can do given enough power and the right inputs, which is what you might expect from something which is Turing complete: i.e. they can compute anything in principle. — Apustimelogist
From an external point of view, cognition is private and indirect. From an internal point of view, cognition is public and direct. So Husserl and Descartes can be both semantically correct, provided that we don't mix their postulates and apply them in different contexts. — sime
The second is fiction, while the first is an act of mind remembering (while I am sleeping) people I know and whom I interacted with. — javi2541997
I can assume that a past version of myself is fictional — javi2541997
They are subjective constructions, abstractions, idealizations which result from our taking our own perspectivally changing experience, comparing it with that of others
— Joshs
How can you compare your experience to that of others if their experience is not available to you? — noAxioms
First person questions are not about what is the case, what the objective facts are.
Funny, but 'cogito ergo sum' is pitched as a first person analysis concluding an objective fact. I personally don't buy that conclusion at all, but that's me not being a realist. — noAxioms
Exactly by following my feelings, I came to the conclusion that you exist. :smile: I know that an image (like a mirror) can prove me wrong or cheat me. — javi2541997
I assert that your mechanics will fall flat on its face if it amounts to nothing but a ‘third-person’ mechanics.
— Joshs
But everything in your previous post was "third-person mechanics". — Apustimelogist
Even if I was in a dream, my ability to have these thoughts, including interacting with you, proved your existence — javi2541997
I don't really find this that interesting in the context of the problem of consciousness. It’s almost a triviality of science that different problems, different descriptions utilize different models or explanations. Given that any plurality of explanations needs to be mutually self-consistent, at least in principle, this isn't interesting. — Apustimelogist
And here we have the problem. All that we know via science can be known by any subject, not a particular one. However, 'experience(s)' have a degree of 'privateness' that has no analogy in whatever physical property we can think of.
— boundless
I'm not grasping what you see as a problem for physicalism here.
My neurons are not interconnected with your neurons, so what experience the activity of your neurons results in for you is not something neurally accessible within my brain. Thus privacy. What am I missing? — wonderer1
It seems that people are talking about many different issues.
Q1: What is the subjective experience of red? More to the point, what is something else's subjective experience of red? What is that like?
Q2: How does the experience of red (or any qualia) work? This seems to be a third person question, open to science — noAxioms
↪Patterner
To me, they would if they had exactly the same brains as us but just devoid of any "lights on" inside. My impression is that there is nothing really in biology that suggests we couldn't explain our behavior entirely in terms of the mechanics of brains, at least in principle. — Apustimelogist
I read more than that into it, since I agree with Chalmers on the impossibility of reducing it to the third, and yet I see no problem that's hard. — noAxioms
Here's a thing - what does "abduction" mean? Even the SEP article can't say. So now you claim it's just making up an hypothesis. So why not just call it "hypothesising"? Why the new name? — Banno
I read Chalmers to be questioning whether what is referenced through the first person can be reduced to the third. The issue concerns what is reduction as much and maybe more than any particular model of consciousness.
Neither side of the divide is presented as a given. The frames of reference are incongruent — Paine
Maybe someone knows other approaches? — Astorre