I don't believe there is anything in the field of neuroscience or A.I. that casts doubt on the idea that we will be able to continue seeing what brains do as instantiated entirely in physical interactions of components, as opposed to requiring some additional mental woo we don't yet understand — Apustimelogist
"I follow the trajectory that arises in the later Husserl and continues in Merleau-Ponty, and that calls for a rethinking of the concept of “nature” in a post-physicalist way—one that doesn't conceive of fundamental nature or physical being in a way that builds in the objectivist idea that such being is intrinsically or essentially non-experiential. But, again, this point doesn't entail that nature is intrinsically or essentially experiential (this is the line that pan-psychists and Whiteheadians take). (Maybe it is, but I don't think we're now in a position to know that.) All I want to say for now (or think I have grounds for saying now) is that we can see historically how the concept of nature as physical being got constructed in an objectivist way, while at the same time we can begin to conceive of the possibility of a different kind of construction that would be post-physicalist and post-dualist—that is, beyond the divide between the “mental” (understood as not conceptually involving the physical) and the “physical” (understood as not conceptually involving the mental)."
“Many philosophers have argued that there seems to be a gap between the objective, naturalistic facts of the world and the subjective facts of conscious experience. The hard problem is the conceptual and metaphysical problem of how to bridge this apparent gap. There are many critical things that can be said about the hard problem, but what I wish to point out here is that it depends for its very formulation on the premise that the embodied mind as a natural entity exists ‘out there' independently of how we configure or constitute it as an object of knowledge through our reciprocal empathic understanding of one another as experiencing subjects. One way of formulating the hard problem is to ask: if we had a complete, canonical, objective, physicalist account of the natural world, including all the physical facts of the brain and the organism, would it conceptually or logically entail the subjective facts of consciousness? If this account would not entail these facts, then consciousness must be an additional, non-natural property of the world.
One problem with this whole way of setting up the issue, however, is that it presupposes we can make sense of the very notion of a single, canonical, physicalist description of the world, which is highly doubtful, and that in arriving at (or at any rate approaching) such a description, we are attaining a viewpoint that does not in any way presuppose our own cognition and lived experience. In other words, the hard problem seems to depend for its very formulation on the philosophical position known as transcendental or metaphysical realism. From the phenomenological perspective explored here, however — but also from the perspective of pragmatism à la Charles Sanders Peirce, William James, and John Dewey, as well as its contemporary inheritors such as Hilary Putnam (1999) — this transcendental or metaphysical realist position is the paradigm of a nonsensical or incoherent metaphysical viewpoint, for (among other problems) it fails to acknowledge its own reflexive dependence on the intersubjectivity and reciprocal empathy of the human life-world.
I don't think there is any fundamental difference here between what I think about what humans and animals do; I think the disagreement is about relevance. — Apustimelogist
Given that the thinking of our best engineers doesn’t even represent the leading edge of thinking of our era, it’s kind of hard to imagine how their slightly moldy concepts instantiated in a self-learning a.i., will lead to the singularity.
— Joshs
I don't understand this sentiment. It's not a refutation of the possibilities of what can be created, neither is it a realistic sentiment about how the world works. Things change, ideas advance, ideas bleed over between different fields. I doubt anyone in 1950 could tangibly envision technology that does what A.I. are doing now.
The fact that the architectures of our most self-organizing machines depend on locking in certain grounding concepts to define the parameters and properties of self-organization (properties which will change along with the concepts in a few years as the engineers come up with better machines) means that these concepts are in fact forms of hardcoding.
— Joshs
And this would apply to all living organisms: jellyfish, plants, the many kinds of brains, neural webs, etc, etc. — Apustimelogist
My position is simply that nobody has ever demonstrated the simpler model wrong. Plenty (yourself included) reject that simplicity, which is your choice. But the physical view hasn't been falsified, and there is no current alternative theory of physics that allows what you're proposing. You'd think somebody would have come up with one if such a view was actually being taken seriously by the scientific community. — noAxioms
Certainly a brain would not be operational. It needs a being to be in, and that being needs an environment, hence my suggestion of a simulation of <a person in a small room> — noAxioms
You’re missing the point. Even taking into account all of the biological lineages which become extinct, what it means to be a living system is to be self-organizing, and this self-organization is dynamic.
— Joshs
Yea, which is why mechanical devices are not yet living things. It can happen. Whether it will or not is an open question at this point. A device being living is not a requirement for it to think or to have a point of view — noAxioms
This means that to continue existing as that creature from moment to moment is to make changes in itself that maintain the normative self-consistency of its functioning in its environment while at the same time adapting and accommodating itself to the always new features of its environment.
You mean like putting on a coat when winter comes? What does this have to do with the topic again? The definition of 'life' comes up only because you're asserting that life seems to have access to a kind of physics that the same matter not currently part of a lifeform does not. — noAxioms
Pre-programmed, in contrast to self-organization. It's not some technical concept. For instance, you could say pain or hunger is in some sense hard-coded into us — Apustimelogist
You then seemed to base the rest of the post on latching onto this use of the word "hardcoded," even though I initially brought that word up in the post to say that "hardcode" is exactly not what characterizes self-organization or what A.I. do. — Apustimelogist
What is being hardcoded, effectively, is the ability for a system to learn to do things by itself without explicit supervision, which is self-organization. — Apustimelogist
↪Joshs
I really don't understand what you are going on about. A brain is a physical object. In principle, you can build a brain that does all the things brains do from scratch if you had the technological capabilities — Apustimelogist
No matter how apparently flexible its behavior, that flexibility will always be framed by and limited to the model of thinking that dates to the time that the product is released to the market.
— Joshs
Not so for devices that find their own models of thinking.
As soon as it is released, it already is on the way to obsolescence
So similar to almost every creature. Name a multicelled creature they have a fossil of that exists today. I can't think of one. They're all obsolete. A rare fossil might have some living descendants today (I can think of no examples), but the descendant is a newer model, not the same species. — noAxioms
I will say bluntly that no machine we invent will do what we do, which is to think.
— Joshs
Noted. How very well justified. Your quote is about LLMs, which are mildly pimped out search engines. Compare that to devices which actually appear to think and to innovate. What do you call it if you refuse to apply the term 'think' to what it's doing?
The quote goes on to label the devices as tools. True now, but not true for long. I am arguably a tool since I spent years as a tool to make money for my employer. Am I just a model then? — noAxioms
↪JoshsI think we’re converging on a similar point. I would agree that “truth” does not wear a single face. Its criteria shift depending on the language-game: in a courtroom truth is tied to testimony and records, in science it is tied to predictive success but also to the testimony and documentation that communicate, test, and replicate those predictions, and in mathematics it is tied to logical necessity. To borrow Wittgenstein’s term, these are family resemblances rather than a unitary essence.
Where I’d want to add a note of caution is that the factivity of truth still matters across those contexts. However we construe it, “p” being true always implies that things are as “p” says they are. Otherwise we lose the very grammar that distinguishes knowledge from conviction. — Sam26
Sam26
You’re right that to say “truth is a maintenance project of cognition” is itself an epistemic claim. But that doesn’t undermine my point... it reinforces it. The fact that I can’t step outside the framework of justification to make my claim is precisely what I mean when I call truth a “maintenance project.” To describe truth is always to participate in it, never to stand above — DifferentiatingEgg
↪Joshs
I mean, none of this has any relevance to any points I am making. Obviously, to artificially recreate a human brain to acceptable approximation, you need to construct this computational system with the kinds of inputs, kinds of architectures, capabilities, whatever, that a human does. I am not making any arguments based on specific assumptions about specific computing systems, just on what is in principle possible. — Apustimelogist
LLMs do not perform any tasks of their own, they perform our tasks. It would be better to say that they do not really do anything at all. Thus, we should not treat LLMs as agents. And since LLMs are not agents, let alone epistemic ones, they are in no position to do or know anything.
The map does not know the way home, and the abacus is not clever at arithmetic. It takes knowledge to devise and use such models, but the models themselves have no knowledge. Not because they are ignorant, but because they are models: that is to say, tools. They do not navigate or calculate, and neither do they have destinations to reach or debts to pay. Humans use them for these epistemic purposes. LLMs have more in common with the map or abacus than with the people who design and use them as instruments. It is the tool creator and user, not the tool, who has knowledge.
There is no indication that in principle we cannot someday model all our own behaviors and reports through computer models. I think even just looking at AI now indicates that there isn't really a conceivable limit on what they can do given enough power and the right inputs, which is what you might expect from something which is Turing complete: i.e. they can compute anything in principle. — Apustimelogist
From an external point of view, cognition is private and indirect. From an internal point of view, cognition is public and direct. So Husserl and Descartes can be both semantically correct, provided that we don't mix their postulates and apply them in different contexts. — sime
The second is fiction, while the first is an act of mind remembering (while I am sleeping) people I know and whom I interacted with. — javi2541997
I can assume that a past version of myself is fictional — javi2541997
They are subjective constructions, abstractions, idealizations which result from our taking our own perspectivally changing experience, comparing it with that of others
— Joshs
How can you compare your experience to that of others if their experience is not available to you? — noAxioms
First person questions are not about what is the case, what the objective facts are.
Funny, but 'cogito ergo sum' is pitched as a first person analysis concluding an objective fact. I personally don't buy that conclusion at all, but that's me not being a realist. — noAxioms
Exactly by following my feelings, I came to the conclusion that you exist. :smile: I know that an image (like a mirror) can prove me wrong or cheat me. — javi2541997
I assert that your mechanics will fall flat on its face if it amounts to nothing but a ‘third-person’ mechanics.
— Joshs
But everything in your previous post was "third-person mechanics — Apustimelogist
Even if I was in a dream, my ability to have these thoughts, including interacting with you, proved your existence — javi2541997
I don't really find this that interesting in the context of the problem of consciousness. It's almost a triviality of science that different problems, different descriptions utilize different models or explanations. Given that any plurality of explanations needs to be mutually self-consistent, at least in principle, this isn't interesting. — Apustimelogist
And here we have the problem. All what we know via science can be known by any subject, not a particular one. However, 'experience(s)' have a degree of 'privateness' that has no analogy in whatever physical property we can think of.
— boundless
I'm not grasping what you see as a problem for physicalism here.
My neurons are not interconnected with your neurons, so what experience the activity of your neurons results in for you is not something neurally accessible within my brain. Thus privacy. What am I missing? — wonderer1
It seems that people are talking about many different issues.
Q1: What is the subjective experience of red? More to the point, what is something else's subjective experience of red? What is that like?
Q2 How does the experience of red (or any qualia) work? This seems to be a third person question, open to science — noAxioms
↪Patterner
To me, they would if they had exactly the same brains as us but just devoid of any "lights on" inside. My impression is that there is nothing really in biology that suggests we couldn't explain our behavior entirely in terms of the mechanics of brains, at least in principle. — Apustimelogist
I read more than that into it, since I agree with Chalmers on the impossibility of reducing it to the third, and yet I see no problem that's hard. — noAxioms
Here's a thing - what does "abduction" mean? Even the SEP article can't say. So now you claim it's just making up an hypothesis. So why not just call it "hypothesising"? Why the new name? — Banno
I read Chalmers to be questioning whether what is referenced through the first person can be reduced to the third. The issue concerns what is reduction as much and maybe more than any particular model of consciousness.
Neither side of the divide is presented as a given. The frames of reference are incongruent — Paine
Maybe someone knows other approaches? — Astorre
Basic point being that analysis is a process of critical scrutiny, so the people interested in it generally want to really get to the bottom of things — boethius
Don’t all of the people you mention share competence, and perhaps even innovation, in common?
How many people known as philosophers today would actually produce original work, do you think?
How would you go about defining what it means to be a philosopher? — Tom Storm
I wonder what the minimum standard would be for someone to be called a philosopher?
— Tom Storm
Minimum standard, by my lights in the world we live in, is being paid to do it.
But surely you see how inadequate that standard is. It's just the minimum standard in the world we happen to live in (and it's likely the person paid to do it has expertise, especially given how competitive those roles are) — Moliere
Ok, but doesn't that mean the "other America" you spoke of is 80-85% of the population? Is that what you meant? — frank
↪Joshs
80% of the US population is considered urban, but Trump got 49.1% of the popular vote.
I think the community you're referring to is educated urbanites, probably mostly white, so it's the 45% of whites who didn't vote for Trump. The group to watch is Latinos, who are now 20% of the US population, and voted for Trump in larger numbers in 2024 than previously. — frank
All of those places are failing, sorry to say. You are making my point. I live in one of them. — Fire Ologist
What are the values unique to those cities that the Dems are fostering and building up but the repubs are resisting? What values? And will promoting those values help make those cities flourish? — Fire Ologist
↪Joshs Very interesting analysis. How do you see this playing out over the next 4-8 years? — Tom Storm
North, south, east coast, west coast, city, farm, black, white, little Italy, china town, rich/poor - the American system survived a massive civil war. We survived the 1960s and the murder of so many politicians, and the 2020 elections and a maga insurrection. Nothing really new about a free nation's people at odds with their own unity — Fire Ologist
Trump’s success is because people in the cities, in the suburbs, on the farms, of every economic class, of all types of sexual preference, in every color, Hispanic, Native American, etc, etc, etc - so many agree. Basic street facts, like who is male, and who is the bully, and who needs help, and who is full of shit all of the time (Crockett) - they can’t be hidden forever. Media is losing and the Dems are losing with them. — Fire Ologist
At some stage one might grow to recognise oneself as a member of a community, to acknowledge the need in others to also overcome themselves.
And then one might begin to consider ethics. One might become an adult. — Banno
The truth of the deep leftward bias of all legacy and main stream media (ABC, NBC, CBS, NYT, LA Times, Wash. Post, CNN, all things Hollywood) is the fulcrum behind Trump’s continued success and appeal - since 2016.
Libs refuse to see it. It's a total blind spot. It's why dems will continue losing outside of the areas where all of their sheep flock — Fire Ologist
I agree that when it comes to claims of knowledge, justification is required. On the other hand I know many things with certainty that require no justification simply because they are directly known―in these cases justification just doesn't enter the picture.
— Janus
And this resembles the "A or ~A" case, where it's difficult to see it in terms of justifications. Still, I think the conclusion we ought to draw from this is that we're not quite sure what a justification is. What sorts of reasons may play a part in justification? (We noted earlier that a "good justification" is very unclear, in many cases.) If you ask me for my justification in believing "I am having thought X right now" and I reply, "I am directly observing this occurrence as we speak," have I offered a justification? Perhaps so; that's one way of understanding what reasons count as justification, though I'd probably also need to say something about the previous reliability of my direct observations. Or we might conclude that "directly observing" and "having" are two ways of saying the same thing, so no actual reason has been offered. Then, if "I am having thought X" needs a justification, we'd have to look elsewhere. — J
