Why do we always fall reflexively back to a Cartesian perspective? I agree with Taylor above that morality and the emotions associated with it are the real power source for the self. My question is: is that always going to be a Cartesian self? I think it might be that every time we go to explain the self, we'll automatically conjure some kind of independent soul. What do you think? — frank
How would you interpret the Reputation element of the diagram? Does it refer to how a person sees himself, or to how the person thinks others see him? — Gnomon
O wad some Pow'r the giftie gie us
To see oursels as ithers see us! — Burns
These ideas about social perception can be linked to the social self in the following way. The ability to infer others' mental states requires, as does all perceptual inference, a generative model. Generative models, as we know, are able to generate the sensory signals corresponding to a particular perceptual hypothesis. For social perception, this means a hypothesis about another's mental states. This implies a high degree of reciprocity. My best model of your mental states will include a model of how you model my mental states. In other words I can only understand what's in your mind if I try to understand how you are perceiving the contents of my mind. It is in this way that we perceive ourselves refracted through the minds of others. This is what the social self is all about, and these socially nested predictive perceptions are an important part of the overall experience of being a human self. — Seth, Being You, p167
It would be normal for any scientist to pick number 1. We might divide scientists by whether they believe science as it currently stands is capable of explaining it: do we just need to complete work on the models we have, or are we going to need new paradigms? — frank
↪GrahamJ How would you characterize the difference between Damasio and Seth? — frank
That is a diagram of something else, but it is good to see reputation being mentioned. (I might say more later.) Diagram: Structure of the self. — Gnomon
Fine. I wasn't presenting Damasio's work as the correct view on consciousness, I was using it as an example of a type of description. — T Clark
Russell's proposed solution is that we should say to the machines: If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively we had better be quite sure that the purpose put into the machine is the purpose which we really desire. — Norbert Wiener, 1960
My task today is to dispel some of the doominess by explaining how to retain power, forever, over entities more powerful than ourselves - [...] — Russell
Books are not always convenient; electronic devices are. — Vera Mont
Once you've downloaded something, it's available all the time. — Vera Mont
There’s also a good reason why deliberation isn’t something we use much in ML today. It’s hard to control. Deliberation may occur with minimal to no feedback from the physical body or environment.
The model-free strategy efficiently produces habitual (or automatized) behavior for oft-repeated situations. Internally, the brain learns something akin to a direct mapping from state to action: when in a particular state, just do this particular action. The model-based strategy works in reverse, by starting with a desired end-state and working out what action to take to get there.
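A rough sketch of that contrast in code may help (the states, actions, and internal transition model below are invented for illustration; this is not drawn from any quoted author's implementation). The model-free agent acts by a single cached lookup, while the model-based agent starts from a desired end-state and chains backwards through its internal model to find a plan:

    # Minimal sketch contrasting the two strategies (illustrative only; the
    # state names, transition model, and goal are assumptions for this example).
    from collections import deque

    # Model-free / habitual: a cached state -> action mapping, learned from
    # repetition. Acting is a single lookup: "when in this state, just do this."
    habit_policy = {"kitchen": "open_fridge", "desk": "open_laptop"}

    def act_model_free(state):
        return habit_policy[state]

    # Model-based / deliberative: an internal model of what each action does.
    transition_model = {
        ("desk", "stand_up"): "hallway",
        ("hallway", "walk"): "kitchen",
        ("kitchen", "open_fridge"): "holding_food",
    }

    def plan_model_based(current, goal):
        """Work backwards from the desired end-state, chaining actions until
        the current state is reached (breadth-first over the internal model)."""
        frontier = deque([(goal, [])])
        seen = {goal}
        while frontier:
            target, plan = frontier.popleft()
            if target == current:
                return plan
            for (s, action), nxt in transition_model.items():
                if nxt == target and s not in seen:
                    seen.add(s)
                    frontier.append((s, [action] + plan))
        return None

    print(act_model_free("kitchen"))                 # open_fridge
    print(plan_model_based("desk", "holding_food"))  # ['stand_up', 'walk', 'open_fridge']

The design point is the trade-off: the lookup is cheap but inflexible, while the backward search can handle novel goals at the cost of deliberation time.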
Meta-management as a term isn’t used commonly. I take that as evidence that this approach to understanding consciousness has not received the attention it deserves.
It’s now generally accepted that the brain employs something akin to the Actor/Critic reinforcement learning approach used in ML (Bennett, 2023).
The content of consciousness — whatever we happen to be consciously aware of — is a direct result of the state that is captured by the meta-management feedback loop and made available as sensory input.
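As I read this, part of the agent's own processing state is looped back and re-presented to it as though it were another sense. A toy sketch under that reading (all names and structure here are my own illustrative assumptions, not the author's code):

    # Toy sketch of a meta-management feedback loop: a slice of the agent's own
    # internal state is fed back and treated as another sensory channel.

    def first_order_step(external_input, fed_back_state):
        """Ordinary processing, except its input also includes the previous
        internal state that the meta-management loop made available."""
        new_state = {"focus": external_input, "last_self_view": fed_back_state}
        action = f"respond_to:{external_input}"
        return new_state, action

    def meta_management(internal_state):
        """Select which part of the internal state gets fed back, i.e. what the
        agent can be 'aware of' on the next step."""
        return internal_state.get("focus")

    internal_state = {}
    for observation in ["noise", "question", "silence"]:
        fed_back = meta_management(internal_state)   # contents of 'awareness'
        internal_state, action = first_order_step(observation, fed_back)
        print(observation, "->", action, "| aware of:", fed_back)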
I'll bundle this with: What might be missing between this description and true human consciousness? I can think of nothing ...
*Our emotional affect adds further information, painting our particular current emotional hue over the latent-state inference made from the raw sensory data.
I find the following laughable, so I must be misunderstanding it:
Mathematics is not more exact than historiography, but only narrower with regard to the scope of the existential foundations relevant to it.
This seems to be saying that maths is only about maths; the "existential foundations" of maths are applicable in applied maths, or physics, or engineering.
Maths has a far, far greater reach and explanatory power than 'historiography'. — Banno
Within the stance of 'science is social relations', only historians can speak; mere natural scientists with their commitment to reality are reduced to objects of historical study,... — Hilary Rose (a feminist sociologist of science), in Love, power and knowledge
I might be wrong. I find your style quite obtuse. To be candid, it seems intended to be clever rather than clear. — Banno
... the movable bodies that we see, grasp, and act on. Before infants can reach for and manipulate objects, they organize perceptual arrays into bodies that are cohesive, bounded, solid, persisting, and movable on contact. Young infants use these abstract, interconnected properties to detect the boundaries of each object in a scene, to track objects over occlusion, and to infer their interactions with other objects.
The core place system underlies our sense of where we are, where other things are, and what paths will take us from one place to another. Studies of animals and young children reveal that navigation depends, first and foremost, on representations of abstract geometric properties of the ground surface over which we travel: the distances and directions of its boundaries, ridges, cliffs, and crevices.
Research on human infants, children, adults in diverse cultures, and nonhuman animals all converges on evidence for an early-emerging ability to represent and combine numerical magnitudes with approximate, ratio-limited precision. This ability depends on a core system with most of the properties of the core object and place systems: it is present in newborn infants and functions throughout life, and it is ancient, unitary, and limited in the types of information it provides.
However, if a "mathematical antirealist" believes that math is invented and these concepts exist only in human minds, then one must accept that the conception of "2" varies depending on the circumstance, or use. This is very evident from the multitude of different number systems. So for example, when a person uses "2" it might refer to a group of two things, or it might refer to the second in a series, or order. These are two very distinct conceptions referred to by "2". So, since "2" has at least two referents, it cannot refer to a single object. We could however propose a third referent, an object named "2", but what would be the point in that? The object would be something completely distinct from normal usage of the symbol. — Metaphysician Undercover
Earlier you said (for example): For a mathematical antirealist, does any of this constitute hypocrisy?
I can't see the relevance. Your game clearly involves real objects, pebbles, or in the case of your presentation, the letters. Would the antirealist insist that these are not real objects? — Metaphysician Undercover
and: In set theory it is stated that the elements of a set are objects, and "mathematical realism" is concerned with whether or not the things said to be "objects" in set theory are, or are not, objects.
However, it's hypocrisy to say "I'm a mathematical antirealist" and then go ahead and use set theory.
BW
BBWW
BBWBWW
BW BBWW
BBWBBWWW
BW BBWW BBWBBWWW
BBWBBWWBBWBBWWWW
BW, BBWW, BBWBBWWW, BBWBBWWBBWBBWWWW
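If I am reading the game correctly, each new string is built by writing B, then every string constructed so far in order, then W, which is the von Neumann ordinal construction with B playing the role of an opening brace and W a closing one (0 = {}, 1 = {0}, 2 = {0, 1}, ...). A minimal sketch under that assumption, which reproduces the four strings listed above:

    # One reading of the pebble game: the next "numeral" is B, followed by
    # every numeral built so far, followed by W.

    def bw_numerals(n):
        numerals = []
        for _ in range(n):
            numerals.append("B" + "".join(numerals) + "W")
        return numerals

    print(", ".join(bw_numerals(4)))
    # BW, BBWW, BBWBBWWW, BBWBBWWBBWBBWWWW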
Chapter 2 focuses on studies of infants' knowledge of objects: the movable bodies that we see, grasp, and act on. Before infants can reach for and manipulate objects, they organize perceptual arrays into bodies that are cohesive, bounded, solid, persisting, and movable on contact. Young infants use these abstract, interconnected properties to detect the boundaries of each object in a scene, to track objects over occlusion, and to infer their interactions with other objects. Nevertheless, there are striking limits to young infants' object representations: Infants have little ability to track hidden objects by their shapes, colors, or textures, although they do detect and remember these properties. Above all, research reveals that infants' early-emerging representations of objects are the product of a single cognitive system that operates as an integrated whole. This system emerges early in development, it remains present and functional in children and adults, and it guides infants' learning. The system combines some, but not all, of the properties of mature perceptual systems and belief systems, and it therefore appears to occupy a middle ground between our immediate perceptual experiences on the one hand and our explicit reasoning on the other. Research probing infants' expectations about objects suggests hypotheses concerning the mechanisms by which a system of knowledge might emerge, function, and guide infants' learning about the kinds of objects their environment provides and the kinds of events that occur when different objects interact. Research described in this chapter also reveals that infants' knowledge of objects is at least partly innate. It suggests how innate knowledge of objects might arise prior to birth, preparing infants for their first perceptual encounters with movable, solid, inanimate bodies.
I have seen some ultrafinitists go so far as to challenge the existence of 2^100 as a natural number, in the sense of there being a series of "points" of that length. There is the obvious "draw the line" objection, asking where in 2^1, 2^2, 2^3, ..., 2^100 do we stop having "Platonistic reality"? Here this ... is totally innocent, in that it can be easily be replaced by 100 items (names) separated by commas. I raised just this objection with the (extreme) ultrafinitist Yessenin-Volpin during a lecture of his. He asked me to be more specific. I then proceeded to start with 2^1 and asked him whether this is "real" or something to that effect. He virtually immediately said yes. Then I asked about 2^2, and he again said yes, but with a perceptible delay. Then 2^3, and yes, but with more delay. This continued for a couple of more times, till it was obvious how he was handling this objection. Sure, he was prepared to always answer yes, but he was going to take 2^100 times as long to answer yes to 2^100 than he would to answering 2^1. There is no way that I could get very far with this. — Harvey Friedman, Philosophical Problems in Logic
According to Chalmers, one can coherently conceive of an entire zombie world, a world physically indistinguishable from this one but entirely lacking conscious experience. Since such a world is conceivable, Chalmers claims, it is metaphysically possible, which is all the argument requires. Chalmers writes: "Zombies are probably not naturally possible: they probably cannot exist in our world, with its laws of nature."
But I wasn't sure how his preferred 'weak emergence' would be real phenomenality as he indicates, as he seemed to switch to talking about levels of explanation. — Danno
I’m leaning toward panpsychism. But even if it’s not that, something else is happening. And without that something else, why would a thing that looks like us, and has all the physical properties we have, act as though it has that something else? Why would it say the things it would have to say to make us think it was conscious if it was not? — Patterner
One is a case of weak emergence, or simply different levels of description, and the other is a case, if of emergence, of strong emergence, which is much harder to justify. — petrichor
I have often gotten the impression, which is maybe mistaken, that many in the scientific community basically take this position, that consciousness is real, that everything that happens in the brain is fully accounted for by low-level pre-conscious physical causes (and therefore epiphenomenalism must be true), and yet that consciousness evolved by natural selection. This has always seemed to me to be a problematic combination of incompatible beliefs. It makes me suspect that people haven't thought it all through sufficiently. But maybe I am missing something. Maybe, for one thing, they just don't even have in mind the same thing I do when talking about consciousness. — petrichor
It means retrieving the information from memory. Mind you, bodily functions such as hunger are not memory-based, nor is bowel movement (I will explain it for those uninitiated, upon request). — L'éléphant
Thanks. Perhaps I'm not fully understanding your point, but does this actually reduce the number of computations required or just the length of the algorithm needed to describe the transition from T1 to Tn? — Count Timothy von Icarus
Even if we model the demon as a Markov chain, it is still passing through these many states. And here is the crux of my argument: a full description of each of the states the demon passes through to evolve the system from time T to time T' would require more information than is used to describe either T or T' alone. If you say, "not true, T3 tells you all about T4 and T5," my response would be, "if that is the case, show me T5 without passing through any more states." If T is truly equivalent to T', it shouldn't be discernible from it. If it is discernible, then difference exists (Leibniz's Law), and so too does new information. — Count Timothy von Icarus
That is, we cannot ignore the process of evolution, as is often done. Computation creates discernible differences across a time dimension, such that if we had a second Laplace's demon producing outputs about every state the first demon passes through, the output would be many times larger than the first's when it simply describes T' based on T. — Count Timothy von Icarus
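A toy illustration of the point (supplied by me, not the poster): evolve any deterministic rule from T to T' and have a second process log every intermediate state; the log is necessarily longer than the descriptions of the two endpoints alone.

    # The update rule and the state are invented for the example; nothing here
    # is specific to Laplace's demon, only to the "second demon logs everything"
    # comparison made above.

    def step(state):
        # simple deterministic rule: rotate the bit-string and flip the moved bit
        return state[1:] + ("0" if state[0] == "1" else "1")

    T = "101101"
    trajectory = [T]
    for _ in range(20):                 # evolve T -> T' over 20 time steps
        trajectory.append(step(trajectory[-1]))
    T_prime = trajectory[-1]

    endpoints_description = T + T_prime   # description of T and T' only
    full_log = "".join(trajectory)        # every state passed through

    print(len(endpoints_description))     # 12 characters
    print(len(full_log))                  # 126 characters: 21 states x 6 bits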
1. Why are physical processes ever accompanied by experience?
[...]
The answer to the first question is survival advantage (evolutionary principles). — Nickolasgaspar
Pleasure isn't such a simple concept from an enactivist perspective. What constitutes a reinforcement is not determinable independently of the normative sense-making goals of the organism.
[...]
https://arxiv.org/pdf/1810.04535.pdf — Joshs
In other words, the hard problem seems to depend for its very formulation on the philosophical position known as transcendental or metaphysical realism. — Joshs
Then I recommend The Embodied Mind by Varela, Thompson and Rosch and Mind in Life: Biology, Phenomenology and the Sciences of Mind, by Evan Thompson. — Joshs
Could you say more about why you distinguish emotions from the other aspects of experience?
Could you give some examples of thoughts with no emotional content? — Daemon
Reinforcement learning (RL) is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward. — Wikipedia
The purpose of reinforcement learning is for the agent to learn an optimal, or nearly-optimal, policy that maximizes the "reward function" or other user-provided reinforcement signal that accumulates from the immediate rewards. This is similar to processes that appear to occur in animal psychology. For example, biological brains are hardwired to interpret signals such as pain and hunger as negative reinforcements, and interpret pleasure and food intake as positive reinforcements. — Wikipedia
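Tying this back to the Actor/Critic remark above: here is a minimal tabular actor-critic sketch of the Wikipedia description, in which the critic learns state values from reward and the actor shifts its action preferences by the critic's prediction error. The corridor environment, the learning rates, and the reward of +1 are invented for illustration.

    import math
    import random

    # Minimal tabular actor-critic sketch: a 5-cell corridor where moving right
    # eventually reaches a reward of +1. The critic estimates state values; the
    # actor nudges action preferences by the temporal-difference (TD) error.

    states, actions = range(5), ["left", "right"]
    values = {s: 0.0 for s in states}                        # critic
    prefs = {(s, a): 0.0 for s in states for a in actions}   # actor

    def choose(s):
        weights = [math.exp(prefs[(s, a)]) for a in actions]  # softmax policy
        return random.choices(actions, weights=weights)[0]

    def env_step(s, a):
        nxt = min(s + 1, 4) if a == "right" else max(s - 1, 0)
        reward = 1.0 if nxt == 4 else 0.0                     # "food" at the end
        return nxt, reward, nxt == 4

    alpha, gamma = 0.1, 0.9
    for episode in range(500):
        s, done = 0, False
        while not done:
            a = choose(s)
            nxt, r, done = env_step(s, a)
            td_error = r + (0.0 if done else gamma * values[nxt]) - values[s]
            values[s] += alpha * td_error            # critic update
            prefs[(s, a)] += alpha * td_error        # actor update
            s = nxt

    print({s: round(values[s], 2) for s in states})
    print("preferred action in state 0:", max(actions, key=lambda a: prefs[(0, a)]))

In the spirit of the passage above, the positive reward stands in for something like food intake, and a negative reward could stand in for pain; cumulative reward is what the learned policy ends up maximizing.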