Well, if we need inspiration…
She shuffled some books on her desk and found what she was looking for: a small rectangular package. The label on the front of the package was a gold-on-orange holographic image. From one angle it showed a muscular, bearded man in a toga, rolling a stone up a steep hill. Depending on how you tilted the package, you could make the stone roll up or down the hill, in endless repetition. But, if you tilted it far enough, a totally new image would appear: the face of a man, eyes comically red. Many customers didn’t know it, but this was the face of the French existentialist, Albert Camus. Above his face popped out the words:
“Absurdly Good Weed(™)”
Then, below the face:
“One must imagine Sisyphus stoned.”
She opened the package and pulled out a joint.
And what sort of story would a disinfo merchant fall for?
Hilde looked back down at the books cluttering her desk. Her eyes locked on Plato’s dialogue on the immortality of the soul, the Phaedo.
Still, Chris wasn’t naive about what most people would say about their work. Purveyors of misinformation. Disinfo merchants. Propagandists. Liars. Trolls.
Or, as one journalist for the Des Moines Register had put it, Nigel was “a rotund British cancer on the American body politic, not talented enough to metastasize, but hardly benign.”
“Fucking self-indulgent purple prose — who does this asshole think he is writing for?” Nigel had fumed, showing the article to Chris. “Not talented enough? I turn down bigger jobs all the time. I keep a low profile because I’m not a moron like this absolute pleb.”
This had been during the phase when Nigel was using “pleb” as his go-to insult. The insult held no classist connotations when wielded by Nigel. He frequently painted billionaires and officials in high office with the label.
“Pleb,” for Nigel, was short for “plebeian of the soul,” a term he had adopted after being turned on to the works of the ultra-conservative, caste-system-advocating esotericist Julius Evola. He had come across the fascism-adjacent wizard, or sorcerer, or whatever-the-fuck people who do “magic” call themselves, via some godforsaken VR community that Nigel had been frequenting back then.
Evola had convinced Nigel that he was an “aristocrat of the soul.” From that it followed that his enemies were “the plebs,” the low-class mob hoping to drag others down to their level of “spiritual mediocrity.” This was worse than Economic Marxism — worse than Cultural Marxism even — this was… Spiritual Marxism.
Nigel had been particularly insufferable during this period, frequently accusing Chris of “Spiritual Marxism” and its attendant ills, whenever Chris had pushed back on his increasingly unhinged ideas. For Chris, the turn had been evidence that even his boss, so astute in fathoming the psychology of the masses, was not immune to the lure of intrigue, controversy, and self-flattery.
It had also been a period of significant “biohacking,” Nigel’s preferred term. Biohacking was “the rational and intentional alteration of one’s own neurochemistry to help maximize productivity, achieve one’s goals, and fully realize one’s potential.” It was, “better living through science,” “the use of entheogens to achieve a fit-to-purpose physicochemistry conducive to the demands of the modern workplace.”
Biohacking, per Nigel, was a premier example of “the application of Logos to Psyche,” the “triumph of Gnosis over Eros.”
Chris had secretly thrown out the man’s cocaine stash, a key “biohacking reagent,” after he had, only half-jokingly, referred to it as “Aristocrat’s Powder.”
In retrospect, this inflation of the man’s eccentricities had foreshadowed his downfall, the end of the first company, and his fourteen-month, all-expenses-paid “vacation” to the Yazoo City Federal Corrections Complex. He had been a bubble ready to pop, destined for the “Zoo.”
It makes sense to me. What does not make sense is the idea that faith is preferable to knowledge, or that knowledge cannot replace faith. This is the anti-mystical idea that for me undermines the credibility of the church's dogma and alienates modern thinkers.
Learning anything requires a certain degree of faith, but the idea of learning something that must always remain merely a faith, and that is merely a faith even to those who teach it, will be unappealing to a rational person.
Love never ends; as for prophecies, they will pass away; as for tongues, they will cease; as for knowledge, it will pass away.
For our knowledge is imperfect and our prophecy is imperfect;
but when the perfect comes, the imperfect will pass away.
When I was a child, I spoke like a child, I thought like a child, I reasoned like a child; when I became a man, I gave up childish ways.
For now, we see in a mirror dimly, but then face to face. Now I know in part; then I shall understand fully, even as I have been fully understood. So faith, hope, love abide, these three; but the greatest of these is love.
6.13 Reason is the soul’s contemplative gaze. But it does not follow that everyone who contemplates sees. Rightful and perfect contemplation, from which vision follows, is called virtue. For virtue is rightful and perfect reason. But even though the soul may have healthy eyes, the contemplative gaze itself cannot turn toward the light unless these three [virtues] have become permanent: faith by which it believes the reality which it gazes upon can, when seen, make us blessedly happy; hope by which it trusts that it will see if only it contemplates intently; love, by which it yearns to see and to enjoy. Then the vision of God flows from the contemplative gaze. This vision is the true goal of our contemplation, not because the contemplative gaze no longer exists, but because it has nothing further to strive toward...
7.14 Therefore, let us reflect on whether these three are still necessary once the soul succeeds in seeing (that is, knowing) God. Why should faith be needed since now it sees? Why hope, since it already grasps its hope? But as for love, not only will nothing be taken away, but rather much will be added. For when the soul sees that unique and true Beauty, it will love all the more deeply. But unless it fixes its eye upon it with surpassing love and never withdraws its gaze, it will not be able to continue in that most blessed vision.
The Darkness Before the Light is an epic fantasy novel (think Game of Thrones or The Darkness That Comes Before). The setting is a mix of Reformation Europe during the Wars of Religion, which will allow us to explore theological intricacies, and the early Italian Renaissance, an interesting period for the evolution of warfare, with the advent of cannons and the rise of large mercenary companies. A main conceit of the novel is that its sorcery is based on the esoteric traditions of these and earlier periods.
Perhaps it is strange to say that men and women who willingly faced death were cowards, but perhaps someone like Nietzsche would say that this is proof of their rejection of life.
Why was it acceptable for God to wage war against the wicked in heaven and somehow impermissible for his faithful son and servants here on earth? Is it a double standard or is it something deeper? Maybe Christ didn't have a dog in the fights that happen down here on earth, but what are we to do? Should we fight when faced with an evil enemy like Michael, or should we do as Christ did and lay down our lives for the ones we love, because we are taught by him to love our enemies?
Too late have I loved you, O Beauty so ancient, O Beauty so new.
Too late have I loved you! You were within me but I was outside myself, and there I sought you!
In my weakness, I ran after the beauty of the things you have made.
You were with me, and I was not with you.
The things you have made kept me from you – the things which would have no being unless they existed in you!
You have called, you have cried, and you have pierced my deafness.
You have radiated forth, you have shined out brightly, and you have dispelled my blindness.
You have sent forth your fragrance, and I have breathed it in, and I long for you.
I have tasted you, and I hunger and thirst for you.
You have touched me, and I ardently desire your peace...
You have made us for yourself, O Lord, and our heart is restless until it rests in you.
-Saint Augustine of Hippo
If you opt not to believe, then much of the teaching of Jesus and his Church is likely not going to make a whole lot of sense to you. If you opt to believe, it isn't that everything will fall into place and make perfect sense.

House of Cards?
The most influential critiques of ontological emergence theories target these notions of downward causality and the role that the emergent whole plays with respect to its parts. To the extent that the emergence of a supposedly novel higher-level phenomenon is thought to exert causal influence on the component processes that gave rise to it, we might worry that we risk double-counting the same causal influence, or even falling into a vicious regress error — with properties of parts explaining properties of wholes explaining properties of parts. Probably the most devastating critique of the emergentist enterprise explores these logical problems. This critique was provided by the contemporary American philosopher Jaegwon Kim in a series of articles and monographs in the 1980s and 1990s, and is often considered to be a refutation of ontological (or strong) emergence theories in general, that is, theories that argue that the causal properties of higher-order phenomena cannot be attributed to lower-level components and their interactions. However, as Kim himself points out, it is rather only a challenge to emergence theories that are based on the particular metaphysical assumptions of substance metaphysics (roughly, that the properties of things inhere in their material constitution), and as such it forces us to find another footing for a coherent conception of emergence.
The critique is subtle and complicated, and I would agree that it is devastating for the conception of emergence that it targets. It can be simplified and boiled down to something like this: Assuming that we live in a world without magic (i.e., the causal closure principle, discussed in chapter 1), and that all composite entities like organisms are made of simpler components without residue, down to some ultimate elementary particles, and assuming that physical interactions ultimately require that these constituents and their causal powers (i.e., physical properties) are the necessary substrate for any physical interaction, then whatever causal powers we ascribe to higher-order composite entities must ultimately be realized by these most basic physical interactions. If this is true, then to claim that the cause of some state or event arises at an emergent higher-order level is redundant. If all higher-order causal interactions are between objects constituted by relationships among these ultimate building blocks of matter, then assigning causal power to various higher-order relations is to do redundant bookkeeping. It’s all just quarks and gluons — or pick your favorite ultimate smallest unit — and everything else is a gloss or descriptive simplification of what goes on at that level. As Jerry Fodor describes it, Kim’s challenge to emergentists is: “why is there anything except physics?” 16
The concept at the center of this critique has been a core issue for emergentism since the British emergentists’ first efforts to precisely articulate it. This is the concept of supervenience...
Effectively, Kim’s critique utilizes one of the principal guidelines for mereological analysis: defining parts and wholes in such a way as to exclude the possibility of double-counting. Carefully mapping all causal powers to distinctive non-overlapping parts of things leaves no room to find them uniquely emergent in aggregates of these parts, no matter how they are organized...
Terrence Deacon - Incomplete Nature
This is not meant to suggest that we should appeal to quantum strangeness in order to explain emergent properties, nor would I suggest that we draw quantum implications for processes at human scales. However, it does reflect a problem with simple mereological accounts of matter and causality that is relevant to the problem of emergence.
A straightforward framing of this challenge to a mereological conception of emergence is provided by the cognitive scientist and philosopher Mark Bickhard. His response to this critique of emergence is that the substance metaphysics assumption requires that at base, “particles participate in organization, but do not themselves have organization.” But, he argues, point particles without organization do not exist (and in any case would lead to other absurd consequences) because real particles are the somewhat indeterminate loci of inherently oscillatory quantum fields. These are irreducibly processlike and thus are by definition organized. But if process organization is the irreducible source of the causal properties at this level, then it “cannot be delegitimated as a potential locus of causal power without eliminating causality from the world.” 20 It follows that if the organization of a process is the fundamental source of its causal power, then fundamental reorganizations of process, at whatever level this occurs, should be associated with a reorganization of causal power as well.
Terrence Deacon - Incomplete Nature
What about redescribing the situation as one ball in two locations?
I do not see where you get the idea that what you would call "a discernment", could be anything other than the product of an act of discernment, which is the act of a subject. Because of this, I do not see how you propose the possibility of a discernment which is not a subjective discernment. Each and every discernment is produced by a subject, therefore all possible discernments (by induction only) are subjective discernments.
You might propose a form of discernment which is not subjective, but this would violate inductive reasoning, rendering it as a useless tool within your argument, so that your whole argument which is based on induction would be undermined, by allowing that a very strong inductive principle could be violated.
Leibniz' law does not leave open the possibility of differences which make no difference. Instead, you ought to recognize that what the law intends, is that there is no such thing as a difference which makes no difference, this itself would be contradictory. If an observer notices something as a difference, then by that very fact that the difference has been noticed as a difference, the difference has already, necessarily, made a difference to that observer.
The law does not speak of possibilities, and I think that is where you misrepresent it. It is based in an impossibility, which is an exclusion of possibility. This is the impossibility that an entity which could only be identified as itself, could also be identified as something else.
I prefer more descriptive terms like e.g. immaterial or disembodied or nonphysical or spiritual or magical ... to the umbrella term "supernatural".
"Everything" which causes changes is material, ergo "energy" is material, no?
Axioms are considered primitive assumptions beyond questions of truth or falsity. The remainder of a system is then developed formally from these primitives. In contrast, Spencer-Brown’s axioms seem to be indisputable conclusions about the deepest archetypal nature of reality. They formally express the little we can say about something and nothing...
Once bitten, twice shy—mathematicians became much more concerned with abstraction and formality. They separated what they knew in their mathematical world from what scientists asserted about the physical world. Mathematics was supposed to be the science which dealt with the formal rules for manipulating meaningless signs. Spencer-Brown’s attempt to develop...
Meanwhile, if the fear of falling into error introduces an element of distrust into science, which without any scruples of that sort goes to work and actually does know, it is not easy to understand why, conversely, a distrust should not be placed in this very distrust, and why we should not take care lest the fear of error is not just the initial error. As a matter of fact, this fear presupposes something, indeed a great deal, as truth, and supports its scruples and consequences on what should itself be examined beforehand to see whether it is truth. It starts with ideas of knowledge as an instrument, and as a medium; and presupposes a distinction of ourselves from this knowledge. More especially it takes for granted that the Absolute stands on one side, and that knowledge on the other side, by itself and cut off from the Absolute, is still something real; in other words, that knowledge, which, by being outside the Absolute, is certainly also outside truth, is nevertheless true — a position which, while calling itself fear of error, makes itself known rather as fear of the truth.
When [Norbert] Wiener brought the feedback idea to the foreground, not only did it become immediately recognized as a fundamental concept, but it also raised major philosophical questions as to the validity of the cause-effect doctrine.…the nature of feedback is that it gives a mechanism, which is independent of particular properties, of components, for constituting a stable unit. And from this mechanism, the appearance of stability gives a rationale to the observed purposive behavior of systems and a possibility of understanding teleology.…Since Wiener, the analysis of various types of systems has borne this same generalization: Whenever a whole is identified, its interactions turn out to be circularly interconnected, and cannot be taken as linear cause-effect relationships if one is not to lose the system’s characteristics...
Quantum mechanics is a scientific theory. It describes aspects of our world. Our world includes consciousness. That doesn't mean there is a specific, direct connection between QM and consciousness.
This confuses me. What does it mean that communication takes place instantaneously but no information can be transmitted? I would have thought that "communication" means the transfer of information. I have to do more reading.
Yes, this problem seems to me a special case of the general problem about whether there is a reality that exists independently of observers.
This seems to me embedded in our language and thought, except possibly in sub-atomic physics, and that's a special case because the act of observation directly affects what happens next.
But the idea of an unobservable reality seems absurd or pointless.
I'm pro-choice and find that in the inviolability of our physical integrity. I choose what goes in and comes out of my body.
Charles Darwin conceived of evolution by natural selection without knowing that genes exist. Now mainstream evolutionary theory has come to focus almost exclusively on genetic inheritance and processes that change gene frequencies.
Yet new data pouring out of adjacent fields are starting to undermine this narrow stance. An alternative vision of evolution is beginning to crystallize, in which the processes by which organisms grow and develop are recognized as causes of evolution.
Some of us first met to discuss these advances six years ago. In the time since, as members of an interdisciplinary team, we have worked intensively to develop a broader framework, termed the extended evolutionary synthesis1 (EES), and to flesh out its structure, assumptions and predictions. In essence, this synthesis maintains that important drivers of evolution, ones that cannot be reduced to genes, must be woven into the very fabric of evolutionary theory.
We believe that the EES will shed new light on how evolution works. We hold that organisms are constructed in development, not simply ‘programmed’ to develop by genes. Living things do not evolve to fit into pre-existing environments, but co-construct and coevolve with their environments, in the process changing the structure of ecosystems...
The core of current evolutionary theory was forged in the 1930s and 1940s. It combined natural selection, genetics and other fields into a consensus about how evolution occurs. This ‘modern synthesis’ allowed the evolutionary process to be described mathematically as frequencies of genetic variants in a population change over time — as, for instance, in the spread of genetic resistance to the myxoma virus in rabbits.
In the decades since, evolutionary biology has incorporated developments consistent with the tenets of the modern synthesis. One such is ‘neutral theory’, which emphasizes random events in evolution. However, standard evolutionary theory (SET) largely retains the same assumptions as the original modern synthesis, which continues to channel how people think about evolution.
The story that SET tells is simple: new variation arises through random genetic mutation; inheritance occurs through DNA; and natural selection is the sole cause of adaptation, the process by which organisms become well-suited to their environments. In this view, the complexity of biological development — the changes that occur as an organism grows and ages — is of secondary, even minor, importance.
In our view, this ‘gene-centric’ focus fails to capture the full gamut of processes that direct evolution. Missing pieces include how physical development influences the generation of variation (developmental bias); how the environment directly shapes organisms’ traits (plasticity); how organisms modify environments (niche construction); and how organisms transmit more than genes across generations (extra-genetic inheritance). For SET, these phenomena are just outcomes of evolution. For the EES, they are also causes.
Valuable insight into the causes of adaptation and the appearance of new traits comes from the field of evolutionary developmental biology (‘evo-devo’). Some of its experimental findings are proving tricky to assimilate into SET. Particularly thorny is the observation that much variation is not random because developmental processes generate certain forms more readily than others...
SET explains such parallels as convergent evolution: similar environmental conditions select for random genetic variation with equivalent results. This account requires extraordinary coincidence to explain the multiple parallel forms that evolved independently in each lake. A more succinct hypothesis is that developmental bias and natural selection work together4,5. Rather than selection being free to traverse across any physical possibility, it is guided along specific routes opened up by the processes of development5,6...
Another kind of developmental bias occurs when individuals respond to their environment by changing their form — a phenomenon called plasticity. For instance, leaf shape changes with soil water and chemistry. SET views this plasticity as merely fine-tuning, or even noise. The EES sees it as a plausible first step in adaptive evolution. The key finding here is that plasticity not only allows organisms to cope in new environmental conditions but to generate traits that are well-suited to them. If selection preserves genetic variants that respond effectively when conditions change, then adaptation largely occurs by accumulation of genetic variations that stabilize a trait after its first appearance5,6. In other words, often it is the trait that comes first; genes that cement it follow, sometimes several generations later5.
Studies of fish, birds, amphibians and insects suggest that adaptations that were, initially, environmentally induced may promote colonization of new environments and facilitate speciation5,6. Some of the best-studied examples of this are in fishes, such as sticklebacks and Arctic char. Differences in the diets and conditions of fish living at the bottom and in open water have induced distinct body forms, which seem to be evolving reproductive isolation, a stage in forming new species. The number of species in a lineage does not depend solely on how random genetic variation is winnowed through different environmental sieves. It also hangs on developmental properties that contribute to the lineage’s ‘evolvability’.
In essence, SET treats the environment as a ‘background condition’, which may trigger or modify selection, but is not itself part of the evolutionary process. It does not differentiate between how termites become adapted to mounds that they construct and, say, how organisms adapt to volcanic eruptions. We view these cases as fundamentally different7...
Finally, diluting what Laland and colleagues deride as a ‘gene-centric’ view would de-emphasize the most powerfully predictive, broadly applicable and empirically validated component of evolutionary theory. Changes in the hereditary material are an essential part of adaptation and speciation. The precise genetic basis for countless adaptations has been documented in detail, ranging from antibiotic resistance in bacteria to camouflage coloration in deer mice, to lactose tolerance in humans.
I think these days it is fairly widely understood, amongst those who have looked into the subject beyond high school biology, that there are selection effects that take place through changes in DNA outside the boundaries of genes. (Gene expression promoting regions of DNA, which are not themselves part of a gene, for example.)
So there is a sense in which definitions of evolution in terms of change in allele frequency over time is simplistic. However, perhaps when looked at on geological time scales, changes in allele frequency over time are such a dominant factor that such simplistic definitions are pragmatic for introducing people to the subject?
Richard Dawkins likes to couch this discussion in terms of replicators and vehicles. Replicators are any entities of which copies are made; selection will favor replicators with the highest replication rate. Vehicles are survival machines: organisms are vehicles for replicators and selection will favor vehicles that are better at propagating the replicators that reside within them. There is a hierarchy of both replicators and vehicles. The key issues are that 1) the "unit" of selection is one that is potentially immortal: organisms die, but their genes could be passed on indefinitely. The heritability of a gene is greater than that of a chromosome is > that of a cell > organism > and so on. But, because of linkage we should not think of individual genes as the units; it is the stretch of chromosome upon which selection can select, given certain rates of recombination. Issue 2) is that selection acts on phenotypes that are the product of the replicators, not on the replicators themselves, but the vehicles have lower heritability and immortality than replicators. What then is the unit of selection? All of them, just of different strengths and effects at different levels.
I don't see any real problem. Panpsychism seems like nothing more than an unfalsifiable hypothesis that has no significant explanatory value, and Ockham's razor seems like sufficient justification for dismissing panpsychism. From my perspective panpsychism doesn't seem to present any more challenge than solipsism.
This seems to me more a matter of unrealistic expectations on the part of critics of physicalism than a problem for physicalism itself. Brains are enormously complex, and I say this as an electrical engineer who routinely deals with highly complex systems. Yes, there is a huge way to go in developing an understanding of how brains instantiate minds, and no guarantee that human minds are up to the task of developing something approaching an ultimate explanatory theory. However, substantial explanatory progress has been made over my lifetime, and that progress is ongoing. I don't see how anything similar can be claimed for panpsychism.
In any case, I'm interested in hearing more about what you see as "massive problems" for physicalism.
But the most influential objection to supervenience physicalism (and to modal formulations generally) is what might be called the sufficiency problem. This alleges that, while (1) articulates a necessary condition for physicalism, it does not provide a sufficient condition. The underlying rationale is that, intuitively, one thing can supervene on another and yet be of a completely different nature. To use Fine’s famous (1994) example, consider the difference between Socrates and his singleton set, the set that contains only Socrates as a member. The facts about the set supervene on the facts about Socrates; any world that is like ours in respect of the existence of Socrates is like ours in respect of the existence of his singleton set. And yet the set is quite different from Socrates. This in turn raises the possibility that something might be of a completely different nature from the physical and nevertheless supervene on it.
One may bring out this objection further by considering positions in philosophy which entail supervenience and yet deny physicalism. A good example is necessitation dualism, which is an approach that weaves together elements of both physicalism and its traditional rival, dualism. On the one hand, the necessitation dualist wants to say that mental facts and physical facts are metaphysically distinct—just as a standard dualist does. On the other hand, the necessitation dualist wants to agree with the physicalist that mental facts are necessitated by, and supervene on, the physical facts. If this sort of position is coherent, (1) does not articulate a sufficient condition for physicalism. For if necessitation dualism is true, any physical duplicate of the actual world is a duplicate simpliciter. And yet, if dualism of any sort is true, including necessitation dualism, physicalism is false.
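The condition labeled (1) is referred to but not quoted in this excerpt. As a point of reference, supervenience physicalism is standardly given a modal formulation along roughly the following lines (a common textbook rendering, not necessarily the exact wording the author has in mind):

```latex
% A common modal (supervenience) formulation of physicalism,
% of the kind condition (1) is standardly taken to be.
% Here @ denotes the actual world.
\[
\text{(1)}\quad
\forall w \,\big(\, w \text{ is a physical duplicate of } @
\;\rightarrow\;
w \text{ is a duplicate of } @ \text{ \emph{simpliciter}} \,\big)
\]
```

On this reading, the sufficiency objection is that the conditional can hold even if, as the necessitation dualist claims, some facts obtaining in a world are metaphysically distinct in nature from the physical facts that necessitate them.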
A final topic that I will consider is that of supervenience. The intuition of supervenience is that higher-level phenomena cannot differ unless their supporting lower-level phenomena also differ. There may be something correct in this intuition, but a process metaphysics puts at least standard ways of construing supervenience into question too.
Most commonly, a supervenience base — that upon which some higher-level phenomena are supposed to be supervenient — is defined in terms of the particles and their properties, and perhaps the relations among them, that are the mereological constituents of the supervenient system [Kim, 1991; 1998]. Within a particle framework, and so long as the canonical examples considered are energy well stabilities, this might appear to make sense.
But at least three considerations overturn such an approach. First, local versions of supervenience cannot handle relational phenomena — e.g., the longest pencil in the box may lose the status of being longest pencil even though nothing about the pencil itself changes. Just put a longer pencil into the box. Being the longest pencil in the box is not often of crucial importance, but other relational phenomena are. Being in a far from equilibrium relation to the environment, for example, is a relational kind of property that similarly cannot be construed as being locally supervenient. And it is a property of fundamental importance to much of our worlds — including, not insignificantly, ourselves.
A second consideration is that far from equilibrium process organizations, such as a candle flame, require ongoing exchanges with that environment in order to maintain their far from equilibrium conditions. There is no fixed set of particles, even within a nominally particle view, that mereologically constitutes the flame.
A third, related consideration is the point made above about boundaries. Issues of boundary are not clear with respect to processes: not all processes have clear boundaries, boundaries can be of several differentiable sorts, and, if a process does have two or more of them, they are not necessarily co-extensive. But, if boundaries are not clear, then what could constitute a supervenience base is also not clear.
Supervenience is an example of a contemporary notion that has been rendered in particle terms, and that cannot be simply translated into a process metaphysical framework [Bickhard, 2000; 2004]. More generally, a process framework puts many classical metaphysical assumptions into question.
Mark Bickhard - Systems and Process Metaphysics - The North Holland Handbook of the Philosophy of Science: The Philosophy of Complex Systems
A third problem, which we mentioned briefly above, is the problem of abstracta (Rabin 2020). This concerns the status within physicalism of abstract objects, i.e., entities apparently not located in space and time, such as numbers, properties and relations, or propositions.
To see the problem, suppose that abstract objects, if they exist, exist necessarily, i.e., in all possible worlds. If physicalism is true, then the facts about such objects must either be physical facts, or else bear a particular relation (grounding, realisation) to the physical. But on the face of it, that is not so. Can one really say that 5+7=12, for example, is realised in, or holds in virtue of, some arrangement of atoms and void? Or can one say that it itself is a physical fact or a fundamental physical fact? If not, physicalism is false: the property of being such that 5+7=12 obtains in the actual world but is neither identical to, nor grounded in or realized by, any physical property. (Sometimes the problem of abstracta is formulated as concerning, not abstract objects such as numbers or properties, but the grounding or realization facts themselves; see, e.g., Dasgupta 2015. We will set this aside here.)
There are a number of responses to this problem in the literature; for an overview, see Rabin 2020, see also Dasgupta 2015 and Bennett 2017; for more general discussion of physicalism and abstracta, see Montero 2017, Schneider 2017, and Witmer 2017.
One response points out that, while the problem of abstracta confronts many different versions of physicalism, it does not arise for supervenience physicalism. After all, since numbers exist in all possible worlds, facts about them trivially supervene on the physical; any world identical to the actual world in physical respects will be identical to it in respect of whether 5+7=12, because any world at all is identical to the actual world in that respect! But the difficulty here is that supervenience physicalism seems, as we saw above, too weak anyway. Indeed, one might think that the example of abstracta is simply a different way to bring out that it is too weak.
Another option is to adopt a version of nominalism, and deny the existence of abstracta entirely. The problem with this option is that defending nominalism about mathematics is no easy matter, and in any case nominalism and physicalism are normally thought of as distinct commitments.
A third view, which seems more attractive than either of the two mentioned so far, is to expand the notion of a physical property that is in play in formulations of physicalism. For example, one might treat the properties of abstract objects as topic-neutral in something like the sense discussed in connection with Smart and reductionism above (see section 3.1). Topic-neutral properties have the interesting feature that, while they themselves are not physical, they are capable of being instantiated in what is intuitively a completely physical world, or indeed in what is intuitively a completely spiritual world or a world made entirely of water. If so, it becomes possible to understand physicalism so that the reference to ‘physical properties’ within it is understood as ‘physical or topic-neutral properties’.
Dawkins describes genes as replicators. The suffix “-or” suggests that genes are in some sense the locus of this replication process (as in a machine designed for a purpose, like a refrigerator or an applicator), or else an agent accomplishing some function (such as an investigator or an actor). This connotation is a bit misleading. DNA molecules only get replicated with the aid of quite elaborate molecular machinery, within living cells or specially designed laboratory devices. But there is a sense in which they contribute indirectly to this process: if there is a functional consequence for the organism to which a given DNA nucleotide sequence contributes, it will improve the probability that that sequence will be replicated in future generations. Dawkins describes genes as active replicators for this reason, though the word “active” is being used rhetorically...
Replicator theory thus treats the pattern embodied in the sequence of bases along a strand of DNA as information, analogous to the bit strings entered into digital computers to control their operation. Like the bit strings stored in the various media embodying this manuscript, this genetic information can be precisely copied again and again with minimal loss because of its discrete digital organization. This genetic data is transcribed into chemical operations of a body analogous to the way that computer bit strings can be transcribed into electrical operations of computer circuits. In this sense, genes are a bit like organism software.
Replicators are, then, patterns that contribute to getting themselves copied. Where do they get this function? According to the standard interpretation, they get it simply by virtue of the fact that they do get replicated.
The qualifier “active” introduces an interesting sort of self-referential loop, but one that seems to impute this capacity to the pattern itself, despite the fact that any such influence is entirely context-dependent. Indeed, both sources of action — work done to change things in some way — are located outside the reputed replicator. DNA replication depends on an extensive array of cellular molecular mechanisms, and the influence that a given DNA base sequence has on its own probability of replication is mediated by the physiological and behavioral consequences it contributes to in a body, and most importantly how these affect how well that body reproduces in its given environmental context. DNA does not autonomously replicate itself; nor does a given DNA sequence have the intrinsic property of aiding its own replication — indeed, if it did, this would be a serious impediment to its biological usefulness. In fact, there is a curious irony in treating the only two totally passive contributors to natural selection — the genome and the selection environment — as though they were active principles of change.
But where is the organism in this explanation? For Dawkins, the organism is the medium through which genes influence their probability of being replicated. But as many critics have pointed out, this inverts the location of agency and dynamics. Genes are passively involved in the process while the chemistry of organism bodies does the work of acquiring resources and reproducing. The biosemiotician Jesper Hoffmeyer notes that, “As opposed to the organism, selection is a purely external force while mutation is an internal force, engendering variation. And yet mutations are considered to be random phenomena and hence independent of both the organism and its functions.”
By this token the organism becomes, as Claus Emmeche says, “the passive meeting place of forces that are alien to itself.” So the difficulty is not that replicator theory is in error — indeed, highly accurate replication is necessary for evolution by natural selection — it’s that replicators, in the way this concept has generally been used, are inanimate artifacts. Although genetic information is embodied in the sequence of bases along DNA molecules and its replication is fundamental to biological evolution, this is only relevant if this molecular structure is embedded within a dynamical system with certain very special characteristics. DNA molecules are just long, stringy, relatively inert molecules otherwise.

The question that is begged by replicator theory, then, is this: What kind of system properties are required to transform a mere physical pattern embedded within that system into information that is both able to play a constitutive role in determining the organization of this system and constraining it to be capable of self-generation, maintenance, and reproduction in its local environment? These properties are external to the patterned artifact being described as a replicator, and are far from trivial... [It] can’t be assumed that a molecule that, under certain very special conditions, can serve as a template for the formation of a replica of itself exhibits these properties. Even if this were to be a trivially possible molecular process, it would still lack the means to maintain the far-from-equilibrium dynamical organization that is required to persistently generate and preserve this capacity. It would be little more than a special case of crystallization.
"If X is such that necessarily there does not exist an observer O such that possibly there exists (a distinction of X from Y for O) then X is indiscernible from Y."
The second problem, which is more to the point, is that each observer is oneself, a unique and particular individual, according to the law of identity. Because of this, the observational apparatus and perspective of the observer are also unique to the individual. This makes it highly improbable that two distinct observers will ever describe the very same thing in exactly the same way. Accordingly, the criterion for "X", which requires the same description to be provided by all observers, will never be fulfilled, and "X=Y" will refer to nothing.
But I am talking about the information contents of the actual image, while you are talking about features of the physical object the image has been projected on. I can produce the same image on different paper, or display it on a digital screen, and still identify the contents of the image; those contents are not directly related to the physical composition of a photograph you can hold in your hand, and cannot be reduced to it, which is the main point.
Yes, it's true that the image is not totally independent of other factors; after all, the type of camera, the resolution, etc. will have an effect on the image. But these largely still come from the same interactions, during the photo-taking process, by which the image of Everest was stored: it is still information of the image, which is independent of the physical medium the image is projected on and so cannot be reduced to it.
