So we'll never know. — Banno
The problem I see is that if everyone uses AI its development will be profit driven, and it will thus not be judiciously developed. — Janus
If there were no regularities, there would be no laws. — Janus
So of course there are no 'well-documented occurrences of exceptions to nature's "laws"', as you say... because when they happen, it's good scientific practice to change the laws so as to make the exception disappear. — Banno
Whitehead’s central idea is that reality is made of events, not substances — what he calls actual occasions. — PoeticUniverse
Post on time as cogent moment or a hierarchy of durations…
Hierarchy theorist Stan Salthe dubs this the "cogent moment". Henri Bergson had a similar idea.
If the world is understood in terms of a hierarchy of processes, then they will all have their own characteristic integration times. Time for the Cosmos is not some Newtonian dimension. It is an emergent feature of being a process, as every process will have a rate at which it moves from just starting to form a settled state - reaching some sort of cogent equilibrium which defines it as having "happened" - and then being in fact settled enough to become the departure point, the cause, for further acts of integration or equilibration.
So this view of time sees it not as a spatial line to be divided in two - past and future - with the present being some instant or zero-d point marking a separation. Instead, time is an emergent product of how long it takes causes to become effects that are then able to be causes. For every kind of process, there is going to be a characteristic duration when it comes to how long it takes for integration or equilibration to occur across the span of the activity in question.
We can appreciate this in speeded-up film of landscapes in which clouds or glaciers now look to flow like rivers. What seemed like static objects - changing too slowly to make a difference to our impatient eye - now turn into fluid processes. They looked like chunks of history. Now we see them as things very much still in the middle of their actualisation. They will be history only after they have passed, either massing and dropping their rain, or melting and leaving behind great trenches etched in the countryside.
So the present is our intuitive account of the fact that causes must be separated from their effects, and the effects then separated from what they might then cause. There is some kind of causal turnaround time or duration - a momentary suspension of change - that is going to be a physical characteristic of every real world process. Thus there is some rate of change, some further "time frame" or cogent moment, that gets associated with every kind of natural system.
At the level of fundamental physics, this turns out to be the Planck-scale limit. Time gets "grainy" at around 10^-44 seconds. The Planck length is 10^-35 m. So the Planck time represents the maximum action that can be packed into such a tiny space - the single beat of a wavelength. That primal act of integrated change - a single oscillation - then also defines the maximum possible energy density, as the shortest wavelength is the highest frequency, and the highest frequency is the hottest possible radiation.
So the shortest time, the smallest space, and the most energetic event, all define each other in a neat little package. Actuality is based on the rate at which a thermal event can come together and count as a "first happening" - a concrete Big Bang act of starting to cool and expand enough to stand as a first moment in a cosmic thermal history.
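For the curious, the neatness of that package can be checked numerically. Here is a minimal Python sketch using the standard textbook definitions of the Planck units (the constants are the usual CODATA values; nothing here is specific to the process view, it just shows the three scales locking together):

```python
# Minimal sketch of the Planck "package" from the standard definitions
# t_P = sqrt(hbar*G/c^5), l_P = sqrt(hbar*G/c^3), E_P = sqrt(hbar*c^5/G).
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s

t_P = math.sqrt(hbar * G / c**5)  # Planck time, ~5.4e-44 s
l_P = math.sqrt(hbar * G / c**3)  # Planck length, ~1.6e-35 m
E_P = math.sqrt(hbar * c**5 / G)  # Planck energy, ~2e9 J

print(f"Planck time:   {t_P:.2e} s")
print(f"Planck length: {l_P:.2e} m")
print(f"Planck energy: {E_P:.2e} J")
# One wavelength per beat: the length/time ratio comes out as exactly c.
print(f"l_P / t_P:     {l_P / t_P:.3e} m/s (= c)")
```

The shortest time and the smallest space define each other through c, and the energy scale follows from them, which is the sense in which all three come as a single package.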
Then psychological time for us humans is all about neural integration speed. It takes time for nerve signals to move about. The maximum conduction speed in a well-insulated nerve, like the ones connecting your foot to your brain, is about 240 mph. But inside the brain, speeds can slow to a 20 mph crawl. Forming the kind of whole-brain integrated states needed by attentional awareness involves developing a collective state - a "resonance" - and that can take up to half a second for all the spread-out activity to become fully synchronised.
So there is a characteristic duration for the time it takes for causes to become the effects that are then themselves causes. Input takes time to process and become the outputs that drive further behaviour. Which is why I also mention the importance of bridging this processing gap by anticipation. The brain shortcuts itself as much as it can by creating a running expectation of the future. It produces an output before the input so that it can very quickly ignore the arriving information - treat it as "already seen". It is only the bit that is surprising that then takes that further split second to register and get your head around.
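To make the processing gap concrete, here is a rough back-of-envelope sketch. Only the conduction speeds come from the figures above; the distances are my own illustrative assumptions, not measurements:

```python
# Back-of-envelope conduction delays for the speeds quoted above.
# Distances are illustrative assumptions, not measured values.
MPH_TO_MS = 0.44704              # miles per hour -> metres per second

fast_nerve = 240 * MPH_TO_MS     # ~107 m/s: well-insulated peripheral fibre
slow_cortex = 20 * MPH_TO_MS     # ~9 m/s: slower signalling inside the brain

foot_to_brain = 1.8              # m (assumed length of the signal path)
across_cortex = 0.15             # m (assumed front-to-back span)

print(f"foot -> brain: {foot_to_brain / fast_nerve * 1000:.0f} ms")   # ~17 ms
print(f"across cortex: {across_cortex / slow_cortex * 1000:.0f} ms")  # ~17 ms
# A whole-brain "resonance" needs many such passes back and forth to
# synchronise, which is how single-digit-millisecond delays compound
# toward the half-second figure for fully integrated attentional states.
```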
But between this physical Planck-scale integration time and this neural human information processing time are a whole host of other characteristic timescales for the processes of nature.
Geology has its own extremely long "present tense". Stresses and strains can slowly build for decades or centuries before suddenly relaxing in abrupt events like earthquakes or volcanoes.
A process view explains time in a more general fashion by relating it to the causal structure of events. Every system has some characteristic rate of change. There is a cogent moment graininess or scale created by the fact that not everything can be integrated all at once. It requires "time" to go from being caused to being a cause. There is a real transition involved. And that happens within what we normally regard as the frozen instant when things are instead finally just "actual" - brutely existent and lacking change, rather than being in fact a transition from being caused to being a cause in terms of our multi-scale accounts of causal flows.
One of the features of the whole tradition of process thought, from Anaximander onwards (including Peirce, and to a lesser extent Bergson), has been the view that order in the world has in some sense emerged from a background of disorder, flux or chaos.
Anaximander characterized the cosmos as developing through the limiting of the unlimited, and emphasised the precarious nature of what emerged in this way, characterizing its existence as an ‘injustice’ that eventually would have to be paid for. Even the Pythagoreans accepted the dichotomy between the limited and the unlimited. Heraclitus, to some extent defending Anaximander against later philosophers, characterized the cosmos as in perpetual motion and emphasised the central place within it of strife and conflict. It is only through a balance between opposites that the existence of anything is maintained, and nothing is permanent except this principle, Heraclitus claimed.
As noted, Peirce also assumed that necessity in the world arose from chaos and chance through limitation. Recently, it has been argued in process physics that it is necessary to postulate an ‘intrinsic randomness’ or ‘self-referential noise’ to generate a self-organising relational information system, sufficiently rich that self-referencing is possible.
Hierarchy theorists, notably Howard Pattee, Timothy Allen and Stanley Salthe, have argued that emergence is associated with the appearance of new constraints which are not in the initial conditions. While developed without reference to pre-twentieth century thought (or to Bergson), this conception of nature revives Anaximander's conception of the cosmos as having formed through the limiting of the unlimited (an idea taken up and further developed by Schelling at the end of the eighteenth century).
Along with the notion of different minimum durations, or different process rates, this has enabled Pattee, Allen and Salthe to clarify the nature of both emergence and hierarchical ordering in nature. Treating time as pulsational rather than atomic, and treating causation as essentially a matter of constraining, overcomes a number of difficulties in Whitehead's philosophy.
How was the music of the 'double-halving' vid? — PoeticUniverse
Yet, we can't tell the difference among the three modes; how can we find out? — PoeticUniverse
I would think handing your half-formed prose to a bot for it to improve it is plagiarism, regardless of the number of words changed or inserted. It's a different thing from you deliberately searching for a synonym. — bongo fury
The Conceptual Boundary of Authorship
The act of submitting half-formed prose to an autonomous processing system for "improvement" raises a profound ethical and philosophical question regarding the locus of authorship.
I would posit that this practice constitutes an illicit appropriation of intellectual effort—a form of plagiarism—irrespective of the quantitative degree of lexical or syntactic transformation enacted by the machine. The core violation lies in the delegation of the substantive process of refinement and telos (purposeful development) of the text to an external agent without explicit, critical engagement.
This is epistemologically distinct from the deliberate, conscious act of a human agent consulting a thesaurus to seek a more precise synonym. The latter remains an act of intentional, informed choice that preserves the continuous thread of human intellectual stewardship over the text's final form and meaning. The former, in contrast, risks dissolving the very boundary between personal expression and automated fabrication.
Surely, discussing AI capabilities, flaws and impacts, as well as the significance this technology has for the philosophy of mind and of language (among other things) should be allowed, and illustrating those topics with properly advertised examples of AI outputs should be allowed. — Pierre-Normand
The A.I.-derived OP’s are likely to be better thought-out than many non-A.I. efforts. Banning A.I. is banning background research that will become built into the way we engage with each other. — Joshs
I guess I’m naïve or maybe just not very perceptive, but I haven’t recognized any posts definitely written by AI. — T Clark
AI poses several dangers to ordinary human intellectual debate, primarily through the erosion of critical thinking, the amplification of bias, and the potential for large-scale misinformation. Instead of fostering deeper and more informed discourse, AI can undermine the very human skills needed for a robust and productive exchange of ideas.
Erosion of critical thinking and independent thought: By outsourcing core intellectual tasks to AI, humans risk a decline in the mental rigor necessary for debate.
Cognitive offloading: People may delegate tasks like research and analysis to AI tools, a process called cognitive offloading. Studies have found a negative correlation between heavy AI use and critical thinking scores, with younger people showing a greater dependence on AI tools for problem-solving.
Reduced analytical skills: Over-reliance on AI for quick answers can diminish a person's ability to engage in independent, deep analysis. The temptation to let AI generate arguments and counterarguments can bypass the human-centered process of careful reasoning and evaluation.
Stagnation of ideas: If everyone relies on the same algorithms for ideas, debate can become repetitive and less creative. True intellectual debate thrives on the unpredictable, human-driven generation of novel thoughts and solutions.
Amplification of bias and groupthink: AI systems are trained on human-created data, which often contains pre-existing biases. Algorithms can create "filter bubbles" and "echo chambers" by feeding users content that reinforces their existing beliefs. In a debate, this means participants may be intellectually isolated, only encountering information that confirms their own point of view, and they may be less exposed to diverse perspectives.
Erosion of authenticity: As AI-generated content becomes indistinguishable from human-generated content, it can breed a pervasive sense of distrust. In a debate, it becomes harder for participants to trust the authenticity of arguments, eroding the foundation of good-faith discussion.
My personal worldview is built upon what I call the BothAnd principle*1 of Complementarity or the Union of Opposites. Instead of an Either/Or reductive analysis, I prefer a Holistic synthesis. We seem to be coming from divergent directions, with different vocabularies, but eventually meet somewhere in the middle of the Apeiron. — Gnomon
Schelling's theory of the Ungrund (non-ground) posits a primal, ungrounded principle that precedes and underlies all existence, including the rational mind. This "ungrounded ground" is a chaotic, indeterminate, and free force that is the source from which all reality and consciousness emerge, a concept that departs from purely rationalistic systems and emphasizes the importance of the unconscious and irrational.
Key aspects of the Ungrund
Primal, undetermined principle: The Ungrund is an "unfathomable" and "incomprehensible" starting point that has no prior cause or ground itself. It is a pure, indifferent identity that exists before the separation of subject and object, logic and existence.
Source of freedom and creativity: Because it is not bound by pre-existing structures or reason, the Ungrund is inherently free and allows for the possibility of change and development. This freedom is the basis for creativity and action in both nature and the human being.
Precedes reason: For Schelling, reason and rational structures are not the ultimate source of reality but rather emerge from this ungrounded source. The world contains a "preponderant mass of unreason," with the rational being merely secondary.
A link between philosophies: The Ungrund serves as a bridge between Schelling's early philosophy of identity and his later division into negative and positive philosophies. It is introduced to explain the origin of difference and existence from a prior, non-dialectical unity.
Connection to the divine: Schelling also uses the concept of the Ungrund in a theological context, suggesting that God has an inner ground that precedes existence, but that God is also the principle that gives rise to this ground, as seen in his discussions on freedom and God.
Perhaps I should consider myself lucky I have a sketchy grounding in formal logic. — Janus
Friston and his followers often claim that subjective experience (qualia) is not the target of the theory.
But to describe the brain as a predictive, inferential, or representational system is already to invoke the phenomenal domain. — Wolfgang
To suggest that laws (so defined) come and go over time is ad hoc, because there's no evidence for this. Types of particulars may come into or out of existence, but if they exist - the associated laws will necessarily exist. — Relativist
I agree with you... unenlightened is blowing the smoke of mere logical possibility. — Janus
Free energy minimization gives you a framework where you can write down the equations describing the conditions for a system to maintain its own existence over time. That might not be interesting for a rock. But I think that's quite interesting for more complicated self-organizing systems. It's a framework for describing what complex self-organizing systems do, like choosing to describe physical systems as following paths of least action.
...as a unifying theory of self-organization, it does exactly what it says on the tin, and it's impossible for it to be any more precise empirically because the notion of a self-organizing system is far too general to have any specific empirical consequences. Exactly the same for a "general systems theory". Nonetheless, this theory is fundamentally describing in the most general sense what self-organizing systems do, and gives you a formal framework to talk about them which you can flesh out with your own specific models in specific domains. — Apustimelogist
That’s why I said “usually.” As I understand it, engineering mechanics is the science of phenomena that can be constructed using the principles of lower levels of organization. — T Clark
For example, "the dichotomising action of apokrisis" meant nothing to me, until Google revealed some associated concepts that I was already familiar with. — Gnomon
Anaximander used the term apokrisis (separation off) to explain how the world and its components emerged from the apeiron—the boundless, indefinite, and eternal origin of all things. In his cosmology, this process involved the separation of opposites, such as hot and cold or wet and dry, from the undifferentiated primordial substance.
The process of apokrisis
A contrast to Thales: Anaximander's teacher, Thales, had proposed that water was the fundamental principle (archē) of all things. Anaximander disagreed, arguing that if any one of the specific elements (like water) were infinite and dominant, it would have destroyed the others long ago due to their opposing qualities.
The function of the apeiron: To resolve this issue, Anaximander proposed the apeiron as a neutral, limitless, and inexhaustible source. The apeiron is not itself any of the known elements and is therefore capable of giving rise to all of them through an eternal motion without being depleted or overpowered.
Cosmic differentiation: The apokrisis, or "separating off," is the key mechanism by which the universe comes into being. Anaximander held that an eternal, probably rotary, motion in the apeiron caused the pairs of opposites to separate from one another.
Formation of the cosmos: This separation led to the formation of the world. For instance, the hot and the cold separated, with a sphere of fire forming around the cold, moist earth and mist. This ball of fire later burst apart to form the heavenly bodies. This dynamic interplay of opposites is regulated by a sense of cosmic justice, with each opposite "paying penalty and retribution to one another for their injustice," according to the "disposition of time".
When you speak of a “bridge mechanism,” you already presuppose that there is a level of description where semantics becomes physics. — Wolfgang
But semantic reference, intentionality, or subjective experience are not additional physical phenomena that arise through complexity. They are descriptions that belong to a different epistemic domain. — Wolfgang
So when you say that biophysics “has already provided the bridge,” I would say: it has provided the conditions of correlation, not the transition itself. What you call a “bridge” is in truth an interface of perspectives, not a mechanism. — Wolfgang
Modern philosophy has thus taken on the character of a stage performance. When David Chalmers, with long hair and leather jacket, walks onto a conference stage, the “wow effect” precedes the argument. Add a few enigmatic questions — “How does mind arise from matter?” — and the performance is complete.
Koch and Friston follow a similar pattern: their theories sound deep precisely because almost no one can truly assess them. — Wolfgang
The second is subtler: it assumes that mind must arise from matter, when in fact it arises from life.
If you reduce a physical system, you end up with particles.
If you reduce a living system, you end up with autocatalytic organization — the self-maintaining network of chemical reactions that became enclosed by a membrane and thus capable of internal coherence.
That is the true basis of life: the emergence of a causal core within recursive, self-referential processes. — Wolfgang
Elite batting or return tasks show anticipatory control because neural and sensorimotor delays demand feed-forward strategies. That is perfectly compatible with many control-theoretic explanations (internal models, Smith predictors, model predictive control, dynamical systems) that do not require Bayesian inference or a universal principle of “uncertainty minimization.” From “organisms behave as if anticipating” it does not follow that they literally minimize epistemic uncertainty. — Wolfgang
Sliding between these domains without a strict translation rule is a category error. — Wolfgang
Between physics and semantics there can be no bridge law, only correlation. — Wolfgang
If “organisms minimize uncertainty” is to be an empirical claim rather than a post-hoc description, it must yield a pre-specified, falsifiable prediction that (i) distinguishes the FEP/predictive-coding framework from alternative non-Bayesian control models, (ii) is measurable in the organism itself (not only in our statistical model), and (iii) could in principle fail. — Wolfgang
The issue is the illegitimate conceptual leap from physical energy flow to semantic uncertainty, and from probabilistic modelling to biological reality. That’s precisely the confusion I am objecting to. — Wolfgang
Usually you cannot use the principles of one level of organization to predict—construct—phenomena at another level. — T Clark
It claims that organisms minimize uncertainty. This claim cannot be empirically confirmed. — Wolfgang
Physical energy and semantic uncertainty belong to entirely different descriptive levels; they have nothing to do with each other. — Wolfgang
This is related to this:
differentiation emerge from a state of uniformity — JuanZu
It is against the thesis that matter is a passive receptacle for external and transcendent forms (first cause), while symmetry breaks give matter (to which they are immanent) the ability to generate forms without external intervention. — JuanZu
This is not talking about Symmetry in the traditional mirror-image sense. — Gnomon
For example, the genetic code (A–T–G–C) is just chemistry, but evolution selected the combinations that could store and replicate information. — Copernicus
The key difference lies in the informational architecture, not the physics underneath it. So life and mind aren’t exceptions to physical law — they’re extensions of it. The universe, in a way, learning how to encode itself. — Copernicus
If biology starts at the point where “a molecule can be a message,” then that’s the threshold where matter becomes reflexive — where it starts encoding its own persistence. — Copernicus
In short, codes aren’t supernatural — they’re emergent designs within physics. — Copernicus
Can you ask in simpler terms exactly what your objection was? — Copernicus
Human cognition exists along a continuum of increasing physical complexity:
• Sentience – the ability to feel or experience; found widely among animals with nervous systems.
• Sapience – higher reasoning, foresight, and abstraction; a hallmark of human cortical evolution.
• Consciousness – awareness of the environment and oneself; emerging from multi-level neural feedback loops.
• Conscience – moral awareness; the social and reflective layer of consciousness shaped by empathy and memory.
Each is built upon physical substrates—neurons, synapses, chemical gradients—yet each transcends its parts through emergent organization. — Copernicus
Friston's Bayesian mechanics learns from experience by using prediction errors to update its internal models of the world, a core component of the Bayesian Brain Theory and Active Inference. Incoming sensory data is compared to the system's predictions, and any discrepancies drive changes to the probabilistic beliefs and generative models, allowing the system to adapt to its environment and improve its simulations of reality.
These prediction errors are crucial because they are used to update the internal models and beliefs. This process of updating probabilistic models based on new sensory evidence is the core of Bayesian inference. Through this ongoing process of prediction, comparison, and updating, the organism constructs and refines its "reality model," which enables adaptive behavior in a complex environment.
Friston's Bayesian brain model explains attentional processes by proposing that attention acts to estimate and manage uncertainty during hierarchical inference, thereby controlling the flow of sensory information. By dynamically adjusting the "precision" of prediction errors (how much weight is given to sensory input vs. prior beliefs), the brain can focus processing resources on the most informative parts of the sensory scene, a process which naturally accounts for phenomena like salience and selection.
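As a hedged illustration of the precision-weighted updating just described, here is a toy scalar filter in Python. It is nothing like Friston's full active-inference machinery, just the minimal Gaussian belief update the summary gestures at; the precision values and observations are arbitrary assumptions for the sketch:

```python
# Toy precision-weighted belief update (a minimal sketch, not the full
# active-inference scheme). A single Gaussian belief is revised by
# prediction errors, weighted by sensory vs. prior precision.

def update_belief(mu: float, obs: float,
                  pi_sense: float, pi_prior: float) -> float:
    """Move the belief mu toward the observation, in proportion to how
    precise (reliable) the senses are taken to be relative to the prior."""
    prediction_error = obs - mu
    gain = pi_sense / (pi_sense + pi_prior)  # "attention" as precision weighting
    return mu + gain * prediction_error

mu = 0.0  # initial belief about some hidden quantity
for obs in [1.0, 1.2, 0.9, 1.1]:
    mu = update_belief(mu, obs, pi_sense=4.0, pi_prior=1.0)
    print(f"belief after seeing {obs}: {mu:.3f}")
# Raising pi_sense makes the system "attend" to the senses; raising
# pi_prior makes it trust its expectations and discount surprises.
```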
Is this psych rock? — PoeticUniverse
You don't actually say anything here about why I'm wrong. — Banno
You can't be claiming that Bayesian calculus is not about belief. So, what? — Banno
Bayesian calculus deals with our beliefs, such that given some group of beliefs we can calculate their consistency, and put bets on which ones look good. But it doesn't guarantee truth. So what it provides is rational confidence, not metaphysical certainty. It's in line with Hume's scepticism. — Banno
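For what it's worth, here is a one-step worked example of the kind of calculation Banno means, in Python. All the numbers are invented for illustration:

```python
# A minimal Bayes update in the betting spirit described above.
# All probabilities are invented for illustration.
prior = 0.5             # initial credence in hypothesis H
p_e_given_H = 0.9       # how strongly H predicts the observed evidence
p_e_given_notH = 0.3    # how well the rivals predict the same evidence

posterior = (p_e_given_H * prior) / (
    p_e_given_H * prior + p_e_given_notH * (1 - prior)
)
print(f"posterior credence: {posterior:.2f}")  # 0.75: better odds, not certainty
```

The evidence raises the credence from 0.50 to 0.75, which is exactly rational confidence rather than metaphysical certainty: the bet looks better, but it remains a bet.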
I could have written 'invariances' instead of "laws of nature". Do you think it is reasonable to say that if the past constrains the future it follows that nature's invariances do not suddenly or randomly alter? — Janus
