Comments

  • A proposed solution to the Sorites Paradox
    Slightly more formally: treating a grain as a centre of gravity with a spatial boundary, four points are required to define a three dimensional space, so four grains are required to define a three dimensional shape that constitutes a heap.unenlightened

    One can always add formal precision to a definition or constraint. And yet vagueness also remains. It is inherent in the world itself.

    In ordinary language, “heap” has connotations of careless formation. A pile without a formal design, just randomly built up. So a tetrahedral volume is the opposite of a careless pile. Can it thus really be called a heap if we precisify our definition and emphasise the connotation of random formation?

    Applying chaos theory, we could indeed give a formal definition of “sufficiently random”. The question now is what is the least number of grains that could be piled in such a fashion. Sphere packing maths does just this. It contrasts regular stacking and random settling. https://en.wikipedia.org/wiki/Random_close_pack

    That would argue more than four grains would be needed to arrive at a stable yet random heap.
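    For a rough feel of that contrast, here is a toy back-of-envelope calculation (my own illustration with an arbitrary heap size, not anything taken from the sphere-packing literature). The same hypothetical hemispherical heap holds noticeably fewer grains when they settle randomly than when they are stacked in a regular lattice:

```python
import math

# Toy illustration only: contrast ordered FCC packing (~0.74) with random close
# packing (~0.64) for a hypothetical hemispherical heap of unit-radius grains.
GRAIN_RADIUS = 1.0   # arbitrary unit
HEAP_RADIUS = 5.0    # hypothetical heap size, measured in grain radii

grain_volume = (4 / 3) * math.pi * GRAIN_RADIUS ** 3
heap_volume = 0.5 * (4 / 3) * math.pi * HEAP_RADIUS ** 3   # hemisphere

for label, fraction in [("regular stacking (FCC, ~0.74)", 0.74),
                        ("random settling (RCP, ~0.64)", 0.64)]:
    grains = int(fraction * heap_volume / grain_volume)
    print(f"{label}: roughly {grains} grains")
```

    Run it and the randomly settled pile comes out a handful of grains short of the lattice for the same volume.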

    So my point is that formal definitions of natural situations can indeed be tightened up as much as desired. That is not the issue.

    What is of interest is whether formality can ever exhaust the availability of tiny differences or further distinctions. Is reality in fact atomistic, as logicism likes to presume? (Spoiler: no.)

    And with that acceptance of ontic vagueness goes the recognition that formality in fact functions not because it is precise but because it can apply epistemic generality. Logic is semiotic. It is an exact way of ignoring the underlying vagueness of the world by making choices about what differences don’t count as making any damn difference, from some agent’s point of view.
  • A proposed solution to the Sorites Paradox
    Even though the pile-size may change, the visual image of the pile remains the same.Don Wade

    It is a feature rather than a bug that language is vague at base. So the paradox doesn’t need “solving”. Being able to speak in generalities is the point. We can gloss over the manifold differences that don’t make a difference ... to us, for some reason, at that moment.

    How are you actually imagining your pile? Is there at least one grain stacked upon another? Is there more than a single layer of grains?

    That generalised image must mean the smallest pile is 4 grains - three as a triangular base and one perched on top. Take that away and you only have a clump or group of grains? Move the grains gradually apart and at some point they are not even a group?

    Vagueness always exists when we form some verbal or imagistic generalisation. That is just everyday epistemic vagueness.

    Where the metaphysics gets interesting is when the vagueness is ontic - a fact also of the world itself.
  • New Adam Curtis Documentary
    I just watched part1, and I'm reminded of ideas about control and stability being inversely related. The more control humans have the more unstable humanity becomes, and this is just the way the world is.unenlightened

    Watched part 2 now. Commenting on this, I would say that systems biology gives a more functional reading of the relation.

    It is true that an excess of top-down control (or constraint) creates a brittle and over-regulated system (social or otherwise). Stan Salthe calls this a senescent system - one grown to be too ruled by ingrained habit. The component parts of the system lack creative freedoms - an ability to adapt - and so become prone to erratic responses in the face of challenges. The system starts to fray because of this form of instability.

    But the human story - as being told by Curtis - is at a quite different stage of its lifecycle. Society is more immature than senescent. Senescence sets in when a system has arrived at a very stable energy throughput. A steady-state power flow.

    A highly functional ecosystem, like a rainforest, is senescent in the sense that it has evolved so many feedback regulatory controls - a mass of interlocking biological activity - that this becomes both its great strength and great vulnerability. If the wider world - the energy and resources flux that sustains it - remains itself stable, then the rainforest flourishes due to its vast weight of accumulated adaptations. But if the Earth climate systems start altering even slightly, the rainforest can collapse rather catastrophically and get replaced by some less complex ecology.

    An immature system is the opposite, existing in a realm of increasing energy throughput - in the way the weeds suddenly have open light when the forest canopy disappears. The equation now is that the system is under-regulated and so has plenty of local freedom to adapt as it likes. There are no habits in place and so free experimentation is in play. A second form of "instability" to complement the fragility of being over-regulated and prone to breakage.

    For human society, as an evolving biological entity, the story has been about how we have been continually reinventing our way of life so as to be able to step up the energy throughput and hence remain in an immature state. The industrial era was built on fossil fuel and the machinery (the epitome of regulatory habit in nature) that could harness it.

    Humans have become more unstable partly because they have become victims of the now-senescing forms of society - its economic and political order - and partly because they are also part of the creative experimentation to take the system to its next stage of development. Immaturity - the vigour of youth - is restored by society finding the behaviours that allow it to create more entropy.

    So this is one of the confusions in Curtis's presentation. There is both the crumbling of the old and the invention of the new taking place in the same moment. And it is always heading for the same "functional" outcome - even more energy throughput through the human ecosystem.

    It is not about some malign human power or instinct that maintains and reasserts itself - the re-imposition of control over the individual spirit. This is the Romantic myth of humanity in moral decline.

    Instead, it is a much more natural and biophysical story of nature finding better routes to maximise entropy production. And the idea of the modern individual results from continually re-discovering the next mode of operation that keeps that biophysical project going.

    The British Empire perfected a stage of planetary resource exploitation. It created a colonial management system in which the natives would be educated to think and behave in ways that actively supported a UK power structure that could biophysically out-compete other less intelligent colonial empires. The civil service was a semiotic machinery for a certain form of mind control that fostered a willing and creative participation by those being colonised.

    Then we had the era of the machine man - rational, efficient, time managed, etc. Followed by the corporate era of the passive consumer (Curtis was spot-on about the hippies becoming a consumable life style and fashion choice). Then the high finance era where every individual is a go-getting entrepreneur.

    From this biophysical point of view - where the only driving imperative is to maintain the human social system in its accelerating phase of constantly growing energy throughput - the current era of Trump and woke irrationalism becomes much more explicable.

    The script for being a human individual must get rewritten in ways that keep the big show going. And what we get is thus the functional product. We get what works in the current circumstance.

    Trump and Wokism represent the kind of instability that currently flourishes where a more rationally-trained mind believes it shouldn't.

    And it is pointless putting a moral lens on this story - as Curtis is largely doing. It is only going to make sense as a tale of systems biology or biosemiotics.

    The Romantic myth is that humanity aspires to be on some virtuous and Platonic life path. But the beast within keeps dragging us down. Christianity 101. It is the familiar moral philosophy diagnosis.

    But Nature will just evolve its way towards the goal of maximum entropy production. That is the telos baked in to life itself. Humanity simply represents that urge at a collective social level where we have ourselves evolved the further semiotic tools of words and numbers.

    Biology is based on genes and neurons. We created the further steps that allowed us to exploit fossil fuels and even dream of removing energy constraints completely. System senescence could be postponed indefinitely by the combination of general technological structure and local human ingenuity.

    Curtis is spending too much time talking about human psychology and moral dilemmas - the myth that humans are somehow something special and beyond nature. The way to unlock what is going on in the world is to focus on humanity as what Vaclav Smil dubs the planetary anthropomass - humans as a biosemiotic entropy production system with the new conscious ambition of staying forever within the immature growth phase of the canonical ecosystem lifecycle.

    Steady-state is flourishing for a rain forest, but anathema to the "modern human spirit". So the goal is not perfect top-down control over the individual. It is to breed the kind of creative instability that fuels the further social change needed to keep the whole crazy game going.
  • New Adam Curtis Documentary
    I trust his facts to be right, but I think you have to take his narrative/framing approach as an aesthetic device. If you're willing to temporarily suspend your disbelief, it's a thrill, but once you've watched 3 or 4 of his movies, you realize he's going to tell the same story, and use the same emotional cues to create a massively over-simplified story,csalisbury

    Exactly. Fantastic archive material and lots of weird links. It is great as art video, but lacks grounding in theory.

    I’ve only watched the first part, so he may stick the landing. The bit I am liking is the angle that power is its own abstraction flowing through the circuits of modern human society. It shapes the individual psyche with its constraining memes.

    The irony is how modern individualism was born as a way to enslave people to this very system. People were not freed (as in the Romantic notion of selfhood) but constructed to have greater degrees of freedom. That is, shaped to act more abstractly as a vessel for ideas. The suitable ideas are then supplied by the social power system. We call it being civilised and educated.

    This is a naturalistic phenomenon. The way natural systems evolve. And now we’ve seen it all move past the rational/mechanical image of the ideal human to the woke/Trumpian era where conspiracy theory and emotion rule. The power game feels a step more naked and direct.

    Feelings beat reason. And that is the next step for the evolving organism. Curtis seems to be saying this.
  • Who are the 1%?
    A great site where you see who actually are the 1% top income by profession can be seen here.ssu

    The OP is about wealth rather than income.

    Income is going to be largely meritocratic and deserved you would hope. But wealth goes to being part of the rentier class.

    So apples and oranges.
  • Who are the 1%?
    These are the 1 in a 100,000,000, not the 1 in 100. The top 0.000001 percent.

    Kind of puts it in perspective.
  • Dark Matter, Unexplained
    Fundamental physics and cosmology are full of the most outrageous discoveries. And yet folk really seem to go for this dark matter mystery. Curious.

    It is like setting out to explore the world and thinking the next closest town is it.
  • Who are the 1%?
    Even in an absolutely perfect society there wouldn't be equality of opportunity. I mean what are we suppose to do, get rid of all the children with learning disabilities?BitconnectCarlos

    I see what you mean. If perfection is impossible, just give up. In fact even to try can be equated to fanatical Nazi euthanasia. Sounds legit.

    What about the people who are naturally less ambitious and prefer to live a more relaxed lifestyle?BitconnectCarlos

    Hey, that's me. That's any normal person. That's who society ought to be built around ... in my selfish view.

    Who in their right mind votes for neoliberal purism? Who would vote to construct life as a rat race?

    Are we just suppose to expect everyone to be type-A perfectionists who strive to maximize income at virtually any cost? Even if society were perfectly fair and generous we could be seeing vast inequality.BitconnectCarlos

    Something went wrong between the first and second sentence there.

    The question is why would we even seek to maximise "at any cost" striving as a social good?

    And if we indeed were, then the measurable socioeconomic lack of an equality of opportunity ought to be a prime issue for that curiously-motivated country.

    So yes. I would be the first to say that inequality - a long-tail distribution of wealth - is precisely what a growth-based economics would predict. It has to be the outcome of making growth the central system goal.

    But capital comes in various forms - social as well as financial. Politics is actually more complicated than that which is measured by GDP.

    A growth in generosity might beat a growth in monied power. Or at least, the two could be held co-equal if they are in fact complementary.

    The problem with neo-liberalism as an engine of growth is its lack of balance. And this is why social democracies are to be preferred. At least by me.

    Social mobility is a tough topic. I prefer studies which track individuals over, say, a 30 year period rather than just taking a snap-shot in time. I think when you look to these types of studies the picture is a little less bleak.BitconnectCarlos

    You prefer data that fits your prejudices? Sure.

    But the longitudinal data is what is showing that US equality of opportunity and equality of outcome were a rather fleeting 1950s post-war thing. Back when US politics was also remarkably federalist and corporatist by any neoliberal measure.
  • Dark Matter, Unexplained
    Why is it unlikely? Is the current problem some lack of theories or the capability to test between them?
  • Who are the 1%?
    Hah! But check out Peter Zeihan for the US view on why even a badly-run superpower can afford to get away with the kind of flawed politics that us smaller nations can't afford.

    https://www.youtube.com/watch?v=F68RLLXSJLU&feature=youtu.be

    Zeihan says it starts with the US owning the world’s best chunk of geography – the largest expanse of good agricultural land with an ideal range of growing climates. And its continentally isolated location means it has never had to fear invasion.

    The US has its huge demographic power too. A population of 330 million that isn’t greying like Germany, Japan, China and Russia.

    It now even has its shale oil and gas revolution. Coming out of nowhere since 2010, that lets the US boast of being energy independent.

    The US is lucky in that it is born to so much wealth in the form of geographic advantage that it will take more than political dysfunction to turn it downwardly socially mobile within the larger world system.
  • Who are the 1%?
    In any case it's easy to argue against such fantastical positions.BitconnectCarlos

    Sure. With another reasonable person. :up:

    That's an interesting chart, but honestly comparing America to Denmark is a little silly. Denmark is a largely homogenous country of around 5 million. I live in one of the smallest states in the country and our population is 6 million.BitconnectCarlos

    So good political structure doesn't scale? You could fix the comparison by combining all the Scandinavian-style social democracies and maybe subsetting that against a matching sample of the most go-getting individualist US states.

    It is not that economic meritocracy doesn't exist in the US. The argument is that the US has allowed itself to become a lot less meritocratic than is its publicly-avowed aim.

    The system is not delivering that equality of opportunity as a good which would balance its "success" at also delivering vast income inequality – and indeed, declining financial expectations - for its 99 per cent.
  • Dark Matter, Unexplained
    So - science doesn't know what dark matter is, what its components are, or even really that it exists, except inferentially.Wayfarer

    Nonsense. Science doesn't even claim to "know", only to constrain uncertainty through an epistemology of theory and measurement.

    So - as Peirce explained - that is a systematic process of abductive reasoning. Yes, inference from evidence is part of the loop. But so is the free creativity of hypothesis formation and the deductive reasoning used to shape a causal theory.

    Your supposed bug is the feature.

    So if we have to redefine matter so it 'passes right through the world' and exists in parallel baryonic matter, then it completely upends every prior idea of 'matter'.Wayfarer

    Do electrons respond to the strong force? Did the fact that they don't upend our very idea of matter, or did it just explain why matter could come in the form of various different fundamental particles?

    Why do neutrinos not notice electromagnetic charge? Same again.

    I think it's noteworthy how sanguine you are about it. I guess it's because you've got a slot for it in your mental model of the world, so it's not a problem.Wayfarer

    Or maybe the idea that, in a world of mammals and reptiles, there could also be egg-laying monotremes and pouched marsupials might seem extraordinary only if one had no general biological framework.

    Dark matter could demand some sort of new physics of course. But the likelihood is that it is just another category of particles explained by symmetry principles.

    There are much more serious challenges in a phenomenon like dark energy, or a positive cosmological constant.

    As if often observed, it might turn out that dark matter will in the end be like the epicycles of Ptolmaic cosmology - devices introduced to save the appearances, but, in the end, abandoned on account of the reigning paradigm itself being undone.Wayfarer

    And what do you think the scientists are hoping for?

    If physics were a faith, then paradigm shifts would be the end times. But it is a science. And so killing the reigning paradigm is what motivates every new generation.

    The problem is keeping a lid on those careerist ambitions.
  • Who are the 1%?
    As far as the US is concerned, it is a case of believe one thing and do another...

    Many Americans strongly believe the U.S. is a "Land of Opportunity" that offers every child an equal chance at social and economic mobility. That Americans rise from humble origins to riches has been called a "civil religion", "the bedrock upon which the American story has been anchored", and part of the American identity (the American Dream).

    But then....

    [Image: The Great Gatsby Curve]

    Intergenerational immobility versus economic inequality in 2012. (Countries closest to the bottom left have the highest levels of socio-economic equality and socio-economic mobility.)
  • Who are the 1%?
    I wonder if anyone has read or researched extensively who exactly these people are and if there are trends in their philosophies or religious outlooks.Xtrix

    The general trend is the 1% aren't self-made but inherit their wealth and social advantage. So personal qualities or broader outlooks are pretty irrelevant. Better off considering the accidents of their birth.
  • Coronavirus
    Then I will shut the hell up and wear a mask.Book273

    Another child controlled by a meme. This ain't about medical science but about sociology.
  • Dark Matter, Unexplained
    So the question it prompts, for me, is how can physicalism, as a philosophical principle, be credibly maintained in light of these conjectures?Wayfarer

    But how is this different in any respect from how we derive our knowledge of "bright" matter?

    Dark matter gravitates but doesn't radiate. Regular matter does both. But in both cases, we are imputing a cause that explains the effect.

    So it boils down to observing a change of some kind and forming some suitable concept of what is behind the curtain.

    If an event looks punctate, we posit a particle. If it looks continuous, we posit a field.

    We interpret observed changes as the sign of some metaphysical object.
  • Coronavirus
    idiṓtēs.180 Proof
    :grin:

    That is because I am speaking of psychological freedom, not societal or physical. And since the psyche is determined by it's own content, the freedom I'm discussing here is absolutely noncontextual.Merkwurdichliebe

    For someone with so much supposed psychological freedom, you seem rather constrained by your own cultural trope.

    But I guess whatever gets you a nanosecond of attention.
  • Coronavirus
    The beauty of freedom is that it is unconstrained, and its expansiveness is all consuming.Merkwurdichliebe

    As dialectics, this happens to be hogwash.

    It takes global constraints to create local freedoms. The return part of the deal is those freedoms must be designed so that they are themselves going to reconstruct the whole that has formed them.

    That is the logic of how dialectics produces historically enduring societies and institutions.

    So why did Western institutions come to underwrite individual property rights? Well, that encouraged the personal enterprise that then contributed to the collective nation-building wealth. It was understood as an obviously virtuous circle.

    And the same applies to a social approach to health, education or any other useful common good.

    If you want the right to individual good health, then the social system has to be set up in a way that closes the loop and shapes your freedoms in a way that is conducive to that being a collective general outcome.

    You are instead speaking of freedoms as if they could be contextless. And that is illogical.

    What nation would vote to be ruled by a lack of logic?

    Oh....
  • The flaw in the Chinese Room
    Yeah. It was back in the 1980s that Searle was making his case. And even then a criticism was that he overplayed the physics at this point. Although given the strength of computationalism at the time, it was good to see any philosopher trying to argue so directly against it.

    So you notice how Searle says the brain isn't handling information in the TM sense - binary 0s and 1s that can literally stand for anything as they intrinsically stand for nothing.

    Instead, the brain is handling particular kinds of "experiential information" - visual, tactile, auditory, kinesthetic, gustatory, etc.

    But that then becomes a dualistic framing of the situation because he is talking about qualia and all the metaphysical problems that must ensue from there.

    So - from a mind and life sciences point of view - you don't want to shut down the computationalists by opening the door again for the idealists.

    That is where the semiotic approach came in for me during the 1990s. It is a way to glue together the computational and material aspects of organismic complexity in one formally-defined metaphysics.
  • The flaw in the Chinese Room
    I'm puzzled as that would be exactly my point. Neurons and synapses can't be understood except as prime examples of the irreducible complexity of semiosis.

    Neurons combine the physics of ion potential differences and the information of depolarisable membrane channels so as to create "signals". So there is some core bit of mechanism where the two realms interface.

    But how those signals actually become a signal to an organism in its responses to an environment, rather than just an entropic and noisy bit of biophysics, is where the irreducible complexity bit comes into play.

    Neither physics, nor information processing theories, can tell us anything useful in isolation. You need the third framework of biosemiosis that has the two viewpoints already glued together in formal fashion.

    It may be too technical, but I wrote this post a while back on how biophysics actually has drilled down to ground zero on this score now. In just the past decade, the blanks have started to get filled in.

    https://thephilosophyforum.com/discussion/comment/105999
  • The flaw in the Chinese Room
    The physics of analogue computers and digital computers is not related to the physics of consciousness.Daemon

    What do you mean by the physics of consciousness then? Which part of physical theory is that?
  • The flaw in the Chinese Room
    Is that right? I thought a neural network was just a program running on a digital computer. And no analog computer has any connection with the physics of consciousness either.Daemon

    It is very easy to head back into these kinds of confusions. That is why I advocate for the clarity of the formal argument - the irreducible complexity of a semiotic relation vs the faux reducible simplicity of universal computation.

    When it comes to building technology inspired by either TM or semiotic models, the metaphysical issues always become obscured by the grubby business of implementing something of practical usefulness.

    There are no actual TM computers in use. The impracticalities of a single gate and infinite tape had to be hidden using the architectural kluges of stored programs and virtual addressing spaces. Real computers have to live within real physical constraints.

    So an epistemic cut - to use Pattee's term - has to be introduced between software and hardware. And if we zoom in close on any actual conventional computer, we can see the layers and layers of mediating mechanism - from microcode and instruction sets to operating systems and middleware - that are needed to negotiate what is supposedly a black and white distinction between the algorithms and a system of material digital switches burning electricity.

    So when it comes to neural networks, originally those were imagined as actual hardware implementations. You would have to have physical circuits that were not just digital switches but more like the analog electronics of pre-WW2 technologies.

    But then digital computers running conventional virtual machine emulations could simulate a network of weighted nodes, just as they could simulate any kind of physics for which the physical sciences have developed a theoretical description - the algorithms we call the equations of fluid mechanics, for example.

    And so that is the trick - the ruse to keep this particular debate going.

    Look, we can implement the algorithms that physics uses to make its descriptions of nature suitably algorithmic!

    But then - if you look to physics - you find that this is another level of the great con.

    Physics is good at constructing algorithmic descriptions of nature ... up to a point. But in the end - as with quantum collapse, or the ultimate non-computability of any actual complex dynamical system – the algorithms can only coarse-grain over the realities they model.

    Physicists hate this being pointed out. Like computationalists, they like to imagine that reality is actually a deterministic machine. It is the metaphysical assumption built into the discipline. And - as a useful assumption - it is great. The mechanistic view is the best way to look at the irreducible complexity of the world if your over-arching purpose is to construct a higher level of control over that world.

    To the degree you can mechanise your view of nature, you can employ that view to build a machinery for regulating nature in precisely that fashion.

    But at root - as with the "weirdness" of quantum mechanics or deterministic chaos - there is always going to be a gap between a mechanical and algorithmic description of nature and the metaphysical truth of nature being an irreducibly complex (ie: semiotic) relation.

    Searle frequently talks about the biological nature of consciousness, he refers to his position as "biological naturalism". It's not unreasonable for him to leave the biology to the biologists.Daemon

    But I was supporting Searle, not attacking him. My first post was about how he talked of simulated rain not making anyone wet, simulated carburettors being no use in an actual car.

    Simulation - or symbols - are certainly part of the biological story. But they are irreducibly connected with the physics of life from the point of origin.

    There is no sharp software/hardware division as is pretended by either computation or physics as sciences. There is instead always the necessity of a bridge that spans this epistemic divide, in the fashion that even the PC on your desk needs layers and layers of mechanism to give effect to the idea of a virtual machine running on real hardware plugged into an uninterrupted power supply.
  • The flaw in the Chinese Room
    My translation customers often want to make the reader feel good about something, typically to feel good about their products.Daemon

    Yep. Words can constrain experience. But they can’t construct experience.

    Of course words also construct those constraints in rule-constrained fashion. And the same brain, the same meat machine, is both acting out the linguistic habit and the sensorimotor habits that are the "experiences".

    So it is recursive and thus irreducibly complex.

    And that is the key when it comes to the debate over computational mind.

    The semiotic argument is that the relationship between symbol and physics that biology embodies is irreducibly complex. It is a story of synergistic co-dependency. You can't actually break it apart in some neat reductionist fashion.

    And once it is accepted that "mindfulness" is an irreducible triadic relation in this fashion - a knot in nature - then that rules out the simplicity of computational mind from the get-go. A Turing Machine is a clear category error.

    Of course, a TM does require a physics to make it a real device.

    It needs a gate mechanism to divide the continuity of real time and symbolise that flow as a series of discrete and equal steps.

    The gate also has to be able to make marks and erase marks. It has to be able to symbolise the logical notation of digital information in a way that is fixed and remembered.

    It needs an infinite length of physical tape to do this. And - usually unsaid - an infinite quantity of energy to operate the tape and the gate. And also usually unsaid, it must be isolated from a lot of other actual physics, such as the gravity that would collapse these infinite quantities into black holes, or the quantum fluctuations that would also overwhelm the algorithmic function of a tape and gate mechanism in physical reality.
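    To make that idealisation concrete, here is a minimal toy sketch of a TM step loop in Python (my own illustration, not Turing's formalism verbatim). Notice how the "tape" is an unbounded dictionary, time is a bare loop counter, and marking or erasing costs nothing - the physics has simply been abstracted away:

```python
# Toy sketch of a Turing Machine: an unbounded "tape", a head, a state, and a
# transition table. No energy, no noise, no gravity - just symbol shuffling.
def run_tm(transitions, tape_input, start_state="q0", halt_state="halt", max_steps=10_000):
    tape = dict(enumerate(tape_input))        # idealised infinite tape
    head, state = 0, start_state
    for _ in range(max_steps):                # discrete, equal, cost-free "steps"
        if state == halt_state:
            break
        symbol = tape.get(head, "_")          # blank cells appear for free
        write, move, state = transitions[(state, symbol)]
        tape[head] = write                    # mark/erase with perfect, costless memory
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example: a machine that flips 0s and 1s, then halts at the first blank.
flip = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
    ("q0", "_"): ("_", "R", "halt"),
}
print(run_tm(flip, "0110"))   # -> 1001_
```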

    So the TM is a hoax device. It is specified to minimise the actual physics - reduce the irreducible entanglement that must exist in any real semiotic system between symbol and physics. But in the end, such a reductionist move is physically impossible.

    And yet then, the computationalists like to wave this TM about - boast about its universality as an engine of algorithms, its Platonic status as implementation of pure logical idea - and demand of biology, why shouldn't a really complicated calculation also be conscious like any neurobiologically organised creature?

    Computationalists feel TMs have proved something about consciousness being just an information process, and all information processes being Turing computable, therefore the burden is on everyone else to disprove their claim.

    A biologist - especially if they understand the irreducible complexity of the semiotic relation - can see that a TM never actually removes the physics from the real world story. All that physics - the real constraints that space, time and entropy impose on any form of material existence, even one organised by symbols - is merely swept under the carpet.

    So the burden of explanation is really the other way around. The computationalists have to get specific about how they plan to re-introduce the physics to their de-realised realm of symbol shuffling.

    Semiotics doesn't say that can't be done. It just says to the degree the computationalists rely on a TM architecture, it has been all about constructing a machine that is as physics-denying as they could imagine. So they have a little bit of a problem having gone so far out on that particular limb.

    Neural network architectures, or even the analog computers that came before digital computers, are more embracing of actual physics. They reacted more directly to physical constraints on their informational habits. So it is not as if information technology can't be more lifelike in working with the irreducible complexity of a proper modelling relation with the world.

    But the Chinese Room argument was about dramatising how physics-less the TM story actually is.

    The problem was that it makes that criticism very plainly, but doesn't then supply the argument for life's irreducible complexity that makes the counter-position of biology so compelling.

    If the semiotic relation between symbols and physics is formally irreducible - at the level of mathematical proof, as has been argued by CS Peirce, Robert Rosen, Howard Pattee, etc - then that trumps the more limited claim of TMs as "universal computers".

    Universal computation applies only to the truly physics-less world that exists in the human imagination.

    Meanwhile back here in the real world ...
  • The flaw in the Chinese Room
    Of course life and minds follow rules. You are following the rules of the English languageHarry Hindu

    There is a world of difference between rules as algorithms and rules as constraints.
  • The flaw in the Chinese Room
    Edit: I think I've got it, it's the cut between the observer and the observed??Daemon

    Yep. Pattee was drawing the parallel with the observer issue in quantum mechanics. And they still talk about whether the wavefunction collapse - the act of measurement - is epistemic or ontic.

    So it was a bit of jargon he imported to biology.
  • The flaw in the Chinese Room
    there are only one set of rules for understanding Chinese, and both humans and computers would use the same rules for understanding Chinese. I don't see a difference between how computers work and how humans work.Harry Hindu

    But life and mind don’t “follow rules”. They are not dumb machine processes. They are not algorithmic. Symbols constrain physics. So as a form of “processing”, it is utterly different.

    To understand language is to know how to act. That knowing involves constraining the uncertainty and instability of the physical realm to the point that the desired outcome is statistically sure to happen.

    The connection between the information and the physics is intimate and fundamental. And with a TM, the physics is engineered out.

    So you can’t just hand wave about reconnecting the computer to the physics. You have to show where this now hybrid device is actually doing what biology does rather than still merely simulating the physics required.
  • The flaw in the Chinese Room
    But my point is that any simulation can trivially be made to "push against the world" by supplying it with inputs and outputs. But it is absurd to suggest that this is enough to make a non-conscious simulation conscious.hypericin

    A simulation processes information. A living organism processes matter. Its computations move the world in a detailed way such that the organism itself in fact exists, suspended in its infodynamic relationship.

    So it is not impossible that this story could be recreated in silicon rather than carbon. But it wouldn’t be a Turing Machine simulation. It would be biology once again.

    Howard Pattee wrote a good paper on the gap between computationalism and what true A-life would have to achieve.

    Artificial life and mind are not ruled out. But the “inputs and outputs” would be general functional properties like growth, development, digestion, immunology, evolvability, and so on. The direct ability to process material flows via an informational relationship.

    And a TM has that connection to dynamics engineered out. It is not built from the right stuff. It is not made of matter implementing Pattee’s epistemic cut.

    This undifferentiated view of the universe, life, and brains as all computation is of no value for exploring what we mean by the epistemic cut because it simply includes, by definition, and without distinction, dynamic and statistical laws, description and construction, measurement and control, living and nonliving, and matter and mind as some unknown kinds of computation, and consequently misses the foundational issues of what goes on within the epistemic cuts in all these cases. All such arguments that fail to recognize the necessity of an epistemic cut are inherently mystical or metaphysical and therefore undecidable by any empirical or objective criteria

    Living systems as-we-know-them use a hybrid of both discrete symbolic and physical dynamic behavior to implement the genotype-phenotype epistemic cut. There is good reason for this. The source and function of genetic information in organisms is different from the source and function of information in physics. In physics new information is obtained only by measurement and, as a pure science, used only passively, to know that rather than to know how, in Ryle's terms. Measuring devices are designed and constructed based on theory. In contrast, organisms obtain new genetic information only by natural selection and make active use of information to know how, that is, to construct and control. Life is constructed, but only by trial and error, or mutation and selection, not by theory and design. Genetic information is therefore very expensive in terms of the many deaths and extinctions necessary to find new, more successful descriptions. This high cost of genetic information suggests an obvious principle that there is no more genetic information than is necessary for survival.

    If artificial life is to inform philosophy, physics, and biology it must address the implementation of epistemic cuts. Von Neumann recognized the logical necessity of the description-construction cut for open-ended evolvability, but he also knew that a completely axiomatic, formal, or implementation-independent model of life is inadequate, because the course of evolution depends on the speed, efficiency, and reliability of implementing descriptions as constraints in a dynamical milieu.

    https://www.researchgate.net/publication/221531066_Artificial_Life_Needs_a_Real_Epistemology
  • Towards a Scientific Definition of Living vs inanimate matter
    BTW, I gave you another mispelling of Pettee since it gives you some good feelings, as my mispellings are likely a Freudian slip on how little I regard his/Semiotics ideas with regard to useful Scientific endeavors.Sir Philo Sophia

    I think it says a lot about your approach to scholarship for sure. :mask:
  • Towards a Scientific Definition of Living vs inanimate matter
    Clearly, this is why you did not try to employ any of that feel-good philosophical jargon in your definition, which I "twisted your arm" to produce.Sir Philo Sophia

    I think you just can't follow the argument. So let's break it down.

    You want to employ the least action principle to define the world of inanimate physical processes. And yet from the very first bit of your definition you introduced the error of mixing entropy and potential energy - "...resulting in a tendency of monotonic increased entropy and decreased potential energy over time."

    The classical Newtonian view of the least action principle is expressed by the Hamiltonian - the symmetry that obtains in an energetically closed system where potential energy and kinetic energy form a constant yo-yo balance. The swinging pendulum story. The falling weight gains kinetic energy as it falls and that then turns into a gain in potential energy as it instead rises against the backdrop gravitational field.
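    In the standard textbook form (added just for reference, nothing special to this argument), that yo-yo balance is the conserved total:

```latex
H = T + V = \tfrac{1}{2}mv^{2} + mgh = \text{constant}
```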

    That's great for one level of physics. But then physics figured out dissipative structure or far from equilibrium thermodynamics. A least action principle can still apply. But now we are modelling an open energy system where there is a flow from a source to a sink. And dissipative structure arises in-between as self-organising physical structure that can move the flow with the greatest efficiency.

    That is what Schneider refers to....

    Emergence of coherent self-organizing structures are the expected response of systems as they attempt to resist and dissipate the external gradients that are moving them away from equilibrium

    ... the way a heated plate of oil breaks into an organised structure of hexagonal convection cells.

    So this is about two levels of physics - closed and then open systems. And how a general variational principle - a symmetry maths for calculating shortest paths - can be applied to both.

    The whole dissipative structure story was its own big revolution of thought in the 1960s to 1980s. And naturally, the sciences of life and mind could suddenly see how this second brand of physics slotted right in as a new material foundation. It changed the game.

    And so we then have the theoretical biologists who did incorporate this new physics. And they began to apply the least action principle again as just the obvious way to arrive at the simplest descriptions of life as a natural system. It is equilibrium maths. You establish a flat baseline - a constraint of global symmetry - and then you have two opposed values, a here and a there, as your complementary quantities scaling the departures from this baseline.

    If energy is actually conserved in a closed Newtonian system, and we only seem to see the positive motion of the kinetic energy, then when that motion vanishes - as it seems to with a pendulum on the upswing - we can still keep track of the now hidden energy by calling it an accumulating potential.

    Likewise, in an open system, if a dissipative structure suddenly crystallises out of nowhere and generates a lot of negentropy, we can balance that by saying there is a matching increase in entropy being produced and exported across the boundary of the system.
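    In the usual second-law book-keeping (again just the standard form, for reference), the local drop in entropy has to be more than paid for by what the structure exports:

```latex
\Delta S_{\text{total}} \;=\; \Delta S_{\text{structure}} + \Delta S_{\text{exported}} \;\geq\; 0,
\qquad \Delta S_{\text{structure}} < 0,\quad \Delta S_{\text{exported}} > 0
```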

    Biologists could then develop that accounting system so that the energy/material flows could continue to be measured with some appropriate variational maths once the extra ingredient of symbols - the whole semiotic schtick - was added to the mix.

    A lot of different such models have been developed. I was pointing you towards that literature. It might be confusing, but it is all about precise definitions ... about making measurements within the appropriate theoretical framework.

    Now of course, if a semiotic level of dissipative structure exists and is bound by a least action principle, that is a big problem for your definition.

    Or maybe not if you realise that it is certainly not the dumb and blind Hamiltonian of Newtonian systems, nor even the dumb and blind dissipative balancing act of the self-organising structures that appear in "far from equilibrium" inanimate systems. It now has to be a new variational principle that provides the right kind of measure for a living system with a memory, a goal, some kind of mind.

    Something like Ulanowicz's ascendency, for instance.

    So your own definition was half-baked in being based on the idea of measuring life in terms of its ability to ignore the basic constraints of physics. You said - using vague terms like intelligence and sentience - that life can do its own thing, driven by some desire to accumulate potential energy.

    That is basically a mystical claim. Or at best, a descriptional definition. Look at life and it seems to somehow defy the laws of physics! It is anti-gravitational in that it can climb stairs. And even build stairs to climb.

    Practicing scientists can see what really makes a working definition. You need some closure principle to create a baseline for measurement - a universal symmetry statement such as the Hamiltonian. And then you need the two opposing forms of action that are the yo-yo symmetry-breaking departures from this baseline.

    That is the simplest kind of theory you can produce. The gold standard. You can now actually go out and measure the world.

    But of course, as I said, the dissipative structure revolution simply served to ground biology in the right kind of physics. Biology was also engaged in the heady business of grounding itself in the other half of the semiotic equation. It was turning to information processing concepts so as to pin down the "mind" side of living systems in measurable fashion.

    Salthe's infodynamics is an example of how this works.

    And generally, the whole field was energised by Shannon's demonstration that information is the other face of entropy. Physics itself was stumbling into a new information theoretic era where the Planck scale - the unified physics scale - has this "Hamiltonian" metric where entropy uncertainty and information certainty are a symmetry double act like kinetic energy and potential energy.
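    For reference, the formal parallel being leaned on here is just the standard pair of textbook expressions:

```latex
H = -\sum_i p_i \log_2 p_i \;\;\text{(Shannon information)}
\qquad\qquad
S = k_B \ln W \;\;\text{(Boltzmann entropy)}
```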

    So there is a big game in play. And the scientists actually know its rules. They know how to construct measurable theories. They understand the actual significance of the least action principle as a way to anchor that. And there is this "double foundations" thing going on where physics itself is starting to ground itself in entropy~information - the spontaneity of fundamental chance and the continuity of fundamental constraint.
  • Towards a Scientific Definition of Living vs inanimate matter
    Okay, that makes more sense of the von Neumann quote which otherwise didn’t seem connected to what you were saying, which I thought was about reproduction.Pfhorrest

    It is about reproduction - Von Neumann's influential work on self-replicating automata or universal constructors.

    But you have to have a blueprint of yourself to repair and maintain yourself as well as make clones of yourself.

    The definition of life can itself be seen as a guide for where to draw the boundary of a “self”Pfhorrest

    But the issue in question is how does life manage to draw its own boundaries. That is what makes symbols - semiosis - necessary.

    Schrödinger made this point famously in his "What is Life?" monograph ... along with fingering the complementary part played by negentropy.
  • The flaw in the Chinese Room
    Predictions are simulations in your head, and predictions have causal power. We all run simulations of other minds in our minds as we attempt to determine the reasoning behind some behaviour.Harry Hindu

    Of course. But you took that statement out of context. Here is the context....

    Like the weather or a carburettor, the neural collective is actually pushing and shoving against the real world.

    That then is the semantics that breathes life into the syntax. And that is also the semantics that is missing if a brain, a carburettor or the weather is reduced to a mere syntactical simulation.
    apokrisis
  • Towards a Scientific Definition of Living vs inanimate matter
    I should also point out, that it is very curious that you were initially touting an entropic definition of life as being the key defining principle ( e.g., negentropic), But when I asked you to make a concise definition you completely drop that and just focus on pettee's semioticsSir Philo Sophia

    While on that subject, Stanley Salthe has written a bunch of my favourite papers on this point. If I am parroting anyone on the matter, it is his infodynamics.

    See for instance his The Natural History of Entropy....

    The story begins, appropriately enough, with the Big Bang (Layzer 1975; Chaisson 2001). The key idea is that the universal expansion has been accelerating so fast that the universe has been unable to remain in equilibrium internally (Frautschi 1982; Landsberg 1984; Layzer 1975) and it appears that it may be continuing to accelerate at present (Ostriker and Steinhardt 2001). This expansion beyond the range of possibility for global equilibration gave rise to the precipitation of matter, which might be viewed as delayed energy.

    Clumps of matter represent potential energy gradients of one kind or another. Because of the Second Law, these energy gradients are intrinsically unstable and the world acts spontaneously to demolish them in the service of equilibration (Schneider and Kay 1994). And the faster the degradation, the more entropy (as opposed to useful work, which embodies some of the energy in other clumps) is produced per unit time. Gradients would originally form just from gravitation and fluctuation-driven winds and waves. Some of them, just by chance, would come to be configured in such a way as to be able catalyze the degradation of other, more metastable clumps.

    But, as I said, catalyzing energy degradation requires particular relations between gradients and consumers. This fact brings information into our picture. The information is required to create energy availability in a degrading gradient —availability for work. Gradient destruction in the service of work is necessarily an informed process (Wicken 1987). For a consumer to line up with a gradient so as to set up exergy extraction, it needs to have a certain orientation and form with respect to that gradient. What is a consumer? It is a gradient feeding upon another one. But it is necessarily an informed gradient. The origin of definitive semiosis (the biosemiosis of Hoffmeyer 1993) lies in these relations, as noted already by von Uexküll in 1926 (Salthe 2001). So, what is information?

    etc....
  • Towards a Scientific Definition of Living vs inanimate matter
    With that established, I then define "life" as "self-productive machinery": a physical system that uses a flow of energy to do productive work upon itselfPfhorrest

    Please note that I didn't just mean machinery that produces other machinery like itself, but rather, machinery that does "productive" work, in the sense that I defined it in that post, upon itself.Pfhorrest

    You did say that life was "self-productive machinery" and so I mentioned the telling objection to that being a sufficient statement.

    The "self" has to be dealt with here if we are going to be able to make this division between work and entropy clear, as "work" does speak to there being indeed a selfish interest in play.

    A machine is defined precisely by its ability – as some system of material constraints - to separate work cleanly from a flow of entropification. A combustion engine explodes petrol vapour. Heat and gases are sent out of the system as waste, while pistons, cranks and wheels are turned to serve the system that made the machine.
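    In the crudest first-law terms (a standard textbook relation, added only to spell out the split), the engine's trick is that the energy flowing in gets divided cleanly into the useful and the wasted:

```latex
Q_{\text{in}} = W_{\text{useful}} + Q_{\text{waste}},
\qquad \eta = \frac{W_{\text{useful}}}{Q_{\text{in}}} < 1
```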

    So in the semiotic view, mechanism is what stands between the symbols and the physics as the connection. The machine is a switch dipped in the entropic flow of the universe like a water wheel in a stream. It divides nature neatly so that there is now the work being directed inwards to the organism, and the waste being spent outwards to some environmental sink.

    And the shock from biophysics over the past 20 years is how literally life depends on its molecular level machinery.

  • Towards a Scientific Definition of Living vs inanimate matter
    I think that is problematic as well because gravity does fight against the second law of thermodynamics as it reduces entropy when matter clumps up together ( less micro-states are available for the matter to explore). So anything that uses such lines of definition I believe would not be viable. My general intuition, is that all entropy based definitions of life would be flawed. I'm still thinking through that and when I go through the Negtropic Articles And arguments that apokrisis Made,Sir Philo Sophia

    I agree that entropy accounting can be a little shonky on this score. As we dig into the details of the usual view - entropy always increases – we can see that the big picture view of cosmology says something different.

    Entropy looks to increase because the Big Bang says the Universe forever expands and cools. And yet that expansion is also gravitationally negentropic. It is building up a matching amount of energy potential.

    Charlie Lineweaver is an excellent cite on the complexities of this. His publications page: https://www.mso.anu.edu.au/~charley/publications.html

    If the Big Bang had stayed just a simple bath of cosmic background radiation - if its expansion and cooling had been adiabatic rather than fractured by a succession of matter-producing symmetry breakings - then no entropy would have been added or lost.

    But instead, mass did condense out to create lumps of energy density that then required dissipative structures to re-disperse back to cosmic radiation. So negentropy was produced by baryogenesis - those nebula gas clouds of hydrogen, helium and lithium. Then entropy was liberated by the gas clouds first contracting into gravitational balls, then - happy accident - bursting into the radiant flames of fusion.

    Lineweaver covers this in multiple papers, showing the evolution of the Universe as a series of steps - the symmetry-breakings that cause it to fall out of thermal equilibrium and so find itself forced to use dissipative structure to get back on track. Eg: https://arxiv.org/pdf/astro-ph/0305214.pdf

    By the way, this is its own complication on any least action account of cosmic evolution of course.
  • Towards a Scientific Definition of Living vs inanimate matter
    With that established, I then define "life" as "self-productive machinery":Pfhorrest

    Ah. But the question when it comes to life is how can a machine self-reproduce. That is the essence of Pattee's epistemic cut issue. It is the central problem that a definition of life must address.

    See Pattee's account of von Neumann's famous challenge to quantum theorists ... the infinite homuncular regress that arises as we try to avoid accounting for why a machine would have the intent to make the machine that it does.

    The most convincing general argument for this irreducible complementarity of dynamical laws and measurement function comes again from von Neumann (1955, p. 352). He calls the system being measured, S, and the measuring device, M, that must provide the initial conditions for the dynamic laws of S. Since the non-integrable constraint, M, is also a physical system obeying the same laws as S, we may try a unified description by considering the combined physical system (S + M). But then we will need a new measuring device, M', to provide the initial conditions for the larger system (S + M). This leads to an infinite regress; but the main point is that even though any constraint like a measuring device, M, can in principle be described by more detailed universal laws, the fact is that if you choose to do so you will lose the function of M as a measuring device. This demonstrates that laws cannot describe the pragmatic function of measurement even if they can correctly and completely describe the detailed dynamics of the measuring constraints.

    This same argument holds also for control functions which includes the genetic control of protein construction. If we call the controlled system, S, and the control constraints, C, then we can also look at the combined system (S + C) in which case the control function simply disappears into the dynamics. This epistemic irreducibility does not imply any ontological dualism. It arises whenever a distinction must be made between a subject and an object, or in semiotic terms, when a distinction must be made between a symbol and its referent or between syntax and pragmatics. Without this epistemic cut any use of the concepts of measurement of initial conditions and symbolic control of construction would be gratuitous.

    "That is, we must always divide the world into two parts, the one being the observed system, the other the observer. In the former, we can follow up all physical processes (in principle at least) arbitrarily precisely. In the latter, this is meaningless. The boundary between the two is arbitrary to a very large extent. . . but this does not change the fact that in each method of description the boundary must be placed somewhere, if the method is not to proceed vacuously, i.e., if a comparison with experiment is to be possible." (von Neumann, 1955, p.419)

    https://homes.luddy.indiana.edu/rocha/publications/pattee/pattee.html
  • Towards a Scientific Definition of Living vs inanimate matter
    .. For example, using your line of reasoning you would have to conclude that the planet Earth is alive because for all The molecules that make up the earth and its atmosphere to be exactly configured the way they are and to move with the dynamics exactly the way they do would be "a chance event beating the entropic odds by any number of lifetimes of a universe". However not even your proposed definition of life would consider the earth as Alive, so nor should it consider a virus being alive based on such very weak statistical arguments.Sir Philo Sophia

    Indeed, I am comfortable with any stab at a black and white definition of life having its interesting grey areas. We may differ on that score.

    So a virus falls into that vague zone. And so does an ecosystem or a planetary biosphere. The Gaia hypothesis has something to it. Life did transform the earth by creating the oxygen rich atmosphere that then supported the greatly more entropic metabolism of aerobic respiration.

    Right there is an example of life being able to evolve its way to even higher levels of dissipative structure. And the earth itself was brought into that regulated bio-loop.

    I generally have a disdain for any arguments that rely on statistics to come to any conclusion.Sir Philo Sophia

    I take the opposite position. It is clear reality is an emergent statistical phenomenon. Hence why thermodynamics is the foundational science.

    Propose is a plausible Darwinian mechanism that could have produced a virus from scratch?Sir Philo Sophia

    If genes are in competition, then a gene sequence that can hijack the means of its own reproduction is a statistically favoured outcome. The problem for life becomes instead to add enough regulatory machinery to keep this general tendency to go rogue in check.

    Bacterial introns and junk DNA show how general this kind of Darwinian warfare is. Life had to evolve its defences, like spliceosomes, to keep a lid on gene sequences going rogue.

    And, please do start from "first life was some kind of proton gradient, autocatalytic, dissipative cycle that emerged in the very particular environment of a warm, acidic, ocean floor thermal vent ".Sir Philo Sophia

    I'm not pushing a personal theory here but quoting "the scientific community".

    See Nick Lane's summary - https://www.nature.com/scitable/topicpage/why-are-cells-powered-by-proton-gradients-14373960/

    However, a prion lacks a Separate genotype (no symbols encoding in its Hyper complex functional shape) and phenotype in one. I think Pattee's definition of life which you are quoting would have to conclude that prions are dead.Sir Philo Sophia

    As I say, grey areas are not a problem here. Indeed - from a semiotic perspective - logical vagueness is basic to a developmental world view. The world can't start already divided. So the big question becomes how can its divisions - its epistemic cuts - arise? And for that, you need to be willing to embrace vagueness as the first ground of any distinctions.

    Black and white become what you get by starting with grey and then separating that towards its opposing extremes of light and dark. The darkest grey becomes the black. The lightest becomes the white. A binary division is developed. But only in the sense that some symmetry breaking has been taken to its complementary limits.

    You want to treat life vs death, animate vs inanimate, as dualistic categories. And so any greyness or vagueness has to be eliminated from "the holy definitions".

    But my organic and semiotic perspective takes the opposite view. Definitions are pragmatic. Differences are only relative. Vagueness is how anything new could even originate as a process of symmetry breaking development.

    So if life is defined by its organismic being - Robert Rosen's definition of life as a closed system of entailment - then that holism demands all a cell's parts are in functional co-operation. But within co-operation is buried the very antithesis of parts instead going rogue and being in competition.

    Hence no surprise that life erupts in that direction too with its viruses, prions, introns and other bits of cellular machinery making their bid for freedom.

    You should probably read Rosen's definition of life as well here. He and Pattee were colleagues so it is another angle on the general biosemiotic approach.

    See - http://www.people.vcu.edu/~mikuleck/PPRISS3.html

    Prions Seem to be more at the border of alive/Inanimate than even viruses in my estimation. So I figure they are a much better test of definitions.Sir Philo Sophia

    Or maybe they just show that borders are grey on close inspection - vague and not crisp. Analog, not digital.

    So, For at least the above reasons, your Definition is not a practical definition for Properly classifying all matter in the universe and establishing a metric for the earliest stage where inanimate matter transitions to living matter.Sir Philo Sophia

    But it is you rather than me who is so hung up on concise definitions. Life and mind are phenomena too complex to be pinned down quite so easily.

    Tornadoes and dust devils are also borderline dissipative structures if you are trying to force a biotic/abiotic division on nature. Belousov–Zhabotinsky (BZ) reactions are another classic example of inorganic systems being able to evolve better least action paths for themselves, as are convection cells that transfer heat with better efficiency.

    So the dead/alive distinction is very easy to apply to nature when we talk about rocks vs wombats. And becomes a suitably grey matter when we talk about tornadoes vs prions.

    The citations I have given address two issues.

    My first post pointed you in the direction of those who have been demonstrating how the sciences of life and mind can be founded on the physics of far-from-equilibrium thermodynamics. All life in general is dissipative structure. And life then differs as a dissipative structure in having the evolutionary capacity to break down barriers against entropification - to construct least action paths that wouldn't otherwise have existed.

    You've ignored that so far.

    Now that the discussion has turned to Pattee and Rosen, we are getting into the semiotic mechanism - the epistemic cut - that enables a dissipative structure to gain a regulatory control over its own being in this way.

    And so that now neatly roots biology in a psychological and even linguistic and logical perspective.

    Biology is placed on its correct foundation in terms of the physics. And is also being shown to be founded in the very "other" of physics - that mental or Platonic realm which we think of as information and meaning.

    You ask for definitions. I am concerned with fundamentals.

    And being a systems thinker, a holist, that is why I would want to show how biology - as something particular - arises from this founding combo of brute materiality and teleological intent. Physics and symbols.

    The Principle of least action (Which all of physics including quantum QED and My proposed definition are based on) Focus on the global action that matter/energy takes throughout its path through a fieldSir Philo Sophia

    Exactly. The teleological and holistic view. The system is imbued with its basic all-constraining principle. Something - horrors - almost mindlike.

    The only way to then demystify that telic principle is to follow Pattee, Rosen and other semioticians. The scientific account has to be expanded so it is anchored in the duality of physics and symbols, code and process, entropy and information.

    Your definition fails to do that. And indeed, you explicitly reject the symbol side in saying replication is irrelevant. The reason for citing Pattee is that his account succinctly gives equal weight to both sides of the semiotic equation.

    I look forward to your feedback and may be an improved definition that does not suffer from all the problems I have pointed out. I will soon be giving you feedback on your entropic Based counterpoints after I read those papers. However, I'm still pretty sure they will suffer from serious problems along the lines as I pointed out in a prior post.Sir Philo Sophia

    I'm sure nothing will disturb the tranquility of your prejudices here.

    But I look forward to discovering how many more random spellings of Pattee you will be able to generate. I think we are up to six now!
  • The flaw in the Chinese Room
    Really? As soon as you attach inputs and outputs to the robot brain, it is no longer a simulation?hypericin

    A robot has arms and legs, doesn't it? Or at least wheels. And sensors.

    So, if the Chinese room simulated a famous Chinese general, and it received orders which the laborers laboriously translated, and then computed a reply, and based on this orders were given to troops, it is not a simulation? Seems absurd.hypericin

    I'm confused which side of the argument you are running. Do you mean emulation rather than simulation in the OP?

    The universality of the Turing Machine allows the claim that such a device can emulate any computer. But simulation is the claim that a computer is modelling the behaviour of a real physical system.

    Strong AI proponents may then claim consciousness is just an emulatable variety of Turing computation. Biological types like myself view consciousness as a semiotic process - a modelling relation in which information regulates physical outcomes.

    A Turing Machine designs out its physicality. And so it quite straightforwardly becomes “all information and no physics”. The argument continues from there.
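
    To keep the emulation/simulation distinction concrete, here is a toy sketch in Python. Purely illustrative - the guest rule table and the pendulum numbers are invented for this example, not drawn from anyone's actual argument. The point is only that emulation reproduces another machine's computation exactly, rule for rule, while simulation merely approximates the behaviour of a physical system that exists outside the computer.

    ```python
    import math

    # Emulation: a host program runs another machine's rule table *as* rules.
    # Here the "guest" is a tiny finite-state machine tracking the parity of 1s.
    guest_rules = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    }

    def emulate(rules, state, tape):
        # The host reproduces the guest's computation exactly, step for step.
        for symbol in tape:
            state = rules[(state, symbol)]
        return state

    # Simulation: a program numerically models a physical system "out there".
    # Here a crude Euler integration of a frictionless pendulum.
    def simulate_pendulum(theta, omega, dt=0.01, steps=1000, g=9.81, length=1.0):
        for _ in range(steps):
            omega -= (g / length) * math.sin(theta) * dt
            theta += omega * dt
        return theta, omega

    print(emulate(guest_rules, "even", "10110"))   # the guest's answer, exactly
    print(simulate_pendulum(0.5, 0.0))             # the pendulum's motion, approximately
    ```

    The first result is identical to what the guest machine itself would compute; the second is only ever an approximation of a swinging bob - which is the sense in which the Turing Machine has designed out its physicality.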

    Now if I am a Chinese soldier and I’m following orders from a book, is the book conscious? Or is it me that is consciously applying some information to my material actions?

    And how is the Chinese room general any more than the equivalent of a book in this thought experiment?
  • Why is panpsychism popular?
    The form "represents" the intent, but this implies that the intent is prior to the form.Metaphysician Undercover

    Or it self-organises and so intent and concrete possibility co-arise. The form is simply finality finding its fullest expression. The usual Peircean reply.

    This is what we see in human relations, society, community, the intent is prior to, and cause of existence of the formal constraints.Metaphysician Undercover

    Or instead, there is always already some vague or informal understanding in play. And development of that gives it formal expression as some system of laws and rights or freedoms.

    OK, but intent, if we are to call it a constraint, is a different sort of constraint than form is.Metaphysician Undercover

    Formal and final cause are the synchronic and diachronic views of the same essential thing. In the moment, you can see that there is some structure. In the long run, you can see that it was expressing some reason.

    You are incapable of giving an account of how these constraints come into existence, where they come from, and why.Metaphysician Undercover

    I don't think you listen.

    Where does a river get its snaking curves from? From the constraints of a least action principle. It must arrange itself so as to balance the amount of water feeding it and the slope of the land which it must cross. If a straight line is too short to shift enough water in enough time, then it must throw out snaking loops and house the water that way.

    So the constraints are all the physical boundary conditions - the volume of water, the slope of the land, the hardness or softness of the terrain. The finality lies in the imperative of least action. The form is found in some degree of sinuosity. The river is the result - constrained within its suitably designed banks. It now seems a stable thing - an object of some kind we can honour with a name.
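
    As a back-of-the-envelope illustration of how those boundary conditions fix the form (all numbers invented, and the target gradient simply assumed rather than derived from the discharge): given a fixed valley drop and the channel slope the flow can sustain, the only free variable left is how long - how sinuous - the channel must become.

    ```python
    # All numbers invented for illustration; the "target slope" stands in for
    # whatever equilibrium gradient the actual discharge and terrain would set.
    valley_length_km = 10.0   # straight-line distance the river must cross
    valley_drop_m = 20.0      # total elevation loss imposed by the terrain
    target_slope = 0.001      # assumed gradient the flow can sustain

    valley_slope = valley_drop_m / (valley_length_km * 1000)    # 0.002 - too steep
    channel_length_km = valley_drop_m / target_slope / 1000     # 20 km of channel needed
    sinuosity = channel_length_km / valley_length_km            # 2.0

    print(f"valley slope {valley_slope:.4f}, required sinuosity {sinuosity:.1f}")
    ```

    The bends are not designed in; they are just the leftover degree of freedom once the constraints and the finality have had their say.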

    You ask me to bring a stone. I misunderstand, so you've failed in your attempt at constraining my behaviour. You created no constraints. Would you not agree that your words still had meaning even though no constraint was created?Metaphysician Undercover

    If you brought me a pebble, that is a small misunderstanding. If you bring me a wombat, at least I can credit you with understanding the notion of "bring me".

    It is all a matter of degree as to how obtuse I may judge you to be.

    the fact that you spoke them says that you meant something with them.Metaphysician Undercover

    Yeah, even if you gibbered back to me in some weird lingo, I would still have reason to think you were trying to say something in another language.

    So you are failing to demonstrate that language could have private meaning. Any meaning I could decode from the situation is relying on some familiarity with a communal habit.

    This is all gibberish to me, like you're try to change the subject again, trying to wiggle away.Metaphysician Undercover

    Seems a simple point. If I draw a line in the sand, there are now two sides to the matter.

    To be constrained is to be the one thing, and thus not any other thing. The usual negative space story.

    And talking of wiggling out of trouble, you've skirted the key issue - that sameness seems singular and difference plural for good systems reason. That was a poor choice of target on your part.

    All you seem to be saying is that if we overlook certain differences, assume that they make no difference, then we can have a true physical reality of this Ideal, "same".Metaphysician Undercover

    If you stick your big toe over the line I've drawn in the sand, I might just overlook it. If it's your whole foot, I would start to get peeved.

    Between black and white, we can leave as much grey as we like - if we are actually indifferent.

    As far as I'm concerned, I can decide you haven't yet done enough to cross my line.

    Do you not see that "same" is itself a form? It is the supreme, highest form in the hierarchy. It's often called "One".Metaphysician Undercover

    Yes. A form represents singularity for the reasons I set out. If it is a universal, it applies everywhere all at once.

    That is how hierarchy theory works.

    There is no sense in talking about the evolution of forms, when you already assume the physical existence of the highest possible form as the background for your model.Metaphysician Undercover

    But science shows that forms are emergent and so themselves form a developmental hierarchy. There are the most truly general constraints - we call them the laws of physics, or even the principles (like the least action principle). And then there are all the local rules and regulations, such as the strength of gravity on a planet the size of earth.

    So if you only ever travel on the surface of the earth, that would seem like the general backdrop constraining all your movements. It just happens to be the highest scale of physical law you pragmatically encounter.
  • Towards a Scientific Definition of Living vs inanimate matter
    So, for example, what exactly does that say about whether a virus is alive or inanimate?Sir Philo Sophia

    It is alive when it is in the middle of hijacking some host cell's metabolic machinery. That fits the definition.

    But is it then "inanimate" when it is dormant as a viral particle? That's not so clear.

    If a viral particle were found as some molecular arrangement in the inanimate world, it would count as an extraordinarily negentropic event - a chance event beating the entropic odds by any number of lifetimes of a universe.

    It would be too much of a statistical outlier on that score to fit easily into the category of the inanimate. It wouldn't be merely Pattee's "rate-dependent dynamics", as the time for such a molecular structure to assemble by chance would be far beyond anything such a process could deliver.

    We would be compelled to invoke rate-independent symbols as the only way that virus particle got constructed. And on earth, we are talking of perhaps as many such particles as there are stars in the sky.

    So by implication, the dormant virus is still part of the animate world rather than the inanimate one. Chance couldn't produce it. But Darwinian mechanism could produce it easily.
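
    To give a rough sense of what beating the entropic odds means here - using a generic round number for genome length, not tied to any particular virus - the chance of a specific sequence of that length assembling by blind draw dwarfs any physical measure of the universe's resources.

    ```python
    import math

    # Generic round numbers, not tied to any particular virus.
    genome_length = 10_000   # bases in a modest viral genome
    bases = 4                # A, C, G, U/T

    # Orders of magnitude against one random draw hitting a specific sequence.
    log10_odds = genome_length * math.log10(bases)

    print(f"1 chance in 10^{log10_odds:.0f}")   # roughly 10^6020
    # For comparison: ~10^80 atoms in the observable universe,
    # ~4 x 10^17 seconds since the Big Bang.
    ```

    Rate-dependent dynamics never gets anywhere near those odds; only rate-independent, heritable symbols do.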
  • The flaw in the Chinese Room
    You can plug a simulation into the world, for example a robot, feed it inputs, and it could drive it's body and modify the world.hypericin

    Sure. Plug syntax into the world - make it dependent on that relationship - and away you go. But then it is no longer just a simulation, is it?

    A simulation would be simulation of that robot plugged into the world. So a simulated robot rather than a real one.

    And to be organic, this robot would have to be building its body as well as modifying its world. There is rather more to it.