• Philosophy is for questioning religion


    Hmmmm. The principle behind our successful adaptation and procreation is instinct, but does it follow that all our faculties, including reason, are instinctive? Wasn’t that the manifest point of the Enlightenment, to prove the human beast is naturally equipped for considerations beyond the capacities of the lesser, merely instinctive, beast?

    Yes to the latter question. This isn't just a supposition of the Enlightenment, but a core component of ancient and medieval philosophy. For Aristotle, reason was what made the human being unique. Just as the ideal horse is strong and fleet of foot, reason is key to the essence of man; the development of reason was our telos, our ultimate purpose. For Neoplatonists, the logos spermatikos, the seminal reason, was the principle bridging the individual soul, the World-Soul, Nous, and the One, the key to the hypostases between levels of emanation. For Hellenistic Judaism and early Christians, the Logos played a somewhat similar role to the one it has in Neoplatonism, but without the same cosmology. Man's share in the divine Reason is what makes him "in the image/type of God."

    The attack on reason seems to come from two fronts. First, there is romanticism and mysticism. These traditions claim that not all experience can be properly analyzed or put into words. Ecstatic states, aesthetic appreciation: these important facets of human life are bled out by over-rationalization, in this view. When we speak of the divine, there are things that cannot be put into words (e.g., Pseudo-Dionysius). The existentialist tradition, which is quite strong today (the only philosophy I was introduced to in high school was existentialist literature), falls into this mold to some degree. Perhaps it's not surprising, then, that it is arguably more popular in literature classes than philosophy ones (but then most people take English classes and often not a single philosophy class).

    I think the romantic critique gets some things right.

    As to your first question, the other attack comes from scientism. In this view, our sense of reason is shaped by evolution. It is thus arguably as fallible as our other senses, which tell us that the Earth is flat, that the Sun rotates around the Earth, etc. Reason then is essentially ungrounded, emerging from the essentially meaningless universe by chance.

    The problem with this latter view is that it is self-undermining. If we have no reason to believe our mathematics or reasoning is valid, then we have no reason to believe in the science that tells us this is the case and no reason to think the world should work in such a way that it is intelligible to us.

    That doesn't stop the argument from being popular though. Once reason becomes ungrounded, it becomes terrifying. The ancients lived in a much less secure world, and so what they feared most was degenerating into beasts. Thus, they saw reason as divine. Today we are more scared of becoming machines, becoming slaves to an order we cannot challenge. The horrors of the Holocaust and the Soviet atrocities cast a shadow over the allure of reason; they seemed to show that reason could actually make us worse.

    IMO, this is a mistake. The brutality of the early 20th century, while a shock to Europe, which had seen relative peace since the Napoleonic Wars, was not at all uncommon historically. Ancient peoples didn't even attempt to hide their genocidal aims at times, making it a point of pride. Indeed, Europeans themselves were being just as brutal outside of Europe, e.g., in the Belgian Congo, during the period of continental peace. Brutality has been the norm, relative peace the deviation, and you don't see the latter without a rational organization of society.

    The idea that individual reason can stop atrocities was never going to hold water. Institutions have their own logic. Individuals are the accidents of social structures, not their substance. Individuals can shape institutions, but they are more so shaped by them. A relatively modern, educated society will have individuals committing atrocious acts if the larger structures are not rational.

    In periods of rapid economic growth, institutional development can lag the development of the populace. To quote Hegel, "a[n ideal] state knows what it wills and knows it as something thought." This doesn't mean institutions have qualia, but they have their own goals that diverge from those of their members, their own intelligence, their own emergent sensory systems (think government statistics offices). The failure of the Enlightenment was to think primarily in terms of the individual and the development of individual reason, e.g., "the legislature will be good if it has good, rational people." This leads to the naive view that political reform is just a matter of replacing bad people with good ones.

    Humans have an innate tendency to focus on agents and don't tend to think of composite entities as true agents. I think the Enlightenment view gets more right than its detractors, but it failed because it did not take a systems perspective on the logic of societies and did not recognize the risks of not-yet-rational institutions having sway over society.
  • Philosophy is for questioning religion


    Religion certainly served pragmatic functions in many societies. It can serve to legitimize the state (e.g., the deification of Roman emperors), it can act as a check on absolutist states (e.g., Saint Ambrose forcing Emperor Theodosius to wear penitent's robes and undergo chastisement after the massacre at Thessalonica), it can act as a legal arbiter (e.g., Saint Augustine mentions much of his time being sucked up by arbitrating property disputes, estates, etc.), it can help solve collective action problems when pushing for reforms (e.g., the central role of churches in the US Civil Rights Movement, and earlier, the Abolitionist movement), and it can help provide public goods in low-capacity states (e.g., churches were the main source of welfare programs and education for the lower classes in Europe for over a millennium).

    Civil society organizations and the state can also provide these goods. What makes religion and philosophy unique is their ability to give people a narrative about the meaning and purpose of life, an explanation of their inner lives and the natural world.

    This is something religion aims at, but so does philosophy, and the two can be quite close in this respect. A world view based on Nietzschean overcoming requires that the world be valueless and meaningless for the human to become great by triumphing over this apparent emptiness. A world where man is essentially evil and always on the verge of extinction is required for the somber rationalists to triumph over imminent disasters by building a just structure in the world despite the opposition of the legions of the selfish. Marxism, too, offers a religious-like, all-encompassing vision of the purpose of human life, one which also ends in salvation.

    This is ultimately where I think the instinct to hold on to and defend dogmas comes from. They become like blocks in an arch, kick them out and the edifice collapses.



    Right, the myth of progress is certainly itself a dogma in some respects. In many ways there is "nothing new under the sun." For what it's worth, I didn't much care for Pinker's "The Better Angels of Our Nature"; it seemed naive in many respects. His conception of progress is too focused on its being directed at the individual level by "principles." The argument for progress as a sort of evolutionary, information-theoretic selection process doesn't unfold the same way.

    Progress itself can lead to reversals in progress. For example, the current decline in battlefield deaths and in the size of standing armies isn't a unique phenomenon in history. You see the same sort of thing with the advent of the stirrup and the dominance of the mounted knight. Autonomous weapons systems might cause a similar shift.

    For a period, technology favored small, professional armies over mass mobilization, and while this reduced deaths in warfare on the whole, it also seems to have enabled incredibly unequal societies where populations became bound to a warrior caste.

    I do think Pinker makes a valid point in "The Blank Slate" about the "noble savage" and the "Hobbesian state of nature" being particularly well-established dogmas. I haven't read "Enlightenment Now," but the summary of the argument for why economic inequality isn't deleterious sounds like nonsense. If anything, inequality within states and between them seems likely to drive the next crisis point, and it makes solving other issues like global warming significantly more difficult. It's a case of focusing too much on easy-to-measure economic factors while ignoring factors that are just as key to self-actualization, e.g., respect, status, etc. I'm more of a fan of Francis Fukuyama, at least his two-volume opus surveying theories of state development, not so much the unfortunately more famous "End of History."
  • Philosophy is for questioning religion


    It's a process that appears to have occurred in many cycles. You have evidence of a process of self-domestication within other members of the genus Homo going way back, but also evidence for further rounds of rapid self-domestication occurring after the emergence of Homo sapiens.

    Anatomically modern humans emerged around 300,000 years ago, while behaviorally modern humans emerged just 150,000-60,000 years ago. Within the latter period you also have another period of rapid neotenization occurring between 40,000-25,000 years ago. This period saw dramatic changes in skull morphology and sexual dimorphism, lower brain volume, and the introduction of genetic disorders associated with domestication that appear to be absent from earlier humans.

    The expansion of glaciation also had a major impact on humans, making the species significantly smaller. Wealthy countries are only just now regaining the earlier peak in average male height. Agriculture may have been a further blow to average height, but there seems to be less consensus on this.

    The big question is how, and how much, the emergence of civilization (agriculture and later states) affected human evolution. On the face of it, such a huge environmental shift seems sure to produce changes in the species over time. At the same time, it's an incredibly difficult question to answer due to the shorter time period and the very dynamic way in which various groups transitioned from hunter-gatherer lifestyles to either pastoralism or agriculture, and then to full-fledged state systems with formal legal systems, organized religions, etc.

    IMO, it's nearly impossible to build a valid typology of levels of development to use in analysis, since increases in complexity didn't occur in a linear fashion, and widespread state formation happened only very recently, which compresses the record. Plus, as you get closer to the modern era, the topic gets increasingly politically charged.

    But an interesting point that avoids this set of questions is the claim that, over time, culture became more important to human evolution than genes. Culture represents a way to encode information about the environment that can shift with environmental changes much more rapidly than genetic evolution. Such a trend might suggest that humans are on a road to becoming more and more a communal species (ants and bees being premier examples). This might explain why the nature-versus-nurture debate has so much life in it. Humans may have adapted to be increasingly malleable to cultural influences, nature causing us to rely more and more on nurture.

    https://www.sciencedaily.com/releases/2021/06/210602170624.htm

    There is also a lot of evidence that autocracies are not good at spurring scientific progress, and technology is a key determinant in warfare. Market economies tend to outperform command economies in terms of innovation and growth. Autocracies also tend to perform worse militarily for a host of well-documented reasons relating to incentives, while on the other hand we have the observed phenomenon that democratic states don't tend to go to war with one another (although they tend to have even longer wars when they do get into them with their non-democratic rivals). All this opens up the possibility of freedom being promoted because states that don't promote the freedom and well-being of their people are more likely to be destroyed or radically altered by internal or external conflicts.

    It's at least a positive idea. The problem is that most reforms seem to only come when a crisis point is reached. For example, I don't see anything like the UN having actual teeth, power akin to the EU or US federal government, until some combination of global warming, global inequality, and migration spur on a world shaking crisis. It would be nice to do more reforms BEFORE things go to shit...
  • Philosophy is for questioning religion
    I guess I should caveat that claim by pointing out that scientism actually seems to have a few sectarian rifts itself, and the common argument to hurl at heretics is that their version is "unscientific."



    This leaves the question: why do we expect science to progress, such that I trust a physics or biology textbook from 2020 more than one from 1880, when the same expectation doesn't apply to other human institutions?

    It seems to me that science progresses through a process akin to natural selection. Theories that jibe less with reality eventually get selected against due to their inability to predict or explain all observations. That said, some theories also survive and thrive for other reasons (e.g., because they are elegant and aesthetically pleasing, easier to teach, are politically relevant, etc.).

    But why shouldn't the traits of states also undergo this sort of evolution? Perhaps there is an attractor within the chaotic system of possible state systems that causes states to converge on a better outcome. Certainly, one finding in political science is that developed states tend to become more similar over time in many ways.

    Anyhow, the claim that things are "just as bad as ever" certainly has its detractors, who can muster a lot of empirical evidence to support their claims of progress. The share of deaths due to homicide has been trending down throughout history.

    Oxford, a wealthier town, had a homicide rate of 110 per 100,000 in the 1340s, roughly 3.3 times Honduras' current rate and higher than in some war zones today. https://www.nytimes.com/1994/10/23/us/historical-study-of-homicide-and-cities-surprises-the-experts.html

    Studies of extant hunter-gatherers and forensic anthropology converge on incredibly high homicide rates for humans in a "state of nature," at around 2,000 per 100,000. This is 44.5 times the rate of the most violent nations today, and a larger share of the population than the total death tolls of many major wars. However, a homicide rate of 1.8-2% isn't particularly at odds with what we see in the species from which we descended, so perhaps it isn't that surprising. Slavery, rape, and cannibalism are ubiquitous in human history and only slowly became anathema. https://www.nature.com/articles/nature19758

    Wars have also been becoming less deadly. Several single-day battles in the ancient and medieval world, fought over just a few hours, produced more fatalities than the leaked figures for the entire Russo-Ukrainian War. The Thirty Years War killed 2.5 times more of Germany's population than both World Wars combined. The Huguenot Wars in France killed 11-14 times the share of the population as World War I; if Syria were to experience loss of life on the level of that conflict, its war would be around 10 times as deadly. Most members of the Wehrmacht and the Red Army survived the Second World War, whereas fatality rates for the Latin Crusaders in the First Crusade, an exceptionally large army for the period, were around 66-80+% despite their victory.

    US history follows a similar pattern. War fatalities as a share of the population decline in almost reverse chronological order, the exception being the American Civil War, which is the highest, although if one includes the smallpox epidemic made much worse by the Revolutionary War, that conflict remains on top.

    Now, this trend could very well reverse in the event of a war where nuclear weapons are used against civilian targets in large numbers, but for now it is a trend that's held across centuries. Obviously it's a trend in a chaotic system though, trending down in the long term but jumping around in a self-similar power law distribution on shorter scales.

    Biology also suggests this progress may be taking place. Modern humans retain far more juvenile features into adulthood than their pre-agricultural ancestors. Human beings appear to have undergone a process akin to domestication over time.

    Then, on the economic front, we have the fact that the share of human beings living in extreme poverty or bondage (slavery or serfdom) has rapidly declined.

    There are certainly arguments against progress, but this is a tough set of trends to explain away entirely. It can't easily be reduced to "just technology," either, as there is ample evidence to support the claim that more open societies and greater economic freedom produce more rapid technological development and scientific progress. Indeed, this was the whole reason Deng embraced a move to a market economy: such a system is essential to national power. Thus, we can also see how a move towards greater freedom might be selected for, in that it helps states survive conflicts.

    Of course, the first philosophers of progress I am aware of, the Patristics, Eusebius, John Chrysostom, Ambrose, Jerome, Cyril of Alexandria, and Theodoret of Cyrrhus, all tied their conception of progress to the Pax Romana as leading to the eventual fruition of Isaiah 2, "they shall beat their swords into plowshares and their spears into pruning hooks," and Psalm 46, "He shall make wars cease unto the ends of the Earth," and towards economic prosperity and freedom (Psalm 72), and then the Empire collapsed. So, we'll see...
  • Philosophy is for questioning religion


    I think you are right, but only half right. Philosophy helps tear down dogmas, but it also helps construct and sustain dogmas. Opposite any critical philosopher is always a set of orthodox philosophers attempting to preserve the current edifice.

    Galileo, Copernicus, and Kepler got into hot water for heliocentrism precisely because it conflicted with Aristotle's physics, which by then had become a dogmatic framework for the Church.

    Platonism became the religion of Neoplatonism with Plotinus, Proclus, etc., i.e., philosophers building up religion. The Patristics framed early Christian philosophy in Stoic and Neoplatonic terms because these ways of thinking were dogmatically held as the correct way to view nature, and anything that radically shifted away from them was necessarily suspect.

    Saint Augustine is widely held to be the creator of the Western concept of "the will," and the originator of both libertarianism and compatibilism (you see both, since Augustine's thought doesn't fit nicely in one box and changes over his 43-year career). He is also the first philosopher to investigate semiotics, although he's less influential here since no one else picked up on it for a long time.

    Augustine challenged Neoplatonic dogmas, both outside and inside the Church (e.g., writing against Origenist positions or Arian emanationist positions that hewed closer to Neoplatonic orthodoxy). Yet Augustine's teachings also became dogmas in turn, shaping the Western church in particular in a fundamental way. So here we have a philosopher building up religion.

    Philosophy is a process that both builds up and helps take down dogmas. Philosophy gave birth to the natural sciences, which were originally "natural philosophy," the social sciences, and a number of humanities fields (e.g. semiotics). You see the same dynamics at work in these fields, where a given paradigm is defended as orthodoxy when challenges to it first appear.

    Dogmas exist in the sciences. Physics had a 70-year span in which work on quantum foundations was anathema, with orthodoxy enforced by torpedoing the careers of people who dared to investigate the interpretation of QM. Biology has had a similar struggle over the Central Dogma that has blown up into public view recently.

    Philosophy has shown up to help destroy dogmas in the sciences and to help erect new ones. The idea of "philosophy becoming divorced from the sciences," in the early 20th century is itself a dogma pursued by philosophers. Copenhagen orthodoxy wasn't "just the science," it was a philosophical view that claimed it was the absence of philosophy, and thus unchallengeable without "degrading the science by injecting woo filled metaphysics." This was, in retrospect, still a philosophy, and a particularly dogmatic and uncharitable one (e.g., constant claims that almost every topic under the sun is essentially "meaningless").

    Scientism is a dogma supported by a set of particular philosophical outlooks. This dogma is defended in the same way the old religions were. On this front, philosophy is still both maintaining and breaking down old dogmas. I read a lot of popular science, and such books often contain tons of discussions of philosophical topics or metaphysical claims. These topics can often take up the majority of a book ostensibly not about philosophy, even when the same book denies a role for philosophy in modern science.

    This is why I am starting to wonder if the claim that "science doesn't make ontological claims, it is merely a set of epistemological methods" isn't simply a No True Scotsman fallacy.
  • Neuroscience is of no relevance to the problem of consciousness


    I agree with the consequent, but I don't understand the antecedent. If the antecedent is false, then the project of understanding the world is hopeless. Or is there an alternative approach?

    I'm not quite sure what you mean here? I can't prove the antecedent here. If I could, I'd be out collecting my Nobel Prize for the "theory of everything."

    However, it seems at least plausible, given the successes of physics to date and all of our general experiences, that past states of affairs evolve into future states of affairs based on principles that can be defined deductively. Hence why so many theoretical physicists spend more time with equations than with lab equipment, the latter of which often only comes out to test the deductive reasoning against experience.

    These descriptions of reality might be fundamentally flawed. But, if they are correct, then it follows that when we think we are seeing cause, we are, in fact, seeing something very similar to our naive conception of causation. The claim that this simply can't be the case thus begs the question about causation.

    Hume, at least the way I understand him and have most often seen him interpreted, isn't making an argument just about skepticism. He is saying we cannot see cause as such, because it doesn't exist. He reduces cause to constant conjunction, which is arguably a position that is eliminativist towards causation rather than just reductive.

    Hume's argument for this seems to be grounded in the understanding of the natural world at the time, which involved a set of extrinsic, unalterable laws that somehow guide the interactions of "material" objects. These interactions included action at a distance that could only be observed via the discrete objects that were being acted upon.

    Hume's point makes more sense in this context. In this conception of the natural world, the laws do all the explanatory lifting and yet they can never, even in principle, be directly observed. Thus, Hume can worry that the laws might have always been the same to date, but could change at any moment.

    Modern interpretations of the natural world tend to be intrinsic. Interactions are due to properties inherent in the things interacting. In this view, there is no set of unobservable laws; the "laws" are simply properties of nature we've been able to describe in a symbolic language. To paraphrase John Wheeler, if you gather together all the formulas (laws) needed for a complete physics, what do you have? A bunch of paper with equations on them.

    There's an ambiguity between "follows" in the sense of "comes after" and "follows" in the sense of "is constrained by". It doesn't make any sense to me to speak of the universe being constrained by natural laws. Natural laws are what the universe does given that it is not constrained. Actually, it is neither constrained, nor not constrained; it just does what it does.

    There's a similar weakness in the idea of causation. There's an idea that a cause somehow forces its effect. But that's a category mistake.

    I think you're quite right here. "Laws," is probably a bad word to use for the concept, but at this point we're sort of stuck with it.

    Likewise, if causation functions similarly to entailment, then it isn't a constraint or something that forces an outcome, except in the same sense that someone's "being a New Yorker" constrains them by excluding their being "not a New Yorker."

    It's like how we don't generally say "2+2 causes 4," but then if we're considering how a calculator accepts inputs and produces outputs, it's totally natural to say that the inputs cause the outputs. Cause is something we think of in the context of temporal progression.

    There is probably a formulation similar to Hegel's dialectical move from the opposition of "being and nothing" to "becoming" that can be done for cause, but I can't think what the right ingredients would be. Entailment + change = cause? Doesn't sound quite right to me, but the best I could think of.
  • Humans are advantage seekers
    When ascetic mystics starve themselves to death looking for the truth, they are doing it for advantage? When Augustine left his upper-class profession, abandoned his engagement to a wealthy heiress, and gave up his rising position in the imperial court, it was for advantage? Kepler advanced a scientific claim that saw him excommunicated, persecuted, and his mother tried as a witch for advantage? Galileo advanced a scientific theory that resulted in his perpetual imprisonment for advantage? Socrates accepted execution for advantage?

    I just don't see it.
  • Infinite Regress & the perennial first cause


    The concept of self-organization might interest you. In this book, a theoretical physicist looks into the cosmology of Jacob Boehme. He makes the argument that Boehme's conception of a self-organizing world might end up being more important in the long run than the work of his contemporary Galileo.

    Or for something a bit more down to earth there is Erich Jantsch's "The Self Organizing Universe," which I have heard good things about, although it is a bit dated.

    Melanie Mitchell's "Complexity: A Guided Tour" is very good too, but it isn't trying to look at any sort of big picture. It does, however, provide a look at how "a circle can draw itself" from a systems perspective.
  • Avoiding blame with 'Physics made me do it' is indefensible

    "All of Christianity is my interpretation of Scripture," is, while common in Christianity itself, not particularly well justified. Saint Augustine's "On The Free Choice of the Will," C.S. Lewis, Chrysostom speaking about the "hardening of Pharaoh's heart," specifically in Homiles on Romans, Origen on this same passage, the current Catechism of the (Roman) Catholic Church, etc. all embrace free will. You also have fatalist Christian thinkers. There isn't a consensus, but it's a historical reality that early writers in Church tended to be libertarians. Right before Saint Paul discusses the hardening of Pharaoh's heart and "vessels of wrath destined for destruction," in Romans 9, he spends Romans 6-8 exploring a concept of reflexive freedom and extolling the freedom God grants to humanity. It's a nuanced vision that can be taken many ways.

    I agree with you that Strawson's theory is unsatisfying. I don't generally agree with theories of justice and punishment that are wholly pragmatic either. I agree with Hegel that this reduces to treating humans like animals to be trained.

    My favorite theory of freedom and justice is the one Hegel outlines in the Philosophy of Right, but it's a bit difficult to do justice to in a quick summary. I will try later if I have time because I think it answers this question in many ways.
  • Neuroscience is of no relevance to the problem of consciousness
    On another note: Hume's assumption that people think the future will be like the past because, in prior cases the future has indeed been like the past, is also flawed. It doesn't describe why people actually have these beliefs.

    People often do think the future will be like the past, but this is often because this relationship is entailed by another belief of theirs, not because of inductive inference from past resemblances between future and past.

    E.g., Hegel thinks the future will be like (and in more ways unlike) the past because everything that exists does so out of logical necessity. Logos theologians think the future will be like the past because of their faith in a particular conception of God, not due to generalizations. Such belief might be tied to a single mystical experience.

    The same is true for beliefs that the future will be unlike the past. Many Patristics believed in a doctrine of "Christian historical progress," where the world gets better over time. They believed this due to a faith in God paired with an interpretation of a few specific verses in Scripture. When the Roman Empire was collapsing and things appeared to be getting worse, this didn't lead to an inductive reassessment, but rather simply caused them to take these events as small setbacks against the backdrop of a larger trend. You see the same thing with Marxists. The belief about the resemblance between past and future comes from other beliefs, which might be held due to inductive arguments, but as often are held due to deductive arguments.

    Not to mention that some conclusions that appear inductive are actually tautological. That water is H2O is a necessary truth, but one only arrived at by observation. However, just because observation was involved does not mean that the uniformity principle (UP) is required to say water will be H2O in the future. A lot of work in science, while utilizing observation, is focused on identifying these sorts of truths. Arguably, the finding that "causation" is identical with "progressions between states based on x specific laws" could represent the same sort of identity relationship. Indeed, the way causation is generally understood, if something spontaneous occurs in the future, as a brute fact, not according to any laws that can possibly be known, we would call it "uncaused" or "a miracle."

    Scientific identities like:

    Water = H2O

    Gold = the element with atomic number 79
    Hesperus = Phosphorus

    Are necessarily true although they are discovered a posteriori.

    The terms flanking the identity sign are rigid designators.


    [Def. A rigid designator is a labelling device whose function is to pick out the same object or natural kind in every possible world, that is, in every possible counterfactual situation.]

    Identity statements between rigid designators are necessarily true if they are true. Each term independently picks out the same thing in every possible world.

    Although these identities cannot be known a priori, they are necessary empirical truths, discovered a posteriori, like all scientific identities.

    Once we know that ‘water’ and ‘H2O’ refer to the same thing, we treat both terms as rigid designators.

    They have different uses or connotations – a chemist would use the former, an ordinary speaker the latter – but they denote the same natural kind.

    If water is necessarily H2O, there is no possible world (i.e., situation) in which pure water at normal pressure, if it is the natural kind we designate by that term, is not H2O, does not have the molecular structure it does, does not freeze at 0 °C.

    This supports the essentialist picture. If a thing’s identity depends on what it is made of, its microstructure will necessarily determine its disposition to behave in particular ways, i.e. its causal powers.

    Just replace the last paragraph, which has a reductionist view, with the more modern theory of "fundamental" parts only being definable in terms of the whole of which they are a part (fields), and you have an essentialist picture that dictates relations between the past and future that exist by necessity.
  • Neuroscience is of no relevance to the problem of consciousness


    Do you know of any law that guarantees the future will necessarily correspond to the past? I for one currently believe there is none. You named a few in your post and I will try to understand and address them soon.

    Sure, but whether these laws actually describe our world is another question entirely.

    One good example is Max Tegmark's Mathematical Universe Hypothesis. Tegmark posits that all abstract objects exist and that our universe is one such object. Tegmark's speculation requires that a number of things be true: the theory of eternal inflation in physics, the existence of a single set of natural laws that govern the universe, and the ability of abstract objects to somehow generate the first-person subjective experience we are familiar with. The last of these is simply glossed over in his book because it isn't his area of expertise, so all we get is that "very complex informational patterns produce experience."

    But let's look at what happens if his highly speculative theory is true. If it is true, then everything in the multiverse is determined. A description of the abstract object of which we are a part could, at least in theory, tell us, using only deduction, exactly what will happen in the future and what happened in the past. Granted, in Tegmark's view, this would be a description of an unimaginable, although finite, number of discrete "universes," plural.

    Causation, correctly understood, would look quite different in such a world. All future states are already defined, so causation would really just be the enumeration of state transitions that occur according to mathematical laws. Cause doesn't really exist as commonly conceived in this case, since the universe is a complete four dimensional object, but it can be formally described as what apparent state transitions look like for an experiencing entity within the universe. This is true even if the nature of our universe entails that what we take to be physical laws radically change in the future, since those changes would also be merely traits of the abstract object that is co-identical with the universe.

    More broadly, if the universe works according to set laws that can be described purely by using deduction, then there exist laws that entail that the future will correspond to the past. Indeed, if the universe has no randomness, then the very fact that the future and the past are both determined is such a law denoting similarity.
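    To make "state transitions enumerated according to fixed rules" concrete, here is a toy sketch of my own (an elementary cellular automaton, not anything Tegmark himself uses): a single fixed rule table deterministically maps every state to its successor, and the only brute input is the initial condition.

```python
import numpy as np

# Toy illustration: an elementary cellular automaton.
# A fixed rule table deterministically maps each state to the next one;
# "causation" here is nothing over and above enumerating those transitions.
RULE = 110
table = [(RULE >> i) & 1 for i in range(8)]   # next cell value for each 3-cell neighbourhood

def step(state):
    left, right = np.roll(state, 1), np.roll(state, -1)
    idx = 4 * left + 2 * state + right        # encode each neighbourhood as a number 0-7
    return np.array([table[i] for i in idx])

state = np.zeros(40, dtype=int)
state[20] = 1                                 # the initial condition is a separate, brute input
for _ in range(10):
    print("".join(".#"[c] for c in state))
    state = step(state)
```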

    We, as finite entities in the universe, might never fully understand these laws. We might think we understand them and get them wrong. We might mistake something for a law that is really the manifestation of the interaction of more basic laws, which could lead to "laws" we think exist changing on us. However, there is no reason to reject that such laws could be known, although arguably there is good reason to reject the idea that they could ever be perfectly predictive, because you cannot plug all the information in the universe into a function and read the output while being within the universe yourself.

    My main argument would be that such laws could be known entirely deductively, a type of knowledge Hume would accept. Indeed, this is how many people see the Holy Grail of a Theory of Everything. That we use induction to test the validity of our mathematical models of how the universe works, or that our deductions are informed by prior inductive findings, does not preclude a wholly deductive understanding of the universe. This is true in the same way that, if a mathematician graphs equations to look at them and get an intuitive understanding of how they work with numbers close to zero, it does not follow that, if she later develops a deductive proof of some inequality, the proof isn't "really" deductive because she used inductive reasoning based on the graphs to inform how she went about making it.

    Now, people can argue that we can never be truly sure that any deductively derived description of the laws of the universe actually maps to the universe, no matter how much we verify it with induction, but this just collapses into radical skepticism. You might as well argue that we can never be sure if 2 + 2 will be 4 in our world, because "what if nature is instantiating some other abstract principle and it just looks like it is instantiating the one that has predicted everything up to this point" applies to all attempts to use deduction.



    Maybe. If the universe follows laws, if it is deterministic (even in a stochastic way), then it seems possible, maybe even plausible given the successes of attempts to identify such laws, to define the root rules by which the present always evolves into the future. Perhaps the universe is deterministic but follows undefinable laws though? Then such a thing isn't possible.

    The elephant in the room here is "initial conditions." How much of our universe is determined by brute fact initial conditions? If the universe progresses due to laws, but it has unexplainable initial conditions, then it might not be possible to determine if "law-like" phenomena continue in the future. Tegmark's theory gets around this by positing a multiverse in which all discernible initial conditions actually exist.

    If the world doesn't progress in any determined manner then it is unclear if any knowledge can be grounded. Maybe we, and all our memories, spontaneously sprang into existence a second ago and will disappear 30 seconds from now? We can't know.
  • Neuroscience is of no relevance to the problem of consciousness
    I'm not sure if I made my point clear. I'm not saying "I believe the world progresses based on logical entailments." I'm saying that, IF this was true, then seeing things follow these deductive laws is seeing causation in action. When I throw the rock, I experience the cause of the window's breaking if my throwing the rock does indeed cause the window to break. It's hard to see what more Hume could ask for or what he thinks experiencing cause would look like if it could be experienced.

    I don't think Hume is merely a skeptic, although his point might reduce to radical skepticism. He does not seem to be saying "I don't think we can ever be sure if we are seeing cause," rather, he is saying "cause reduces to constant conjunction and we can't see one action entailing another because one action doesn't entail another."

    Now I know there is a later school of Humeans that emerged in the 1980s who say Hume is only talking about epistemological limits. I just don't see it, granted I've not reread his work extensively. It seems to me like he is taking the more concrete position of denying that causation, as generally understood, exists at all. But this argument is entirely based on the fact that seeing a billiard ball hit another one "isn't actually seeing the moving ball cause the still ball to move." This is where it seems like begging the question.

    The Problem of Induction is much more sophisticated, and so Hume's real argument about cause gets lost in the mix. But the Problem of Induction only says that we can't be sure causation will work the same way in the future as it has in the past. Rejecting induction doesn't require rejecting causation. The denial of causation as popularly understood doesn't hinge on the Problem of Induction; it hinges entirely on Hume's assertion that common experiences of cause aren't actually experiences of cause... because cause can't be experienced... because it doesn't exist... which is the very point the argument sets out to prove.



    The theories themselves do not necessarily rely on induction to be produced, but a judgement of the reliability of them, in application, does rely on induction. So people might produce thousands or millions of such theories, in any random way, but we would only choose the ones proven by induction as reliable, to be used, and these would become the conventional.

    Right, but the selection isn't arbitrary. It's based on a principle of indifference, as further formalized by the principle of maximum entropy. You're not going for best fit, because of overfitting problems, but for the fewest assumptions, a sort of formalized Ockham's Razor.
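    As a rough numerical sketch of that trade-off (my own invented toy setup, using the Akaike Information Criterion as one standard formalization of the fit-versus-assumptions balance): raw goodness-of-fit always rewards adding parameters, while the penalized score picks the simplest adequate model.

```python
import numpy as np

# Toy illustration of a formalized Ockham's Razor: raw fit (RSS) always improves
# as we add parameters, but a penalized score like AIC bottoms out at the
# simplest model that actually explains the data.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = 2.0 * x + rng.normal(scale=0.2, size=x.size)    # the "truth" is just a noisy line

for degree in range(1, 9):
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((y - np.polyval(coeffs, x)) ** 2))
    k = degree + 1                                   # number of fitted parameters
    aic = x.size * np.log(rss / x.size) + 2 * k
    print(f"degree {degree}: RSS={rss:.3f}, AIC={aic:.1f}")
```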

    So Hume really just makes an inductive conclusion about inductive conclusions, that they all employ some sort of presupposition about temporal continuity.

    Good point. I think the larger issue that gets buried in Hume and much modern philosophy of science is that the acceptance that:
    1. Logic and our understanding of it is valid; and
    2. The world is a logical place where at least some things follow from others; and
    3. The logic of the world is intelligible to us,

    All need to come prior to any knowledge statements. If this is not true, and one thing doesn't follow from another, then any prediction is impossible; we can't even trust our memories. Our theories of information and semiotics also collapse in a context where outcomes for any observation X have no relationship to any others.

    Claims against the rationality of the world also need to explain how so much science can involve doing deductive work on a chalkboard. Often, experiments are only doing the work of confirming deductive arguments about how nature progresses from state to state, i.e., cause. If rules accessible by deductive reasoning don't guide state progression, why should they seem to? If deductively accessible logical laws do cause progression, then seeing the rock break a window IS seeing causation.

    Hume is following the Platonist tradition in allowing some types of knowledge to side step this problem. "A thing can't be green and not green," still supposedly holds. I don't disagree that it holds, but rather maintain that this requires that we trust that our sense of logic is meaningful prior to accepting this as a true statement.

    And of course, we can be fooled as to logical statements. Is there a mathematics student who hasn't had at least one occasion where they argued with their teacher about how they MUST be right because of iron-clad logic, only to find out they were wrong? Has there ever been a programmer who hasn't run their program, absolutely certain the logic works out, only to get an inconceivable error they want to attribute to broken logic gates?

    But if Hume's take gets reduced to being skeptical of all knowledge claims in this way, then it is just the claim of the radical skeptics, Descartes' Evil Demon, the Academics, etc.

    If someone wants to maintain that Hume's Fork holds, they have to counter Quine and Co's arguments against it and explain why we, as creatures in an illogical world, who can know nothing certain of that world, can still somehow access inviolable a priori truths from the ether. It seems to me that if worldly creatures can access those truths then, in at least some sense, logic is in the world. But how can logic be in just part of the world and not collapse from the Principle of Explosion? This seems to require some sort of dualism.

    This is not really a consistent geometry though. A "curved plane" is contradictory because the curve of a sphere requires three dimensions while the plane is two.

    "Curved plane," is my sloppy, improper terminology; it's a surface with curvature. The easiest way to visualize a triangle with more than 180 degrees is to think of a triangle drawn with a ruler on a sphere, or for one with less than 180 degrees, one drawn on a saddle. While more intuitive, this is a misleading analogy because we don't need three dimensions to make the triangle have degrees unequal to 180; hyperbolic geometry, on a hyperbolic plane, accomplishes this.

    The argument that all mathematics is simply invented, and selected for its usefulness, is another angle from which Hume's Fork can be attacked. However, if one accepts that abstract objects are real ontological entities, then this also seems to provide a reason for doubting the reliability of Hume's distinction. I don't tend to buy into the argument that mathematics is only selected based on usefulness. Often, investigations are based on elegance and aesthetic preferences.
  • Neuroscience is of no relevance to the problem of consciousness


    What would be the point? If I give you my answer now, and you accept it as you read it, what grounds do you have for thinking it will still hold in a week, or tomorrow, or even five seconds after your read it? :cool:

    I am aware of a few ways of attacking the problem:

    1. Techniques in statistics and probability theory do not rely on induction. We have proofs for why Weibull regressions, multinomial logits, OLS, etc. work. We can use these techniques in the context of Bayesian Inference, while hewing to the principle of maximum entropy. This will never allow us to be absolutely certain of any inferences, but it does allow us to have high confidence in them (a small numerical sketch of this appears after this list). There are also combinatoric arguments along this line, see: https://plato.stanford.edu/entries/induction-problem/#BayeSolu .

    2. You can attack Hume's premises. The uniformity principle (UP) that Hume invokes for his attack on induction doesn't seem to hold up. This doesn't necessarily resolve the problem, but it changes it.

    Maybe inductive inferences do not even have a rule in common. What if every inductive inference is essentially unique? This can be seen as rejecting Hume’s premise P5.

    P5: Any probable argument for UP presupposes UP.

    Proponents of such views have attacked Hume’s claim that there is a UP on which all inductive inferences are based. There have long been complaints about the vagueness of the Uniformity Principle (Salmon 1953). The future only resembles the past in some respects, but not others. Suppose that on all my birthdays so far, I have been under 40 years old. This does not give me a reason to expect that I will be under 40 years old on my next birthday. There seems then to be a major lacuna in Hume’s account. He might have explained or described how we draw an inductive inference, on the assumption that it is one we can draw. But he leaves untouched the question of how we distinguish between cases where we extrapolate a regularity legitimately, regarding it as a law, and cases where we do not.

    One way to put this point is to say that Hume’s argument rests on a quantifier shift fallacy (Sober 1988; Okasha 2005a). Hume says that there exists a general presupposition for all inductive inferences, whereas he should have said that for each inductive inference, there is some presupposition. Different inductive inferences then rest on different empirical presuppositions, and the problem of circularity is evaded.

    https://plato.stanford.edu/entries/induction-problem/#NoRule

    3. You can show that Hume's argument is self-undermining.

    First, you can attack Hume's Fork, the distinction between relations of ideas (logical truths) and matters of fact, see: https://plato.stanford.edu/entries/analytic-synthetic/#ProDis . There appear to be significant problems with the formulation. For example, it was considered an a priori fact that a triangle's angles add up to 180 degrees. This turned out not to be true under all consistent geometries, e.g., for a triangle on a curved surface, as drawn on a ball. That is, there is no way to tell between an a priori analytic truth and a firmly held dogma. To be sure, some truths are true by virtue of being simple tautologies, but then these do no lifting in any analysis, and in any event, many of these can be shown to be true only as regards arbitrary axioms.

    If relations of ideas are actually matters of fact, and inductive inference performed on such facts is invalid, then Hume's position reduces to the radical skepticism of the Academics. We end up with "knowledge is impossible." Why should we even trust our memories? Just because your memory has seemed to be accurate in the past is no assurance that it will be in the future. But the statement that "knowledge is impossible" pretends at being a knowledge statement; it's the equivalent of the man who says "I only tell lies," a contradiction.

    If anyone said that information about the past could not convince him that something would happen in the future, I should not understand him. One might ask him: what do you expect to be told, then? What sort of information do you call a ground for such a belief? … If these are not grounds, then what are grounds?—If you say these are not grounds, then you must surely be able to state what must be the case for us to have the right to say that there are grounds for our assumption….

    -Wittgenstein

    Hume's argument can also be attacked by looking at the "Paradox of Analysis" and the "Scandal of Deduction." If deduction gives us no new information, then we can learn nothing from it that we did not already know. This also implies that Hume's argument denies the possibility of knowledge, as we cannot learn what we don't already know if only deduction is valid.

    Either of these routes then leaves Hume open to all the arguments against radical skepticism, my favorite being from Augustine's "Against the Academics," because they're witty.
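    Returning to point 1 above, here is a minimal numerical sketch (an invented coin-flip example, not anything from the SEP entry) of what "high confidence without certainty" looks like when you start from a maximum-entropy prior and update with Bayes.

```python
import numpy as np
from scipy.stats import beta as beta_dist

# Invented example: estimate a coin's bias from observed flips.
# The uniform Beta(1, 1) prior is the maximum-entropy choice when nothing
# is assumed about the bias in advance.
rng = np.random.default_rng(1)
true_p = 0.7
flips = rng.random(200) < true_p

heads = int(flips.sum())
a, b = 1 + heads, 1 + (flips.size - heads)          # Beta posterior parameters
mean = a / (a + b)
lo, hi = beta_dist.ppf([0.025, 0.975], a, b)        # 95% credible interval
print(f"posterior mean {mean:.2f}, 95% credible interval ({lo:.2f}, {hi:.2f})")
# The interval narrows as evidence accumulates, but it never collapses to certainty.
```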

    ---

    The above gives me reasons to think the past will be like the future, while also undermining the credibility of Hume's attempt to undercut this claim. Additionally, if I buy into computationalist conceptions of physics, where what comes before dictates what comes after by the same sort of logical entailment Leibniz had in mind when he developed his conception of computation, then my expectation that the future is like the past is not grounded in Hume's UP. Or if I buy into Hegel's arguments from phenomenology and speculative logic, then I see the progression of events, at least in the big picture, as part of a process of dialectical-logical unfolding, which is also not grounded in the UP. The same is probably true for other views of nature that don't jump to the top of my mind right now; they reject Hume's premises.


    BTW, I also think Hume's idea of causation is nonsense and that it contributed to his error here, and I say that as someone who largely appreciates his work, especially his work on this very interesting topic.

    When we say X causes Y we don't mean that X occurs before Y in all instances of Y (constant conjunction). We generally mean to imply some sort of step-wise chain of entailments between X's becoming a state of affairs and Y's then becoming a state of affairs, not merely conjunction. (As an aside, Hume's conception of cause as being reducible to constant conjunction arguably collapses in the face of the (mostly) reversible laws of physics.)

    Combined with his view on induction, Hume's whole argument against causation ends up turning into what is possibly just a very convoluted form of begging the question.

    Hume says we cannot sense that cause is a form of step-wise entailment. Why not? Because our senses can't tell us anything about the logical laws that may or may not be underpinning events. Why not? Because seeing events follow from one another is somehow not seeing how events follow from one another. But this is true only if you don't accept that events follow from one another in the first place. This problem is obscured by the fact that Hume argued for undecidability rather than the outright denial of a world that progresses logically.

    If the world is logical, then my throwing a rock at a window and seeing it break is my observing causation/entailment in the exact same way that my tallying 3+4 to equal 7 is my observing that the two sum together to 7 when the inputs 3 and 4 are given for the addition function.

    Following the Wittgenstein quote above, it's worth asking what Hume would count as observation of causation/entailment? If we discovered a physical theory of everything, and all observations followed its predictions, and further if we could use mathematical induction to prove that this relation holds in n+1 cases, would Hume still deny we have grounds for explaining causation? It seems possible given his arguments, but then this is essentially just radical skepticism that has been dressed up.

    Example: we know how video games work. They use logical computation to produce their outputs based on given inputs. Every time Mario jumps on a Goomba, it falls off the screen. But if we're Hume, we have to think that the console running Mario only appears to instantiate computation, and that what we observe is not guaranteed to "really" be the step-wise enumeration of mathematical entailments in the world; it just "appears" to be identical to it. This is Descartes' evil demon territory, because it implies that while 2 + 2 = 4, adding two apples to two apples might result in 5 apples at some time in the future; we can't be sure, because we can never determine whether mathematics is instantiated when it appears to be.

    The argument reduces to "cause cannot be logical connection because you cannot sense such a thing, and you cannot sense such a thing because you could only sense such a thing if cause is logical connection." However, if cause IS logical connection, then seeing X after performing Y every time would be your sensing the logical connection.

    At best Humeans can say "if cause is step-wise entailment then the world would look exactly like it does, and you can indeed observe cause, but it's possible to imagine that our world is observably indistinguishable from such a world but somehow different." This is just positing a potential bare illogical nature of reality though, radical skepticism.
  • Name for a school of thought regarding religious diversity?


    Would this be religious pluralism? To quote Wikipedia: "Religious pluralism holds that various world religions are limited by their distinctive historical and cultural contexts and thus there is no single, true religion. There are only many equally valid religions."

    Religious pluralism treats each religion as disjunct; to my mind that's what separates it from perennialism, which takes all religions to be pointing towards a single unified religious truth. Mythography treats all religions as fictions; thus it gives them all the same epistemological status.

    Indifferentism is used to describe the non-committal belief that no one religion is better than any other, and there is "apatheism," which denotes a lack of interest in the truth of religious claims.

    I am not sure if there is a specific sub-type of religious pluralism that specifies that all religions are epistemologically disjunct, but that each system is valid "onto itself." The term is fuzzy, and sometimes just used to denote tolerance, but some post-modern versions talk about "different types of truth."
  • Will Science Eventually Replace Religion?


    And sects of major religions either die or radically transform. The Vatican of 2023 is not the Vatican of 1123; the faith has undergone a dramatic transformation.

    Likewise, the Gnostic sects died out in late antiquity, although some of their ideas were reborn with the Cathars/Albigensians.

    I think this is generally taken as evidence against the veracity of religious or spiritual systems.

    However, I've found it interesting that some religious thinkers see this as a necessary process. I've seen this view more often in the Christian tradition, but I assume it applies elsewhere. The idea is that God reveals God's plan for humanity through history, in stages of progression, hence the Bible being over 50% histories.

    The Bible starts with God having a one-on-one, personal, handholding relationship with Abraham and the other Patriarchs. This relationship only required faith, like a toddler who must learn to do what their parents say, but who also willfully misinterprets commands and ends up being corrected.

    With Moses the relationship moves to a cultural group and the members are now expected to follow arbitrary rules. Christianity then represents a move to following the more nuanced, flexible reason behind the rules. A lot of Jesus' discussion of the law focuses on following the spirit of the law, love for God and others, over the letter of the law.

    The idea is that this progression continues today. Societies weren't ready for modern governance in antiquity. Before you get to socialism, the social question, you first need to progress to constitutional rule of law and the end of noble status, the political question.

    So the faith will change over time, growing towards an ultimate realization. This change will sometimes be painful; as Saint Paul says in Romans 8, the world is in labor pains as it gives birth to the future world where freedom is achieved.

    Just an aside, because I've always found both religious and non-religious theories of progress interesting. It is relevant though in that science itself also believes in progress. Even people who assiduously deny the concept of historical progress often allow that science is a human institution in which theories progressively get better at representing the world over time.

    Indeed, I think this belief in progress is necessary for science. If we don't think our theories today are any better than the theories of 1900, then we can't trust any textbooks or learn about a wide range of fields. If we have no reason to assume science "progresses," all our efforts would have to go toward deciding whether the textbook of 2023 is actually better than the one of 1883.
  • Why Monism?
    lol, forgot to ever submit this response from a while ago.



    The last place I saw such a point being made was Quentin Lauer's "Hegel's Conception of God," which I realize has the unfortunate problem of coming from a commentary on a philosopher who no one can agree on :lol:.

    That said, I can't think of any sense in which I've ever heard the contention of multiple types of logical necessity, as in, these different types being elements of fundamentally different things. Certainly, in the view that logic is merely a game, the different forms of logic are different, although the same sort of thing (games), but this would be a position that tends to deny that logic "really exists," independent of minds, not one that posits fundamentally different types of reason.

    Anyone else know of one? The closest I can think of is the idea of different axioms in formal systems, but then those are still generally acknowledged to be the same type of reason, not multiple different types, and we have things such as model theory for looking across systems.

    Saint Paul talks of the wisdom of the world versus the wisdom of God in the opening of I Corinthians, but this seems to be a difference in quality, not necessarily a difference in type. The hints at later Logos Theology in Paul's letters sort of undercut the idea of God's wisdom being its own type anyhow.
  • Replacing matter as fundamental: does it change anything?


    There is what I would call a faulty interpretation of Wittgenstein's "Philosophical Investigations", which assumes a "private language argument", as demonstrating the impossibility of the individual's "private language" as having a relationship with language as a whole. This is analogous to the interaction problem of dualism, the private language is portrayed as incapable of interacting with the public language. But this is a misinterpretation because what Wittgenstein's so-called private language argument really demonstrates is how it is possible for the private aspect of language to incorporate itself into, and therefore become a feature of the more general public language, through this causal relation which Wittgenstein saw as necessary to the existence of language.

    Great point. This seems to be key to popular computational frameworks for investigating AI (e.g. Kowalski's "Computational Logic and Human Thinking" or Levesque's "Thinking as Computation"). These embrace the idea of a private language, but because the language is itself a logical system it can be translated into a social language via computation.

    This translation isn't always effective. Understanding communication requires that we understand that agents have goals, and that communication is a means of fulfilling these goals. If current public languages are insufficient for communicating something an agent wants to communicate, it can use other means to try to transmit the semantic content, e.g. drawing a diagram or inventing a new word. You see this with kids all the time. They want to convey something, but lack the relevant linguistic knowledge base, and so attempt to combine existing words into new ones.

    Such combinations can enter the public language, but diffusion varies, e.g. in the US we say "sandbox" but it seems like in the UK "sandpit" is more popular. Once established, the phrases can be mapped to new semantic content, hence the sandbox/pit difference appears even when the term is referring to the more recent concept of a computer programming "sandbox."

    Your naturalist argument is flawed for the reason I explained. You wrongly portray final causation as top-down. This is because you incorrectly conflate final causation, which is bottom-up causation empowered by the freedom of choice, with the top-down constraints of formal cause, of which "entropy" is one. It is very clear, from all the empirical evidence that we have of the effects of final cause, that the purpose by which a thing acts, comes from within the agent itself, as a bottom-up cause, and it is by selecting this purpose that it may have a function in relation to a whole.

    Right, this is why, for the universe as a whole to have a "purpose," its relation to God, an agent who creates it, is often invoked. However, does this rule out theories of natural teleology to you?

    These have a conception of teleology/final cause that isn't dependent on an agent, at least not in a straight-forward way. Nagel's "Mind and Cosmos" proposes a sort of teleology of immanent principles underlying the universe that in turn result in its generation of agents. That is, the principles come first and in turn generate the agents that fulfill them. Aristotle's teleology is generally considered "natural teleology." Max Planck seems to have had ideas of this sort too, and maybe Leibniz for another example. I'd add Hegel, though it's unclear if he fits the same type; his system is certainly interpreted that way fairly often.

    I find these hard to conceptualize at times. The principles are what generate the agents who can recognize the principles and whose existence is part of the process of actualizing them. But then it seems like the agents are essential to defining the principles as teleological, even though the principles predate them, which, if not contradictory, is at least hard to explain in a straightforward fashion.




  • Replacing matter as fundamental: does it change anything?


    I totally buried the lead in my first attempt to answer you and muddled it all.

    Summary: The big benefit of information theoretic models of nature is that they can show how phenomena traditionally seen as "mentally constructed" can have an independent existence in nature and how information about these entities can enter the human nervous system. Bridging the subjective/objective gap and finding a solution to highly counterintuitive efforts at eliminativism helps to make a physicalist theory of mind more plausible, even though it also changes that theory in some ways.

    Second, information is necessarily relational. Information doesn't exist "of itself," but as a contrast between possible values for some variable. Such a framework denies the reality of any sort of "view from nowhere," or "view from anywhere," as contradicting our observations of how physics actually works. This helps us understand why we would experience things relationally, and debunks the idea that perspective (the relation of a system to an environment) is an arbitrary hallucination unique to consciousness. Information exchange between a rock and its environment follows the same sort of logic; the ability or inability to discern between different signals affects the behavior of enzymes as well as people, making elements of "perspective" less mysterious.
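
    A minimal sketch of the "contrast between possible values" idea, using the standard Shannon entropy formula (the example distributions are just illustrative):

    ```python
    # Shannon entropy: a variable with only one possible value carries no
    # information; the more alternatives it discriminates among, the more
    # a message about it tells us.
    from math import log2

    def entropy(probs):
        return -sum(p * log2(p) for p in probs if p > 0)

    print(entropy([1.0]))        # 0.0 bits -- no alternatives, no information
    print(entropy([0.5, 0.5]))   # 1.0 bit  -- one binary distinction
    print(entropy([0.25] * 4))   # 2.0 bits -- two binary distinctions
    ```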

    ---

    More detail, if you're interested: information theory allows us to explain how words on a piece of paper, signals in a cable that is part of the internet, DNA codons, the path a river cuts in rock, etc. can all be thought of as the same sort of thing. It connects different levels of emergence (this can also be done using Mandelbrot's concept of fractal recurrence, and the two concepts complement each other).

    What this lets us do vis-à-vis the subjective/objective divide is identify entities out in the world that we previously thought must exist only in the mind. For example, Galileo thought color did not "really" exist; color was reducible to the motion of fundamental particles. This sort of reduction has been popular throughout history, but comes with significant conceptual problems, not the least of which being that it says that many objects of study are somehow unreal despite their explaining large scale physical events. This is the viewpoint that something like "Japanese culture" is not real, that it is something we can eliminate and/or reduce to patterns of neuronal activation. The same is said to go for color, taste, economic recessions, prices, etc. They are "mental and/or social constructs," with a hazy ontological status.

    Of course, the view that Japanese culture is reducible to diffuse patterns of synapse development, physical media, etc. is different from the eliminativist view that such things are somehow "unreal," but they often go together. Information-based conceptualizations of nature give us a way to locate incorporeal entities like recessions or cultures in the natural world. A key benefit of information is that it is substrate independent, so we don't have a problem speaking of an entity that exists as a collection of neurons, printed symbols, vases, films, etc. Conceptually we can talk about morphisms within an entity that remain even if its physical components shift radically. E.g., if I wrote this post on paper with a pen, then typed it into the browser, then submitted it so that it now exists in a server and is reconstituted when accessed, we would be able to identify the signal throughout its shifts in physical media, including how the signal reaches human eyes and is then encoded in neuronal behavior.
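
    A rough sketch of the substrate-independence point: the same message carried through different encodings and recovered intact. The media here (a string, UTF-8 bytes, a bit list) are just stand-ins for pen and paper, a browser form, a server, etc.

    ```python
    # One message, several physical "substrates," same informational content.
    message = "the signal"

    as_bytes = message.encode("utf-8")                       # e.g. stored on disk
    as_bits = [int(b) for byte in as_bytes
               for b in format(byte, "08b")]                 # e.g. sent down a wire

    # Recovering the message from the bit-level substrate:
    recovered = bytes(int("".join(map(str, as_bits[i:i + 8])), 2)
                      for i in range(0, len(as_bits), 8)).decode("utf-8")
    print(recovered == message)  # True -- the pattern survives each change of medium
    ```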

    This seems to at least partially dissolve the subjective/objective barrier provided we already believe the body causes consciousness. It addresses the Hard Problem by filling in gaps in the physicalist view. If the body generates mind, then we can see how interactions in physical systems can bring information from a chair into the brain, thus creating a holistic model. But the "how is first person experience generated" question does remain unanswered here. The most the concept can do there is explain how any system, conscious or not, will have a "perspective"; different signals are relatively indiscernible depending on the receiver.
  • When Adorno was cancelled


    Sounds a lot like the student protest movements of the mid-2010s. I was taking some classes at UNC around that time, and I recall reading the list of student demands from that period. There were some good ones, or at least policies that would have been beneficial if scaled back significantly, but the whole thing had a farcical air because of how far it went. I recall the university was going to somehow provide food and healthcare for non-students across the region. I don't recall if there were any details about implementing this, but it runs into the immediate problem of Chapel Hill being extremely wealthy and expensive, so the people who would stand to benefit from the changes don't live anywhere near it.

    Not to mention there was an existing, if still quite inadequate, set of programs addressing just these issues, properly run by non-profits and state/local government, not a university (and which dwarfed the university budget, because healthcare is expensive). Like, if you want to feed people in Durham, shouldn't Durham's government get to manage it, not a university that was going to now be managed by students? It almost flipped into a sort of unintentional reverse elite rule. I can just imagine the reactions of people in rural Alamance County on being told that essential services will now be run through an agency overseen by an undergraduate council.
  • Replacing matter as fundamental: does it change anything?


    If light waves are information and patterns of neuronal activation are information, and we can describe both using the same information theoretic framework, it becomes easier to see how an event in the environment is tied to specific events in the brain.

    There is a causal chain to follow. We can also see how the brain is subjecting information coming in from the sensory systems to computation. Most incoming sensory data is quickly scanned for change or relevance, then dropped. Many of the more interesting experiments on how human sensory systems work hinge on how sight is "constructed" in the brain, rather than essentially being a video feed from the eyes. The idea is that, if computational models can explain the "why" of profound aspects of first person experience, they may also be able to explain the why of experience existing at all.

    This has been a useful model for understanding why sensory experience is the way it is and why we have persistent illusions that experimentation shows to be false.

    That said, I actually don't think it tells us anything about "where does first person experience come from." What you get is a lot of good work on how what the brain does can be seen as computation, how agents can be modeled computationally, and then an unsupported move to "and so a complex enough informational process that feeds into a global workspace creates first person perspective." That is, all the complexity kind of masks that the Hard Problem part is only vaguely addressed.

    However, this is because we're still asking for information based explanations to turn back to the old physicalist framework and explain consciousness in those terms. If you had a different ontology, one based on information, then maybe it gets easier? That's pure supposition though.

    TBH, I think computational theory of mind is either a blind alley or requires a different model of the rest of nature to work.
  • Real numbers and the Stern-Brocot tree


    Thanks for the detailed explanation. That makes sense, I was thinking the resolution was somewhat along those lines.
  • Replacing matter as fundamental: does it change anything?


    Maybe? At least a lot of people seem to think so. Information theory is arguably the biggest paradigm shift in the sciences in centuries. Quantum mechanics and relativity rewrote how we think about the world, but for many fields they were largely irrelevant.

    By contrast, information theory has had a huge impact on physics, biology, neuroscience, economics, etc. It's a paradigm that has allowed us to link together phenomena in the social sciences with phenomena in physics using not only a common formalism but a shared semantics (complexity studies does this too). And obviously the technology that theories of computation and information helped create have dramatically reshaped human society by giving us the internet, digital computers, etc.

    My take is that it is too early to tell if "information" theories will end up radically transforming how we think of the natural world, or will simply fizzle out. Currently, it's widely accepted that definitions of information all have major problems, at least from a philosophical perspective, yet the formalisms have been amazingly useful.

    There is a reason computational neuroscience is probably the biggest theory of consciousness right now, and why many of the more well known physicists publishing popular science books today seem extremely excited about pancomputationalism and "it from bit" theories, even if they don't fully endorse them. That said, information is a notoriously vague concept. I feel like every paper on the philosophy of information starts by stating this fact, and so it's not always clear what this new vision actually is in a systemic sense.

    Information theory ties into the hard problem by showing how signals in the environment, e.g. light waves bouncing off objects, can be picked up by the eyes and encoded in patterns of neuronal activation. It seems like a potential way across the objective/subjective gap, but such explanations are in no way close to being complete and rely on vaguely defined terms to do a lot of the legwork.

    Surprisingly, there hasn't been much philosophical work on "what is computation" (though Leibniz actually has some interesting, very ahead-of-their-time ideas of computation as logical entailment). Turing was thinking of human computers, people whose job was to run through computations, when he wrote his seminal paper on computation. He was thinking "what are the minimal instructions and inputs a person needs to receive to perform a computation and what are the minimal things they need to be able to do to carry it out." This is strange when you think of it. Computational theory of mind is a theory that says consciousness is caused by/reducible to a formalism based on a conception of what conscious human beings do while performing mathematical calculations. It is, at least in its historical conceptualization, circular in this way.
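
    A minimal sketch (not Turing's original formalism, just the same flavor of "minimal instructions" machine): a finite table of rules of the form (state, symbol read) -> (symbol to write, head move, next state), here computing the successor of a number in unary. The names are illustrative assumptions.

    ```python
    # A toy Turing-style machine: a finite instruction table plus a tape.
    table = {
        ("scan", "1"): ("1", +1, "scan"),   # move right over the 1s
        ("scan", " "): ("1", 0, "halt"),    # first blank: write one more 1, halt
    }

    def run(tape, state="scan", head=0):
        tape = list(tape) + [" "]           # pad with one blank cell
        while state != "halt":
            write, move, state = table[(state, tape[head])]
            tape[head] = write
            head += move
        return "".join(tape).strip()

    print(run("111"))  # '1111' -- three becomes four, one instruction at a time
    ```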

    I think digital physics, the idea that all reality can be reduced to 1s and 0s, that bits can be swapped in for fundamental particles in old corpuscular models, has been pretty well debunked. It's important not to conflate this with all information ontologies, something that seems to happen fairly often. Digital physics is sort of the strawman for attacking "it from bit," it seems.

    IDK, I could write a lot about this but I figured many people might already be aware of these things or uninterested. If anyone wants some recs I have a sort of "information reading list," I've been collating. The Routledge Philosophy of Information Handbook is particularly good though for an in-depth conceptual look that also has specialized articles grounded in the philosophy of specific natural and social sciences.
  • Real numbers and the Stern-Brocot tree


    I did come across the term I meant to use in reference to solution sets. It's the replacement set, so for a formula like 1 + x = x + 1 this would be the variable replaced with each real number.

    Now if the solution set is the real numbers, does that mean the replacement set is the same size? And if so, what do we call the members of the replacement set if not equations? Expressions? Or, as I thought might be the case from your post, can we say that the replacement set is actually smaller than the same formula's solution set?

    Here is why I thought the replacement set might be smaller, but tell me if I'm wrong:

    The things in the replacement set seem to be equations; 1+2 = 2+1 is an equation, at least as defined as two expressions with an equals sign (which maybe is a definition lacking rigour?). However, if equations are necessarily finite, how could we have one for every real number? We would need an uncountable number of such equations, one for each real, which seems to violate the logic you described.

    You'd either need an infinite string for the equation to put the real number in, since you can't do it with a finite number of digits, or a unique symbol for each real, something like π. But then you would need an uncountable number of symbols, which also violates the logic. This would mean that there is not a well formed equation for every solution of 1 + x = x + 1, the old "some truths aren't expressible in a system."
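
    A rough sketch of that counting point, assuming (as in standard treatments) that an equation is a finite string over a countable alphabet Σ:

    ```latex
    % Finite strings over a countable alphabet form a countable set,
    % so they cannot be matched one-to-one with the reals.
    \[
      |\Sigma^{*}| \;=\; \Bigl|\, \bigcup_{n \in \mathbb{N}} \Sigma^{n} \Bigr| \;=\; \aleph_{0}
      \qquad \text{(a countable union of countable sets)}
    \]
    \[
      |\mathbb{R}| \;=\; 2^{\aleph_{0}} \;>\; \aleph_{0}
    \]
    % Hence there is no injection from the reals into the finite equations:
    % most reals get no finite equation of the form 1 + r = r + 1 of their own.
    ```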

    I looked around for answers but it's hard to find something specific, and some online sources aren't vetted and conflict. I figured the answer has to be either that the members of the replacement set have a different name than "equation" or that the replacement set is counterintuitively smaller than the corresponding solution set in cases where the solution set is the real numbers.

    But could you have a set of "equations" from a system that does allow an infinite alphabet? These wouldn't be valid equations under set theory, but they would be a set in the way we can have a set of mathematical models, or a set of library books. I guess "a set of statements from a language with an infinite symbol alphabet."
  • What is neoliberalism?


    Obama's bailing out of the banks after the 2008 crisis was a conspicuous neo-liberal move. Cornel West described Obama as a 'black mascot of Wall Street.' The point, I guess, is that liberalism seems inescapable.

    It's something that might be justified under neoliberalism but it isn't uniquely neoliberal. Keynesianism would justify the same move. I think virtually all modern economic theorists would say you bail out the banks at that point; it's a 1929 type moment. The differences in economic theory apply more to "what do you do after bailing out the banks to prevent doing it again."

    Obama wasn't even elected when the main bailout program, TARP, was passed on October 3, 2008, and he wouldn't be President for several more months. He had other legislative options after that, but they had to pass Congress. He couldn't veto the bailout, and I doubt he could have gotten a bill passed to kick the brace holding up the economy out of place at the exact moment unemployment was skyrocketing and contagion was hitting markets around the globe.
  • Will Science Eventually Replace Religion?


    For sure. Scientism is definitely a thing. Now, there is a good argument to be made that scientism isn't science, and that science doesn't deal with key aspects religion does, e.g. ontology. But I think there is also a good argument to be made that this is a No True Scotsman fallacy given some of the world's most famous scientists write best sellers in the science category that are substantially or even mostly about ontology, the origins and nature of the world, or make explicit claims about morality and moral realism.
  • Will Science Eventually Replace Religion?


    Scripture is at most interpretative today, at worst completely lost/mis-translated. Every copy of the Bible/Torah/Quran etc is a lesser version of the previous due to human error/misunderstanding and general societal change. Just as when you repeatedly feed something printed back into the printer, the definition, the visibility of the text, is lost to imperfect reproduction. Loss of resolution.

    But when was this not true? Verses from Numbers carved in silver in proto-Hebraic script predate the Hebrew language. The story of Noah, including the phrase "two by two," is on a Sumerian tablet that is among the oldest pieces of writing ever found.

    We no longer think many of the books of the Prophets were written entirely by the titular prophet (Ezekiel being a notable exception). Isaiah appears to have been a collection of sayings of an Isaiah that took final form from other works over centuries. We see differences in OT documents dating from before Jesus' time. I'm not sure if there ever was a one true text. It changes less now, but it still changes with new translations and archeological finds.

    We lose some context, for example, none of us speak Greek. But the Greek speakers in the 5th century all thought Paul wrote Hebrews, which changes the context. We now think this is very likely a different author who knew Paul or Paul's teachings, so we may have gained a better reference frame in some sense.
  • Will Science Eventually Replace Religion?


    Religion, on the other hand, is the same old, same old. The crucifixion of Jesus. The raising of Lazarus. The loaves and the fishes. I heard those stories as a child. They are still around today, same as ever. Same as they were a thousand years ago. Religious people may spin those facts as an advantage. “See,” they might say, “the unending power of God’s Word. Indeed, his word shall endure forever.” But the foundational religious texts are still a finite resource.

    You could also spend many lifetimes studying just Christian theology and not read a fraction of all that has been written, let alone doing an in depth comparison with Judaism, Islam, Hinduism, Buddhism, or historical religions. As with science, there are constantly new developments in theology. While it doesn't move as quickly as science most of the time, at times it does, i.e. there have been multiple periods where numerous new sects have formed with radically different interpretations of Christianity over a short period.

    In the Abrahamic tradition the core texts have remained largely the same for a long period, but even this isn't absolute. The early church had myriad new "books of the Bible" that were ultimately deemed non-canonical by orthodox Christianity. Judaism had something similar with the deuterocanonical books that show up in the Septuagint but were later removed from the Canon, or the Books of Enoch, which circulated widely but are only canonical in Ethiopian Judaism and Christianity.

    The Reformation saw several books removed from the Protestant Canon (although they were still included in copies of the Bible and read in churches until they were cut for printing costs in the 19th century, strangely given American Evangelical antagonism to the texts now). The Book of Mormon was "revealed" relatively recently, and new, quite different sects of Christianity have emerged since the 1800s, e.g. the entire Charismatic movement.

    The Sethian Gnostics rewrote Genesis such that the God creating the material realm was a demonic figure named Yaldabaoth, and Jesus gives mankind the fruit of knowledge with the aeon Wisdom.

    Few people today believe that there really once was a garden with a talking serpent and a naked couple

    Few people believed that in 100-500 AD when Church doctrine was formed. Origen, Ambrose, Augustine, etc. all interpreted these allegorically. To be sure, the less educated probably did tend to think of these more as factual records of discrete events, but the Church has a very long history of allegorical, philosophical, and esoteric interpretations of Scripture. Paul's letters show a man well versed in, and in many ways accepting of, Greek philosophy, so this is not a stretch. Pagels and others have argued that John is essentially a Gnostic gospel in some ways, and Neo-Platonism, with what it says about the general reality of all physical events, has been a part of Christianity since the very outset. A Catholic priest who was also a physicist developed the Big Bang Theory.


    Fundamentalism is a modern movement. Most Christians are not fundamentalists. Fundamentalists are given outsized weight in perceptions of Christianity in the Anglophone world because they are more common in America, more ostentatious in many ways about their theology, and because they are a much easier punching bag for people who want to attack Christianity or religion in general. Most Christians are not even Protestant, let alone Evangelical Protestants. Even in the US now, Roman Catholicism is the largest denomination.

    There is definitely plenty of development historically. The God of Hegel, and of Behmenism more generally, is extremely different from the God of fundamentalism or even the mainstream Lutheranism from which Boehme and Hegel emerged. Some theologies speak specifically to this historical progression. Most of the Bible is histories. God is said to demonstrate God's nature through history. This means that the message man needs to hear changes over time and so the faith changes. The rise of rationalism was celebrated by some Christians during the Enlightenment, and the triumphs of science (and its limitations) are extolled by some Christians today (neo-Hegelians for instance).

    If history is any guide, religion is sure to keep changing and also to stick around. It's been ubiquitous in every human society, and religious doubts are also at least as old as writing.

    Most Christians ignore the teaching of Jesus that disease is the result of demons and sin

    Because this isn't in the Bible and you would have to stretch it quite far to say that Jesus teaches this. Hence, it was never a major interpretation of disease, although it certainly did have more truck with people before disease was better understood.

    I think it's also important to distinguish here between widely accepted doctrine/theology and the superstitions of the laity. Just because many people misunderstand quantum mechanics or have a naive understanding of science doesn't mean "science says x." Christianity has always had a leadership structure that vets teachings, as did Judaism before it, but that doesn't mean people don't have their own interpretations.

    Jesus casts out demons but also heals illnesses. They are described in a distinctly different manner. The Epistles only mention healing of diseases, never exorcism. Throughout the Old Testament God is involved in disease, not Satan or demons.
  • What is neoliberalism?


    Apparently whether it's right or left that fails, the result is Nazis. :worry:

    "Reductio ad Hitlerum" is my favorite philosophy joke phrase.
  • What is neoliberalism?


    I think you have it right. As a historical moment with a lot of theorists it's always going to be a bit of a fuzzy term.

    The only part I might watch in trying to condense this explanation is "strong interventionist state." This makes me think of European social democrats or Bernie Sanders. But the politicians generally cited as representing neoliberalism are Ronald Reagan and Margaret Thatcher; it's more the ideology of the 80s-90s US Republican party or Christian Democrats in Europe.

    I've always thought "market economy," as opposed to a control economy (controlled by the state or a nobility with special legal status) was less loaded with competing definitions than capitalism, but it's still a fuzzy term.

    Strangely, "neoliberal" in the vernacular has morphed into being a sort of stand in for "center left." It's sort of an insult in the sense that none of the new targets for the label embrace the term; Joe Biden doesn't want to be a neoliberal, while neoliberal theorists did embrace the term. So, you can see right wing talking heads complaining about "neoliberals," while also arguing for neoliberal policy in the same segment.

    Sort of a weird turn that I think resulted from "liberal" coming to be synonymous in the political vernacular with "left wing," and further left Democrats using the term to slam their more right wing Democratic rivals (not saying this was a wholly unjustified comparison, but it's not a term center Dems have been eager to use to describe themselves; they don't want to be Reagan.)
  • Real numbers and the Stern-Brocot tree
    Thanks for that.

    I apologize for this whole digression anyhow because I had the realization that the thought I had that kicked this off is irrelevant to the thing I'm actually interested in, the ways in which the "process" by which 2 and 2 are added together, the computation, is different from just the output of the process. Total blind alley.

    It's shockingly hard to find a discussion of computation that isn't just "computation is the processes that lambda calculus, Turing Machines, etc. can define." You can find a lot of articles on "what are numbers," or "what is entailment," but I've had trouble sating my interest on this front. You'd think that all the interest in physics re: pancomputationalism would have sparked more philosophical interest in the topic? IDK, maybe I'm just looking with the wrong terms.
  • Real numbers and the Stern-Brocot tree


    Yeah, I thought I understood the miscommunication there and I did not. You're correct re: well formed formulas. I had always thought formula = well formed formulas and equation could be defined more broadly as "any two equivalent expressions," such that an equation would allow for things that a formula would not, e.g. having infinite length or an infinite symbolic alphabet. Also that formulas have variables whereas an expression needs none. What would be the term for all statements following the form "1 + n = n + 1", but actually using the real numbers, not the variable?

    Basically, if the reals are the solution set of all the values that can be put into 1 + n = n + 1, what do we call the set of all the "things" (I said equations before) that the values are solutions to?

    The set that has 1 + π = π + 1, etc. as its members?
  • Real numbers and the Stern-Brocot tree



    Ah, my mistake. This is supposed to say "If 4 + 4 = 8 and 10 - 2 = 8, what does that mean for the instantiation of the computation?"

    I.e., how is computation instantiated in the world? (E.g., https://plato.stanford.edu/entries/computation-physicalsystems/).

    If you tell a Turing Machine to add 2 to 2, that's different from subtracting 2 from 6, right? But they have the same answer, 4. However, if all arithmetic expressions that = 4 are identical with it, why does this seem so unintuitive, and why do we think dividing 16 by 4 is different from adding 2 and 2 when it comes to computation? We tend to think of computation as being the thing a Turing Machine does to produce outputs, not the outputs themselves.
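
    A toy sketch of that point: two procedures with the same output but observably different step-by-step behaviour. The functions and step traces here are illustrative, not any standard formalism.

    ```python
    # Same answer, different computations: the traces record each intermediate step.
    def add(a, b, trace):
        total = a
        for _ in range(b):          # repeated increment
            total += 1
            trace.append(total)
        return total

    def subtract(a, b, trace):
        total = a
        for _ in range(b):          # repeated decrement
            total -= 1
            trace.append(total)
        return total

    t1, t2 = [], []
    print(add(2, 2, t1), t1)        # 4 [3, 4]
    print(subtract(6, 2, t2), t2)   # 4 [5, 4]  -- same output, different process
    ```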



    Yeah, that was very unclear with the wrong word in there.
  • Is The US A One-Party State?


    The only unit of analysis is cash in a capitalist system

    Is that true in all cases? If we took away access to birth control entirely and began allowing child marriages, wealth would still be the only thing that mattered so long as the system remained capitalist?

    What if we have a capitalist system but allow slavery for one class of people? Would emancipation be only relevant in economic terms?

    While some of this nonsense has profound, tragic effects on the lives of individuals, from a socio-economic perspective it should be ignored

    If someone loses their job for being gay, doesn't it affect their socioeconomic status? Jim Crow had dramatic effects on the socioeconomic status of 13+% of the population and was a similar issue. That seems quite relevant.


    I don't agree that differences between the parties on cultural issues are irrelevant, but I find that more understandable than the claim that, in a capitalist system, rights are essentially irrelevant, if that's what you're saying.
  • Real numbers and the Stern-Brocot tree


    You could look at the link I shared about it from SEP.

    Instantiation =

    The word "instantiate" is related to "instance". If someone says, "Name some things that are red." you could answer, "For instance, roses are red, apples are red, blood is red." In other words, roses, apples and blood are instances of the property red. In other words, roses, apples, and blood instantiate the property red.

    That's all "instantiate" means. An object x instantiates a property p if p(x). That is, x instantiates p if x has the property p, if x exhibits p, if x is an instance of p. All are ways of saying the same thing (with possibly some subtle metaphysical distinctions).

    So, yes a property can be instantiated by another property. The property "is a color property" is instantiated by the property red.

    Five apples are an instantiation of the number five, and so on; cf. Plato's Theory of Forms.

    Informational encoding: the signal in Shannon-Weaver information if you're familiar with that.

    System: from physics, either physical or in an abstract toy universe. By this I just meant any computer we can envisage. I didn't want to get into a specific definition because I want to consider all possible computers.

    Can recognize: not a signal sent in ambiguous code, one signal cannot refer to two+ different outcomes for a random variable.

    Without symbolic manipulation having to be performed: I should have said "without multiple computational steps (quintuples)," to be more precise. One step would be "see symbol x, print symbol y on that section of the tape (doesn't matter which way to move the tape after)." As opposed to how computation generally has to be performed, utilizing multiple spaces on the tape in a stepwise fashion.

    The only way to avoid having to use multiple spaces on a tape for at least some computations, even with an infinite symbol system for every real number, would be to have a unique symbol for every arithmetic combination of those symbols.
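
    For reference, the usual quintuple notation for a single machine step, where a machine in state q_i reading symbol s_j writes s_k, moves the head in direction D, and enters state q_l:

    ```latex
    % One quintuple = one computational step:
    \[
      (q_{i},\, s_{j}) \;\longrightarrow\; (s_{k},\, D,\, q_{l}),
      \qquad D \in \{L,\, R,\, N\}
    \]
    % A computation that cannot be collapsed into a single such step has to
    % spend multiple steps and, in general, multiple tape cells.
    ```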
  • Real numbers and the Stern-Brocot tree


    Meanwhile, among other points, I hope that at least you understand that in ordinary mathematics '=' means identity, which is to say, for any terms 'T' and 'S',

    T = S

    means that 'T' and 'S' name the same object, which is to say that T and S are the same object.

    I said as much at the outset.

    Here is the problem if computation is not reversible: If 4 + 4 = 8 and 10 - 2 = 8, what does that mean for the instantiation of the abstraction?

    Having 4 $20 bills and being given 4 more is not the same thing as having 10 ($200) and giving away 2 ($40). Having 5 apples and picking two more isn't the same as having 9 and throwing two in the fire.

    This entails either that computation is not instantiated in the world at all, that adding two apples to five is the same thing as taking two from nine (a tough argument to make), or that the relationship described in 4 + 4 is acomputational, that it exists without reference to computation. That is, numbers exist as real abstract objects but computation is just a human language for describing their relationships.

    That's a fine way of looking at it, maybe one of the more popular. My problem is that it seems hard to explain why we can recognize identity sometimes but not others, and formal systems for describing information do not account for this. So there appears to be a problem with the formalizations or the ontology in this respect.
  • Real numbers and the Stern-Brocot tree


    I took the tone from posts starting with:

    "All aboard the crazy train,"

    "No, only as you are deluded. "

    "Wrong."

    Generally, in a field with multiple subfields where the same term can refer to multiple concepts, it's common to ask if there might be a communication problem, not call someone an idiot. And I'll readily admit I misused the term sequence earlier, which was pointed out to me in a helpful way.

    Second, that assumption also came from the fact that I tried expressing the point at length and you only responded to small fractions of each post, only where there appeared to be an in for calling me deluded or uninformed.

    Anyhow, thanks anyways, as it's always good to see what the least charitable take would be on an idea so as to better polish up the description. Although, even if I know "less than nothing," about mathematics, I think I know enough about conversation to know that when someone starts an interchange with calling you deluded, or responds to a point about how some philosophers of mathematics don't think numbers exist outside formal systems, games we set up, with "they don't need to care about the philosophy of mathematics to know that 2+2 is 4," they aren't particularly interested in a discussion.
  • Real numbers and the Stern-Brocot tree


    Correct, if there is a unique symbol for each real, then the set of symbols is uncountable.

    Thank you. That's all I was saying. Now will you allow that, given an alphabet where every real has a unique symbol, those symbols could be used in arithmetic, such that, for example, the symbol for 1 added to itself is equivalent to 2?

    Everything else you responded to was you jumping over yourself to demonstrate knowledge about terminology, and it was irrelevant. I mentioned all possible encodings of the type "any real = itself," etc. (i.e. one such string for every last real), which I thought was apparent given the context. This set has nothing to do with well formed formulas re: standard set theory. You jumped to the formula x = x; that's not what I was referring to. If I meant "x + 1 = 1 + x" in terms of the variables I would not have bothered listing out the variations using integers or mentioned solution sets as an analogy to illustrate the point. "x + 1 = 1 + x" is not one such encoding because x is a variable, not a real. Since the context was encodings' equivalence with the thing they encode, in an abstract sense, I thought this was obvious.

    I'm talking about the informational encoding of any object such that a system can recognize that encoding X uniquely specifies Y without any symbolic manipulation having to be performed. As I mentioned originally, this is in the context of communications theory. My point is that, even if we imagine an infinite computer with an infinite alphabet, it still must use step wise transformations to relate symbols to each other.

    Which yes, is a point far adjacent to mathematics proper, but this isn't a math forum lol.


    You don't know what you're talking about. In ordinary model theory for ordinary first order languages, there are only countably many symbols in the language. That does not contradict that the universe of a model may be uncountable.

    Yes, because the statement "model theory can be used to examine infinite symbolic alphabets" is equivalent to my saying "ordinary model theory for ordinary first order languages uses infinite alphabets." Notice how you had to throw in a bunch of specifiers into that sentence so you could show how wrong I am and how smart you are? Have you heard of the principle of charity?
  • Real numbers and the Stern-Brocot tree


    We're getting our wires crossed. You are talking about all "well formed formulas," which I didn't use as a term for a reason. These are countable under standard set theory because it is assumed that a formula is a finite string, by definition. But that's begging the question on the topic of equations being equivalent to identity. If you want to say most of mathematics generally assumes this, I would agree; I only brought up this tangent because I'm not sure we should accept it as a given. Semantics vs. syntax.

    For example, model theory works with uncountable symbol sets. For every real there is a "theorem" in such a system of the form x = x. This is only true where x is a symbol that uniquely identifies each real. If the reals are uncountable then so too are the symbols required to show that x = x for all reals, or x + 1 = 1 + x. Set theory's formulas are countable, but they are so by definition. A symbolic system with a unique symbol for every real cannot be smaller than the reals; the set of symbols must be, by definition, in one to one correspondence with the reals. And since these symbols can also be combined in an encoding, there are more combinations (infinitely more) than there are reals.

    Probably my fault for the language. This is why I started with "encodings," but accidentally slipped into "equations."
  • Real numbers and the Stern-Brocot tree
    It might be easy to look at the problem a simpler way.

    If we had developed a Platonism of processes instead of objects, we might find nothing weird with saying each unique computation is its own unique entity. These computations would share a relationship in having a "common output," in the same way numbers can have a "common denominator."

    Human biology is adapted to generating a world of discrete objects for itself. Physics increasingly casts doubt on the reality of this perception. Platonism casts a long shadow on mathematics, and there we tend to think of objects as fundamental, not relationships or processes. Hence statements like "φ is an irrational number, thus the Golden Ratio is an irrational number." If we focused on relations instead of objects we would say, "that is ridiculous, you cannot have a ratio of nothing to nothing else as a relationship; φ and all numbers are just useful symbols for representing relationships, not irreducible things in themselves." But instead we think of the ratio as reducible to the numerical object.

    Which has obviously worked out pretty well, except for the ludicrous results of assuming any algorithm = x IS x as concerns finite computation instantiated in the world or our knowledge of uncomputable numbers.

    We say that an algorithm for finding Ω ≠ Ω due to the decision problem, and yet "Ω + 1 ≠ Ω - 1" is a truth we can assert. That is, we can work with the uncomputables in compressed form, and we work with hard to compute numbers effectively in compressed form without identifying their numerical value all the time.

    This suggests to me that computation is not simply a statement of identity (else we should jettison the uncomputable numbers).



    Missed this. Mathematics: A Very Short Introduction has a good description in the intro. It's on LibGen, but if you want something that is open access you can check on intuitionism versus Platonism versus formalism, etc. https://plato.stanford.edu/entries/philosophy-mathematics/

    Here is a description of some of the problems I mentioned with P ≠ NP vis-à-vis identity and Kolmogorov Complexity:
    https://plato.stanford.edu/entries/information/#AnomParaProb
  • Real numbers and the Stern-Brocot tree


    Choo choo! All aboard the crazy train!

    lol, exactly. But I only suggest it because we already are on a crazy train. When we try to apply formalizations of information quantified as a reduction in uncertainty, we get the patently false assertion that receiving a clear message in a code we understand, such as the expression (√1913 • π) ÷ 1.934, makes us just as certain of what the result of that expression is as having received the decimal number. That is, we should instantly know which numbers are greater than the result and which are less than it, and the identity of all equations that result in the same number. This is clearly not the case. Digital computers are much quicker, but if you throw a complex enough algorithm at them it would take them trillions of years to complete it, even when program(input) = a number that is quite simple to represent.
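
    A trivial sketch of that gap: even for this small expression, the decimal value only shows up after step-wise work (here the square root is done by Newton iterations just to make the steps visible; the tolerance is arbitrary).

    ```python
    # Receiving "(sqrt(1913) * pi) / 1.934" is not the same as possessing its value:
    # the value only appears after a series of computational steps.
    from math import pi

    def newton_sqrt(n, tol=1e-9):
        guess, steps = n / 2, 0
        while abs(guess * guess - n) > tol:
            guess = (guess + n / guess) / 2
            steps += 1
        return guess, steps

    root, steps = newton_sqrt(1913)
    print((root * pi) / 1.934, f"(after {steps} refinement steps)")
    ```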

    It's part of the broader P ≠ NP picture. There is really a separate problem within P ≠ NP that is not specifically about reducing complexity classes. It's related to the problem of systematic search when two equivalent terms nonetheless cannot be recognized as such without step-wise transformations, particularly when these can take more computational resources than appear to exist in the visible universe.

    The bottom half of the Stanford Encyclopedia of Philosophy article on Philosophy of Information covers this, or you can also look up the closely related "scandal of deduction."


    And this has nothing to do with P vs. NP, which is a problem in mathematics that understands that 4 is the same object as 2+2.

    I don't think most mathematicians particularly care that much about the philosophy of mathematics. At least that is what I've heard in enough lectures and books on the topic to believe it.

    There are several major schools in philosophy of math and some deny that numbers as distinct objects even exist, so I don't get how they can be too offended here. Especially since I will allow that 2+2 = 4 in the sense that they are numerically equivalent; they are just not equivalent vis-à-vis computation.

    Right, a description of a number and the number are not the same objects.

    Is "the successor integer of 2" a description but 2 + 1 is not? Is "three" not a number but 3 is? A mark on a paper, symbols on a Turing Machine tape and pixels are not numbers. In the mathematics of computation/theoretical computer science this is necessarily true. We don't have to reduce encodings to some one true description, although we do have the shortest description in any one appropriate language, Kolmogorov Complexity.

    Wrong. An equation is a certain kind of formula. In an ordinary mathematical theory (such as set theory, which is the ordinary theory for the subject of equinumerosity) there are only countably many formulas. But there are uncountably many real numbers. It's true that the set of equations is not 1-1 with the set of reals, but it's the set of reals that is the greater.

    I think you are confusing the set of all computable functions with the set of all equations. The concept of a solution set will be helpful here: https://www.varsitytutors.com/hotmath/hotmath_help/topics/solution-sets

    A solution set is the set of all solutions for an equation. For example, "x + 1 = 1 + x" has the reals as its solution set. We are talking about the set of all equations, which is the size of the set of all solution sets for all equations. Thus, the solution set of "x + 2 = 2 + x" is also the same size as the reals. We can use irrational numbers as well, since the solution set of "x + π = π + x" is also R. We can do this with addition for all the reals, but also do it with multiple addition operations, division, exponents, etc. And of course we have some equations that have only a few members, or one member, in the solution set. The set of all solution sets is not the set of all computable functions, which I'll agree is countable.

    So as you can see, there are infinitely more equations than reals, which means that no computer, even an infinite one that allows for infinite operations (which is generally disallowed anyhow), can place encodings and numbers into a 1:1 correspondence. There will always be multiple equations specifying a single number (whereas with computable functions there are not enough functions for all the reals).

    Thus, not even an infinite computer can take any one number and treat it like an identity name for all the corresponding equations equal to that number via a preset repertoire (since they cannot be in 1:1 correspondence). This is basically the same thing as saying there are more possible solutions to combinations of numbers than numbers, which is prima facie true, and I imagine it can be shown combinatorially as well. Not only this, but in some well known cases the equation -> number relationship is many to many, not many to one.

    Now many mathematicians might not care about the scandal of deduction, which is fine, but it's a serious problem created by current definitions nonetheless. Notably, for centuries extremely skilled mathematicians argued that any positive X ÷ 0 = ∞ and had good arguments for that conclusion. It's not like major shifts don't happen because current established practice ends up resulting in absurdity.

Count Timothy von Icarus
