Comments

  • Wittgenstein, Cognitive Relativism, and "Nested Forms of Life"


    So your solution is basically that we just know things inherently?

    As opposed to what, the conclusion that we are inherently unable to know things? "Know" might be the wrong word here. Maybe "learn?"


    Or perhaps a functioning brain.

    Not sure if this is supposed to be snide or actually an appeal to everything about what we find useful being explained by "having a functioning brain," without reference to the other things I mentioned. Presumably the evolution and individual development of each functioning brain depends on physics, chemistry, etc., and presumably no language games existed before individuals with brains, so the point stands that something sits prior to usefulness. Brains don't spring from the void uncaused, and what constitutes proper function for a human brain is dependent upon "how the world is."
  • Donald Hoffman


    I'm forgetting his exact wording on this, but I don't find his rebuttal forceful. He accepts that evolutionary theory says something (true) about the world, ergo some of our theories are true.

    Yes, there seems to be a bit of the common move of setting up the "view from nowhere" as a strawman foil here. You see the same thing in deflationary thinkers like Rorty as well. The old "we cannot achieve 'the one true ahistorical, perspectiveless view of truth,' thus truth is inaccessible," as if there is no middle ground. Yet it's not like my brother and I cannot both know our parents simply because each of our knowledge of them differs.

    But then there's the whole issue of evolved to discover what kind of truths? Truths about the constitution of the universe? Very unlikely. That must be some kind of lucky accident that we are able to form theories that apply to the universe.

    Or no luck is required. It has become common to think of logic and reason as being the sui generis products of mind, something "constructed" or something like that. But if there is a certain logic to the world, a Logos, then it should not be surprising if minds correspond to it. Rather it would be impossible for it to be otherwise. And the world certainly appears to have an intelligible order.

    Phenomenologically, the intelligibility of the world is given. One needs to make some advances in philosophy before one starts holding the world and its intelligibility apart.

    Hoffman's interface theory is ultimately guilty of the same old Cartesian/representationalist error that haunts a good deal of contemporary philosophy. Saying "we don't know the world, we just know our experiences of it," is a bit like claiming no one can drive a car because "they can only push pedals and turn a steering wheel," or that writing is impossible because "we can only move muscles in our fingers." What is "being" supposed to mean if it's not what is thought, experienced, known or talked about?
  • Wittgenstein, Cognitive Relativism, and "Nested Forms of Life"


    Well Kripke's Crusoe is isolated from birth IIRC. The distinction is important and has led to the differentiation between your Tarzans (always isolated) and your Crusoes (isolated at some later point).

    To be honest, the whole debate seems like a sort of philosophical blind alley to me.

    It's a while since I've read Kripke's text, but that seems to be right. But it's a bit more complicated than that. If the thesis is that meaning is established by practices, then it does not seem to be wrong to say that there is no fact of the matter that determines it. However, given that the sky is blue, it is true to say that there is a fact of the matter that makes the statement "the sky is blue" true. IMO.

    Well, this is tricky. The fact that we call the sky blue, think water is composed of hydrogen and oxygen, or think that dogs and cats are different species is not "social practices all the way down." The shape of language games, their evolution, the hinge propositions we accept as fundamental, etc.—these don't spring into our minds uncaused. It would be a mistake to think that just because we cannot formulate propositions outside of language games and the context of social practices that nothing exists outside that context or that such things are unknowable tout court.

    At the root of deflationary theories of truth (which is often how On Certainty is read) lies an error that is isomorphic to the Cartesian error that Wittgenstein is at pains to correct. The error is to assume that language games, theories, models, words, ideas, etc. are what we know instead of that through which we know. It's unsurprising that a deflationary reader of Wittgenstein like Rorty uses the image of words and ideas as "a mirror of nature" as a foil through which to dismiss metaphysical notions of truth, while a phenomenologist relying on the pre-modern tradition like Sokolowski would rather have us speak of "lenses we look through" (not at).

    Why do practices develop the way they do? Why do some things seem "self-evident?" If it isn't "for no reason at all," then we have something sitting prior to any individual language game or any hinge propositions, namely metaphysical truth. What's missing from deflationary or skeptical accounts is any concept of the causality specific to signs, that they make us think one thing instead of another. Instead, we have the sign vehicle mistaken for the sign, leading to it becoming disconnected from its object. This turns the sign vehicle into an impermeable barrier between the interpretant and the world, rather than what joins them in an irreducible tripartite gestalt, a nuptial union. (Reductionist assumptions might play a role here too, in that it is assumed that sign relations can be decomposed into their parts without losing anything.)

    I think the move to deflation vis-à-vis truth became inevitable after the move to place logic entirely within the "subject." After the Cartesian divide between subject and object, things had to be assigned to one or the other, leading to Lewis' "bloated subject," the sui generis source of beauty, goodness, logic, intelligibility, meaning, and eventually truth itself. (The "What is Logic?" thread discussed this: https://thephilosophyforum.com/discussion/14593/what-is-logic/p1)

    But if we're willing to allow that we and our language games have causes external to ourselves, then there is no need to question the existence of "facts" that lie outside any specific game. After all, the absolute view is not reality as set over and against appearances, but rather must itself include all of reality and appearance. When Kripke or Rorty want to appeal to usefulness they have to allow that there is some truth about what is actually useful, and presumably this will be determined by factors outside of any language game. Otherwise we get the infinite regress of appeals to pragmatism that deny any truth (e.g. https://thephilosophyforum.com/discussion/15308/pragmatism-without-goodness/p1). That or usefulness is just whatever we currently prefer, which then leaves us in the position of Protagoras in the Theaetetus, having no reason to philosophize because no one can ever be wrong about anything.

    I'm always uncomfortable with those grand philosophical concepts. But I would agree in many cases that our access to - no, better, our practices in - a world "outside" language does ground meaning. I think the game may be differently played in fields like mathematics and logic - though even there, there are facts that kick us in the face; we are not simply in control.

    Seems to me that all sorts of facts can "kick us in the face," e.g. when we design a plane based on a flawed understanding of lift and it crashes. Reason might be "defenseless," in that it cannot justify itself from outside itself, but truth asserts itself in our lives all the time. Regardless of which hinge propositions you hold to, if you jump off a building, it seems truth will show up to hit you on the way down.




    But you just said his view wasn't about certainty?

    Imo, I don't think you are offering any solution that is inherently different from the sceptical solution since what you are saying seems to come down to just ignoring indeterminacy, which then brings up the question of "how are you doing what you are doing?" which comes to be the same kind of acting "blindly".

    The setup for the skeptical problem is based on a very analytic notion of certainty and also seems to assume that meaning must be grounded in a nominalist context. I would say there are grounds for rejecting the "skeptical problem" rather than finding a solution for it. So this isn't the same thing as the skeptical solution. Kripke's argument could be framed as a dilemma syllogism, and we could "grab it by the horns" and reject its premises, rather than looking for a path between the horns.

    This entails neither "ignoring indeterminacy" nor "acting blindly." Gadamer's views re hermeneutics are instructive here. The fact is, a prejudice against all prejudices is itself a prejudice. The "view from nowhere," isn't a coherent model of knowledge. Most of philosophy accepts this now, and yet the VFN continues to haunt us because it is often dragged out as a strawman/punching bag to argue for various flavors of relativism or nihilism, as if realism can only exist within the context of the VFN.

    It is possible to rely upon one's prejudices and still question them. We can question the Law of Non-Contradiction or the Law of the Excluded Middle while still maintaining that we must hold to them. Folks like Hegel have done this fruitfully, for example. But we always keep some things constant. We don't "begin from nowhere." Indeed, on the classical and phenomenological view we begin united to the intelligibility of being; meaning is "always already there." The solution, à la Gadamer (or Hegel), is a consciousness that is aware of the process by which it comes to know things. E.g., we don't become dislodged from any historical context, but rather we become aware of and can question our specific context.

    This sort of finding isn't new. Aristotle's solution to the skeptical problem re syllogisms (that every premise in a syllogism must be justified by a prior syllogism, and so on, ad infinitum) relies on using self-evident truths as axioms. Likewise, Plato looks at how it is impossible to give an argument justifying reason and argument that isn't circular. Reason is transcendent, hence it can question its own foundations. Like G.E. Moore's point re goodness, we can always ask coherently of any proposition "but what if it is false?" or "what if we are mistaken?" But the problem only results in a sort of nihilistic crisis if other presuppositions are in play (i.e., subject/object dualism, nominalism).

    I guess one key difference here is the idea that abstraction is just induction. On the Aristotelian, immanent realist account they aren't the same thing; abstraction involves the mind's access to the intelligibility of things, the eidos that makes them anything at all. A purely inductive account of abstraction cannot overcome indeterminacy and the problem of induction. But then such an approach assumes subject/object dualism.

    Against this view we might consider Eric Perl on Plotinus:

    In arguing that being qua intelligible is not apart from but is the content of intellectual apprehension, Plotinus is upholding what may be called an 'identity theory of truth,’ an understanding of truth not as a mere extrinsic correspondence but as the sameness of thought and reality. The weakness of any correspondence theory of truth is that on such a theory thought can never reach outside itself to that with which it supposedly corresponds. Thought can be ‘adequate’ (literally, ‘equal-to’) to reality only if it is one with, the same as, reality. In Aristotle’s formulation, which as we have seen Plotinus cites in support of his position, knowledge is the same as the known.

    If thought and reality are not together in this way, then, as Plotinus argues, there is no truth, for truth just is the togetherness of being with thought. Plotinus’ arguments against the separation of intellect and being thus resonate profoundly with the nihilistic predicament of modernity. If thought and reality are conceived in modern terms, as ‘subject’ and ‘object,’ extrinsic to and over against one another, and truth is conceived as a mere correspondence between them, then thought cannot get to reality at all, then there can be no knowledge, and in the end, since nothing is given to thought, no truth and no reality. We must rather understand thought in classical Platonic, Aristotelian, and Plotinian terms, as an openness to, an embracing of, a being-with reality, and of reality as not apart from but as, in Plotinus’ phenomenological terms, “given” (V.5.2.9) to thought. This, again, is the very meaning of the identification of being as εἶδος or ἰδέα. Being means nothing if it is not given to thought; thought means nothing if it is not the apprehension of being. Hence at the pure and paradigmatic level of both, intellect as perfect apprehension and the forms as perfect being, they coincide. “We have here, then, one nature: intellect, all beings, truth” (V.5.3.1–2).
  • Semiotics and Information Theory


    An interesting quote from Deacon. I like his work, even though I don't find myself agreeing with a good deal of it. Personally, I think there are good grounds for thinking in terms of pansemiosis, and there are useful ways to apply the concept of computation to physics and non-living systems, an idea he tends to write off. I mentioned Lyons' book before, and I think he highlights how the Scholastic idea of virtual signs, or St. Thomas' "intentions in the media," can be used for describing how signs exist in non-living contexts.

    In particular, computation (or something like it involving real numbers and/or indeterminism) seems to be a useful model of causation, where past states entail future ones (or a range of future ones in a stochastic fashion). But I still think he probably gets something right in the relationship between thermodynamics, life, and the relevance of the "absential."



    I think this is a fair assessment of mainstream contemporary phenomenology. The later Husserl does seem to lurch towards a sort of idealism that I don't think is particularly helpful. Personally, I am more of a fan of Robert Sokolowski's merging of Husserl with Aristotle and St. Thomas, which comes through with a more sensible "realist" view of phenomenology. "Phenomenology of the Human Person" is a sort of summa of his work and is really great. However, it doesn't touch on signs or the type of causality unique to signs (making us think one thing instead of another), and this seems like a real miss, something that could ground his realist intuitions.

    Eric Perl's "Thinking Being" is another good one in this vein. Perl is interested in drawing out the commonalities in the classical tradition, particularly Parmenides, Plato, Aristotle, Plotinus, and St. Thomas. A big thesis of his is that the phenomenological solutions to the problems that come with the subject/object divide are already present in ancient and medieval thought, and that there are better solutions to contemporary problems just sitting there for us to pick up again. What I particularly like is the chapter on Plotinus, which offers a solid critique of the problems with the strict correspondence theories of truth that continue to dominate analytic philosophy and end up resulting in deflationary theories of truth and meaning.
  • Motonormativity
    I grew up in a city in Massachusetts, but the city was also about 400 years old and had peaked in population in the decades after WWII, undergoing Detroit-like deindustrialization and decline. As a result, it was actually fairly walkable. All the bloated strip mall style developments were out in the suburbs because the downtown and surrounding neighborhoods were too built up to allow for that sort of design (this also makes parking in downtown impossible except in big expensive garages because the streets are all very narrow).

    Of course, you didn't really walk it because the city had one of the worst crime rates of any city in the country, but in theory you could. Biking was a bit more difficult.

    But I've also lived in more recently developed communities down in North Carolina and they are virtually impossible to walk across because every building is required to have a huge lawn and a huge parking lot. The US is very bad about this sort of design, especially stuff developed in the 90s-00s.

    This kills public transportation because you need a certain level of density to make light rail, etc. economically viable. Places have buses, but they are generally atrocious. I took the bus to my job that was a 15 minute drive away in Massachusetts and it became an hour and a half long odyssey each way for the week my car was broken. And even big city public transportation is hard to keep afloat. Boston and NYC have some of the best density for rail networks, and both the MBTA and the MTA have absolutely horrendous financial positions.

    I honestly don't think it is fixable. Once you build strip mall sprawl communities, the only way to make them walkable would be to rebuild over them—everything is just too spread out. There is just too much distance to cover, particularly in places where the heat index is routinely 100+ degrees all summer.

    It's funny that you mention small villages because I now live in rural Kentucky and here things are truly, completely unwalkable. All the farm land means buildings are extremely far away from one another and then the roads are very narrow with steep drop offs and high weeds on either side of them (because there isn't funding for wide roads because the density is low).

    We do have a lot of Amish around here using horse-drawn carriages, and people do ride bikes sometimes. You also have tractors and other slow farm equipment being moved around. This just ends up being dangerous though. There isn't room to pass on winding country roads, and a single tractor on one of our few major traffic routes makes everyone late for work. Plus, all the roads are 55 MPH (so people go 55-70), since it takes a long time to get anywhere, but then you go over a hill going fast and find there is a horse cart in front of you going 15 MPH that you have to slam on your brakes for.

    Kentucky's old town centers actually tend to be very aesthetically pleasing and walkable. I noticed the same thing when I lived in Iowa and upstate New York. Nevada is generally similar too. The problem is that all these storefronts have closed, since Walmart and co. have the economy of scale advantage, so you can walk there but all the commerce is actually out in some strip mall on the edge of town, with the new commercial space being zoned more like the old industrial sectors (which are now empty, business having moved out to China).
  • Donald Hoffman


    The fact that it is information arising from the processing in a neural network which has inputs which are largely the outputs of sensory nerves. (E.g. the optic nerves for vision, the olfactory bulb for smell, whatever nerves carry signals away from the cochleas for hearing.)

    But neural networks run on PCs are not conscious, right? So being a neural network and processing inputs and outputs isn't enough, even if these inputs come from the environment via photoreceptors, microphones, etc.

    So again the appeal to the data/organ being of a "sensory" sort seems to do all the explaining. Why is an eye a sensory organ but the camera on a self-driving car isn't? It seems to me that the difference is that the former involves sensation. But then it looks like all we have done is explain what has consciousness by appeal to a term that implies something is conscious.




    And yet modern AI does such modelling, presumably without consciousness. I think what makes brains conscious is that they are general informational processors whose interface to the world is the result of the modelling of sensory information you are talking about. To brains, as far as they/we are concerned, such models are the subjective plentitudes we experience; they/we are wired to interface with the world in this way. Just as computers run on symbolic logic, our wet "computers" "run" on sensory experiences: we perceive, feel, imagine, and think to ourselves, all of which are fundamentally sensorial. It is these and only these sensations, externally and internally derived, that we are aware of; every other brain process is unconscious to us.



    I think something very much like this might be true, but the appeal to "sensory" information seems to be doing the explanatory lifting here. Yet what makes something "sensory" information? A combat drone uses video, IR, radar, etc. inputs to get information about the world. It puts this information into a model. But presumably this isn't "sensory" information because it doesn't involve sensation.

    If the term "sensory" does the heavy lifting in our explanation of consciousness it seems like we need to describe how to identify "sensory information" without reference to sensation itself. Otherwise we end up saying something like "experiencing entities experience because they receive experiential information from the environment or have an experiential relationship to it."

    "Modeling relationships," might be another tricky term here. Does a dry river bed model past flow of rainwater? We probably wouldn't want to say that, but it certainly does contain information about past rainfall.
  • Wittgenstein, Cognitive Relativism, and "Nested Forms of Life"


    I forget the authors, but one of the more famous Wittgensteinian rebuttals of Kripkenstein points out that Kripke is not even advancing a skeptical position but a nihilist one. He isn't saying facts about meaning are impossible to pin down with certainty, but rather that they don't exist. But conceptual nihilism is self-refuting in a way skepticism isn't, because it implies that the position doesn't even mean anything in the first place.

    I remember also thinking that the Robinson Crusoe argument should also apply to all learning, but the idea that it is impossible for an isolated feral human being to learn anything, or to ever be wrong about what they think they've learned, seems implausible to me.

    At any rate, I think one can rebut Kripke straightforwardly by using Wittgenstein himself, and pointing out that his bar for "certainty" and his ideas about truth/facts are simply what lead to the nihilism problem. Behind the "truth of correspondence" or Husserl's "truth of correctness" lies the more basic Heideggerian idea of "revealedness." As Gadamer points out, some prejudices are needed for making any inferences at all, and so I think Kripke's work, while interesting, is mostly showing the flaws of a certain sort of focus on "certainty."

    So, the "skeptical solution" might work (that's a whole different question), but I think we might question if it's even required.

    It's been a while since I've read him, but IIRC he didn't really deal with metaphysical realism. This would seem to offer another way out of the meaning dilemma, since meaning is grounded in the mind's access to the intelligibility of being.
  • Donald Hoffman


    What makes some information "sensory information?" Doesn't "sensory information" imply consciousness anyhow? Likewise there is the issue of what makes a physical process an "information processing" process and what makes a physical relationship a "modeling relationship?"

    One of the challenges for CTM is that all physical processes can be described as computations or information processing.
  • Donald Hoffman


    :up:

    Yes, although this would also hold true for all sorts of deterministic physical systems or chaotic functions, so long as they don't enter any sort of cycle or steady state. Likewise, if you get enough billiard balls bouncing around, you need a lot of computational power to predict how they will interact. You can't predict Life without doing the computations, but that's also true for all sorts of things.

    But I think the more interesting question is if Life, or computation more generally, is reducible to anything else. I don't think it is. Certainly programs can be decomposed, but that doesn't seem like the same thing.

    It's sort of like Zeno's paradoxes. There seem to be issues with reducing motion to a series of frozen instants, and the issues Zeno highlights seem applicable to how we think of process more generally.
  • Donald Hoffman


    Something like this seems plausible, but it doesn't seem to me to do much as an actual explanation. Why are some systems conscious? Well, it isn't just that they are adaptive or respond to the environment. A thermostat ticks those boxes. So we say it is because they have "sensorimotor" interactions. Well, here the term we are using for an explanation contains "sensory." We've explained which systems experience in terms of those systems having a sensory (i.e., experiential) component. "It experiences because it has experiential interactions."

    Wouldn't this be a bit like the old "opium causes sleep because it has a hypnotic (sleep causing) property?"
  • Donald Hoffman


    This seems backwards.

    Of course you can predict Life, or anything similar. You could do it with a pencil and paper, just apply the rules and go step by step. You can predict any instance of Life by inputting the starting conditions and running it forward, computation works as well here as for calculating orbits of billiard balls bouncing off one another.

    But an instance of Life isn't just the rules of the game. It isn't even the rules plus the initial conditions. It's the entire process of the instance run forward—computation is inherently processual.
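    To make the point concrete, here is a minimal sketch of a Life stepper (the set-of-cells representation and the function names are my own, purely illustrative). Even for a pattern as simple as a blinker, the only general way to know the state at generation n is to apply the rule n times (or exploit a detected cycle); the process itself is the instance.

```python
from collections import Counter

def step(live):
    """Advance Conway's Game of Life one generation.
    `live` is a set of (x, y) coordinates of live cells."""
    # Count how many live neighbours each cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next generation if it has exactly 3 live neighbours,
    # or exactly 2 and is currently live.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

def generation(live, n):
    """The only general route to generation n: run the rule n times."""
    for _ in range(n):
        live = step(live)
    return live

blinker = {(0, 1), (1, 1), (2, 1)}  # a period-2 oscillator
```

A blinker flips between a horizontal and a vertical bar, so `generation(blinker, 2)` returns the starting set again; but nothing short of executing (or having already executed) the rule tells you that.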
  • A Review and Critical Response to the Shortcomings of Popular Secularist Philosophies


    I fear that your distinction might be missing the point. The workout routines of the cast of Jersey Shore probably required a lot of effort, but it seems they were not very edifying. The Place of the Lion or Out of the Silent Planet are meant to be easy reads, enjoyable fantasy/sci-fi, and edifying.

    Whitman's walks in the woods reach towards the "higher," despite being physical, while a much more strenuous run aimed only at eliminating body fat to look "cut for the girls at the club," might fail in the dimensions where Whitman succeeds.

    To be honest, the division into "higher" and "lower" activities seems likely to become pernicious and misleading. The point, as I take it, is more that there are proper responses and orientations to things in the world. To see Zion or the Grand Canyon for the first time and go right back to playing Candy Crush on one's phone is to miss something important. Likewise, to find a lost and injured child on a trail and to respond with an internal shrug of "not my problem," is similarly a sort of failure.

    You mentioned the study of the natural sciences. Without getting into a long digression into Plato, I think it's worth pointing out that the development of techne and episteme is important because it makes us more self-determining and more fully real as ourselves. These aren't always good in themselves except to the extent they involve self-transcendence. But obviously they can be pursued in a way that is tainted by vice.
  • "ReasonLines" to determine validity in categorical logic


    Question: I know that Venn diagrams presuppose the Boolean assumption that all particular propositions have existential import and universal ones lack it. Does this method do the same?

    I'll be honest, I understand the distinction but I'm not totally sure how to check it.
  • "ReasonLines" to determine validity in categorical logic
    Neat.

    I was taught to vet syllogisms with Aristotle's "Six Rules," which I always found very intuitive, but it's also sort of slow and it takes time to think them through. And then if you botch your translation into logical form or your identification of the distribution of terms, it ends up wrong anyhow. Normally this is easy on class exercises, but fairly difficult when parsing sophisticated natural language arguments. I find the Euler diagram intuitive to look at but not to make, so I'll have to try this out.
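    For what it's worth, once a syllogism is in standard mood/figure form, the rule-based vetting can be mechanized, which also makes the earlier question about existential import explicit: under the Boolean reading you simply add a rule barring a particular conclusion from two universal premises. A sketch, assuming the usual A/E/I/O distribution table and figure numbering (the encoding and function names are my own, nothing to do with the site being discussed):

```python
# Distribution table: (subject distributed, predicate distributed)
DIST = {"A": (True, False), "E": (True, True),
        "I": (False, False), "O": (False, True)}
NEGATIVE = {"E", "O"}
UNIVERSAL = {"A", "E"}

# Position of the middle term in (major premise, minor premise) per figure.
FIGURE = {1: ("subj", "pred"), 2: ("pred", "pred"),
          3: ("subj", "subj"), 4: ("pred", "subj")}

def distributed(prop, pos):
    s, p = DIST[prop]
    return s if pos == "subj" else p

def valid_syllogism(mood, figure):
    """Vet a mood/figure pair (e.g. 'AAA', 1) against the traditional
    distribution and quality rules, plus the Boolean import rule."""
    maj, minr, concl = mood
    m_maj, m_min = FIGURE[figure]
    # The major term sits opposite the middle in the major premise;
    # likewise the minor term in the minor premise.
    p_pos = "pred" if m_maj == "subj" else "subj"
    s_pos = "pred" if m_min == "subj" else "subj"
    # 1. The middle term must be distributed at least once.
    if not (distributed(maj, m_maj) or distributed(minr, m_min)):
        return False
    # 2. A term distributed in the conclusion must be distributed
    #    in its premise (no illicit major/minor).
    if DIST[concl][1] and not distributed(maj, p_pos):
        return False
    if DIST[concl][0] and not distributed(minr, s_pos):
        return False
    # 3. No conclusion follows from two negative premises.
    if maj in NEGATIVE and minr in NEGATIVE:
        return False
    # 4. The conclusion is negative iff exactly one premise is negative.
    if (maj in NEGATIVE or minr in NEGATIVE) != (concl in NEGATIVE):
        return False
    # 5. Boolean reading only: two universal premises (no existential
    #    import) cannot yield a particular conclusion.
    if maj in UNIVERSAL and minr in UNIVERSAL and concl not in UNIVERSAL:
        return False
    return True
```

On this encoding Barbara (AAA-1) passes, AAA-2 fails on the undistributed middle, and Barbari (AAI-1) fails only because of rule 5, which is exactly where the Boolean/Aristotelian difference shows up.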
  • Donald Hoffman


    Sorry, I was really asking about the neutral monism and dual aspect theories in general; I was only thinking of Spinoza in the first question. A common problem I've seen for dual aspect theories in particular is they have no explanation for why panpsychism shouldn't be the case given the starting presuppositions.

    The reply: "why shouldn't it work just so?" is essentially defaulting on an explanation, no? One could give the same sort of explanation for all manner of competing theories as well. We have a contingent fact supported by no sufficient reason.

    For example, you could use the same sort of explanation to support Integrated Information Theory instead. Why do some sorts of information processing feel a certain way? Why do some forms of information processing result in a first person perspective and others don't? Why do these result in a phenomenological horizon centered on a specific body? "Why shouldn't it?" or "it just does," is not a compelling answer. If there is no "why should it?" answer for a contingent fact, then isn't "why shouldn't it be false or some other way?" just as strong a position as "why shouldn't it be true?"

    So for example, the epiphenomenalist might say consciousness does no work, just "goes along for the ride", so to speak, but that would be an illegitimate elimination of one reasonable way of explaining human behavior. I think what puzzles people is that we cannot combine the two explanations or achieve any absolute perspective which would eliminate one and retain the other. 'Either/ or' thinking seems to generally dominate the human mind.

    I think the mental explanation is usually downgraded because it is assumed that the physical can be predicted with mathematical certainty, whereas "thirst" and such cannot be represented mathematically in a direct fashion. The mental explanation becomes completely superfluous because the physical explanation explains everything, at least in terms of predicting behavior.

    The explanatory power only goes in one direction, the physical explaining the mental. All behavior can be explained and predicted in terms of physics (according to the reductionists). But a purely mental explanation doesn't tell us about the physical with any great degree of accuracy, nor does it have the same predictive power. If mental states are multiply realizable, which seems quite possible, then the physical is underdetermined by the mental, whereas the mental is uniquely determined by the physical.
  • How do you interpret nominalism?


    I think it would be fair to say that most theology prior to the Reformation and most Catholic and Orthodox theology since is panentheistic. God is present to all things as cause (St. Thomas in the Summa) and all effects as signs of their causes (St. Bonaventure and St. Thomas).




    subject and object

    Well, except when they get frustrated by the explanatory gap and decide to start arguing that they don't really exist in order to save their system :rofl:
  • Wittgenstein, Cognitive Relativism, and "Nested Forms of Life"
    On a side note, Grayling's book has a funny story vis-à-vis Wittgenstein's influence on logical positivism (which Grayling suggests is greatly overestimated). Apparently the Vienna Circle read the TLP aloud to one another, and members got so fed up with it that they began screaming "metaphysics!" at certain lines. Which, given their views, amounted to yelling out "bullshit!"

    This is of course only an indictment if one thinks the Circle had things right, which I imagine very few people would agree with these days.

    So, while there was influence, they weren't particularly hot on all of it. It's a funny scene to picture though.
  • Donald Hoffman
    Well, and the anti-metaphysical camp was hoping to censor metaphysics and enshrine logical positivism as the default hegemon. But their own project collapsed in spectacular fashion, so all they really did was ensure that elements of Locke, Hume, and Kant (which they ironically disliked) would remain dominant and unquestioned for another 50 years.
  • Donald Hoffman


    Exactly, because the world was the expression of a will, not simply dumb matter being acted upon by physical forces. Existence was ‘participatory’ in that through religious mythology and ritual we re-enact and participate in creation. We had yet to see ourselves as pieces of flotsam thrown up by what basically amounts to a highly sophisticated chemical reaction, Stephen Hawking’s ‘chemical scum’.

    This is no doubt part of it, but it seems to me that the issue goes down to the very basics of metaphysics, with two ideas. The first is the idea of things' properties emerging from arelational properties that exist and subsist "in-themselves," making the world reducible to "building blocks" with things simply "being what they are made of." This view has been attacked from a number of perspectives and I think Hegelians, Thomists, and contemporary process philosophers all give good reasons to reject it. The other view is representationalism, the idea that all we know is our own ideas or experiences, and I think this idea also has very strong points against it. The first idea is still going strong. The second is finally showing signs of breaking down. Perhaps I am too optimistic, but representationalism finally seems to be losing credibility in the philosophy and science of perception, and this will trickle down into the mainstream if the trend continues.

    I don't think these positions even necessarily go hand in hand with a "disenchanted naturalism," and certainly they don't go hand in hand with science. Rather, the first is just a bad inference from the assignment of values to "objects themselves" in early modern mathematical physics, with people mistaking the shape of their mathematical model for the structure of reality, and the second is due to early modern philosophers being rather poor students of the scholastics and missing their careful distinctions vis-à-vis the role ideas play in sign relations (in part because the late nominalists got very sloppy about this distinction at times and these were the folks more likely to be read in the early modern period, in part due to sectarian prejudices).

    I sort of see this more as a historical accident than any necessary connection between technological progress and the methods of science and this particular (bad) brand of metaphysics. The other factor I see at work is the "anti-metaphysical movement," which has slowed the reappraisal of "primary versus secondary property" distinctions and smallism/building block ontologies. This sort of metaphysics has been allowed to remain part of a "default view," largely because of the intentional and dogmatic push to sideline metaphysics and discredit it more generally, which is of course itself just a particularly pernicious brand of metaphysics and philosophy of science (at least as I see it).
  • A Review and Critical Response to the Shortcomings of Popular Secularist Philosophies


    Or, he is a New Age God, and he is LOVE

    This isn't really "New Age." It's in First John: "ὁ θεὸς ἀγάπη ἐστίν," generally rendered as "God is love." (God is also being itself, that "in which we live and move and have our being"—Book of Acts 17:28—and truly subsistent being, being where essence entails existence—Exodus 3:7-14.) Apokatastasis, the doctrine that the entire cosmos shall be redeemed, including the Devil and his archons, far from being a modern "hippy" innovation, was at its peak in the first 500 years of the Church. It was probably more popular than "infernalism," the doctrine of eternal punishment for sinners, in the first 300 years (this does not mean it was the majority opinion; the majority opinion was that unrepentant sinners would simply cease to exist, or that all people would endure purgation and some would cease to exist to the extent they did not repent). In the Oriental Orthodox churches one finds matter-of-fact references to universal salvation until the 14th century.

    The question of how to deal with the evolution of tradition is as old as theology. Bart Ehrman types or your Christopher Hitchenses might think that the problem of reconciling changes in tradition is a modern one brought out by "new tools of scholarship," but it isn't. Questions like the authorship of some Epistles are as old as Christian theology, showing up in Origen. In fact, because the Jewish and Christian canons were still open, people were particularly aware of how man's decisions affected tradition.

    Likewise, visions of tradition as unfolding in history according to Divine Providence, but also shaped by the free volitions of creatures (man but also angels, and corrupt archons/principalities)—the idea that Plato was where he was, when he was, to help with understanding revelation, or that Rome helped to bring Athens and Jerusalem together, and the general idea of historical synthesis driving progress—all go back long before Hegel (Eusebius, St. Jerome, etc.). A big innovation of Christianity was to extend the cyclical philosophy of history dominant in antiquity into a spiral pattern progressing towards a teleological horizon.

    David Bentley Hart's Tradition and Apocalypse is a pretty good book on this, although I think it undersells the historicism present in earlier eras.
  • A Review and Critical Response to the Shortcomings of Popular Secularist Philosophies


    In fact, it is the people who actually went through great hardships and actual suffering that seem to have the most positive outlook on life. The "always kinda-depressed but not really" type seems to be an existence that occurs almost exclusively in upper middle-class urban settings. There is almost a role-play element to it:
    "Oh no, my crush is sleeping with another guy! There are children in Africa starving! Time to read another Dostoyevsky novel."

    lol, but the last part doesn't make sense. The Brothers Karamazov presents both one of the most effective arguments against the acceptance of suffering (Ivan in "Pro and Contra") and also one of the most powerful theodicies in response to it (David Bentley Hart's "The Doors of the Sea," on the 2004 tsunami, is a great essay looking at BK as well). Reading Dostoevsky should have the opposite effect!
  • A Review and Critical Response to the Shortcomings of Popular Secularist Philosophies


    Religionists argue that these restraints are necessary to prevent civilization from descending into decadence and excessive hedonism, where higher values are discarded in favor of simple pleasures. They believe that without these moral guidelines, society would lose its ethical foundation and succumb to chaos.

    I think that's partly the critique, but it can go in a number of directions. Partly the critique is that people are made unfree by hedonism; they are essentially chained to their appetites and conditioned passions à la Brave New World. They never develop a capacity for self-determination. Huxley is a good example here, or for a (slightly) more religious angle you might consider C.S. Lewis' The Abolition of Man, particularly just the last, rather short chapter, which can be found online.

    I don't think this is exclusively the purview of "Christian nationalists." This seems to be a fairly common objection to the direction of modernity. A key idea is that "not everything we desire is good for us," even if we can grow the economy or make some men rich by allowing or promoting them. The issue here isn't so much any one religion but belief in a concrete human good, which, as Lewis notes, is common to the Indian and Chinese traditions as well.

    Virtually all people agree with this sort of view to some degree. Very few people want the drinking age lowered to 11 because kids might enjoy a drink, for pornography to be shown on broadcast television because people might want to tune in (although it might as well be given how ubiquitous it is), for heroin to be sold at the corner store like candy, or even for us to market and hook kids on sugary and caffeine-loaded drinks the way we do. Nor do we want people to necessarily be able to buy all manner of things just because they'd like to consume them—hence why I have never gotten to mount an anti-tank guided missile on my car to stop people from cutting me off.

    There is even some grudging acceptance that the state and society have a shared interest in forcing people to undergo an education. Although here you can see the fault lines come through. A lot of people want such an education to be "practical," to be totally focused on "getting a job," and so really focused on "doing some unpleasantness now so you (and we) can all consume more in the future." Lewis would represent the common opposition to this way of thinking, the traditional view that education should involve showing people what should be appreciated, an education in good taste, a moral education, an "education in the virtues," and the fostering of "excellence." Obviously, for people who deny that excellence really exists, or goodness, the focus is different since moral "conditioning" can only really ever be justified on some other grounds (perhaps higher consumption for all). Why is more consumption better? That's a good question; it seems unanswerable given common starting points vis-à-vis goodness.

    Anyhow, the OP seems to dismiss any real belief in "goodness as such," as opposed to some sort of operationalization like "promoting pleasure," as the realm of religious fanatics. I don't think this is particularly accurate. And the common critique from this direction of existentialism and hedonism would tend to be:

    A. That it ignores the risk of being ruled over by appetites, passions, and circumstance—that it ignores the freedom of the self over the self, and the unification of the self (as opposed to Plato and St. Paul's "civil war within the soul"); and

    B. That it ends up being ultimately arbitrary. Value is "created" based on what? Why is becoming free to create one's values "good?" Why is authenticity "good?" What if someone is authentically a sadist, or thinks they are? Why shouldn't they deprive others of their freedom? The pragmatism of "live and let live," would of course only apply if one were not strong enough to overcome all rivals, and so it really can't be an answer to these questions.



    Humanism champions the pursuit of happiness, ethical living, and progress, with an optimistic belief in the potential for human improvement. Yet, it often glosses over the fundamental suffering that pervades life and the fleeting nature of pleasure. Pessimism, on the other hand, cuts through this idealism, recognizing that suffering is a constant part of existence. By acknowledging this harsh reality, pessimism offers a more grounded perspective that avoids the disillusionment that comes from chasing unattainable humanist ideals.

    If suffering were constant, could it be distinguished from non-suffering? Suffering might be ubiquitous, but then again so is pleasure.

    It seems to me that pessimism needs to show that the Good is truly unachievable, not merely that suffering is ubiquitous. Medieval thought, for instance, can often be pessimistic. The world is indeed fallen, and what is worse, progressively decaying and getting worse. Life is filled with suffering and evil. However, it has a radically different conclusion because the Good is achievable.
  • Wittgenstein, Cognitive Relativism, and "Nested Forms of Life"


    If it applies to those things then surely, the skeptical solution also applies.

    So not only can Tarzan not follow rules, but he has no memory and no sense experiences. Seems hard to believe.

    Reminds me of Davidson's Swamp Man.
  • Donald Hoffman


    There is certainly a relation to Plato, Hegel, and Wallace, but they are also very different from Hoffman's ideas in a lot of ways (particularly his "interface theory" stuff, which is very representationalist and Kantian). The idea of the mind "constructing reality" isn't one I find in Plato, or the classical tradition more broadly for the most part. To be sure, "quidquid recipitur ad modum recipientis recipitur"—"whatever is received is received according to the manner of the receiver"—but this doesn't have the Kantian implications that Hoffman's descriptions of the interface theory of perception do. Mind is never separated from nature. Rather, the two are part of a nuptial whole. Being and being known are two sides of the same coin (this might come through most clearly in Plotinus).

    Hoffman's agential realism can't get away from the modern tendency towards reductionism and smallism. So ultimately he has the world of experience composed from very many less complex, part-like agential atoms. It's pretty far from Plato or Hegel, who have the idea that ideas/concepts are "more fully real," in that there is necessity behind their being what they are, whereas something like a rock, while obviously real in a sense, is just a bundle of external causes.

    To quote Eric Perl's Thinking Being on the distinction:

    The key insight of phenomenology is that the modern interpretation of knowledge as a relation between consciousness as a self-contained ‘subject’ and reality as an ‘object’ extrinsic to it is incoherent. On the one hand, consciousness is always and essentially the awareness of something, and is thus always already together with being. On the other hand, if ‘being’ is to mean anything at all, it can only mean that which is phenomenal, that which is so to speak ‘there’ for awareness, and thus always already belongs to consciousness. Consciousness is the grasping of being; being is what is grasped by consciousness. The phenomenological term for the first of these observations is ‘intentionality;’ for the second, ‘givenness.’ “The mind is a moment to the world and the things in it; the mind is essentially correlated with its objects. The mind is essentially intentional. There is no ‘problem of knowledge’ or ‘problem of the external world,’ there is no problem about how we get to ‘extramental’ reality, because the mind should never be separated from reality from the beginning. Mind and being are moments to each other; they are not pieces that can be segmented out of the whole to which they belong.”* Intended as an exposition of Husserlian phenomenology, these words hold true for the entire classical tradition from Parmenides to Aquinas.

    * The quote here is from Robert Sokolowski's Introduction to Phenomenology.

    So I think Hoffman helps us by making a good case for why a certain sort of thinking is self-refuting, but then he can't really get himself away from the bad elements of this way of thinking himself.

    It probably deserves its own thread, but different ideas of "mind-independence" matter here.

    For the classical tradition, there is no "mind-independent being," and I think they have a strong case for this based on the way they define "mind-independence." The paradigmatic formulation here is Parmenides' "the same is for thinking as for being." So the argument goes, even if there was truly "mind-independent being," one could not philosophize about it or speak of it. By the very act of speaking of it or thinking about it one has already given lie to its "mind-independence." If it can be thought of then a relationship between it and mind exists. This does not exclude what is super-intelligible, infinite being, but it does include the unintelligible.

    Now, when moderns talk about "mind-independent" being they are generally bringing in a whole load of metaphysical assumptions alien to the earlier period. The "mind-independence" here is sometimes framed as a causal one. "The mind doesn't create the world; looking at things doesn't make them spring into existence." This point is made a lot, but it's a little strange because I know of no one who ever argued that looking at things makes them exist. But I think we end up here because of the modern division between subject and object, and the division between primary qualities that exist "out there" "in objects themselves," and secondary qualities (e.g. color or taste) that are said to only emerge in interactions between objects and minds. And this is also where minds "constructing/generating" the world comes in. Now that we have assumed subject/object dualism, we find ourselves having to assign parts of reality to either one or the other (and this is the road to C.S. Lewis' "bloated subject," the sui generis source of all truth, goodness, and beauty).

    Well, I don't think this distinction is a very good one. The fact is, to be epistemologically accessible and to make any difference at all, any property has to involve interaction. The positing of "properties 'in-themselves'" as set against "properties that exist in interaction" seems like a bad move. To be sure, apples only "look red" when someone sees them, but they also only "reflect wavelengths of light associated with red" when they are in the light. Likewise salt only dissolves in water when placed in water, and when photons do anything at all they can be said to be interacting, etc. All properties involve some sort of interaction, some sort of relationship, so relationships involving minds are hardly unique in this.

    For much of ancient and medieval philosophy, created things only exist within a web of relations. They are, in some sense, defined by how they relate to everything else. Part of what makes a key a key is the context of a lock for example. Something can only reflect yellow light because there is light in our world, etc. And this is crucial for the differences in how mind-independence is approached.

    Common modern examples of "mind independent realities," might be "the early galaxies before life formed." But clearly these are not mind independent in the ancient sense. We think about these things. We can see evidence of them. We think the formation of the galaxies and even the state of the earliest moments of the universe are causally connected to our own surroundings and our own being. So they aren't mind-independent at all. To get the "mind-independence" of modern thought you need to have already, perhaps unknowingly, started with some metaphysical assumptions about relationships, reductionism, the subject/object distinction, etc.
  • Semiotics and Information Theory


    :up:

    I've found Robert Sokolowski to be a great updater/rehabilitator of these sorts of ideas too, although he unfortunately doesn't delve into the semiotic side much, sticking more to phenomenology and an updating of Aristotle and St. Thomas.



    He's a very important figure in the development of semiotics. Nathan Lyons has a pretty interesting book called "Signs in the Dust" on him, sort of an updating. I thought the most interesting part was the final section on the application of semiotics to non-living things, but it has a good intro on him at the start.
  • Wittgenstein, Cognitive Relativism, and "Nested Forms of Life"


    As I understand it, Kripke’s argument begins with the skepticism that ensues from rejecting a classical realist approach to the factual justification of meaning interpretation. There is no fact of the matter that can determine whether the meaning for me of a rule like the plus sign is the same as I apply it now as when I applied it last year.

    I also am not a huge fan of Kripke's Wittgenstein. For one, the skeptical challenge seems too strong here. It seems like it should just as well apply to all memories and all sense experience, resulting in exactly the sort of all-encompassing skepticism Wittgenstein was trying to avoid.

    Obviously, each utterance of a word and each thought is different. I think the difficulty is identifying what stays the same between these.




    :up:

    Yes, I think this is a good point. The understanding of rules happens "by nature," and it's a mistake to conflate abstract explanations of this process with the process being abstract itself.
  • Donald Hoffman


    :up:

    I agree on the type of error involved. I disagree on the track record of reductionism. How many true reductions do we have? The reduction of thermodynamics to statistical mechanics is the canonical example, but it is a rare one. 120 or so years on, the basics of molecular structure in chemistry have yet to be reduced to physics. Reductions are not common. Unifications, the explanation of diverse phenomena via an overarching general principle, are far more common. For example, complexity studies explain disparate phenomena like earthquakes and heartbeats via a similar underlying mathematics. But of course this does not say that heartbeats or earthquakes are "nothing but" the math they share in common. Yet it seems to me that unifications are very often misunderstood as reductions.

    Likewise, facts about composition are misunderstood as reductions. To be sure, all cells are made from molecules. All molecules are made from atoms. This isn't a reduction. You can't predict how a molecule works from theory in physics; you need all sorts of ad hoc, empirically derived inputs for it to work. The method of reductionism is fine, but it's not the only successful method either. Unification is as successful, if not more so. But when reduction becomes an ontological assumption, I think it leads to problems. For one, you get the elevation of parts over wholes, the idea that wholes must be "nothing but" their parts.

    Prima facie, there is no reason to believe that smallism, the idea that all facts about large things must be reducible to facts about smaller entities, and that smaller = more fundamental, is true. A sort of bigism where parts are only definable in terms of the wholes of which they are a part actually seems more common in physics by my reckoning.

    But I think the error you identify is directly related to smallism and reductionism. The justification for the causal closure principle is normally that minds are "nothing but" brains/bodies, and the brains and bodies are "nothing but" atoms and their constituent particles. Particles are the smallest structure and thus most fundamental. Everything is "nothing but" these, and so everything is describable in terms of their interactions. This makes all other causal explanations duplicative. At best they are a form of data compression. And so this makes motive irrelevant and consciousness epiphenomenal.

    Of course, were this true, it would make consciousness a sort of bizarre physical phenomenon. Where else in nature would such epiphenomena exist? You'd have a physical property of a system (consciousness) that only has causation going in one direction. Everywhere else all properties are causally efficacious. This explanation still seems to rely on a sort of sui generis consciousness and a dualism of sorts.
  • Donald Hoffman


    Yes, there is a dualism there. Causal closure is defined in terms of a sort of mental/physical dualism where only the physical is causally efficacious. That's one of the things that I think is wrong with it.

    Does a dual aspect theory avoid this problem? I am not sure. For one, it seems hard to have dual aspect theories without panpsychism, but even if you have panpsychism you have to explain why, if everything experiences, our minds end up being discrete and 'bound to our bodies' in the way they appear to be. And the question of psychophysical harmony remains completely unaddressed.



    Sure, but this leaves the problem of psychophysical harmony completely unaddressed. So you have all the same problems.

    I asked this in the other thread. How exactly does Spinoza's conception demonstrate why the experiences produced by our bodies should synch up with the evolutionary history of our perceptual organs? If everything has an experiential/mental side to it, why is our phenomenological horizon rooted to our body in the way it seems to be? Shouldn't half my brain and the surrounding air make up its own physical system with a conscious dual aspect, or the same for 4 people in a room? If all physical systems have this dual aspect, how does anesthesia or a drop in blood pressure wipe this away for us? Why is the experiential side of this dual aspect "just so," so as to make the experience side match the physical side? What's the relation between the two? How could such a position ever be verified or supported empirically? It seems very "God of the gaps" to me.

    I don't think this is a silver bullet at all. All the same problems remain.



    Have you read the book? I feel like it sort of gets misrepresented in reviews because the argument really doesn't come into focus until the last chapter. Hoffman's point is an argument about a certain, fairly dominant form of naturalism that imports Kantian dualism into "science." This view is ultimately self-refuting in just the sort of way Plantinga is talking about, although Hoffman provides much more empirical support for his claims. If you've only read his earlier "desktop interface" papers (which have the benefit of being free), I think it's easy to miss how Hoffman's ideas have changed pretty radically, even if he uses the old arguments to prove the new thesis.

    Hoffman thinks the only way to avoid this is a sort of objective idealism with agents baked in from the start—"agential realism." This solution is not very convincing IMO. The argument that the sort of semi-Kantian representationalist soup dominant in modern cognitive science is self-refuting is, IMHO, quite strong (and plenty of people other than Hoffman have attacked it, for example the enactivist view counters other elements of it).
  • Donald Hoffman
    I suppose there is also something to be said against the push to "naturalize" epistemology when the "naturalized" attitude is presumed to be one that makes the targets of practical and aesthetic reason (the good and the beautiful) somehow illusory, "less real," or "merely subjective," while elevating theoretical reason as the sole appropriate instrument for knowing the world. This is quite the presupposition to start with.

    Is it required by "naturalism" tout court? No, but it's an assumption often bundled into "naturalism" as the term is commonly employed.
  • Donald Hoffman


    I don't think Plantinga's argument is airtight, but neither is it merely a strawman. It's been taken seriously because, even if it is a simple argument, there is something to it. Hoffman is making a very similar sort of argument and I think he lays out a pretty good case for how some understandings of naturalism and a naturalized epistemology end up being self-refuting. That is, if we assume they are true, they suggest that they are false.

    It's a complex issue.

    I honestly think both Plantinga and Hoffman bury the lede here in not focusing on the problem of psychophysical harmony. Physicalism is normally defined in terms of causal closure. Reductionist materialism also assumes causal closure. But if causal closure is true the mental never—on pain of violating the principle—has any effect on behavior. It is just "along for the ride." Everything is determined by particles and how they interact, so no one ever goes and gets a drink "because they feel thirsty" (at least not in the causally efficacious sense of "cause").

    But then, were this true, natural selection can never directly select on how the world is experienced by us. Since the mental doesn't affect behavior, it has no relevance for survival or reproduction. So what is needed is some sort of "just so" story where evolution makes our experiences close enough to reality that they tell us things about how the world really is, even though we don't think our experiences dictate our behavior (because what we do can be entirely explained in terms of particles interacting, and particles lack experiences). Where is the just so story? Well, it doesn't exist. You'd need an answer to the Hard Problem to provide one.

    Yet it seems rather implausible that such a story can be found given how well evolutionary psychology predicts how the world is experienced. It certainly seems like "we have sex because it feels good," or that "being cold feels unpleasant so that we will try to find shelter," etc.

    I think Plantinga's argument is ultimately just one simplified form of an entire web of arguments that can be made vis-à-vis psychophysical harmony, causal closure, and epistemology. Hoffman is able to flesh this out with some models and empirical results. Is it airtight? No. But then again what they are arguing against is also a position that is not airtight. Yet this position, like reductionism, is one that seems to demand that it be "assumed true until decisively proven otherwise," and I'd venture that there are not good grounds to accept this.
  • Donald Hoffman


    Have you made it to the last chapter? He sort of turns everything he has said on its head. His point is that a common way of looking at the relationship between mind and nature is self-refuting. Plantinga has previously made a similar argument. I don't think this is a bad argument, although the way it is framed it does seem like he is refuting himself as well. But I take it that this is exactly the point: his position is self-refuting because it's situated in popular assumptions that are self-refuting.

    IMO, much modern philosophy ends up in a sort of Kantian dualism because it's unwilling to challenge dogmatic assumptions stemming from Lockean objectivity and the primacy of "primary properties," reductionism, and the division of the world into subject and object, phenomenal/noumenal.
  • Wittgenstein, Cognitive Relativism, and "Nested Forms of Life"


    I would say the intention is to dissociate the idea of categorizing actions, based on some rule which is inherently indeterminate, from the actions themselves. We act regardless of the indeterminacy. I would also say such a view is in direct contrast to views where thinking is something more than sequences of our surface experiences, e.g. computational theories of mind, "language of thought hypothesis" and views that there is some intelligible, determinate semantic representations / symbols / modules underpinning those sequences.

    IDK, if "language of thought" or CTM are correct then they are still describing "things we do." Saying rule following is determined by "acts" or "what we do" has to limit what constitutes an "act" or "doing," or else we are still left with something trivial. If anything a person can possibly do constitutes an "act" or "doing," then to say rule following shows up in these categories is to say essentially nothing at all. Clearly, our following rules can't be "something we do not do," or "an act we do not perform."

    And clearly that we are following a rule does not depend on "anything we do at all," but rather our doing particular things, and this particularity is what must be defined in order to avoid something like: "someone's following a rule is constituted by them acting in accordance with the rule," which is just tautological.

    While I think behaviorism is misguided for other reasons, it at least avoids this.
  • Perception


    The question of whether Boltzmann Brains have ever or will ever exist seems ancillary though. Michael simply misunderstands the concept if he thinks it provides an example of "brains alone producing consciousness in space." The concept isn't even specific to "brains"; it's an argument about the minimum that is needed to produce any given interval of experience and how this smaller system is more likely to emerge from random fluctuations than any larger system. It is entirely silent on "what is the minimum physical system required to produce x interval of experience," since this is simply not a question that is addressed by the concept.

    However, we can certainly extrapolate from biology and neuroscience that a Boltzmann brain would need to exist in some range of ambient temperature, atmosphere, etc. in order to produce anything like say "5 seconds of human experience."
  • Perception


    Sorry, I wasn't trying to be comical or ridiculous. I was just saying that my experience doesn't have to reflect interaction with my environment

    I should be the one to apologize; I just meant to add some rhetorical flourish, not impugn anything.

    I was just saying that my experience doesn't have to reflect interaction with my environment. I have long had a recurring dream about a house that opens up into another house. Though I've experienced being in this weird house multiple times, it doesn't exist. My environment at the time was my bedroom. It appears that experience was generated by my brain.

    Funny enough, I've been working on a novel that involves people stuck in an infinite house.

    Anyhow, I get what you are saying. I would just frame it differently. Your environment isn't irrelevant to your dream. Obviously, if we filled the room with anesthetic or poison gas instead of air it would change the experience. Rather, I would frame it like this: "our experiences don't always correlate with the environment the way we think they do under 'normal' conditions."

    It's possible to have convincing dreams, false memories, hallucinations, etc. These still involve the environment, but they don't have the connection to the world that we think obtains when we "see an apple" or "taste a lemon."

    Of course, even in such "normal" instances of perception, things that don't exist are phenomenologically present to us. For instance, we might see smoke on the horizon as a sign of a fire that has already ceased to exist. When we read fiction, the signs on the page of our book direct our awareness to things that have never existed. Dyadic mechanistic accounts of nature seem to always have a problem with this sort of thing, hence the divorce of "mental" and "natural," or of subjective and objective world.
  • Perception


    You seem to be operating under the impression that the "Boltzmann Brain" is "a brain, and just a brain, experiencing in space." It isn't. It is just "a physical system capable of producing consciousness." It says absolutely nothing about brains floating in vacuum having experiences.
  • Perception


    Are you under the impression that Boltzmann brains actually exist? They are a thought experiment, the inferred result of a universe with an infinite duration. If the Big Bang marks the beginning of our universe, it is vanishingly unlikely that a Boltzmann Brain has ever existed or will come to exist on any relevant time scale. If the universe has any sort of "Big Crunch," "Big Rip," or "Big Reset," Boltzmann brains will never exist.

    You might as well be arguing that people can walk through walls or teleport because theory might allow for the possibility at some incredibly small probability.


    So, yes, apparently brains can generate experiences in the vacuum of space. All that is required is the appropriate neurological activity, regardless of what causes and maintains this activity.

    This shows a misunderstanding of the thought experiment. The Boltzmann brain is a critique of the Boltzmann universe, the idea that the observable universe could arise from chance thermodynamic fluctuations. The point of the Boltzmann brain is that it is far more likely for random fluctuations to result in a smaller system, the minimum needed to produce any given interval of consciousness. The Boltzmann brain says absolutely nothing about brains alone producing consciousness in the vacuum of space. It would be silly if it did, since this is considered a biological impossibility. If random thermodynamic fluctuations are to produce any given interval of consciousness, they will clearly need to include an environment in which a brain is actually able to produce consciousness.

    It in no way says that a human brain can generate consciousness at a temperature close to absolute zero, without any oxygen, etc.
  • Perception


    I am quite aware of the Boltzmann Brain. What do you think the relevance is?
  • Wittgenstein, Cognitive Relativism, and "Nested Forms of Life"


    Yes, of course you're right. It's just that it isn't like the resemblances between one dog and another, but between a dog and a sculpture of it. We wouldn't confuse a fossil with a living member of the species, would we?

    Of course not. But if one wants to explain why we don't confuse them one has to move from the vague metaphor to something more concrete.

    That was my only point in the comment. Wittgenstein clearly knows he is being very vague; he anticipates this charge. So to answer your point, "Vagueness is not necessarily OK. But I think that W has a point if he's saying that sometimes it is all you've got": yes, I think this is sometimes true. I was speaking to the specific application.



    Family resemblance isn't intended as a basis for biological theorizing. The notion of family resemblances is not intended to do any intellectual work for a biologist.

    But it is appropriate for the linguist, philosopher of language, or semiotician? Don't grunts and screams share a family resemblance with speech? The issue is that you still need the "right sort of family resemblance," since all things resemble each other in at least some ways.

    The "oh, now I get it, moment" you mention is definitely included as part of enacting or demonstrating understanding, and I have many times had that kind of thing in mind when thinking about it.

    Then we're in agreement. I did not take it that this is what was meant by behavior. Perhaps he can clarify.

    If someone's behavior is expanded to include their thoughts, experiencing, etc., then I see no issue in saying that understanding a rule can be judged solely in terms of behavior. Although, in this case, wouldn't "behavior" constitute essentially everything it is possible for a person to do (e.g., thinking, perceiving, liking, existing, understanding, being angry, etc.)? Would there be anything a person can do that won't count as behavior? Or to use Banno's phrasing: if "that someone is following a rule is shown by what they do," is "what someone does" anything they do at all?

    But then "rule following is something people do, as part of the sum total of anything they do at all" isn't saying much of anything. I was thinking of "what people do," or behavior, in terms of Skinnerian stimulus and response. Such a framing has the deficit of being wrong IMO, but it does at least actually say something.

    The idea is not to resolve indeterminacy. The idea is that we enact appropriate behaviors even when a characterization of them is simultaneously indeterminate in principle. Social interaction doesn't make rules less indeterminate. We learn how to act appropriately by interacting with our environment, including the appropriate use of words when we interact socially, para-socially or whatever.

    Ok, but wouldn't this hold for all activities, not just social ones? And wouldn't this be true of animals as well?

    I do think Wittgenstein brings in social interaction to fix the underdetermination problem. That part seems fairly straightforward. I think I might disagree that it actually addresses the problem though. The argument from underdetermination is too strong; it proves too much.
  • Perception


    It happens every time you dream, it's happening to people who have received chemical paralytic drugs, it's happening to people who are locked in. The burden would be on you to show that bodily interaction is necessary to consciousness.

    No, it doesn't. The idea that the brain can generate experiences without any access to a very specific sort of environment is not "supported by science" in the least. I have already explained why. Nor is the environment simply a "power source;" that is a comic simplification.

    Does a brain generate any experience on the ocean floor? On the surface of a star? In the void of space? In a room filled with helium gas? Torn out of the skull? All your counterexamples still involve brains inside bodies, and bodies inside environments that are in the very narrow range that allows for the production of experience.

    Take someone with locked in syndrome. Replace the atmosphere in the room with most other gasses: helium, argon, hydrogen, etc. They will stop experiencing. Turn the temperature down low enough and they will stop experiencing. Turn it up enough and they will instantly stop experiencing. You are abstracting away relevant details and then claiming that the brain can operate in a vacuum. The claim that "science says this is true" is particularly ridiculous. Science says there are no truly isolated systems, and science also says that putting a human body in all sorts of only relatively isolated systems—even simply zipping someone into an airtight bag—will cause them to cease having experiences extremely rapidly.

    Brain function requires a constant exchange of matter, information, energy, and causation across the boundaries of the brain. Dreaming and locked in syndrome are not remotely counterexamples to this.
  • Wittgenstein, Cognitive Relativism, and "Nested Forms of Life"


    Yes, this is pretty much the point of family resemblances, so I just don't understand what you are criticizing about it when you agree with it. I feel like you are attributing more to this concept than is required and criticizing it for things not intended.

    It would be vacuous for a biologist to say "all life shares a family resemblance" and to stop there. Whatever "all life" is, it must surely have some sort of resemblance to be deemed "all life" in the first place. What biologists do in reality is posit a constellation of features that make up this "family resemblance," e.g., having a metabolism, undergoing selection, etc. If one stops at the metaphor and introduces nothing else, one hasn't said anything. Everything that exists can be said to resemble everything else in some way or another.



    And what is understanding over and above the ability to enact or demonstrate understanding?

    ChatGPT can enact grammatical rules. Does it understand them? I would say no; understanding has a phenomenological element.

    I am pretty sure every mentally capable adult has had multiple experiences where they have struggled to learn some game or set of rules and had an "oh, now I get it" moment. That's understanding. The issue of validating understanding is not the same thing as describing what understanding is, just as being in pain is not equivalent to wincing and grunting.

    Attributing rules to the behavior is chronically underdetermined / indeterminate on some level, and this issue regresses chronically. You can observe some behavior whose description by a rule is completely indeterminate; nonetheless, a person attributes a rule anyway.

    As a previous poster already pointed out, all empirical science is underdetermined. The problem of underdetermination is about as broad as the Problem of Induction or the Scandal of Deduction. It would seem to make most knowledge impossible if one demands "absolute certainty." That's why I never found the arguments about rule following from underdetermination particularly convincing. You could make the same sort of argument about Newton's Laws, quantum mechanics—essentially all empirical claims—or about all induction. That the future is like the past is "underdetermined," as is memory being reliable. Thus, the issue of underdetermination is as much a factor for any sort of social rule following as it is for some person designing their own board game and playtesting it by themselves; democratization doesn't eliminate the issue.

    But I don't think this warrants nescience vis-à-vis phenomena like "understanding a rule" that we are well acquainted with either. The demand for "absolute certainty" is the result of a good deal of ridiculousness in philosophy.

Count Timothy von Icarus
