• Count Timothy von Icarus
    4.1k
    Introduction:
    Broadly speaking, an argument from underdetermination is one that attempts to show that available evidence is insufficient to determine which of several competing theories is true. That is, many different theories might be able to explain the same evidence, hence any move to choose between theories must be “underdetermined,” i.e., not determined by the evidence. Within the class of such arguments, there are many that go a step further. These will often purport to show that for any body of evidence, there will always be an infinite number of different explanations that are consistent with that evidence.1

    Since the start of the 20th century, these arguments have increasingly played a pivotal role in philosophy, often leading to radical conclusions. Yet even those who have resisted such conclusions have often found these arguments difficult to refute. Such arguments were not unknown to pre-modern thinkers, and yet they were generally considered fairly easy to dispatch. Hence, we are left with a bit of a mystery. Why did arguments from underdetermination begin to seem insurmountable?

    In this post, we will try to answer this question. As we shall see, it is not that the ability of underdetermination to produce support for radical skepticism was lost on pre-modern thinkers. Rather, it was that different starting assumptions tended to render such arguments fairly innocuous. To illustrate this point, we shall explore a typical Thomistic response to arguments of this form.

    Underdetermination and Skepticism:

    Arguments from underdetermination have played a defining role in contemporary thought. There are, for example:

    • David Hume’s argument against causal inferences and explanations, as well as his hugely influential “Problem of Induction;”

    • Ludwig Wittgenstein’s rule-following argument, as well as Saul Kripke’s influential reformulation of it;

    • W.V.O. Quine’s argument for the inscrutability of reference;

    • Quine’s holist arguments for the underdetermination of theories by evidence, as well as similar arguments for forms of theoretical underdetermination made by J.S. Mill and expounded upon by Pierre Duhem;

    • Thomas Kuhn’s arguments about underdetermination at the level of scientific paradigms;

    • As well as many others, including Feyerabend’s “epistemological anarchism,” Goodman’s “new riddle of induction,” etc.

    It would not be an overstatement to say that these arguments have been, in many ways, the defining feature of the Anglo-American, empiricist-analytic philosophical paradigm. It is also hard to overstate the radical nature of some of the conclusions that have been drawn from these arguments. Bertrand Russell, for instance, suggested that if Hume’s “Problem of Induction” could not be resolved then “there is no intellectual difference between sanity and insanity.”2 Richard Rorty and others, drawing from Quine and Wittgenstein, argued that truth and knowledge themselves would have to be radically redefined. “Truth,” rather than being “the adequacy of the intellect to being”—the view held, in varying forms, for most of history—would instead have to be understood in terms of a new belief’s coherence with our preexisting beliefs, or as merely a compliment we pay to beliefs we find good to hold. Similarly, these arguments have been used to argue against even modest forms of scientific realism, resulting in a radical anti-realism, which claims that scientific truth is merely a social label bestowed upon well-entrenched practices.3

    Obviously, not all thinkers within this tradition have agreed with such radical proposals, yet they have often found it hard to counter them. Were a typical Enlightenment or Victorian-era philosopher transported to our time, I do not think it is a stretch to say they would find these skeptical conclusions shocking, a radical reversal of the faith in reason and empirical methods that dominated in their epoch. So too, we can well imagine that pre-modern scholastic thinkers would be astounded by shifts such as the attempt to eliminate causation from science (the very thing that, for them, defined science), or the various moves to radically redefine truth.

    On this last move, the redefinition of truth, it is worth noting just how radical the paths charted in contemporary thought might seem. We could consider here how Quine's more radical successors took their epistemological holism to require a shift to a coherence-based, pragmatic model of truth, where truth is a function of how well a new belief fits in with our “web” of preexisting beliefs. Such changes, and there have been many suggested, are, of course, rarely presented as rejections of truth, but instead as necessary redefinitions (this being true even of most proponents of deflationary theories of truth). Yet we could well imagine our time travelers objecting: “but this is just equivocation. Truth is thought’s grasp of being. These redefinitions amount to denying truth, and then recommending some other thing to fill the gap. The equivocation just softens the blow. On your view, we are no longer capable of scientia (knowledge), but merely a system of conceptual and linguistic manipulation.” That is, the charge would be that this is really just radical skepticism and epistemic nihilism masquerading as a mere “adaptation.”

    However, there is one group who might not be particularly surprised by this course of events: the original empiricists. After all, Sextus Empiricus, from whom the name “empiricist” is derived, was part of the explicitly skeptical Pyrrhonist school of ancient thought. A key idea within that school was that of equipollence. Equipollence exists when opposing arguments or theories appear to have equal strength or plausibility. In this way, it is quite similar to underdetermination. If we have attained equipollence, then we are not led to assert one conclusion over all others, and so we are not compelled towards any particular belief.

    Rather than viewing equipollence as a sort of epistemic crisis (which was often how arguments from underdetermination were initially viewed in modern thought4), these skeptics strove to attain it. The idea here is fairly straightforward. If we are not forced to any particular conclusions, then our passions will also not be raised by any particular conclusion. This allows us to achieve ataraxia, a state of serene calmness. By contrast, strong beliefs were thought to promote anxiety and distraction, the opposite of the state of dispassion sought by these thinkers. To be clear, the argument here is not that all arguments are equally strong, or that truth is, in principle, impossible to attain.5 It is rather that we can reason to the view that many arguments seem equally strong (which in turn does put the reliability of most truth claims in question).

    It is useful to clarify here what made “empiricism” unique. It is not that it used sense observations to try to explain the world. If this alone made a philosophy “empirical,” then essentially all philosophy would be empiricist. Nor was it a commitment to the proto-scientific methods of the ancient world. Aristotle, labeled as a dogmatist by the ancient empiricists, is often considered to be the father of many sciences. Of course, modern empiricists do sometimes claim Aristotle as one of their own on account of this fact. Yet, if empiricism is construed in these terms, then the overwhelming majority of all thinkers would be "empiricists." Scholasticism and its modern variants would be “empirical,” as would Hegelian philosophy, etc.

    Rather, what made the ancient empiricists unique, aside from seeking equipollence, was their means for doing so: rejecting causal and metaphysical explanations. That is, they stuck to descriptions of phenomena and observed patterns. The early empiricists were physicians who employed a practice of recording case studies in order to keep track of patterns. Rather than posit causes that would explain why some cases improved and some did not, or why some treatment was efficacious, they simply guided their practices based on the consistency of the patterns they observed. This allowed for conclusions like: “treatment with this herb works consistently in these cases,” but not explanations for why this was so. Prima facie, this is actually somewhat at odds with the “scientific method” as popularly conceived, since one could hardly develop hypotheses to test without violating the prohibition on causal theses.6

    Hence, it is perhaps unsurprising that a later philosophical tradition modeled on these thinkers would eventually find itself mired in (or perhaps, “set free by”) skepticism. Indeed, this connection was clear for some in the tradition, such as Hume, although it certainly became obscured as “empirical” was increasingly used as a synonym for “a scientifically-informed philosophy and world-view” (with “science” overwhelmingly being understood in realist terms in the 19th and early 20th centuries).7 For some, this skeptical shift and move towards anti-realism has been a positive outcome. Somewhat akin to the ancient empiricists, these thinkers hold that the death of realism and certainty is an improvement, often because it is thought to “free us” by allowing us to think in new ways. Against this, though, we might consider that, if realism is the case, our ignorance of reality would itself represent a limit on freedom.

    At any rate, for most, the skepticism resulting from underdetermination has been seen as a serious threat and challenge. Yet responses have often involved “work-arounds” that might themselves be seen as skeptical. Here, it is worth employing Saul Kripke’s distinction between different responses to skeptical problems: straight solutions and skeptical solutions. The “straight solution” aims at showing that the skeptic is simply wrong, that their skepticism is unjustified. A “skeptical solution,” by contrast, allows that the skeptic has a point, but will maintain that “our ordinary practice or belief is justified because–contrary appearances notwithstanding—it need not require the justification the sceptic has shown to be untenable.”8 For example, Kripke’s own skeptical solution vis-à-vis linguistic meaning claims that language is not meaningful because of metaphysical facts, but only on account of shared behavioral regularities related to social norms and public agreement.9

    Again, the difficulty here is that the “solutions” often seem quite skeptical, e.g., “words never refer to things,” “there is never a fact about exactly what rules we are following,” etc. Here, it is worth considering what it is one ought to do if one sees an argument with an absurd conclusion. The first things to do are to check that the argument is valid, and crucially, that the premises are true. I would argue that contemporary thought, particularly analytic thought, has far too often only done the first. Because it holds many empiricist presuppositions to be beyond dispute (indeed, “dogmatically” might not be too strong a word), it has not generally questioned them.

    Yet if an epistemology results in our having to affirm conclusions that seem prima facie absurd, and if further, it seems to lead towards radical skepticism and epistemological nihilism, or an ever branching fragmentation of disparate “skeptical solutions” and new “anti-realisms,” that might be a good indication that it is simply a bad epistemology. Indeed, an ability to at least secure our most bedrock beliefs might be considered a sort of minimal benchmark for a set of epistemic premises. Yet, due to the conflation of “empiricism” with “the scientific method,” as well as modern culture’s preference for iconoclasm, novelty, and “creativity,” the starting assumptions that lead to these conclusions are rarely questioned. With that in mind, let us turn to the realist responses that, in prior epochs, made these arguments seem relatively insubstantial.10


    The Realist Response:

    A Thomistic or Aristotelian response to arguments from underdetermination would deny the assumptions that make such arguments plausible in the first place. It is only because of the presupposition that no metaphysically substantial premise can enter into our epistemology that such arguments are able to gain momentum. This, paired with the demand that all relevant “evidence” must be “third-person” and “external,” arguably makes explaining knowledge (a first-person experience) impossible from the outset. Indeed, it should be unsurprising that such limitations result in theories of truth that claim that truth has no metaphysical import; such a conclusion has been simply assumed as a premise!

    These different assumptions show up in a key difference in how sense knowledge is considered. For the empiricist, perception is merely raw, inchoate “sense data,” from which patterns can be derived. For Aquinas and Aristotle, there is a real form in things that is received by the senses, and then abstracted and known by the intellect. The latter process explains the phenomenon of our understanding anything at all. From the realist perspective, we start from a world populated by trees, rabbits, etc. When Quine concludes that we can never tell which word in a foreign language refers to rabbits, because there are always other stimuli that accompany the presence of a rabbit that could be referred to instead, he is involved in a sort of question begging. He has assumed, from the outset, that there are no things with determinate natures for us to name. Yet if this is assumed, it can hardly be shocking that we are forced to conclude that words cannot refer to things, for we have already decided that there are no things to refer to.11

    Here, the empiricist might object: “that’s a fine thesis, but how can you prove that this ‘form’ or ‘actuality’ exists?” The key assumptions that underwrite these notions boil down to this:

    1. Things do not happen “for no reason at all.” Things/events have causes. If something is contingent, if it has only potential existence, then some prior actuality must bring it into being. It will not simply snap into being of itself. Our experiences are contingent, thus they must be caused by something that is prior to them.

    2. Being is intelligible, and to be is to be intelligible. Every being is something in particular. That is, it has a form, an actuality, that is determinant of what it is (as well as the potential to change, explained by matter). This actuality determines how a thing interacts with everything else, including our sense organs and our intellects. If this were not the case, interactions would be essentially uncaused, and then there would be no reason for them to be one way and not any other (i.e. random).

    That is it. These are assumptions, but they do not seem to be particularly objectionable ones. Indeed, if they did not hold, if being were unintelligible and things did happen “for no reason at all,” we might suppose that philosophy and science are a lost cause. Of course, much disagreement here results from a misunderstanding of the nature of form, potentiality, etc. Sometimes, these are taken to be something like terms in a scientific theory, rather than a set of metaphysical principles. Yet a thing’s form is simply that which makes a thing anything at all. The basic idea is this. We perceive a world around us. It is not a world of indistinct sense data, but a world full of things, particularly living things that continually act to sustain themselves, to keep being what they are (i.e., to maintain their form). Such perceptions must come from somewhere, since they cannot occur “for no reason at all.” They have causes. Whatever acts on our senses, and is understood by our intellects, has a prior actuality that activates our potential to perceive and understand.12

    This is not a naive realism. Perception can be thought of as “of” the interaction between the immediate environment and our sense organs. For instance, light waves carry the form of a tree to our eye.13 Perception is, in a sense, the experience of this interaction, but the interaction relates us directly to the objects known by the senses. Knowledge of trees, an understanding of what a tree is, comes from the presence of this form in our intellect after it has been abstracted from the senses.

    A denial of the transmission of form would essentially be a denial of the idea that any prior actuality in the things we perceive and know makes it to our minds. Yet if this were the case, it hardly seems that perception could be of anything. If perception is not caused by the prior actuality of things, it would seemingly be uncaused. More importantly, it could not involve the “appearance of” anything; rather any such appearances would simply be free-standing appearances that are “of” nothing in particular (in which case, the appearance/reality distinction seems to collapse, and it turns out that phenomena just are reality).

    However, it is clear how confusion can arise here. We do experience error. We can, for instance, mistake a fake apple for a real one. However, crucially, this is because the fake apple has the form (at least partially) of a real apple. Likewise, even if we were to experience seeing an apple due to some sort of electromagnetic stimulation of our visual cortex, we would still experience the sight of an apple because something carrying the form of an apple (or something very much like it) is interacting with us. The point here is that perception emerges because we possess a determinate actuality (a nature) that interacts with other determinate natures. No perception occurs in a vacuum. Placing a healthy, experiencing human body in an absolute vacuum would result in death; consciousness requires a quite narrow range of environments. It is the entire system (object, environment, and person) that is required for sensation. Error can occur within our thoughts, but it will always be an error with, in principle, distinct causal origins.

    The second key realist assertion is that the intellect is able to abstract this form. This “abstraction” is not merely a sort of pattern recognition. In abstraction, the form of what is known becomes present in the intellect; the intellect is in a sense identical with the thing known. This explains the act of understanding. This does not imply that we come to know everything about the actuality of the form. Indeed, we will never know everything about any sort of thing. As Aquinas famously put it: “all the efforts of the human intellect cannot exhaust the essence of a single fly.” Nonetheless we know what a fly is. We understand it. It is this phenomenological experience of understanding that is the key datum which epistemology is supposed to explain. Empiricism, by rendering this experience off-limits, is essentially sawing off the branch on which any epistemology must hang.

    Form and its transmission explain two crucial things. First, they explain why we perceive anything at all, why we experience one thing and not another, and how we come to possess understanding. Second, they explain why we experience things that are “outside” of ourselves. Much more can be said here. In-depth accounts have been given of how these principles might work in terms of the mechanics of perception. However, this basic understanding is all that we need to critically weaken many famous arguments from underdetermination. It is not the case that anything at all (or nothing at all) can be responsible for our understanding. Understanding must flow from causes, from a prior actuality (form). If the cause of my understanding of apples is not apples, it must still be something that contains the form of an apple. Likewise, an understanding of number (dimensive quantity)—of magnitude and multitude—presupposes some understanding of a measure, which must be defined in terms of distinguishable qualities (i.e., virtual quantity). To know “three ducks” or “half a duck” requires the measure “duck,” and so it is for all quantities, although we can abstract quantity from substantial form.14 Our understanding of number, then, must come from a prior actuality that is abstracted from the senses.

    To be sure, we might always be mistaken about the things which we only partially understand, or how and why different things interact. Nonetheless, there must be actualities that are responsible for our understanding and we seem to have absolutely no good reason to think these actualities are anything other than trees, ants, flowers, stars, etc. (i.e., all the things known by the senses and understood by the intellect). When we learn something new about these entities, e.g., when combustion theory replaces phlogiston theory as an explanation of fire, we are still involved in understanding the same entities and natures. What is important to keep in focus here is that theories, models, etc. (as well as ideas and words) are how we know, not primarily what we know. What we know, the form, remains the same.

    Arguments to the effect that we can never know from our own experiences alone which sorts of arithmetic operations we are performing rely on denying the interior act of understanding as a valid datum in epistemology. Yet, prima facie, epistemology, the study of knowledge, is precisely the field that is supposed to explain this phenomenon. That is, it is supposed to explain how we know what we know, and secondarily, the possibility and causes of error. Excluding all noetic experience makes the main goal impossible by default, leaving only the secondary goal, elucidating error. In such a context, skepticism is inevitable because the only valid topic is error. Of course, another irony here is that “external justifications” themselves always rely on experiences that are first-person, and which require an act of understanding to be intelligible. When the intellect is disrupted, as during a stroke, sense data is still received, and yet without the action of the understanding there is no knowledge (as described by victims, who can no longer recognize everyday objects or words).15

    This helps explain a huge disconnect in modern epistemology. Through advances in quantitative methods, computing methods, and instrumentation, we can now make predictions far better than ever before. We are better able to collect data, and to use this data to forecast the future, or to predict the outcome of a particular action. We can explain prediction, but not understanding. Indeed, we seem pressed ever closer towards either a sort of anti-realist pragmatism bordering on epistemic nihilism or an ontic structural realism that claims that the quantities of our predictive models just are all that exists. Yet hasn’t this outcome been inevitable? We have removed the act of understanding from the datum of epistemology. Next, we removed all the metaphysical infrastructure that explained how such an act was possible, on the grounds that it was extraneous to explaining prediction and pattern recognition. The result is that the underdetermination of sheer prediction becomes unanswerable, and skepticism reigns.*

    On a closing note, I would just suggest the possibility that, on further investigation, given the strict epistemic presuppositions often in play, it could be revealed that many arguments for underdetermination are in fact themselves underdetermined. That is, their truth might be underdetermined by the evidence we can muster. In particular, this is likely to affect arguments from phenomenological underdetermination (the idea that false beliefs can "feel like" true ones).

    ---

    1. This is akin to the idea that, for any finite set of points on a graph, there will always be an infinite number of functions that will pass through every point.
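
    To make this concrete, here is a minimal sketch in Python (the data points are hypothetical, chosen only for illustration): take any curve through a finite set of observations and add a term that vanishes at every observed point. The result is a genuinely different “theory” that fits the same evidence exactly, and since this works for any coefficient, there are infinitely many such rivals.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Hypothetical "evidence": a finite set of observations (x_i, y_i), for illustration only.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([1.0, 2.0, 0.0, 5.0])

# One candidate "theory": the unique cubic polynomial through the four points.
base = Polynomial.fit(xs, ys, deg=3)

def rival(c):
    """A rival theory: the base curve plus c*(x - x0)(x - x1)(x - x2)(x - x3).
    The added term is zero at every observed x, so every rival fits the data exactly."""
    def f(x):
        x = np.asarray(x, dtype=float)
        return base(x) + c * np.prod([x - xi for xi in xs], axis=0)
    return f

for c in (0.0, 1.0, -7.5, 1e6):
    f = rival(c)
    assert np.allclose(f(xs), ys)   # indistinguishable on the evidence...
    print(c, float(f(1.5)))         # ...but the rivals disagree off the data points
```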

    2. Bertrand Russell. A History of Western Philosophy. (1946) pg. 699. Russell, of course, also extended Hume’s thought on causation to propose that causation itself should be wholly eliminated from scientific discourse.

    3. E.g., the work of David Bloor and others working from the “sociology of scientific knowledge” paradigm.

    4. Recall Russell’s statement about Hume’s conclusions collapsing the distinction between sanity and insanity.

    5. The ancient Academic skeptics did utilize similar methods, mostly focused on phenomenological underdetermination (i.e., that false beliefs feel like true ones) to argue against our ability to grasp truth.

    6. Thus, the emergence of a strong “anti-metaphysical” movement within early-20th century empiricism was a pivot back towards empiricism’s roots (sometimes a conscious one).

    7. Indeed, this conflation of “empiricism” with “science” is probably the key reason that the epistemic presuppositions of empiricism go unchallenged regardless of the apparent absurdities they lead to.

    8. Saul Kripke. Wittgenstein on Rules and Private Language. (1982) pg. 66

    9. E.g., we do not “mean ‘tree’” by “tree” because the form of trees is present in our intellects and signified by our utterance.

    10. Another intriguing explanatory connection here is the historical linkage between empiricism and the now hegemonic political ideology of liberalism. Liberalism itself often relies explicitly on skepticism about human nature and the human good for its justification.

    11. Of course, the intelligibility of Quine’s argument requires that we understand what a rabbit is.

    12. See: St. Thomas Aquinas. Summa Theologiae I, Q.84-86, De Veritate Q.2, A.6 & Q.3, A.3 and Aristotle’s De Anima, Books II-III (as well as St. Thomas’ commentary).

    13. This can be conceived of in terms of a triadic semiotic relationship. Nathan Lyons’ Signs in the Dust: A Theory of Natural Culture and Cultural Nature (2019) includes a detailed description of the mechanics here, relying on Aquinas’ notion of “intentions in the media.”

    14. See: Aristotle. Metaphysics. (Book X, Ch. I)

    15. Indeed, in some forms of phenomenology, a preference for immediate experience, and a bias against any sort of conceptual understanding seems to elevate the experience of the stroke victim or infant to a sort of ideal. The sage and scientist are said to be “lost in abstractions,” while the unformed or damaged mind becomes an idealized epistemic goal.
  • Moliere
    6.1k
    • David Hume’s argument against causal inferences and explanations, as well as his hugely influential “Problem of Induction;”

    • Ludwig Wittgenstein’s rule-following argument, as well as Saul Kripke’s influential reformulation of it;

    • W.V.O. Quine’s argument for the inscrutability of reference;

    • Quine’s holist arguments for the underdetermination of theories by evidence, as well as similar arguments for forms of theoretical underdetermination made by J.S. Mill and expounded upon by Pierre Duhem;

    • Thomas Kuhn’s arguments about underdetermination at the level of scientific paradigms;

    • As well as many others, including Feyerabend’s “epistemological anarchism,” Goodman’s “new riddle of induction,” etc.
    Count Timothy von Icarus

    I want to nitpick these examples on the basis that they're underdetermined -- or, the flip side of "underdetermination" is confirmation bias. There's some reason for the selection of examples, and that selection of examples may justify what you're saying as "this is where I'm coming from", but how are we to know that these are good examples of underdetermination such that Aquinas or Aristotle or the pre-modern mind had answers to these questions if we just dropped the questions and read Aquinas, Aristotle, and the ancients only?

    This is something I thought while reading MacIntyre. Yes, I see what you're saying, but like Heidegger you're sort of inventing a whole mindset that is "pre-modern", and justifying it with many quotes -- but at the end of the day if you haven't spoken to people from the pre-modern era then, my brother in christ, you cannot make claims about how pre-modern people think no matter how many texts you read from that era.

    It elucidates how we think, but it may not be the panacea of problems contemporary philosophy faces.

    It looks soothing -- but ultimately when someone says that if we go back to some ancient or medieval thinker as the person who saw it all I think that we're kind of fibbing to ourselves.

    Perhaps with good purpose, and definitely with good thinking -- but it's more imaginative than the statement reads. We're attempting to reconstruct the thoughts of people we can't talk to, yes. Especially in the history of philosophy -- that we even have a moon-shot chance of doing so is itself amazing. But something I've learned from doing Epicurus studies is that humility is important in approaching anyone pre-Gutenberg. Aquinas may be so well read because it was just before the Gutenberg press rather than because he had the insight into the real nature of things, and he provided a soothing picture of the world as a harmony.
  • Count Timothy von Icarus
    4.1k


    I want to nitpick these examples on the basis that they're underdetermined -- or, the flip side of "underdetermination" is confirmation bias. There's some reason for the selection of examples, and that selection of examples may justify what you're saying as "this is where I'm coming from", but how are we to know that these are good examples of underdetermination such that Aquinas or Aristotle or the pre-modern mind had answers to these questions if we just dropped the questions and read Aquinas, Aristotle, and the ancients only?

    I see what you're saying. The reason I picked these is that they are influential and fit the basic idea, and that they generate theses that are fairly radical (and so relevant and interesting). Basic underdetermination of theory is considered by medieval writers (and 19th century guys) but it isn't that exciting because its consequences are contained (generally, only affecting weaker quia demonstrations "from effects", but I didn't want to get into that). There were arguments from underdetermination in the ancient world and the Middle Ages though. I am pretty sure Epicurus appeals to the underdetermination of theory by data re astronomical models in one of his surviving letters, and Aquinas mentions the same issue vis-à-vis astronomy in the Summa. Islamic occasionalists also argued from underdetermination, etc. It's just that the scope of underdetermination for certain sorts of things was limited by the assumptions.

    For example:

    Reply to Objection 2. Reason may be employed in two ways to establish a point: firstly, for the purpose of furnishing sufficient proof of some principle, as in natural science, where sufficient proof can be brought to show that the movement of the heavens is always of uniform velocity. Reason is employed in another way, not as furnishing a sufficient proof of a principle, but as confirming an already established principle, by showing the congruity of its results, as in astrology the theory of eccentrics and epicycles is considered as established, because thereby the sensible appearances of the heavenly movements can be explained; not, however, as if this proof were sufficient, forasmuch as some other theory might explain them.

    Summa Theologiae, I, q.32, a.1, ad 2

    IIRC the Commentary on On the Heavens has more on this. The idea that one might "save appearances" by tweaking a theory, a big idea for guys like Quine and Kuhn, was known to the medievals and considered problematic, but only problematic to a point, because it would only affect a certain sort of model that tries to reason from observed (generally distant) effects back to principles. Empiricism (speaking very broadly of course) sort of had the effect of making all knowledge come to be affected by this difficulty, because now all knowing fits this sort of pattern recognition/internal model building structure. That's partly why I don't think it's impossible to go across the eras here; it's the same problem, expanded to new areas because of a change in upstream positions.

    This is something I thought while reading MacIntyre. Yes, I see what you're saying, but like Heidegger you're sort of inventing a whole mindset that is "pre-modern", and justifying it with many quotes -- but at the end of the day if you haven't spoken to people from the pre-modern era then, my brother in christ, you cannot make claims about how pre-modern people think no matter how many texts you read from that era.

    Sure, it's a real limitation. But wouldn't this apply, to a lesser degree (and perhaps not even that much lesser) to talking to people from different backgrounds across a language/culture gap today? In many ways, because of the historical lineage, we might have more in common with a pre-modern Western thinker than someone extremely steeped in some parallel tradition of thought. And then the same issue could be said to apply to greater or lesser degrees across a whole range of contexts, e.g., for even saying what we ourselves would have thought about something years ago, or for generalizing about what "Americans" or "Chinese" think today, let alone in 1980 or 1890.

    Nevertheless, I still think plenty can be said with careful analysis. And note, the topic is not super broad. We can have a quite good idea about how people thought about arithmetic in the past because they both wrote about it in detail and it's not a super broad subject.

    I think another ameliorating factor is that there has been an unbroken, and fairly robust/large Thomistic and Neoscholastic tradition dating all the way back to that era. And so, even if we cannot say what the medievals would have thought, we can say what people steeped in their texts have generally thought, and it has generally been that underdetermination, while interesting and relevant in some areas, shouldn't support the radical theses that have been laid on it.
  • Moliere
    6.1k
    Nevertheless, I still think plenty can be said with careful analysis. And note, the topic is not super broad. We can have a quite good idea about how people thought about arithmetic in the past because they both wrote about it in detail and it's not a super broad subject.Count Timothy von Icarus

    Plenty can be, and has been, and ought be said in the future.

    I think it's broad in that you're talking about any and all arguments from underdetermination and using the ancients to say they have solutions to the arguments for underdetermination.

    Specific when thinking about the pre-moderns, yes -- there's a great dialogue there to engage with, and I think medieval and ancient philosophy ought be given more time. i.e. I favor the historical method -- but that does not then mean that those of a previous era who did not see the modern problems as interesting thereby solved the contemporary problems.

    Yes, we can focus on what they were talking about, but if Sartre is who we're interested in then all this remembrance of another philosophy, another tradition which:

    I think another ameliorating factor is that there has been an unbroken, and fairly robust/large Thomistic and Neoscholastic tradition dating all the way back to that era. And so, even if we cannot say what the medievals would have thought, we can say what people steeped in their texts have generally thought, and it has generally been that underdetermination, while interesting and relevant in some areas, shouldn't support the radical theses that have been laid on it.

    says such a thing, then "radical theses" are what are being pursued. The wondering isn't about what is generally comfortable for thought, but about problems for thought.

    Yes, there's a connection through the tradition of Thomism, at least. And, honestly, it's an amazing connection in that it's a line of flight that has managed to develop in spite of the historical contingencies.

    It's cool, but if it doesn't address what others are thinking then it's not a panacea. Ultimately I don't see the world as a harmony, for instance -- I think it's absurd.

    Your ameliorating factor ameliorates some doubts, but what if I think that Hume, Quine, Wittgenstein, Feyerabend, et al., have a point? Do I just need to read more Thomas Aquinas to see the errors in my ways?
  • Count Timothy von Icarus
    4.1k


    Your ameliorating factor ameliorates some doubts, but what if I think that Hume, Quine, Wittgenstein, Feyerabend, et al., have a point? Do I just need to read more Thomas Aquinas to see the errors in my ways?

    Well, I pointed out that many advocates prefer these results, just as the ancient skeptics thought skepticism was a preferable outcome. But, for the many who find them deeply troubling (e.g. Russell on Hume), it might be helpful to consider past approaches. For instance, Russell moves to eliminate causation (a move that was quite unsuccessful), and yet from other traditions there are some strong solutions to the Problem of Induction that rely precisely upon rejecting Hume's deflated notion of causality (indeed, they generally agree with Russell that causes serve no real function when they have been deflated to this degree). Likewise, there are a lot of people who bemoan how scientific anti-realism and arguments for science coming down to sociology and power relations have been used to pernicious effect on public debates on vaccine safety, global warming, GMO crops, etc., and are looking for solutions to underdetermination here.

    That is, there are many who see these primarily as problems to be overcome, hence, old solutions should be interesting.

    I also think insights into these differences would be just as useful for empiricists who want to defend such views (although presumably not all of them, because some of the skeptical solutions contradict one another). It would allow them to give a better explanation of why both common sense and long-standing ideas in Eastern and Western thought needed to be rethought—which assumptions need to be defended (there is a potential circularity here worth noting too, because the phenomenon of understanding is often itself removed as a proper datum of epistemology because it is said to be underdetermined!).

    This seems useful to me because sometimes you see this sort of thing dealt with using simple appeals to "old is worse, new is better," which doesn't seem like a particularly good heuristic in philosophy, particularly when very old ideas are often recycled and become the new cutting edge.

    For instance, I would think one option would be to say: "yes, epistemology should properly be the study of prediction or error. The experience and possibility of 'knowledge,' whatever it might be, should be a topic of psychology and phenomenology, not epistemology and philosophy of science." Indeed, this is sort of what some views do, reducing learning and knowledge to statistics.
  • Leontiskos
    5.1k
    This is a wonderful essay, eminently relevant. Its work in clearing away canards cannot be overestimated. Its research and accuracy are commendable. It is long yet worthwhile and readable.

    Yet if an epistemology results in our having to affirm conclusions that seem prima facie absurd, and if further, it seems to lead towards radical skepticism and epistemological nihilism, or an ever branching fragmentation of disparate “skeptical solutions” and new “anti-realisms,” that might be a good indication that it is simply a bad epistemology.Count Timothy von Icarus

    I recently listened to Nathan Jacobs' podcast on realism and nominalism, which I thought gave a good overview of the territory. I started but have not finished his follow-up podcast, "The Case for Realism," in which he argues for realism and explains why he abandoned nominalism for realism. So far it has been good, and has tracked some of the same points you are making.

    Knowledge of trees, an understanding of what a tree is, comes from the presence of this form in our intellect after it has been abstracted from the senses.Count Timothy von Icarus

    There is a good exchange on this point between Robert Pasnau and Gyula Klima, where Pasnau takes up skepticism arguendo against Aristotle and Klima responds.

    The result is that the underdetermination of sheer prediction becomes unanswerable, and skepticism reigns.*Count Timothy von Icarus

    I would want to add that the realism quandary is also internal to "predictionism." The one who predicts is attempting to predict ad unum (towards the one, actual, future outcome). Without that future-oriented determinacy—whether actual or theoretical—the "predictionist" cannot function.

    I am looking forward to following this thread. I think it will be especially hard to keep it on-topic given that it touches on so many neuralgic subjects which could lead us far afield of the OP.
  • Leontiskos
    5.1k
    This is something I thought while reading MacIntyre. Yes, I see what you're saying, but like Heidegger you're sort of inventing a whole mindset that is "pre-modern", and justifying it with many quotes -- but at the end of the day if you haven't spoken to people from the pre-modern era then, my brother in christ, you cannot make claims about how pre-modern people think no matter how many texts you read from that era.

    It elucidates how we think, but it may not be the panacea of problems contemporary philosophy faces.

    It looks soothing -- but ultimately when someone says that if we go back to some ancient or medieval thinker as the person who saw it all I think that we're kind of fibbing to ourselves.
    Moliere

    You don't seem to be engaging the OP at all.

    We're attempting to reconstruct the thoughts of people we can't talk to, yes.Moliere

    Such is philosophy. You do the same thing with the philosophers you appeal to and interpret.

    There is lots of substance in the OP. Why not address that substance instead of trying to undercut it with ad hominem gesturing towards Aristotle or Aquinas? Count spent a fair amount of time on this. I would want to honor that.

    I want to see some responses that show evidence that the OP was actually read. Your post doesn't manage that. It could be recycled for any Aristotelian-Thomistic OP, regardless of content. Therefore anyone who reads your post will gain no insight into the content of the OP, given that your post in no way reflects that content. They will only be able to infer that the OP involves the thought of Aristotle and Aquinas.
  • Moliere
    6.1k
    Which part?

    Is it enough to say

    "Modern philosophy has problems. These medieval thinkers didn't have these problems. This is because modern philosophy invented this problem for itself by stripping out all the thoughts which earlier thinkers relied upon in making such inferences. Therefore, we should adopt these earlier approaches, given the incredible progress knowledge has made -- there is a disconnect between ability, and these supposed modern problems that we can pass over by reading the older solutions" ?

    Does that demonstrate having read the OP?

    My thinking is with respect to underdetermination and its value -- what I read were some solutions to underdetermination based on a generalization of a few select authors rather than what I might say in favor of underdetermination, for instance. So I wanted some sort of reason why these are even appealing at all?

    For myself I don't feel a deep need to argue for underdetermination because to me it explains why we go through all the hoops we do in making scientific inferences -- we don't just see the object as it is, we frequently make mistakes, and go about looking for reasons to justify our first beliefs while discounting possibilities not on the basis of evidence, but because they do not fit. This is inescapable for any productive thought at all -- but it has the result that we only have a tentative grasp of the whole.

    Basically we don't need Hume's rendition of causation to point out that underdetermination is part and parcel to scientific practice: hence all the methodological hurdles one must overcome to be justified in saying "this is a scientific conclusion"; if it were something we could conclude without underdetermination then the scientists would be wasting their time, to my view.
  • Moliere
    6.1k
    Likewise, there are a lot of people who bemoan how scientific anti-realism and arguments for science coming down to sociology and power relations have been used to pernicious effect on public debates on vaccine safety, global warming, GMO crops, etc., and are looking for solutions to underdetermination here.Count Timothy von Icarus

    I think this is a mistake to draw these philosophies towards some sort of anti-scientific agenda. At least, not when I speak on them they're not -- more like I'm very interested in the truth of how science actually works, and I don't want the cartoon version but to really understand what's going on (and, in that pursuit, noting how the goal is itself almost infinite, if not fruitless, in that we never really finish philosophizing about science where we finally have The Answer, but it still provides insight)

    That is, there are many who see these primarily as problems to be overcome, hence, old solutions should be interesting.

    I mean, that's fair. I said above my position is to default "the other way", so it may just be that the article isn't addressed to me. I like digging out old ideas and trying them in new ways -- that's a time honored philosophical practice. I suppose it just doesn't appeal to me is all.
  • Relativist
    3.2k
    If we accept abductive reasoning (inference to best explanation on available evidence - IBE) as leading to rational beliefs, is there really a problem? Such beliefs will, of course, be underdetermined but that just means they don't comprise knowledge (in the strict sense).

    Epistemology should be of practical use in the world, and in the real world we are nearly always deriving conclusions from limited information. IBEs are the practical ideal.
  • T Clark
    15.2k

    One of the more graceless posts I’ve read here on the forum.
  • Count Timothy von Icarus
    4.1k


    This is a wonderful essay, eminently relevant. Its work in clearing away canards cannot be overestimated. Its research and accuracy are commendable. It is long yet worthwhile and readable.

    Thanks! :up:

    There is a good exchange on this point between Robert Pasnau

    So, I think Pasnau is right that the identity doctrine has, at least vis-à-vis Aquinas in particular, more often been used to deal with representationalism and subjective idealism. However, it is used explicitly to counter empiricist skepticism by a number of the Neoplatonists. Gerson has a good article on this appropriately titled: "Neoplatonic Epistemology." That empiricism and academic skepticism died out, in part perhaps because of these arguments, is why St. Thomas doesn't have them as major contenders to rebut in his epoch.

    I am not sure about the rhetorical strategy of continually expressing perplexity about the doctrine you are expounding on or its use by people you are criticizing. I think though that in this case it actually suggests a real confusion it probably doesn't mean to imply. The argument for why form in the intellect (the intellect's move from potency to actuality) cannot be unrelated to its causes comes from the idea that: a. every move from potency to act has a cause in some prior actuality; b. causes cannot be wholly unrelated (i.e. arbitrarily related) to their effects (completely equivocal agents) or else they wouldn't be causes in the first place and what we'd actually have is a spontaneous move from potency to act. Form is just that which makes anything actual to effect anything at all, so form is, in one sense, always present in all causes (granted there are analogical agents). Arguing for this doesn't require question begging and presupposing the doctrine, it requires upstream premises (I see now that Klima appears to have hit on this in more detail).

    I would want to add that the realism quandary is also internal to "predictionism." The one who predicts is attempting to predict ad unum (towards the one, actual, future outcome). Without that future-oriented determinacy—whether actual or theoretical—the "predictionist" cannot function.

    Yeah, that's true. Even seemingly very abstract and deflationary, formalistic approaches that make everything Bayesian have a sort of unresolved kernel of voluntarism in that the agent has some sort of purpose for predicting, or else they just collapse into mechanism.



    For myself I don't feel a deep need to argue for underdetermination because to me it explains why we go through all the hoops we do in making scientific inferences -- we don't just see the object as it is, we frequently make mistakes, and go about looking for reasons to justify our first beliefs while discounting possibilities not on the basis of evidence, but because they do not fit. This is inescapable for any productive thought at all -- but it has the result that we only have a tentative grasp of the whole.

    I hope this is not what you take the earlier approach to underdetermination to be, because that's certainly not what I was trying to convey. As noted earlier, underdetermination was acknowledged; rather, it is some of the more radical theses that flow from it that are contained. For instance, the move where "x is true" becomes merely "hooray for asserting x" seems fairly destructive to ethics and epistemology. Hence the point about "dressed up nihilism."

    The basic idea is that deception is always parasitic on reality (actuality) because what determines thought must always correspond to some prior actuality.

    So, consider the point about the apple. It's not denying that we can be fooled by fake apples or that some sort of sci-fi technology might be able to use EM stimulation to get us to experience seeing an apple. It's that both the fake apple and the stimulus contain the form of the apple. The "brain in the vat/evil demon" argument is generally trying to show that we have absolutely no (sure) veridical perceptions/knowledge and thus no grounds for actually saying how likely it is that we are so deceived. But the point here is that this absolute prohibition on meaningful knowledge doesn't hold up given some fairly straightforward assumptions about things not happening for no reason at all. Even the illusion must derive from something actual. The deceiver’s manipulation has to carry intelligible structure (form) from somewhere.

    Nonetheless, in theory, if we were brains in vats then all the biological species and weather phenomena, elements, etc. we know could be false creations that don't really exist outside some sort of "simulation." (I would just point out here that this is basically magic, not sci-fi, and magic tends to do damage to philosophy; that's sort of the point). The things we know could be compositions and divisions of other real natures that exist in the "real world" but not our "fake world." And this still seems to leave open a very extreme sort of skepticism. Yet it's not the totalizing skepticism of the original demon experiment, where there is no ground on which to stand to argue that this is implausible.

    Here, there is a related argument about the teleology of the rational faculties. The intellect seems to be oriented towards truth. If it weren't, then there would be no reason to believe anything, including the brain in the vat argument.

    Likewise, a common argument in early 20th century empiricism was that we cannot be sure that the universe wasn't created seconds ago along with all our memories (also from underdetermination). But this also rests on the assumption of either a spontaneous move from potency to actuality or else a voluntarist God who does arbitrary things (i.e., not the God of natural theology, but a sort of genie).

    Basically we don't need Hume's rendition of causation to point out that underdetermination is part and parcel to scientific practice: hence all the methodological hurdles one must overcome to be justified in saying "this is a scientific conclusion"; if it were something we could conclude without underdetermination then the scientists would be wasting their time, to my view.

    Well, if extreme forms of underdetermination are successful, the scientist is wasting their time. They cannot even know if they have actually run any of their experiments or what the real results are, because an infinite number of possibilities/experiences are consistent with their thinking the results are one thing when they really aren't. The Academics use phenomenological underdetermination to motivate a sort of nihilism.
  • Leontiskos
    5.1k
    Is it enough to say

    "Modern philosophy has problems. These medieval thinkers didn't have these problems. This is because modern philosophy invented this problem for itself by stripping out all the thoughts which earlier thinkers relied upon in making such inferences. Therefore, we should adopt these earlier approaches, given the incredible progress knowledge has made -- there is a disconnect between ability, and these supposed modern problems that we can pass over by reading the older solutions" ?

    Does that demonstrate having read the OP?
    Moliere

    No, not really. No mention of underdetermination or realism. You're basically assuming that the OP is about something that it doesn't claim to be about, hence the ad hominem nature. The OP is about underdetermination and realism. That's the core.

    My thinking is with respect to underdetermination and its value -- what I read were some solutions to underdetermination based on a generalization of a few select authors rather than what I might say in favor of underdetermination, for instance. So I wanted some sort of reason why these are even appealing at all?Moliere

    So you think Hume, Wittgenstein, Quine, Sextus Empiricus, etc., offer poor arguments for underdetermination? That's intelligible. What is your alternative argument for underdetermination?

    For myself I don't feel a deep need to argue for underdetermination because to me it explains why we go through all the hoops we do in making scientific inferences -- we don't just see the object as it is, we frequently make mistakes, and go about looking for reasons to justify our first beliefs while discounting possibilities not on the basis of evidence, but because they do not fit. This is inescapable for any productive thought at all -- but it has the result that we only have a tentative grasp of the whole.Moliere

    Okay, well that's a good start for an argument for underdetermination. :up:

    I would want to actually look at some of these premises you are alluding to. For example:

    1. We don't just see the object as it is
    2. We frequently make mistakes
    3. We frequently go about looking for reasons to justify our first beliefs
    4. We have only a tentative grasp of the whole
    5. Therefore, Underdetermination explains why we go through all the hoops we do in making scientific inferences

    I don't see how (5) follows from your premises. Here is something you might want to revisit regarding premises like 1, 2, or 4:

    This does not imply that we come to know everything about the actuality of the form. Indeed, we will never know everything about any sort of thing. As Aquinas famously put it: “all the efforts of the human intellect cannot exhaust the essence of a single fly.” Nonetheless we know what a fly is. We understand it. It is this phenomenological experience of understanding that is the key datum which epistemology is supposed to explain.Count Timothy von Icarus
  • Leontiskos
    5.1k
    If we accept abductive reasoning (inference to best explanation on available evidence - IBE) as leading to rational beliefs, is there really a problem? Such beliefs will, of course, be underdetermined but that just means they don't comprise knowledge (in the strict sense).

    Epistemology should be of practical use in the world, and in the real world we are nearly always deriving conclusions from limited information. IBEs are the practical ideal.
    Relativist

    This is a really interesting objection. Is an IBE underdetermined? Remember that the conclusion is not, "X is the explanation," but rather, "X is the best explanation." I actually don't see why underdetermination would need to attend IBEs.

    A significant part of this thread will turn on what exactly is meant by "underdetermined."
  • Count Timothy von Icarus
    4.1k


    This is a really interesting objection. Is an IBE underdetermined? Remember that the conclusion is not, "X is the explanation," but rather, "X is the best explanation." I actually don't see why underdetermination would need to attend IBEs.

    I think it depends on how far underdetermination is allowed to roll. If you pair these arguments, their reach extends far beyond scientific theories. The term is most associated with the underdetermination of scientific theories, but as noted in the OP it has been used for substantially broader effect.

    If some of these arguments go through, then the "best" explanation is not "the most likely to be true (as in, corresponding to reality)," but rather "the explanation I most prefer," or "the explanation society most prefers, given its customs."

    An inference to the best explanation normally relies on the security of some prior ideas. But what if causes are underdetermined? What if induction is out? What if theories cannot even be "explanations" in the conventional sense? Underdetermination of scientific theories seems to me like the most benign of these.



    I think it is a mistake to draw these philosophies towards some sort of anti-scientific agenda. At least, they're not anti-scientific when I speak on them -- more like I'm very interested in the truth of how science actually works, and I don't want the cartoon version but to really understand what's going on (and, in that pursuit, noting how the goal is itself almost infinite, if not fruitless, in that we never really finish philosophizing about science where we finally have The Answer, but it still provides insight)

    I didn't say they must lead that way, or even that they are designed to. I said that, historically, they absolutely have been used on both the right and the left to push such agendas. And yes, this is normally in a sort of corrupted, naive form, but some propagandists, radicals, and conspiracy theorists have a very good grasp on this stuff and have become quite adept at molding it to their causes. On the left, it's tended to be used more for things like casting doubt on all findings related to sex differences, or often the entire field of behavioral genetics.
  • Leontiskos
    5.1k
    That empiricism and academic skepticism died out, in part perhaps because of these arguments, is why St. Thomas doesn't have them as major contenders to rebut in his epoch.Count Timothy von Icarus

    That seems likely. :up:

    I am not sure about the rhetorical strategy of continually expressing perplexity about the doctrine you are expounding on or its use by people you are criticizing. I think though that in this case it actually suggests a real confusion it probably doesn't mean to imply. The argument for why form in the intellect (the intellect's move from potency to actuality) cannot be unrelated to its causes comes from the idea that: a. every move from potency to act has a cause in some prior actuality; b. causes cannot be wholly unrelated (i.e. arbitrarily related) to their effects (completely equivocal agents) or else they wouldn't be causes in the first place and what we'd actually have is a spontaneous move from potency to act. Form is just that which makes anything actual to effect anything at all, so form is, in one sense, always present in all causes (granted there are analogical agents). Arguing for this doesn't require question begging and presupposing the doctrine, it requires upstream premises (I see now that Klima appears to have hit on this in more detail).Count Timothy von Icarus

    I think this is good. I don't mean to derail the thread with those papers, but they are related to these central theses of your OP:

    1. Things do not happen “for no reason at all.” Things/events have causes. If something is contingent, if it has only potential existence, then some prior actuality must bring it into being. It will not simply snap into being of itself. Our experiences are contingent, thus they must be caused by something that is prior to them.

    2. Being is intelligible, and to be is to be intelligible. Every being is something in particular. That is, it has a form, an actuality, that is determinant of what it is (as well as the potential to change, explained by matter). This actuality determines how a thing interacts with everything else, including our sense organs and our intellects. If this was not the case, interactions would be essentially uncaused, and then there would be no reason for them to be one way and not any other (i.e. random).
    Count Timothy von Icarus

    I'd say that if people want to object to the OP then this is a good place to begin.

    At play is the classic clash of metaphysics-first vs epistemology-first paradigms, and it's fairly hard to find a rope such that both sides can grab on and start tugging in different directions. This is actually why I think folks like @Moliere are ultimately tempted to take the shortcut of relativizing Aristotle or Aquinas.

    So at the bird's-eye level, we have something like:

    A. If (1) and (2), then underdetermination is false
    B. (1) and (2)
    C. Therefore, Underdetermination is false

    ...that's at least the argument coming from your vantage point. There are arguments for ~C coming from other vantage points above.

    Broadly speaking, an argument from underdetermination is one that attempts to show that available evidence is insufficient to determine which of several competing theories is true. That is, many different theories might be able to explain the same evidence, hence any move to choose between theories must be “underdetermined,” i.e., not determined by the evidence. Within the class of such arguments, there are many that go a step further. These will often purport to show that for any body of evidence, there will always be an infinite number of different explanations that are consistent with that evidence.Count Timothy von Icarus

    Regarding the definition of underdetermination, we seem to have at least three options:

    • U1. In this particular circumstance, the available evidence is insufficient to determine which of several competing theories is true
    • U2. For every body of evidence, there will always be an infinite number of different explanations that are consistent with the evidence
    • U3. For every circumstance, the available evidence will be insufficient to determine which of several competing theories is true

    I think everyone agrees that (U1) does occur. (U2) may be a point of contention, yet (U3) is probably the point of contention that the average opponent would be more comfortable defending. So perhaps (U3) helps narrow the thesis in question, and if so, then I still think the arguments you have offered are decisive.
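    Read as quantified claims, these options differ mainly in scope. A rough gloss, offered only as a sketch (where E ranges over bodies of evidence and T over candidate theories or explanations):

    \begin{align*}
    \text{(U1)}\quad & \exists E\,\exists T_1,T_2\;\big(T_1 \neq T_2 \wedge E \text{ does not decide between } T_1 \text{ and } T_2\big)\\
    \text{(U2)}\quad & \forall E\;\text{there are infinitely many } T \text{ consistent with } E\\
    \text{(U3)}\quad & \forall E\,\exists T_1,T_2\;\big(T_1 \neq T_2 \wedge E \text{ does not decide between } T_1 \text{ and } T_2\big)
    \end{align*}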
  • Moliere
    6.1k
    Well, if extreme forms of underdetermination are successful, the scientist is wasting their time.Count Timothy von Icarus

    Hrrrmmm, I don't think so. But fair that I misread you, then -- in part at least. There's still something here that I can see that wasn't conveyed on my part.

    Basically my thought is that if anti-realism is true, that has no effect on the value of science. It'd be like saying that, because dancing is not really a thing, dancing is not valuable: no, the value question is separate from the descriptive question. If science doesn't "reveal reality", but rather makes us aware of which parts we are interested in manipulating, it will still chug along regardless of the philosophical interpretation of the science.

    I didn't say they must lead that way, or even that they are designed to. I said that, historically, they absolutely have been used on both the right and the left to push such agendas. And yes, this is normally in a sort of corrupted, naive form, but some propagandists, radicals, and conspiracy theorists have a very good grasp on this stuff and have become quite adept at molding it to their causes. On the left, it's tended to be used more for things like casting doubt on all findings related to sex differences, or often the entire field of behavioral genetics.Count Timothy von Icarus

    Mkay. Then I suppose I'd just say that if it's been used by both sides so has the "realist" side been mis-utilized by the same actors.

    All the various phrenologies which basically justify social hierarchy are what I have in mind there, or "race science" or eugenics.

    So I'd rather put the bad actors to one side since they'll use either argument that they see fit, but this does not then reflect upon the philosophy if we are treating it properly.

    Still thinking on a return to your OP, this is just what leaped out for now.
  • Moliere
    6.1k
    No, not really. No mention of underdetermination or realism. You're basically assuming that the OP is about something that it doesn't claim to be about, hence the ad hominem nature. The OP is about underdetermination and realism. That's the core.Leontiskos

    And the medievals are the ones who have a better solution to underdetermination and realism, yes? Is the outline that I gave of @Count Timothy von Icarus 's argument entirely wrong, just unrelated whatsoever?

    They acknowledge it, as Tim put it, but don't draw the radical conclusions.

    I'm sort of saying "Well, what if the radical conclusions are true, after all? Maybe it's the realist philosophy of science which is wrong, then" -- I'm a realist, but not a scientific realist, exactly. It's too provisional a discipline to draw metaphysical conclusions from, even if we'll want to pay attention to its limited conclusions while thinking about nature.


    I would want to actually look at some of these arguments you are alluding to. For example:

    1. We don't just see the object as it is
    2. We frequently make mistakes
    3. We frequently go about looking for reasons to justify our first beliefs
    4. We have only a tentative grasp of the whole
    5. Therefore, Underdetermination explains why we go through all the hoops we do in making scientific inferences
    Leontiskos

    Underdetermination is the theory that theories are not determined by the evidence, but rather are chosen in order to organize the evidence, and in some way are a selective pressure on which evidence is relevant to consider.

    1-4 are observations of human beings attempting to generate knowledge which fit with this belief -- basically an IBE, or really just a set of reasons for why I think underdetermination is a good default position. I.e. I don't have a deep quandary with denying causation as a metaphysical reality. That's because causation isn't real but how we decide to organize some body of knowledge.

    Closer, or does that just read as more of the same to you?
  • Leontiskos
    5.1k
    I'm sort of saying "Well, what if the radical conclusions are true, after all? Maybe it's the realist philosophy of science which is wrong, then"Moliere

    Sure, and I want to see arguments for and against underdetermination, for and against realism. It makes no difference whether Aristotle or Aquinas or Aladdin are the ones who made the good arguments. Focusing on individuals at the expense of the arguments is no help, especially at the very beginning of a thread.

    So let's look at your arguments:

    Underdetermination is the theory that theories are not determined by the evidence, but rather are chosen in order to organize the evidence, and in some way are a selective pressure on which evidence is relevant to consider.Moliere

    Okay, that is an interesting definition. :up:

    1-4 are observations of human beings attempting to generate knowledge which fit with this belief -- basically an IBE, or really just a set of reasons for why I think underdetermination is a good default position. I.e. I don't have a deep quandary with denying causation as a metaphysical reality. That's because causation isn't real but how we decide to organize some body of knowledge.

    Closer, or does that just read as more of the same to you?
    Moliere

    No, I think this is all quite helpful. We now have alternative definitions, arguments, considerations, objections, etc. :up:

    (I will come back to this when I have more time. I was mostly trying to expend some effort to try to keep the thread focused on the central theses and the arguments).
  • Count Timothy von Icarus
    4.1k


    C. Therefore, Underdetermination is false

    Yes, but just to clarify, and I realize the OP didn't do this very well because I thought it had gotten too long, the point isn't that underdetermination doesn't ever exist. It clearly does. If we do the Monty Hall problem, the evidence we get from seeing one goat doesn't determine that switching will get us to the prize (although it does determine that it is more likely to).
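    For what it's worth, the claim that the evidence makes switching more likely to win (roughly two thirds of the time) without determining the outcome can be checked numerically. A minimal Python sketch of the standard three-door setup, with an arbitrary trial count:

    import random

    def monty_hall_trial(switch: bool) -> bool:
        # One round of the standard game; returns True if the player ends with the prize.
        doors = [0, 1, 2]
        prize = random.choice(doors)
        pick = random.choice(doors)
        # The host opens a door that hides a goat and is not the player's pick.
        host_opens = random.choice([d for d in doors if d != pick and d != prize])
        if switch:
            pick = next(d for d in doors if d != pick and d != host_opens)
        return pick == prize

    trials = 100_000
    stay = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
    swap = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
    print(f"stay: {stay:.3f}, switch: {swap:.3f}")  # roughly 0.333 vs 0.667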

    It's the global sorts of underdetermination that are resolved -- the sorts that result from excluding the act of understanding as a valid datum of epistemology, an exclusion which also involves a denial of our having any real grasp of principles, such that all demonstrations are necessarily either merely from effects (and so open to underdetermination) or only hypothetical (and thus also underdetermined).

    Unfortunately, I think we'd have to look at each individual case to see if it is resolved, but it seems to me to affect many of those listed. For instance, consider the argument that there is no fact of the matter about what rules we are following because all our past actions are consistent with an infinite number of possible rules. This would also imply that there is never a fact of the matter about which rules nature is "following" (or which would describe how it behaves). For example, "gravity has always worked like x," is also consistent with "gravity spontaneously changes after a given time." If any sort of formal causality is axiomatically barred, then there is no way to deal with this.
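    To make the "gravity" example concrete, here is a toy Python sketch (the cutoff year and values are arbitrary illustrations): two incompatible rules that agree on every observation made so far, so past data alone cannot decide between them.

    G = 9.81  # m/s^2, the familiar value

    def gravity_constant(year: int) -> float:
        # Rule 1: gravity has always worked, and will always work, like this.
        return G

    def gravity_switches(year: int) -> float:
        # Rule 2: gravity works like this only until 2100, then spontaneously changes.
        return G if year < 2100 else 0.0

    observed_years = range(1600, 2025)
    # Both rules fit every observation to date...
    print(all(gravity_constant(y) == gravity_switches(y) for y in observed_years))  # True
    # ...yet they diverge on what has not yet been observed.
    print(gravity_constant(2200), gravity_switches(2200))  # 9.81 0.0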

    Obviously, these sorts of conclusions have a follow-on effect for science. It isn't just that "language is not meaningful because of metaphysical facts, but only on account of shared behavioral regularities related to social norms and public agreement," but this also holds for science (and of course, for any language science is expressed in). I think it's easy to see how such conclusions help lend weight to the parallel arguments from underdetermination that are used to argue for a redefinition of truth, one which, in its more deflationary forms, seems to me to come close to epistemic nihilism.

    So, we don't "resolve underdetermination tout court," rather, we resolve some specifically pernicious instances of its application. And then, when it comes to scientific theories, the problem of underdetermination is less concerning because our knowledge isn't just a sort of statistical model, which if radically altered, has "remade the world." When we shift paradigms, it isn't that the old world of trees, fire, stars, and sound is revealed to be illusory, and a new socially constructed world has taken its place. We are still dealing with the same actualities as apprehended through new conceptual means. And crucially, while there might be many ways to correctly describe something, these will be isomorphic. When underdetermination becomes more pernicious is when it denies this isomorphism, such that scientific findings become "sociology all the way down" or "power struggles (will to power) all the way down."



    Basically my thought is that if anti-realism is true, that has no effect on the value of science. It'd be like saying that, because dancing is not really a thing, dancing is not valuable: no, the value question is separate from the descriptive question. If science doesn't "reveal reality", but rather makes us aware of which parts we are interested in manipulating, it will still chug along regardless of the philosophical interpretation of the science.

    I think you're misunderstanding what I mean by "extreme forms" here. I don't mean anti-realism, but rather those sorts of "Boltzmann brain" type arguments that conclude that it is more likely, or just as likely, that the world will dissolve at any moment or radically alter its behavior, as that it will maintain its reliable form. This implies that science isn't even likely to be predictive or "useful" on any consistent timescale, and I don't see how that doesn't make it a waste of time.

    If science doesn't "reveal reality", but rather makes us aware of which parts we are interested in manipulating it will still chug along regardless of the philosophical interpretation of the science.

    IDK, my reading would be that denials of any knowable human good ("moral/practical anti-realism," which is often aided by other forms of anti-realism) have tended to be destructive to politics, applied science, and ethics. That a key concern of contemporary politics, and a constantly recurring motif in our media, is that our technology will drive our species extinct or result in some sort of apocalypse or dystopia because it is "out of anyone's control," suggests to me a fundamental problem with the "Baconian mastery of nature" when combined with anti-realism about human ends and the ends of science. If the aim of science is to improve our causal powers, but then we are also driven towards a place where we are largely silent on ends, that seems like a recipe for disaster, the sort of situation where you get things like predictable ecological disasters that will affect generations of future people but which are nonetheless driven on largely by unrestrained and ultimately unfulfilling appetites.

    Mkay. Then I suppose I'd just say that if it's been used by both sides so has the "realist" side been mis-utilized by the same actors.

    Phrenology was discredited because it was thought to be false. But if "true" and "false" are themselves just social endorsements, then truth cannot arbitrate between racist, sexist, etc. scientific theories. So, sure, both forms are open to abuse, but only one can claim that abuse isn't actually abuse, and that all science is about power struggles anyhow. If science is really just about power or usefulness, then there is strictly speaking nothing wrong about declaring sui generis fields like "Jewish physics" just so long as it suits your aims and gets you what you want.
  • Count Timothy von Icarus
    4.1k


    And the medievals are the ones who have a better solution to underdetermination and realism, yes? Is the outline that I gave of @Count Timothy von Icarus 's argument entirely wrong, just unrelated whatsoever?

    Well, it's not actually my main point. Only the last third or so is about a particular solution. I would summarize it this way.

    Arguments from underdetermination are extremely influential in contemporary philosophy.

    They have led to many radical, and seemingly skeptical theses.

    These theses are perhaps more radical than we today recognize, when seen from the perspective of Enlightenment and pre-modern prevailing opinion.

    These types of arguments were not unknown in the past, and were indeed often used to produce skeptical arguments.

    The tradition most associated with these arguments, ancient Empiricism, sought skepticism on purpose, as a way to attain ataraxia.

    Thus, we should not be surprised that borrowing their epistemology leads to skeptical conclusions.

    Hence, if we do not like the skeptical conclusions, we should take a look at the epistemic starting points that lead to them.

    Indeed, if an epistemology leads to skepticism, that might be a good indication it is inadequate.

    The Thomistic response is given as one example of how these arguments used to be put to bed. I use it because I am familiar with it and because the Neoplatonist solution is quite similar. (But the Stoics also had their response, etc.).

    I do think that solution is better, but the point isn't to highlight that specific solution, but rather the genealogy of the "problem" and how it arises as a means of elucidating ways it might be resolved or else simply understanding it better.

    ---

    Had I more space, I might suggest some Indian thinkers here. They have the idea that the sensible world is indeed, in an important sense, illusory (maya). However, they do not see this as barring access to the knowledge that really matters, which grounds our approach to happiness and ethics, or even to a sort of first principle. Likewise, the "arbitrary world" is able to be eliminated. This also has to do with their starting points, which are in some ways quite similar to the Neoplatonists. So skepticism also loses its bite in these contexts.
  • Moliere
    6.1k
    :up:

    I think you're misunderstanding what I mean by "extreme forms" here. I don't mean anti-realism, but rather those sorts of "Boltzmann brain" type arguments that conclude that it is more likely, or just as likely, that the world will dissolve at any moment or radically alter its behavior, as that it will maintain its reliable form. This implies that science isn't even likely to be predictive or "useful" on any consistent timescale, and I don't see how that doesn't make it a waste of time.Count Timothy von Icarus

    I wouldn't propose radical skepticism, but also it's not a possibility I feel the need to deny. It is, after all, logically possible -- it's just entirely irrelevant to the task at hand.

    Generally I treat radical skepticism as a special case rather than a case we generalize from, except for the cases where a philosopher is purposefully arguing for or utilizing it towards some other philosophical question (so, Descartes and Hume are the "good" kind of radical skeptics; the freshman philosophy student who just heard about the possibility of solipsism isn't -- rather, that's a sort of "rite of passage" that all people interested in philosophy bumble over)

    Basically I think such arguments are sophomoric, in the literal rather than pejorative sense, and someone would have to present a radical thesis to make it credible, to my mind; i.e. the "default" position isn't radical skepticism, and so it isn't so worrying. Sure it's logically possible, but so are a host of irrelevancies just like it. Where's the bite?

    IDK, my reading would be that denials of any knowable human good ("moral/practical anti-realism," which is often aided by other forms of anti-realism) have tended to be destructive to politics, applied science, and ethics. That a key concern of contemporary politics, and a constantly recurring motif in our media, is that our technology will drive our species extinct or result in some sort of apocalypse or dystopia because it is "out of anyone's control," suggests to me a fundamental problem with the "Baconian mastery of nature" when combined with anti-realism about human ends and the ends of science. If the aim of science is to improve our causal powers, but then we are also driven towards a place where we are largely silent on ends, that seems like a recipe for disaster, the sort of situation where you get things like predictable ecological disasters that will affect generations of future people but which are nonetheless driven on largely by unrestrained and ultimately unfulfilling appetites.Count Timothy von Icarus

    Heh, this is something we're wig-wamming our way about here because it seems we both believe things like "it's a good idea to talk about ethics, especially with respect to what science does" and "Jewish science is a pseudo-science", but we keep on reading these bits of evidence towards our respective views :D

    All to be expected, but I want to say that I think it possible to be a skeptic towards scientific realism and realize it's important to direct science ethically -- in fact, because there's no Architectonic of Science that one must follow, we are free to modify our practices to fit with our ethical demands.

    I think there's a fundamental problem with reducing reality to science, and with prioritizing the mastery of nature in our understanding of what science does. But then this might be something of an aside with respect to underdetermination. (Heh, the rhetorical side of me thought: in fact, because underdetermination is true we should see that science's activity is a direct result of our ethical commitments rather than an arche-method of metaphysical knowledge that's value-free.)

    It's descriptive, but not value-free, if that makes sense. Science is always interested for some reason, even if that reason is "I just think snails are cool and like to study their behavior because they make me feel happy when I'm around them"

    Phrenology was discredited because it was thought to be false. But if "true" and "false" are themselves just social endorsements, then truth cannot arbitrate between racist, sexist, etc. scientific theories. So, sure, both forms are open to abuse, but only one can claim that abuse isn't actually abuse, and that all science is about power struggles anyhow. If science is really just about power or usefulness, then there is strictly speaking nothing wrong about declaring sui generis fields like "Jewish physics" just so long as it suits your aims and gets you what you want.Count Timothy von Icarus

    Phrenology was always a pseudo-science. It has all the characteristics -- the theories follow the form of confirmation and don't try to disconfirm them. They held some social significance which allowed people to justify their position or actions to others. They were vague and easy to defend in light of evidence.

    Now I'll go this far: If underdetermination, as a theory, leads us to be unable to differentiate between science and pseudo-science, and we believe there is such a thing as pseudo-science (I do), then we're in a pickle.

    But just as you have a theory which takes care of underdetermination within realist parameters, I'd be able to defend our ability to spot pseudo-science on the social model of the sciences -- i.e. it's not just me, but all the scientists that say what science is. "Jewish Science" wasn't even as clear as phrenology; it was definitely a racist category for expelling Jewish scientists from the academy. That it resulted in expelling people whom we still consider scientists -- like Bohr -- is an indication that it's not a science even if "Jewish Science" happened to get the aims desired after.

    I.e. though underdetermination complicates the question, it's still addressable by my lights without a realist science.

    Arguments from underdetermination are extremely influential in contemporary philosophy.

    They have led to many radical, and seemingly skeptical theses.

    These theses are perhaps more radical than we today recognize, when seen from the perspective of Enlightenment and pre-modern prevailing opinion.

    These types of arguments were not unknown in the past, and were indeed often used to produce skeptical arguments.

    The tradition most associated with these arguments, ancient Empiricism, sought skepticism on purpose, as a way to attain ataraxia.

    Thus, we should not be surprised that borrowing their epistemology leads to skeptical conclusions.

    Hence, if we do not like the skeptical conclusions, we should take a look at the epistemic starting points that lead to them.

    Indeed, if an epistemology leads to skepticism, that might be a good indication it is inadequate.

    The Thomistic response is given as one example of how these arguments used to be put to bed. I use it because I am familiar with it and because the Neoplatonist solution is quite similar. (But the Stoics also had their response, etc.).

    I do think that solution is better, but the point isn't to highlight that specific solution, but rather the genealogy of the "problem" and how it arises as a means of elucidating ways it might be resolved or else simply understanding it better.
    Count Timothy von Icarus

    Ok, fair. It may just not be for me, then -- here I'm saying "but I like the skeptical conclusions", and so the rest kind of just doesn't follow. The motivation isn't there for me.

    But you were talking about a lot of the things I think about which is why I replied. I see I missed a good chunk of the essay just because of what grabbed my attention, though.
  • Leontiskos
    5.1k
    So, we don't "resolve underdetermination tout court," rather, we resolve some specifically pernicious instances of its application. And then, when it comes to scientific theories, the problem of underdetermination is less concerning because our knowledge isn't just a sort of statistical model, which if radically altered, has "remade the world." When we shift paradigms, it isn't that the old world of trees, fire, stars, and sound is revealed to be illusory, and a new socially constructed world has taken its place. We are still dealing with the same actualities as apprehended through new conceptual means. And crucially, while there might be many ways to correctly describe something, these will be isomorphic. When underdetermination becomes more pernicious is when it denies this isomorphism, such that scientific findings become "sociology all the way down" or "power struggles (will to power) all the way down."Count Timothy von Icarus

    Yes, I think that is a good elaboration. :up:

    Hence, if we do not like the skeptical conclusions, we should take a look at the epistemic starting points that lead to them.

    Indeed, if an epistemology leads to skepticism, that might be a good indication it is inadequate.
    Count Timothy von Icarus

    Right, and not to get ahead of things, but it is this crucial move that is especially interesting (and is also seen in thinkers like Nathan Jacobs). We eventually come to this point where one must decide whether they prefer the skeptical premises even at the expense of the absurd conclusions. Your "good indication" sums up the complexity of this, because this approach is not demonstrative. That's not necessarily a problem, and perhaps there is no strictly demonstrative alternative, but it is worth noting how the inference at this crucial juncture is rather sui generis (and could even perhaps be construed as coherentist, depending on one's appraisal of the reductio, and of reductio arguments in general).
  • Relativist
    3.2k
    This is a really interesting objection. Is an IBE underdetermined? Remember that the conclusion is not, "X is the explanation," but rather, "X is the best explanation." I actually don't see why underdetermination would need to attend IBEs.Leontiskos

    I see your point, that by labelling X an IBE, underdetermination may not apply. Labeling it the explanation would be underdetermined.

    But I suggest that in the real world, we operate on beliefs, which are often formed by inferring to the best explanation from the facts at hand (background beliefs will unavoidably affect the analysis). We make errors, of course, but a proper objective is to minimize these errors (more on this, below).

    I think it depends on how far underdetermination is allowed to roll. If you pair these arguments, their reach extends far beyond scientific theories. The term is most associated with the underdetermination of scientific theories, but as noted in the OP it has been used for substantially broader effect.

    If some of these arguments go through, then the "best" explanation is not "the most likely to be true (as in, corresponding to reality)," but rather "the explanation I most prefer," or "the explanation society most prefers, given its customs."
    Count Timothy von Icarus
    Forgive me if I misunderstand, but this sounds a bit fatalistic to me - in that it seems to imply the quest for truth is irrelevant or hopeless. I suggest that we have a deontological duty to minimize false beliefs and maximize true beliefs. To do otherwise is irrational, and this includes embracing an explanation simply because one prefers it (there are exceptional cases where this might be appropriate, but I'll leave that aside).

    Even if we were perfect at this, the resulting beliefs would still be "underdetermined", but ideally they will be our best explanation for the set of information we have. There will necessarily be subjectivity to it (we each make a judgement, and it will be based on our background beliefs - many formed the same way, others the product of learning). This is a proper objective for critical thinking.
  • Leontiskos
    5.1k
    I see your point, that by labelling X an IBE, underdetermination may not apply. Labeling it the explanation would be underdetermined.Relativist

    Yes, that's right.

    But I suggest that in the real world, we operate on beliefs, which are often formed by inferring to the best explanation from the facts at hand (background beliefs will unavoidably affect the analysis).Relativist

    Well, do I believe that every one of my beliefs is "the explanation," or do I believe that some of my beliefs are "the best explanation"? I think we believe that some of our beliefs are only IBEs, and therefore we believe that some of our beliefs are more than IBEs.

    Even if we were perfect at this, the resulting beliefs would still be "underdetermined", but ideally they will be our best explanation for the set of information we have.Relativist

    I would basically argue that some theory which is believed to be underdetermined is not believed. So if I think there are only two theories to account for a body of evidence and that both are exactly 50% likely to be true, then I psychologically cannot believe one over the other.

    So I think we would need to get more precise on what we mean by "underdetermined." For example, why do you think "the resulting beliefs would still be 'underdetermined'"? Does that mean that the person might change their mind when they reconsider the evidence from a different point of view? If so, then I would say that that possibility to change one's mind (and one's ratio or angle of perspective) is different from one's belief being underdetermined. On Aquinas' view, that form of 'underdetermination' is essential for free will. Apparently it is possible to read the complement of "underdetermination" as fatalism or determinism.
  • Moliere
    6.1k
    Sorry for double posting @Count Timothy von Icarus but I wanted to make sure you saw this thought:

    I ought to say that underdetermination, to my mind, is at odds with a strictly empirical epistemology -- it's more of a rationalism of empiricism. "Yes, we have to go and see, but..."

    It highlights that the mind is at least partly responsible for our knowledge -- we don't have a blank slate which is imprinted upon by reality, à la Locke.
  • Relativist
    3.2k
    I would basically argue that some theory which is believed to be underdetermined is not believed. So if I think there are only two theories to account for a body of evidence and that both are exactly 50% likely to be true, then I psychologically cannot believe one over the other.

    So I think we would need to get more precise on what we mean by "underdetermined.
    Leontiskos
    Agreed that we need to establish what "underdetermined" means when we're talking about beliefs. I've been treating "underdetermined" as any belief that is not provably true (i.e. determined=necessarily true). Under this extreme definition, nearly every belief we have is underdetermined. I also agree that we ought not to believe something that has a 50% chance of being false.

    Most of our beliefs are not provably true, so I have labelled them IBEs. I don't see how else one could claim to have a warrant to believe it. So if you say your belief in X is "more than an IBE" - is it really, if it's not provably true? Or is it still an IBE, but with strong support?
  • Leontiskos
    5.1k
    - Okay, thanks for that. Makes sense.

    Most of our beliefs are not provably true, so I have labelled them IBEs.Relativist

    So would you say that some of our beliefs are provably true?
    (I would say that, but I am just verifying that you would also say such a thing.)
  • Leontiskos
    5.1k
    Underdetermination is the theory that theories are not determined by the evidence, but rather are chosen in order to organize the evidence, and in some way are a selective pressure on which evidence is relevant to consider.Moliere

    This sounds to me a bit like post hoc rationalization, as if one is going to decide on a theory and then allow their theory to be "a selective pressure on which evidence is relevant to consider."

    The difficulty here is that you seem to be redefining "theory" to be something that precedes rather than follows after evidence, and such is a very strange redefinition. For example, on this redefinition someone might say, "I have a theory...," and this statement would be indistinguishable from, "I have a prejudice..." The basic problem is that 'theory' and 'prejudice' do not mean the same thing. We distinguish between reasoning and post hoc rationalization, and yet your definition seems to have made such a distinction impossible. It seems to have made impossible a distinction between "following the evidence where it leads," and, "engaging in selection bias in favor of some a priori theory."

    ---

    Now I'll go this far: If underdetermination, as a theory, leads us to be unable to differentiate between science and pseudo-science, and we believe there is such a thing as pseudo-science (I do), then we're in a pickle.Moliere

    I think this is one of the places where the problems become more apparent. For example, if underdetermination requires that there be multiple possible and inadjudicable theory-candidates, and nevertheless pseudoscientific theories do not belong to this set of viable candidates, then there must be some real way to separate out the wheat from the chaff. Even if one thinks this is possible, they have already abandoned full-throated underdetermination in favor of an underdetermination that is nevertheless determinate vis-a-vis determining which theories are scientific and which are pseudoscience. They are doing something akin to "stance underdetermination," which is a species of an "underdetermined subset theory." I.e., "Within this specific subset a quasi-global underdeterminacy holds, but apart from that subset it does not hold." All of these theories struggle mightily to say how or where the specific subset ends and the complement-set begins. The task is so difficult that few such proponents even really attempt to answer that challenge.
  • Count Timothy von Icarus
    4.1k


    Now I'll go this far: If underdetermination, as a theory, leads us to be unable to differentiate between science and pseudo-science, and we believe there is such a thing as pseudo-science (I do), then we're in a pickle.

    But just as you have a theory which takes care of underdetermination within realist parameters, I'd be able to defend our ability to spot pseudo-science on the social model of the sciences -- i.e. it's not just me, but all the scientists that say what science is. "Jewish Science" wasn't even as clear as phrenology; it was definitely a racist category for expelling Jewish scientists from the academy. That it resulted in expelling people whom we still consider scientists -- like Bohr -- is an indication that it's not a science even if "Jewish Science" happened to get the aims desired after.

    What's the argument here: "There is no problem with identifying pseudoscience because in these examples scientists came around to calling out the pseudoscience?"

    Why exactly will science always tend towards correctly identifying pseudoscience? Will this always happen? What's the mechanism?

    Anyhow, on some anti-realist views, legitimate science just is whatever current scientists say it is. Science has not always been quick to identify pseudoscience. Lysenkoism wasn't considered pseudoscience within the Soviet bloc. Scientists said it was legitimate. Millions of people died before it was rejected. Arguably, all simply pointing to some infamous cases where pseudoscience was eventually identified does is show the norms of science change.

    Some ideas identified as pseudoscience (largely for being wholly unfalsifiable and dogmatic) were in wide currency for lifetimes (e.g. aspects of Marxist political economy, aspects of Freudian psychoanalysis, and even aspects of liberal capitalist political-economy have been accused as such, while graphology is another long-lived example with less import). Hence, I am skeptical of the idea that scientists will just know without some sort of notion of how they would know.

    The 19th century was rife with pseudoscience, and I think developments in scientific methods and the philosophy of science played a significant role in curbing this.



    I think it is a pretty dismal view. So too for the Nietzschean idea that the desire for truth is "just one among all the others." But underdetermination of scientific theory is only of ancillary relevance here. Arguments for a more widespread skepticism or relativism I am familiar with tend to instead rely on a more global underdetermination of things like all rules/rule-following, all causal/inductive reasoning, or the underdetermination of any sort of solid concept/meaning that would constitute the possession of knowledge, which is a step up (or down) from simple scientific underdetermination.
  • Relativist
    3.2k
    So would you say that some of our beliefs are provably true?Leontiskos
    Good question. We have beliefs that follow necessarily from other beliefs/facts, so they're provable in that sense. It seems inescapable that we depend on some foundational beliefs. So nothing can be proven without some sort of epistemological foundation. What are your thoughts?