• Tom Storm
    9.2k
    So would you be more attracted to 'thinking is occurring (as a presupposition), therefore I probably am'?universeness

    Good question. I guess I am ok with 'I think therefore I am' as a presupposition. I'm just exploring this: if Descartes can imagine a reality wherein an evil demon has created an illusion of a world around us, then why did he assume the thoughts he experienced were his, or that they were thoughts? Could an evil demon not also broadcast thoughts into one's mind? Might we in fact be many people in one body, etc...

    Many people experience thoughts as someone else's in their heads. This would be enough to doubt the 'I am.'

    It's not a huge point with me but it's kind of interesting.
  • Mww
    4.9k


    Oh, so….I think, therefore thinking is occurring? I get it, but that reflects on an earlier note about tautological truths and minimal relations, in that the switch wouldn’t lead to a productive philosophy. He wasn’t interested in the thinking, which was never in doubt, but only in that which thinks, and that as something other than object.
  • flannel jesus
    1.8k
    All of the alternatives you listed seem to me to fit well within the "I am" category. They don't seem to be actual alternatives to "I think, therefore I am"; they are just counterintuitive notions of circumstances that correspond to "...I am".
  • universeness
    6.3k
    Many people experience thoughts as someone else's in their heads. This would be enough to doubt the 'I am.'Tom Storm

    I think 'I' can be perceived as singular or as a collective, without destroying the independence or individuality of me, myself and 'I' as an individual thinking agent with intent and the ability to create meaning and purpose. This is what cogito ergo sum indicates to me. Divine hiddenness is evidence that no god exists that can demonstrate cogito ergo sum, and 'I am that I am' is, in my opinion, a very poor, irrational and incoherent competitor to cogito ergo sum.

    A computer program is a singular program, but it's also a collection of code instructions. Each instruction is singular but is a collection of binary digits. Each binary digit is singular but represents a range of voltages (in the case of a BInary digiT 1), or the absence of any voltage over a collection of moments or durations of time (in the case of a BInary digiT 0). A human thought is singular but also involves a neurological process comprising many separate events. Each event is a collective of ......., and on and on we go.
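
    As a minimal, purely illustrative sketch (Python, with made-up instruction names), that decomposition can be made concrete: a small 'program' treated as a collection of instructions, and each instruction unpacked into the binary digits that encode it.

    # Illustrative only: a "program" as a list of hypothetical instruction strings.
    program = ["LOAD 1", "ADD 2", "STORE 3"]  # one program, yet a collection of instructions
    for instruction in program:
        # each instruction is one thing, yet a collection of binary digits
        bits = " ".join(format(byte, "08b") for byte in instruction.encode("ascii"))
        print(f"{instruction!r} -> {bits}")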

    All such paths must eventually lead to quarks, quantum fluctuations, and a regress back to a cause for the origin of the universe, which an individual can choose to push back, much, much further, into cyclical time aeons, using something such as Roger Penrose's CCC (conformal cyclic cosmology). You can also just decide that the posit of an eternal cosmos or 'energy' is the only final solution. If you want to insist that is what god is, then I, as an atheist, am ok with that. I would accept that rather meaningless label. It's only when someone insists that such a god is an absolute thinking agent with intent, that I start to accuse them of irrational thinking.
  • Vera Mont
    4.4k
    I think you're on your own there.Isaac

    And here. OK
  • Sam26
    2.7k
    In logic there are two kinds of arguments, inductive and deductive. An inductive argument is either strong or weak based on the strength of the evidence, so the conclusion of an inductive argument is more or less likely (probable) based on the strength of the evidence/reasons. The point is that the conclusion doesn't follow with absolute certainty. However, the conclusion may follow with a very high degree of certainty. So, again, the conclusion isn't absolute, but it's still considered knowledge if the evidence is strong enough. Most of what we know falls into this category, including science.

    Deductive arguments, on the other hand, have conclusions that follow with absolute certainty if the argument is sound (i.e., it has true premises and is valid). Deductive arguments are proofs in the strict sense: their conclusions follow with absolute necessity if they are sound. So yes, we can know things with absolute certainty.
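
    For illustration, a standard textbook-style pair of examples (added here, not from the post): the deductive argument 'All whales are mammals; Moby Dick is a whale; therefore Moby Dick is a mammal' is valid, and if its premises are true the conclusion cannot fail. The inductive counterpart, 'every whale examined so far has been a mammal, so the next one will be too', is strong but only probable.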

    Finally, it's not a matter of knowing that we exist (Descartes' cogito). It's a brute fact that's not in need of justification. What would it mean to even doubt one's existence? Explaining this would take us into Wittgenstein.
  • LuckyR
    522
    Well the term "absolute" is relative, as it is compared to a reference standard. The question is: who gets to decide what is the standard? Thus the question devolves into a question of perspective.
  • Sam26
    2.7k
    There are other uses of the concept 'absolute', but it seems pretty clear to me what is meant by absolute certainty. We're referring to an epistemological concept, not some use apart from epistemology. Although, given the many theories of epistemology, I'm sure there are other uses. I'm defining absolute certainty as something that necessarily follows, i.e., something we know with 100% certainty as opposed to some probability-based piece of knowledge (inductive reasoning).

    Who decides what the standard is? Language users decide what our concepts mean based on how we use them in a variety of contexts. There is no committee or person that decides, unless there is a new discovery that requires a new concept. It's not just a matter of perspective; i.e., I can't just use words any way I want, although today people think they can.
  • creativesoul
    12k
    The evidence that currently exists which refutes and/or falsifies the claim that "your brain functions separately/independently from mine" is the very words you used. Language bridges the gap between your brains. It connects them. Connected things are neither separate nor independent.
    — creativesoul

    A computer can act forever, as a stand alone device. A human brain can also function as a completely stand alone device (hermitical human). You can connect computers together in a network by wired or wireless means and allow them to communicate, via language/code. Human brains can also network via language/code, yes. But, networking is optional, and is not evidence that refutes the existence of 'I.'
    universeness

    False analogy. Irrelevant.

    Humans are not computers. Boolean logic is not equivalent to native tongues/common languages. Common language acquisition is not optional. So, the comparison is a false analogy on its face. That's enough, really, to dismiss the counter you offered.

    There is no "I" without common language. There is no common language without shared meaning. There is no shared meaning without a plurality of language users. There is no plurality of users without others. Hence, there is no "I" without others. There is no "I" without a belief system replete with self-identification stemming from common language use.

    None of this refutes the existence of "I", nor was I trying to (hence, I prefixed the original objection by saying it was completely beside the point of the ongoing quibble it was dissected from). Rather, this is only meant to help you recognize that the statement "your brain functions separately/independently from mine" is false on its face. It doesn't. It cannot. It's impossible, because you cannot unlearn common language while continually using it. You cannot 'disconnect' all of the meaningful correlations that you've long since drawn between language use and other things, including the use of "I" and yourself.

    All this only to say that our brains do not function separately/independently from each other. Language bridges the spatiotemporal gap with shared meaning, shared belief, shared thought, shared understanding. If your brain functioned separately and independently of every other brain, you would not even have the capability to say so.
  • Janus
    16.5k
    Finally, it's not a matter of knowing that we exist (Descartes cogito). It's a brute fact that's not in need of justification. What would it mean to even doubt one's existence? Explaining this would take us into Wittgenstein.Sam26

    As I understand it, Kant asserted that every thought is, implicitly at least, an "I think", but he does not take this to entail that the I is something, a substantial entity, that is itself something more than a thought. Kant saw the I as a kind of master thought that is implicit in all the others.

    It is not a matter of doubting our own existence, but of knowing what we are. The most immediate certainty is that there is thought, sensation, feeling, experience. It does not follow that there is any substantial entity thinking, sensing, feeling, experiencing.
  • Tom Storm
    9.2k
    It is not a matter of doubting our own existence, but of knowing what we are. The most immediate certainty is that there is thought, sensation, feeling, experience. It does not follow that there is any substantial entity thinking, sensing, feeling, experiencing.Janus

    Nice - that's what I was getting at.
  • Wayfarer
    22.8k
    It does not follow that there is any substantial entity thinking, sensing, feeling, experiencing.Janus

    What does 'substantial' bring to 'entity' in this statement? Recall that the Aristotelian term translated as 'substance' was 'ousia', which is much nearer in meaning to 'being' than what we normally mean by 'substance' ('a material with consistent qualities'). So does this mean that there's no being who thinks, senses, feels, etc.?

    Kant saw the I as a kind of master thought that is implicit in all the others.Janus

    Isn't this where Kant's theory of transcendental apperception comes in? Which is designated in Kant as the transcendental ego, and was also accepted by Husserl.
  • Mikie
    6.7k


    There are no absolutes; I’m absolutely certain of it. :wink:
  • Janus
    16.5k
    Well there are conceptual or abstract entities: for example numbers or generalities. An entity as I use the term is anything identifiable, whether concrete or abstract.

    'Being' for me is a verb, not a noun, an activity or process, not a substance. So be-ing is an activity that goes along with thinking, feeling, experiencing.

    Isn't this where Kant's theory of transcendental apperception comes in? Which is designated in Kant as the transcendental ego, and was also accepted by Husserl.Quixodian

    Right, but I don't accept the idea myself: I think it is underdetermined, and that it seems more plausible to think that there is a primordial sense of self associated with the body's perceived difference from the rest of the world, and that sense gets transformed in thought into the "master" thought of "I".

    The other point is that the idea of an actual transcendental ego is meaningless without thinking of it as a transcendent substance, which would take us back to Cartesian dualism; and if we think of it as just an idea, then it is no different than the idea of a "master-idea-as-self". Such an idea would then be transcendental only in the sense that it is not empirically observable, but then no thoughts are, so in that sense all thoughts would be transcendental.

    :cool:
  • universeness
    6.3k
    False analogy. Irrelevant.creativesoul
    It's better to offer your argument and your evidence before you state your conclusive opinions.

    Humans are not computers.creativesoul
    Computers are an attempt to simulate/emulate the human brain.

    Boolean logic is not equivalent to native tongues/common languages.creativesoul
    In what way? Based on what evidence?

    Common language acquisition is not optional. So, the comparison is a false analogy on its face. That's enough, really, to dismiss the counter you offered.creativesoul
    I stated that networking is optional, not common language acquisition. Don't accuse me of a false analogy I did not make and that you just made up.

    There is no "I" without common language.creativesoul
    So to you, the deaf, dumb and blind kid has no 'I' before they learn to communicate through touch?
    Helen Keller had no inner notion of an 'I' identity before she was taught to communicate through a common language of touch? Is that what you think?

    There is no common language without shared meaning. There is no shared meaning without a plurality of language users. There is no plurality of users without others. Hence, there is no "I" without others. There is no "I" without a belief system replete with self-identification stemming from common language use.creativesoul
    I disagree. If I were placed here at birth and maintained by a lifeless system until I was able to take care of myself, and I never experienced or communicated with another human in my life, then I think I would still be able to experience an 'I' identity, as different from the flora and non-human fauna around me.

    None of this refutes the existence of "I",creativesoul
    We agree on that.

    Rather, this is only meant to help you recognize that the statement "your brain functions separately/independently from mine" is false on its face. It doesn't. It cannot. It's impossible, because you cannot unlearn common language while continually using it. You cannot 'disconnect' all of the meaningful correlations that you've long since drawn between language use and other things, including the use of "I" and yourself.creativesoul
    Well, thanks for 'trying to help me,' in the way you suggested, but I think your arguments are incorrect for the reasons I have already given.

    All this only to say that our brains do not function separately/independently from each other. Language bridges the spatiotemporal gap with shared meaning, shared belief, shared thought, shared understanding. If your brain functioned separately and independently of every other brain, you would not even have the capability to say so.creativesoul
    Again, incorrect, imo, for the reasons I have already given.
  • universeness
    6.3k
    There are no absolutes; I’m absolutely certain of it.Mikie

    Paradox is such fun!
  • LuckyR
    522


    Exactly. When it comes to human opinions, being certain is about as meaningful as the amount of effort it takes to say "oops" when what one is certain about is shown to be in error.
  • Pantagruel
    3.4k
    Because you do know stuff. Like which drawer your socks are in and what your phone number is and occasionally even where your keys are. It takes training in philosophy to deny this. And even more philosophy to learn otherwise.Banno

    Absolutely. Why do we have to know absolutely? I personally start from the (pretty obvious) assumption that 'everybody knows something.' That people know things is evident. The complications arise when we try to systematize what we know in an attempt thereby to know more. Sometimes it works: improved theoretical knowledge can lead to improved practical knowledge. The best example of this is the periodic table of the elements.
  • HarryHarry
    25
    There are no absolutes; I’m absolutely certain of it.Mikie
    There is an absolute.
    But I don't know anything about it.
  • Fooloso4
    6.2k
    Because you do know stuff ... It takes training in philosophy to deny this.Banno

    This is similar to the affliction many suffer when they first read psychology and convince themselves that they have various dire psychological disorders.

    Socratic skepticism became a victim of its own success. On the one hand, contrary to what Plato's Socrates says, some come to believe that we know nothing, but others come to believe there is a realm of transcendent knowledge, and still others that the problem is methodological, that with the right method all will be revealed.
  • Banno
    25.3k
    There are a few ways to not know where your socks are kept. One is pragmatism, in which the location of your sock drawer can never be known, but only approximated asymptotically. Such brilliance derives not only from Charles Sanders Peirce.

    In a recent variation, due to Kant, Hoffman and others, you do not know where your sock drawer is, but instead you construct an arbitrary mental video game that supposes the sock drawer, allowing you and your ancestors to find their socks and hence survive, presumably by fending off frostbite. But you cannot know that there are socks.

    Some will allow you to know where your sock drawer is, but only relative to your own understanding. They will have an utterly different approach to socks, based on their own culture or their different experiences, and so develop beliefs about socks that are utterly incommensurate with your own, to the extent that in their world there may not even be sock drawers.

    Presumably this explains sock puppets.
  • Fooloso4
    6.2k
    One is pragmatism, in which the location of your sock drawer can never be knownBanno

    I would think that the pragmatist, or some subset of pragmatists, would say that opening the drawer and finding your socks where you claim they are is sufficient for knowing where they are.
  • Banno
    25.3k
    That'd be Dewey, not Peirce.
  • Srap Tasmaner
    5k
    There are a few ways to not know where your socks are kept. One is pragmatism, in which the location of your sock drawer can never be known, but only approximated asymptotically. Such brilliance derives not only from Charles Sanders Peirce.Banno

    Why is it so hard to tell the difference between someone who knows where your socks are and someone who thinks they know? Why is it so hard to tell the difference even about yourself?

    I think there are two issues here, and they are separable: (1) Is the idea of knowledge helpful for modeling our mental lives? (2) What's going on when someone says "I know"?

    For the first, no, it's probably not. Even Hume knew that reasoning concerning matters of fact is probabilistic, but we didn't listen. We have probabilistic expectations about our environment and our own state, these eventuate in dispositions to act, and everything is subject to revision as we go.

    For the second, I think it turns out Sellars was right, that "I know" puts your claim in "the space of reasons," meaning you commit to the claim and indicate a willingness to attempt to defend it with reasons. If you assure me that you know something, I might take your confidence into account, or I might not, but that depends on a lot of other factors. I won't, as a rule, take it as a fact about you from which I can infer the truth of your claim. Why would I? If you're wrong about one, you're wrong about the other. What does hold, for me, is that reasons to believe you count as reasons to believe what you claim, and that might be very helpful. (For you, this is useless, since you already believe both you and what you claim.)

    "I know" is a sort of exaggeration of the reliability of our cognitive state, which makes it suitable for the back and forth of argument, but not as a description of how we navigate the world and keep tabs on our own thoughts. Or maybe it does have some introspective use, as a sort of marker for what is not at the moment being questioned, even if it isn't altogether unquestionable. Some of our ideas, habits of inference, are so close to wired in (I'm thinking of things like the physics we already 'know' as infants) that it takes a lot of work to question them at all.
  • Srap Tasmaner
    5k
    The final word on this whole socks business:

  • Banno
    25.3k
    Why is it so hard to tell the difference between someone who knows where your socks are and someone who thinks they know?Srap Tasmaner

    It is?

    Ask 'em to get you a pair of your socks; if they succeed, then that'll do, won't it?
  • Srap Tasmaner
    5k
    Ask 'em to get you a pair of your socks; if they succeed, then that'll do, won't it?Banno

    It'll mostly do, sure, but the question isn't whether the way we talk is fine; of course it is.

    I think we ought to have a theory about how we do things like keep track of socks and find them on demand. (Or acorns. Squirrels do more caching than recovering, sometimes much more, but it's not entirely clear yet whether the unrecovered acorns are actually lost.)

    And we also talk a lot, so that's more behavior we want a theory for.

    The question is whether the sorts of stuff we usually say about the sorts of stuff we usually do is a good start on a real theory of what we do, and there I think it's a pretty mixed bag.

    We do tend to treat what we say as a kind of theory, so we say things like "He can get the socks because he knows where they are," or "He can't get the socks because he doesn't know where they are." And that's theory-ish, because it looks like you could make a prediction, right? People who know where the socks are will be more successful and more efficient at retrieving them than people who don't, than people who are just guessing or searching.

    But of course you just said that your test for knowing where the socks are is fetching them. That's no good. We want to be able to sort people first by what knowledge they have and then test the theory that knowledge is predictive of performance.

    You can keep your performance criterion like this: sort people first by whether they have successfully performed the task in the past -- pause for a moment and say this is a strong indicator of knowledge -- and then test whether those we've attributed knowledge to out-perform those we haven't.

    But that's just induction. There's a flourish in the middle involving the word "knowledge" but you're really just testing whether past performance predicts future performance. Now it's starting to look like this is not a theory of behavior at all, but about how we use the word "knowledge." That is, we're not so much leaning on knowledge as an explanation of behavior, since we've given no independent criterion for its presence or absence, as noting that we make inductive inferences about behavior straight up and then label some behavior "knowing" and some not.

    But even as a theory about how we use the word "know" this is pretty weak, because we allow all sorts of exceptions. I might not usually know whether the trash has been picked up but this time I do because I happened to see them stop this morning and take the trash.

    Squirrels don't talk about where they've cached acorns; we (at least when doing philosophy or laundry) do talk about where we've put our socks. The fact that we talk about it doesn't change the fact that our sock caching probably relies on abilities we share with squirrels. (They sort and organize acorns, for example.) And that's why a theory of how we talk, though intensely interesting in its own right, is not the same as a theory of how we find our socks.
  • Isaac
    10.3k


    Excellently written.

    I think this sums up the issue perfectly. If Bob says "I know the pub is at the end of the road", I don't stand in what is now a car park and stubbornly order a beer because he said he 'knew'. I treat the expression no differently to how I might treat it had he said "I'm quite sure the pub is at the end of the road". I treat the choice of words as an expression of confidence (in that context).

    Likewise, saying someone 'knows' where his socks are expresses my estimation of confidence in his likelihood of finding them.

    We might build some castles in the air about what metaphysical constructs might be possible to invoke off the back of that connection, but they'd be subservient to the use, not the other way around.

    why a theory of how we talk ... is not the same as a theory of how we find our socks.Srap Tasmaner

    There's a paper title in that...
  • Srap Tasmaner
    5k


    The part I think is noteworthy is that when you claim that P, my reasons for believing you -- your trustworthiness, reliability, honesty, likelihood of having first-hand knowledge in this case, etc -- those now count as reasons for believing that P. That's an incredibly useful inference pattern, so useful that it runs on automatic most of the time: if I'm accustomed to getting the truth from you and you tell me you bought milk, I assume (that is, infer) that you did. Reasons only come into it if your claim has to be defended -- maybe against someone who doesn't know your many fine qualities as an evidentiary source.

    And around we go. Suppose now I vouch for you. Our skeptic might have reason to trust what I tell him, but what does he know about my reliability as a judge of other people's trustworthiness? Might never have come up for him, so he'll believe you only to a degree because he doesn't know yet whether I've made the smart move believing you, whether that's something else about me he can rely on.
  • Banno
    25.3k


    :smirk:

    The test for whether someone knows where the socks are is not their reliability, honesty and integrity, but their interactions with socks - variously finding, returning, mending, washing and so on.

    We want to be able to sort people first by what knowledge they have and then test the theory that knowledge is predictive of performance.Srap Tasmaner
    Well, no. Knowledge is shown in performance, including linguistic performances. Induction does not seem the right notion to use here.

    And around we go.Srap Tasmaner
    Indeed, and around again, if knowledge is understood only as mental furnishing. Knowledge is enacted.