• Terrapin Station
    13.8k
    But it's true that the notion of conceivability in Chalmers' argument is a bit troublesome. It positively invites the charge of begging the question, because it is so easy to confuse "We can conceive x", with "I believe there could be an x", where the latter obviously rests on one's philosophical commitments.jamalrob

    Why do you think, by the way, that we'd use the phrase "conceivable" if we don't intend to connote something about the mental act of conceptualization, and we're only saying that something is logically possible? Why wouldn't we just use the phrase "logically possible" if that's what we're saying?

    Also, if we're only saying that something is logically possible, what would you say that it's logically possible with respect to? In other words, a set of logical statements? A set of ontological facts? What?
  • Real Gone Cat
    346


    Ooh, I think the flying pigs are going to help.

    Sure, it's possible to conceive of flying pigs. But what if I describe the world of flying pigs as being physically identical to this one, and then ask you to conceive of flying pigs in that world? Then flight must be non-physical! And in that case, you cannot use the conceivability of flying pigs to prove that it is possible for a physical world to give rise to flying pigs.*

    This is the problem with Chalmers' request that we conceive of a world physically identical but different in regard to consciousness - it establishes as a premise that consciousness cannot be physical. So you are correct that the problem lies with Chalmers' notion of conceivability - he is putting conditions on conceivability that destroy his argument.

    *Note: If pigs in the other world have wings, or if pigs there are lighter than air, then they are not strictly pigs, but something else. So you can't get around it that way.
  • Jamal
    9.7k
    Too confusing. Physically identical flying pigs are uncontroversially inconceivable. It is a contradiction, since by flying we just mean, and have always meant, something physical. This is not the case with consciousness and the mind.

    Back where we started.
  • Cabbage Farmer
    301
    Creativity and sentience may be the same thing or mutually necessary. One difference between a p-zombie and a human is that the p-zombie would not be able to create knowledge - i.e it would be stuck in its programming, just as animals are.tom

    Don't the AI geeks still talk about neural networks, and programs programming themselves ad infinitum, and all that jazz? I don't see any reason to doubt that it's already underway with plenty of room for growth.

    It seems to me that humans are like other animals, we're stuck in our programming too. It's just that our programming is more flexible, more variable, more adaptive, more creative, amenable to far greater complexity.

    What I'm suggesting is that p-zombies cannot possess a GENERAL intelligence, because they cannot create knowledge of themselvestom

    What do you mean by "creating knowledge"?
  • Real Gone Cat
    346


    (This has the danger of going in circles like the discussion with Michael.)

    But why is consciousness different from flying? Flying is obviously physical, but consciousness is not?

    Sure, consciousness might feel non-physical to you, but that seems like folk wisdom, not a considered argument. You seem to be assuming it as a premise. Can you further explain?
  • Wosret
    3.4k
    Obviously machines, or A.I.s, aren't physically identical to people... I didn't realize that it was a physically identical world.

    I don't think that it's controversial that we have a private qualitative experience that others don't have access to, or that we can't even imagine what it would be like to be a bat. At the same time, we do imagine that it is similar for other people, because they are physically similar, and behaviorally similar, to ourselves, whereas bats differ in ways that suggest differing qualitative experience.

    Even the behaviorists that assure you that all intentions can be read in behavior, and micro-expressions are impossible to conceal and all that -- even those that write the books on it cannot actually demonstrate an ability to discern whether or not someone is lying with a success rate higher than chance in controlled trials.

    I don't think that I infer that others are conscious or sentient wholly from behavior or physicality alone. I imagine that I have some nugget of a priori disposition for this, so that as long as a piece of it doesn't come from experience, it doesn't come entirely from my observations of the behaviors and physicalities of others. So I think that it is both conceivable, and indeed implied by this, that the puzzle isn't wholly completed by behavior and physical structure. Even if we could reduce this missing piece to physicality, it still wouldn't change the fact that such an inference isn't how I'm aware of it.
  • Jamal
    9.7k
    I'm not saying it's not physical, although I do believe this to be a category error. I'm saying that unlike flying it's not uncontroversial to say that it is physical. There is room for the p-zombie argument precisely because we don't know how to account for consciousness.

    Generally, I think that when we talk about the mind or about consciousness we are referring to the same thing people referred to back when it didn't occur to anyone to equate it with brain states. Physicalism purports to account for something that we already talked about.
  • Cabbage Farmer
    301
    'The same' means 'the same type'.Wayfarer

    In this case, yes. Type-identity, not token-identity; isn't that how it's said?

    Or we might say, a one-to-one correspondence of molecules and relations of molecules, "the same exact type" of molecular structure.... Or whatever idiom the make-believe mad scientists use to cook up doppelgangers nowadays.

    Not at all. Notice the last passage:Wayfarer

    Thus one would discover that they [machines, p-zombies] did not act on the basis of knowledge, but merely as a result of the disposition of their organs. For whereas reason is a universal instrument that can be used in all kinds of situations, these organs need a specific disposition for every particular action. — Descartes

    Even here. Leave aside for a moment the question of whether it counts as genuine (as opposed to simulated) "knowledge" and "reason" -- I suppose that's still at issue between Searle and the eliminative materialists and the other players in this market segment. The question is whether the thing could resemble us enough to deceive us into mistaking it for one of us over indefinitely long periods of interaction. Not, would it be genuine knowledge and genuine reason, but merely, could it be a convincing simulation of knowledge and reason? It seems likely we're getting there; I see no reason to expect it's out of the question; and it seems an empirical question.

    As to the archaic flavor of Descartes's account: What do we make of "these organs need a specific disposition for every particular action"? For me it calls to mind all those gears and levers, each single action prepared mechanically as a passive response to a particular input, without any room for adaptation or generalization, without any active gathering of information from the environment, without any coherent organization of rules and functions and behaviors.... It seems too simple. Even our floor-cleaning robots seem more intelligent than Descartes's sideshow automaton.

    And then "reason is a universal instrument that can be used in all kinds of situations": Do we suppose "reason" to be a single, indivisible instrument? Surely we should remain open, at least, to the possibility that our power of reason depends on and consists in the shifting "dispositions" of so many moving parts in us.

    So on both sides of that coin, his picture seems out of fashion. Descartes seems perhaps to miss the full potential of computational technology, to exaggerate the "unity" and "universality" of reason, and to speak as if "reason" were a ghost in the machine. Though it's a small passage out of context.

    Non-rational creatures can't form concepts, they're essential to the operations of reason.Wayfarer

    I'm inclined to agree that something like conceptual capacities are essential to rationality, to the emergence of rational creatures. However, I leave open the possibility that there are various sorts of rational creature. For instance, a rational sentient animal, a rational sentient robot, or a rational nonsentient robot.

    Life, sentience, and rationality are logically distinct terms. The way I prefer to set my terms, there's no language without rationality, but there is rationality without language.

    I say nothing appears to a nonsentient simulated intelligence. So it doesn't perceive like we do, it doesn't introspect like we do, it doesn't know like we do, it's not rational like we are. But it seems to behave rationally, like we do, and not only in the manner of an old-fashioned machine, according to a fixed set of rules, but even as we do, learning new tricks as we proceed, creating new rules, cultivating a style or even something like a personality.

    That's my sense of the direction we're already headed. It seems in keeping with what the mad scientists have in mind when they speak of zombies.
  • Terrapin Station
    13.8k
    Physically identical flying pigs are uncontroversially inconceivable. It is a contradiction, since by flying we just mean, and have always meant, something physical. This is not the case with consciousness and the mind.jamalrob

    Hold on a second--you're conflating logical possibility and consensus/conventional definitions? You've got to be kidding me.
  • Wayfarer
    22.6k
    Leave aside for a moment the question of whether it counts as genuine (as opposed to simulated) "knowledge" and "reason"Cabbage Farmer

    How can you leave that aside? If knowledge is not genuine, then it's not knowledge.

    It seems likely we're getting there; I see no reason to expect it's out of the question; and it seems an empirical question.Cabbage Farmer

    Well, I think that is a classical case of what Karl Popper described as one of the 'promissory notes of materialism'. That's not to deny that effective AI and machine learning already exist - I actually have been reading up on Microsoft Azure Machine Learning and it's very interesting from a practical point of view. But I think in this context, it's 'intelligence' that has to be put in scare quotes, not 'knowledge' or 'reason'; it's not really knowledge until there's a knower involved. Otherwise it is still just binary code.

    I quite agree that Descartes was spectacularly wrong about many things, but I still think his depiction of the universal nature of reason is on the mark. I think it's a big mistake, and one made every day, to feel as though reason is 'something that can be explained'; reason is always the source of explanation, not the object of it. As Thomas Nagel says, somewhere, reason often seems to be imposed on us, it is something we have to yield to, oftentimes through painful learning. Whereas, I think we moderns take it for granted that reason 'has evolved' and that, therefore, we have an in-principle grasp of what it is - namely an adaptation, something which helps us to survive. But that is precisely what has been criticized as the 'instrumentalisation of reason'*, which is endemic in materialist accounts of the nature of the mind.

    I leave open the possibility that there are various sorts of rational creature. For instance, a rational sentient animal, a rational sentient robot, or a rational nonsentient robot.Cabbage Farmer

    Another effect of that is that we think, because we understand it, that it is something that can be replicated by us in other systems. Hence the debate!

    As far as I can see h. sapiens is the only rational sentient animal. I don't think robots are rational, but are subject to reason; higher intelligences, if there are any, are superior to it. And all of that is in keeping with classical Western philosophy and metaphysics.

    -------------

    * "The concordance between the mind of man and the nature of things that [Bacon] had in mind is patriarchal: the human mind, which overcomes superstition, is to hold sway over a disenchanted nature. Knowledge, which is power, knows no obstacles: neither in the enslavement of men nor in compliance with the world’s rulers... Technology is the essence of this knowledge. It does not work by concepts and images, by the fortunate insight, but refers to method, the exploitation of others’ work, and capital... What men want to learn from nature is how to use it in order wholly to dominate it and other men. That is the only aim." - Adorno and Horkheimer
  • m-theory
    1.1k

    Many people criticize the argument because it is an equal logical possibility that we are p-zombies.

    The problem is that it is just as conceivable that we are p-zombies as it is that consciousness is not physical.
    It is easy to reject this argument as incoherent for this reason: very obviously we are not p-zombies.
    Very obviously humans are physical, and very obviously we are conscious. What is not made obvious by the argument is why it would be necessary to define consciousness in such a way that consciousness must be excluded from the physical.

    We could conceivably define humans in the same way, and offer that, because humans are physical, this is mutually exclusive of consciousness.

    But of course it is not true of humans, so in what way is the lack of consciousness of a p-zombie something which is coherent?
  • Michael
    15.6k
    Very obviously humans are physical, and very obviously we are conscious. What is not made obvious by the argument is why it would be necessary to define consciousness in such a way that consciousness must be excluded from the physical.m-theory

    What do you mean by us defining consciousness a certain way? We no more define consciousness than we define a dog. Rather we encounter a dog and then examine it to determine its nature. So too with consciousness.
  • m-theory
    1.1k

    That is my point.
    P-zombies lack consciousness by definition only; there is no analytical reason that they are not conscious.
    We cannot, from our encounters, be sure that we are not ourselves p-zombies or that others may be p-zombies.
    It is just as conceivable that we are p-zombies as it is that consciousness is not physical.

    But very obviously we are not p-zombies, so a p-zombie, as described by the argument, is incoherent in that it does not actually distinguish humans with consciousness from p-zombies.
  • Cabbage Farmer
    301
    How can you leave that aside? If knowledge is not genuine, then it's not knowledge.Wayfarer

    Surely we can leave it aside for just a moment, as I did, to consider a question like "could it be a convincing simulation of knowledge"?

    So far as I can see, only that much is required to get these thought experiments running. I say it could be a convincing simulation; but it doesn't count as full-blooded knowledge if there's no sentience.

    But I think in this context, it's 'intelligence' that has to be put in scare quotes, not 'knowledge' or 'reason'; it's not really knowledge until there's a knower involved. Otherwise it is still just binary code.Wayfarer

    I don't mind putting all three terms in scare quotes, or doing away with the quotes and adapting usage to accommodate different kinds of intelligence, knowledge, and rationality. The knowledge of honeybees, the knowledge of homo sapiens, the knowledge of syntax engines.

    There would be a referent corresponding to the grammatical subject for "know"-talk in relevant cases: Does the robot butler know the way to the store? Has the robot butler learned to recognize each of the guests' voices yet? And so on. It's meaningful talk, we know how to check for the answer, and there would be good reason to extend usage this way under such circumstances.

    I quite agree that Descartes was spectacularly wrong about many things, but I still think his depiction of the universal nature of reason is on the mark. I think it's a big mistake, and one made every day, to feel as though reason is 'something that can be explained'; reason is always the source of explanation, not the object of it.Wayfarer

    Reason is not a simple instrument. It can be stretched and folded, and turned to face itself. Why should we suppose that rationality cannot "explain" rationality; that language cannot be used to speak about language; that thinking cannot be about thinking; and so on? This is a familiar pattern for us; you and I tend to diverge at such points.

    Everything that appears to us may be described. Everything that remains available to us may be investigated. To describe is not to describe completely and perfectly. To investigate is not to arrive at complete and perfect understanding.

    Reason is not the only thing in the world of which we have a partial view.

    As Thomas Nagel says, somewhere, reason often seems to be imposed on us, it is something we have to yield to, oftentimes through painful learning.Wayfarer

    Reason is imposed on us as a natural fact.

    If a mind tends to recognize the same things as the same on different occasions, and to recognize different things as of a same kind, then its experience is organized in accordance with basic principles of number, arithmetic, and logic -- whether or not it can count and give proofs. To have a mind like ours, or like a dog's, is to be a sentient rational animal. Conceptual capacities and rationality emerge in the animal world together.

    Whereas, I think we moderns take it for granted that reason 'has evolved' and that, therefore, we have an in-principle grasp of what it is - namely an adaptation, something which helps us to survive. But that is precisely what has been criticized as the 'instrumentalisation of reason'*, which is endemic in materialist accounts of the nature of the mind.Wayfarer

    A moment ago I criticized you and Descartes for treating reason like a simple instrument.

    What's meant by "instrumentalization of reason"? Such a postmodern ring.

    I'm not sure that a thing's "having evolved" or "not having evolved" has anything to do with how easy or hard it is for human beings to understand. A stone has not evolved, but we have ways of getting to know the stone, as it stands in the present, and with respect to potential or prospective uses to which it may be put, and even with respect to its history.

    Reason has a history, in each of us, in communities and civilizations, in animal lineages. That history can be investigated, just like the history of the stone, or of mathematics, or of wheat cultivars can be investigated. To recognize the historical character of a thing is not to "instrumentalize" it. Nor is it to neglect the structure and limits of the thing as they appear to us in the present and across time.

    Another effect of that is that we think, because we understand it, that it is something that can be replicated by us in other systems. Hence the debate!Wayfarer

    Of course we understand it one way or another, to some extent or other. Can we understand it well enough to simulate it?

    I'm not sure how well we need to understand it in order to simulate it. What we need to understand are adequate techniques of production. Some of the most promising developments in AI involve systems that develop their own programming by a process of trial and error in coordination with feedback from human trainers.
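
    Roughly the kind of thing I have in mind, as a toy sketch only and not a description of any real system: an agent proposes candidate rules by trial and error and keeps whichever ones an external trainer scores well. The trainer function and all the numbers below are invented for the illustration.

        import random

        # Toy sketch of "trial and error plus feedback from a trainer".
        # Everything here (the trainer function, the numbers) is invented for
        # illustration; it is not a description of any real AI system.

        def trainer_feedback(rule):
            """Stand-in for a human trainer: scores a candidate rule from 0 to 1."""
            preferred = 0.8                      # behaviour this trainer happens to reward
            return 1.0 - abs(rule - preferred)   # closer to the preference, higher the score

        best_rule, best_score = None, float("-inf")
        for _ in range(1000):
            candidate = random.random()          # propose a new rule by trial and error
            score = trainer_feedback(candidate)  # ask the trainer how good it was
            if score > best_score:               # keep only what the feedback rewards
                best_rule, best_score = candidate, score

        print(f"kept rule ~ {best_rule:.3f} (trainer score {best_score:.3f})")

    The point is just that what the system ends up 'knowing how to do' is selected by the feedback rather than written out in advance.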

    As far as I can see h. sapiens is the only rational sentient animal. I don't think robots are rational, but are subject to reason; higher intelligences, if there are any, are superior to it. And all of that is in keeping with classical Western philosophy and metaphysics.Wayfarer

    There's plenty of bathwater in that tradition. I'm not sure being in keeping with it warrants a strong recommendation for any view. Hume's Enquiry has a short section on the reason of animals. Even Aristotle skirts around the issue.

    I call dogs and chimps, for instance, rational and sentient. I'm not sure yet if we differ in our views on dogs and chimps, or merely in our use of words like "rational" and "sentient".

    A dog forms rational expectations on the basis of past experience: Hearing a familiar sound that has been frequently followed by a desirable result, a dog adopts an attitude of expectation, even while the states of affairs reported by that sound remain otherwise hidden from view. The dog moves through attitudes resembling hope, wonder, doubt, and positive anticipation with respect to the prospect that sometimes, though not always, follows the sound. It seems absurd to deny the dog knows what it expects while it's expecting, knows what outcome it has in mind. The dog has learned a sequential correlation between two sorts of event, the sound and the desirable outcome; it adopts attitudes of expectation upon recognizing the first event in the sequence; it has an idea what it expects while it's expecting; the character and intensity of the attitude of expectation vary over time in proportion with historical trends in the correlation of the two events, sound and desirable outcome; the dog's behaviors are correlated with the attitudes it undergoes, and thereby with trends in the correlation of sound and outcome.

    All this counts as a form of "rationality" in my language.
  • Janus
    16.3k


    Sure, we can reason about reason, "turn it back to face itself", which presumably a dog or chimp cannot. We can investigate and hypothesize about histories of reason, in animal and human lineages, just as we can with histories of digestion, if that is what we find thrilling.

    But we have no clue as to its origin and its mysterious ability to make the world intelligible, just as we have no way of rationally working out what the absolute origin of the world, or its capacity to be made intelligible by reason, is. This is where reason ends and faith based on intuition begins.

    Of course no one is constrained to step beyond merely empirical inquiries if the latter are found to be satisfactory. That's certainly a matter for the individual, and individual taste.
  • tom
    1.5k
    But we have no clue as to its origin and its mysterious ability to make the world intelligible, just as we have no way of rationally working out what the absolute origin of the world, or its capacity to be made intelligible by reason, is. This is where reason ends and faith based on intuition begins.John

    But we do have a big clue as to where reason comes from: the property of the laws of physics that permits universal computers to abstract and simulate any finite physical system.

    The difference between a p-zombie and a human would be the software running on the brain.
  • Wayfarer
    22.6k
    Does the robot butler know the way to the store? Has the robot butler learned to recognize each of the guests' voices yet? And so on. It's meaningful talk, we know how to check for the answer, and there would be good reason to extend usage this way under such circumstances.Cabbage Farmer

    The robot butler 'knows' how to do a lot of things, but it can't improvise, or adapt, or do anything outside being a robot butler. I agree it's meaningful to talk of such things, but I think the problem with simulated and artificial intelligence is that it conflates intelligence and computation - it says basically that intelligence is a variety of computation, which is why the AI advocates truly believe that there is no ontological distinction between the two; that rational thought, and the operations of computer systems, are essentially the same.

    Of course, there is vast literature on this question - there's a thread on another forum on Can Computers Think which has been active since 2007. So, not going to solve it here. All I can say is that my view is that humans unconsciously project their intelligence onto these devices, which are wholly dependent on a human intelligence to maintain, build, program and interpret. They're instruments of human intelligence; but I think the idea that they are actually beings themselves remains in the domain of science fiction. (Not for nothing was Asimov's great series on these ideas called 'I, ROBOT'. That short title speaks volumes.)

    Why should we suppose that rationality cannot "explain" rationality; that language cannot be used to speak about language; that thinking cannot be about thinking; and so on? This is a familiar pattern for us; you and I tend to diverge at such points.Cabbage Farmer

    The divergence is because I want to resist what I see as the reductionism that is inherent in a lot of modern philosophising; and I know this rubs a lot of people up the wrong way. My approach is generally platonistic, which tends to be top-down; the Platonic conception of mind is that mind is prior to and the source of the phenomenal domain, whereas naturalism presumes that mind is an evolved consequence of a natural process.

    This divergence, then, is one manifestation of the 'culture war' between scientific naturalism and its opponents. I'm not going to apologise for the conflict this often causes, but I will acknowledge it.

    In any case, one reason that we can't explain reason is the recursive nature of such an undertaking. To explain anything, we must employ reason, but if reason is what we're trying to explain, then such attempts must invariably be circular. You can't 'put reason aside', and then analyse it from some point outside of it; every attempt to analyse it must call on the very thing it wishes to analyse*.

    The basic operations of reason - if/then, greater than, same as, etc - are in my view 'metaphysically primitive', i.e. they can't be explained or reduced to anything more simple. They are intrinsic to reason and therefore to science (a point that is broadly Kantian). I think there are evolutionary accounts of how h. sapiens developed the capacity to reason - but notice the expression 'capacity to reason'. I think the furniture of reason, these primitive terms without which reasoning is not possible, are not themselves something that evolves - what evolves is the capacity to grasp them. Once intelligence reaches the point of being able to grasp them, then it passes a threshold, namely, that of rationality, which makes modes of being and understanding available to it, which are not available to its forbears. So in that sense, those elements of rational thought are not something that can be explained, even though they can be used to explain many other things. (That, by the way, is why I believe that 'science doesn't explain science', i.e., science doesn't really account for the nature of number or natural law, but it can use its ability to discern these things to explain all manner of other things. Wittgenstein touches on this when he says 'the whole modern conception of the world is founded on the illusion that the so-called laws of nature are explanations of natural phenomena.' TLP 6.371)

    A dog forms rational expectations on the basis of past experience: Hearing a familiar sound that has been frequently followed by a desirable result, a dog adopts an attitude of expectation, even while the states of affairs reported by that sound remain otherwise hidden from view. The dog moves through attitudes resembling hope, wonder, doubt, and positive anticipation with respect to the prospect that sometimes, though not always, follows the sound. It seems absurd to deny the dog knows what it expects while it's expecting, knows what outcome it has in mind.Cabbage Farmer

    I think dogs, elephants, birds, primates, and cetaceans are certainly sentient beings, but that all of what you're describing can be understood in terms of learned behaviour, response to stimulus, memory, and so on. Actually animals are capable of a great many things science doesn't understand at all, like fish and birds that travel around the world to return to their place of birth. They're sentient beings, so we have that in common with them. But those attributes don't qualify as abstract rationality.

    -------------
    * This point is the subject of one of the essays in Thomas Nagel's The Last Word.
  • Janus
    16.3k


    Do you have an argument for that? In any case, even if it were accepted, it does not constitute a final explanation, but remains just another unverifiable conceptual model to be taken on faith. Any model requires further explanation unless it is concluded 'This is just the way things are'. It's easy enough to see that there can be no final explanation, which means that reason and the world are intractable mysteries.
  • tom
    1.5k
    Do you have an argument for that? In any case, even if it were accepted, it does not constitute a final explanation, but remains just another unverifiable conceptual model to be taken on faith. Any model requires further explanation unless it is concluded 'This is just the way things are'. It's easy enough to see that there can be no final explanation, which means that reason and the world are intractable mysteries.John

    It has been proved that, under the laws of physics (minus gravity), a universal computer can simulate any finite physical system to arbitrary accuracy by finite means.

    What this means is that Reality is amenable to reason, and that all problems are soluble. This property of the laws of physics is also what permits life.

    It seems reasonable to assume, for the time being, that the human brain is computationally universal, and we know from the theory of computation that all universal computers are equivalent.

    P-zombies and humans are thus distinguished by different software.
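
    As a toy illustration of the everyday face of that claim, and only a sketch of simulating a physical system to improving accuracy by finite means, not the universality proof itself: a short program approximating a unit mass on a unit spring, where shrinking the step size shrinks the error.

        import math

        # Toy illustration only: a digital computer approximating a finite physical
        # system (a unit mass on a unit spring, x'' = -x) by finite means.
        # Shrinking the time step dt drives the error toward zero, so the exact
        # trajectory can be approached as closely as we like.

        def simulate(dt, t_end=10.0):
            x, v = 1.0, 0.0                    # start displaced by 1, at rest
            for _ in range(int(t_end / dt)):
                x, v = x + v * dt, v - x * dt  # one explicit Euler step
            return x

        exact = math.cos(10.0)                 # the known exact position at t = 10
        for dt in (0.1, 0.01, 0.001):
            print(f"dt={dt:<6} error={abs(simulate(dt) - exact):.5f}")

    Refining dt further continues to shrink the error; that is the modest, concrete sense of 'to arbitrary accuracy by finite means' at work here.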
  • Janus
    16.3k


    That's one interpretation of the purported facts anyway.
  • quine
    119
    Everything conscious can be observed as conscious. Every P-zombie can be observed as conscious. So, what do you think?
  • quine
    119
    Here's an argument:
    Everything that can be observed as conscious is conscious. Every P-zombie can be observed as conscious. Therefore, every P-zombie is conscious.
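
    In form, the argument is a universal syllogism. A minimal sketch in Lean, treating the predicates as bare assumptions rather than definitions of any of these notions:

        -- A minimal formalization of the argument's shape (validity only).
        -- `Being`, `PZombie`, `ObservedConscious`, and `Conscious` are assumptions
        -- introduced for this sketch, not definitions of consciousness or zombiehood.
        example {Being : Type} (PZombie ObservedConscious Conscious : Being → Prop)
            (h1 : ∀ x, ObservedConscious x → Conscious x)   -- premise 1: observed-as-conscious implies conscious
            (h2 : ∀ x, PZombie x → ObservedConscious x)     -- premise 2: every p-zombie is observed as conscious
            : ∀ x, PZombie x → Conscious x :=               -- conclusion: every p-zombie is conscious
          fun x hx => h1 x (h2 x hx)

    The formalization only checks the inference, not the premises.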
  • Wayfarer
    22.6k
    has a p zombie ever been observed?
  • quine
    119
    A P-zombie might be observed in a possible world...
  • quine
    119
    I mean, by definition, P-zombies are non-conscious beings that are observed as conscious. The concept of P-zombies is itself contradictory. That's what I wanted to say.
  • Wayfarer
    22.6k
    That I agree with, although I'm not sure how you got there :-)
  • Michael
    15.6k
    Everything that can be observed as conscious is conscious. Every P-zombie can be observed as conscious. Therefore, every P-zombie is conscious.quine

    Given that a p-zombie is defined as something that appears to be conscious (although, what does it mean to appear to be conscious?) but isn't conscious, your conclusion is a contradiction. This is a reductio ad absurdum against one of your premises. Either p-zombies can't be observed as conscious or not everything that can be observed as conscious is conscious.

    Of course, if being conscious is defined as behaving a certain way then p-zombies would be contradictory. However, given that the very issue under discussion is the nature of consciousness, you can't premise your argument by defining it to be a certain thing. That's question-begging.
  • tom
    1.5k
    has a p zombie ever been observed?Wayfarer

    Animals are p-zombies.
  • Terrapin Station
    13.8k
    Given that a p-zombie is defined as something that appears to be conscious (although, what does it mean to appear to be conscious?) but isn't conscious, your conclusion is a contradiction.Michael

    His argument is a simple modus ponens. If A, then B. A. Therefore B. You're saying that is a contradiction?
  • Michael
    15.6k
    His argument is a simple modus ponens. If A, then B. A. Therefore B. You're saying that is a contradiction?Terrapin Station

    I'm saying that the conclusion is a contradiction, given that a p-zombie is defined as not being conscious. Therefore one of his premises must be false.