Comments

  • The important question of what understanding is.
    Two experiencing entities.Daemon
    Alright, I think we're talking past each other a bit. The two (mind you, not one) experiencing entities are a result of corpus callosotomy. The notion that experience is what makes you an entity cannot account for the fact that a corpus callosotomy should make two entities. Agentive integration by contrast explains why you are a single entity. The notion that you are an entity due to agentive integration does account for the fact that a corpus callosotomy should make two entities. Once again, experience is doing no work for you here; it's an epicycle.

    But I don't really think the effects of cutting the corpus callosum are as straightforward as they are sometimes portrayed, for example, the person had a previous life with connected hemispheres.Daemon
    No idea what you're saying here. Are you suggesting there are two individuals before the corpus callosotomy?
    I don't think the "corpus callosum/AHS" argument addresses this.Daemon
    Quite the opposite; see above. We can take an external view as well:

    https://movie-usa.glencoesoftware.com/video/10.1212/WNL.0000000000006172/video-1
    From: https://n.neurology.org/content/91/11/527

    This person did not have a corpus callosotomy; she had a stroke (see article). It is very obvious that she has AHS. That's also curious... why is it obvious? What behaviors is she exhibiting that suggest AHS?
  • The important question of what understanding is.
    Cutting the corpus callosum creates two entities.Daemon
    So if experience is what makes us an entity, how could that possibly happen?

    If integration makes us an entity, you're physically separating two hemispheres. That's a surefire way to disrupt integration. The key question then becomes: if cutting the corpus callosum makes us two entities, why are we one entity with an intact corpus callosum, as opposed to those two entities?

    Incidentally, we're not always that one entity with an intact corpus callosum. A stroke can also induce AHS.
  • A first cause is logically necessary
    But hidden variables have been ruled out by experiment, yes?tim wood
    Yes. Experimental results violate Bell Inequalities. (FYI, there are "outs" for HVT's, but they require giving up something like locality, realism, etc; some choose to do so).
  • A first cause is logically necessary
    All right, then, the game is not fair. QED. Is that the point?tim wood
    No:
    The point of this game is that it is Bell's Theorem in disguiseInPitzotl
    What I can possibly do to rig the game is analogous to a Hidden Variable Theory. The "real" goal here is to explain the 1/4 probability (the "win" thing is just to encourage working the classical probabilities).
  • A first cause is logically necessary
    Yes, I am a super determinist.Philosophim
    Are you sure?
    Yes, I am a super determinist. Once some type of existence is in play, it will act and react the same way identically each time.Philosophim
    That's not what superdeterminism means.

    Superdeterminism means that Terra Mater is dealing the cards now, and on this particular deal she happens to deal BBR. So I have three options of two cards to pick, and if I pick at random, I would flip over the first two cards 1/3 of the time. But instead, just because the square of the cosine of 60 is 1/4, Terra Mater mind-controls me via my physical makeup to manipulate me into picking the second and third, or the first and third, a sum total of an extra 1/12 of the time, such that my probability of picking the first two cards matches the square of the cosine of 60. This is the type of story you have to tell if you call yourself a superdeterminist.

    Superdeterminism is kind of whacky. I'm agnostic on a lot of things, including free will and determinism... but it would take a lot to sell me on superdeterminism.
  • A first cause is logically necessary
    That's because probability requires a certainty of certain facts for formulation. As soon as you said, "I might not be necessarily being fair," you remove the ability to make an accurate assessment of odds.Philosophim
    I think you're misreading the game. I can be unfair, but I can't change the game being played. All I can do is be maximally unfair but follow all of the rules.
    With this, we can calculate the likelihood of standard deviation.Philosophim
    You're trying too hard. We're not talking about "in a given run". We're talking about, I set up a casino, you come play, and I have a viable business model where your funds slowly drain into my casino.
    I read the discussion between you and the others after posting this, so you can be sure this was my personal and honest view, and not influenced by the other conversations.Philosophim
    And we're not talking about a puzzle you have to guess right at either. This is open ended. You can look up the answer. You can have other people do the work. I'll work it out myself, and you can use my workbook.

    @tim wood gave an excellent crack at it here. I followed up with a few more details here. Picking up from there, assuming I cheat like I outlined in that post, all of the rest of the arrangements are symmetric... they're all of the form "PPM", two cards of one color and one of the other, in some permutation. Taking PPM itself as the representative case, you can pick either the first and second, the first and third, or the second and third cards; and you only win if you pick the first and second. So you win 1/3 of the time. This is where the puzzle gets fishy: in BBB or RRR arrangements, you win 100% of the time. In any other arrangement, you win 1/3 of the time. But when you actually play, you win 1/4 of the time. That actual 1/4 seems impossible to rig. But 1/4 is what we get.

    The point of this game is that it is Bell's Theorem in disguise (as you requested; rephrased as a deck of cards example). The arrangements here are possible "riggings" that Terra Mater could make up; these are Hidden Variables. A meta-theory of how I might rig the game would be a Hidden Variable Theory. The analysis that the riggings only get down to your winning 1/3 of the time is a Bell Inequality. The 1/4 comes from quantum mechanics and agrees with experiment.
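
    To make those numbers concrete, here's a minimal Python sketch (my own illustration, not something from earlier in the thread, and it assumes you pick your pair of cards at random) that enumerates every possible rigging of the three cards and every pair you could pick, then compares the best classical rigging against the quantum prediction cos²(60°):

        from itertools import product, combinations
        from math import cos, radians

        # Hidden-variable view: each deal pre-assigns a color (R/B) to all three cards.
        deals = list(product("RB", repeat=3))      # the 8 possible riggings
        picks = list(combinations(range(3), 2))    # the 3 ways to pick two of the cards

        def win_rate(deal):
            # Fraction of matching-color pairs, i.e. your chance of winning if you
            # pick your two cards uniformly at random.
            return sum(deal[i] == deal[j] for i, j in picks) / len(picks)

        rates = {"".join(d): win_rate(d) for d in deals}
        print(rates)                               # BBB/RRR -> 1.0, every other deal -> 1/3

        # Even the stingiest rigging (never dealing BBB or RRR) leaves you winning 1/3.
        classical_floor = min(rates.values())
        quantum = cos(radians(60)) ** 2            # cos^2(60 deg) = 1/4
        print(classical_floor, quantum)            # ~0.333 vs ~0.25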

    This brings me back to here:
    Lets use an easier model to digest, as odds work the same no matter the complexity.Philosophim
    ...the puzzle stays open. Quantum mechanics would tell us the probabilities of this sort of match are 1/4. But classical probability can only bring us down to 1/3. Experiment appears to confirm quantum mechanics; that is, that Bell Inequalities are violated as per Bell's Theorem.

    This isn't meant as a refutation against anything specific... but BT is definitely something that demands an explanation.

    I gave you an example of an atom radioactively decaying. You followed up with a card analogy to make it easier. But atomic decay does not behave like cards. QM doesn't play by classical rules; it cheats.
  • The important question of what understanding is.
    I've looked at this many times, and thought about it, but I just can't see why you think it is significant.Daemon
    I think you took something descriptive as definitive. What is happening here that isn't happening with the thermostat is deference to world states.
    But also there's a more fundamental point that I don't believe you have addressed, which is that a robot or a computer is only an "entity" or a "system" because we choose to regard it that way.Daemon
    You're just ignoring me then, because I did indeed address this.

    You've offered that experience is what makes us entities. I offered AHS as demonstrating that this is fundamentally broken. An AHS patient behaves as multiple entities. Normative human subjects by contrast behave as one entity per body. Your explanation simply does not work for AHS patients; AHS patients do indeed experience, and yet, they behave as multiple entities. The defining feature of AHS is that a part of the person seems to act autonomously, independently of the rest. This points to exactly what I was telling you... that being an entity is a function of being agentively integrated.

    So I cannot take it seriously that you don't believe I have addressed this.
    The computer or robot is not intrinsically an entity, in the way you are.Daemon
    I think you have some erroneous theories of being an entity. AHS can be induced by corpus callosotomy. In principle, given a corpus callosotomy, your entity can be sliced into two independent pieces. AHS demonstrates that the thing that makes you an entity isn't fundamental; it's emergent. AHS demonstrates that the thing that makes you an entity isn't experience; it's integration.
    I was thinking about this just now when I saw this story "In Idyllwild, California a dog ran for mayor and won and is now called Mayor Max II".Daemon
    Not sure what you're trying to get at here. Are you saying that dogs aren't entities? There's nothing special about a dog-not-running for mayor; that could equally well be a fictional character or a living celebrity not intentionally in the running.
  • A first cause is logically necessary
    Given three cards, each either R or B, there are eight possible arrangements of R and B.tim wood
    That's correct. Eight isn't a large number, so let's list them. The possible arrangements are BBB, BBR, BRB, BRR, RBB, RBR, RRB, and RRR.
    And there are three ways of choosing two of three cards. That is, 24 possibilities.tim wood
    That's also correct. But I think you're missing this:
    I always shuffle the deck (incidentally, I am not necessarily being fair; take that into account).InPitzotl
    So there are 24 possibilities here, but that doesn't mean they're equally likely. I could be stacking the deck. So pretend you're me, maybe. How would you rig the odds? Well, in the BBB and RRR case, you're guaranteed to win... so maybe I just never give you those deals.

    So let's say I do that. I'm only going to give you BBR, BRB, BRR, RBB, RBR, and RRB deals. Now how often do you win?
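
    If it helps, here's a quick tally (my own sketch, not part of tim wood's post) of exactly those six rigged deals, assuming you pick your two cards at random:

        from itertools import combinations

        rigged = ["BBR", "BRB", "BRR", "RBB", "RBR", "RRB"]   # BBB and RRR withheld
        picks = list(combinations(range(3), 2))               # the 3 ways to pick two cards

        # Each rigged deal has exactly one matching pair out of the three picks.
        wins = sum(deal[i] == deal[j] for deal in rigged for i, j in picks)
        print(wins, "/", len(rigged) * len(picks))            # 6 / 18, i.e. you still win 1/3 of the time
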
    I await your revealing the error in this reasoning.tim wood
    It's not really that kind of puzzle. The whole point of this puzzle is that it looks fishy. It's more relevant that it looks fishy than that you solve it (it's also not new; though it's slightly in disguise here).
  • A first cause is logically necessary
    Lets use an easier model to digest, as odds work the same no matter the complexity.Philosophim
    Not... exactly.
    Does that mean the cards don't follow causality?Philosophim
    You tell me. I'm still asking you what your concept of causality is. It appears to me that you are indeed committing to sufficiency here though.
    If that did not explain what you were asking, please try to rephrase the question with a deck of cards example.Philosophim
    Hmmm... that might be interesting. Okay.

    Let's imagine you and I are playing a card game; here is how it works.

    We take turns. I always shuffle the deck (incidentally, I am not necessarily being fair; take that into account). After the shuffle, I deal three cards in front of you face down... left, right, and center. Then it's hands off for me; the rest is entirely on you.

    Here's what you do. You pick any two of those cards... your choice. If the two cards are the same color (both black, both red), you win. If the two cards are different colors (black/red, red/black), you lose. I offer you two to one odds; you pay me $1 if I win (<- corrected), and I pay you $2 if you win. FYI, there are only ever two colors when you turn the cards over; each card is always either red, or black.

    So here's the first question. Is this a fair game? Can you prove it? Can you work out the minimal probability that you'll win?

    So here's a quick cheat sheet. Somehow, you lose 75% of the time. That's just a given. If you play 1000 times, you just plain lose around 750 times. Play 10000, and you just plain lose around 7500 times. Can you tell me how that works?
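
    For the bookkeeping, a tiny sketch (mine, purely to illustrate the stakes) of what the two-to-one payout does to your bankroll at different win rates:

        def expected_value(win_prob, win_pay=2.0, lose_pay=1.0):
            """Average profit per hand, for the player, at the stated odds."""
            return win_prob * win_pay - (1 - win_prob) * lose_pay

        print(expected_value(1 / 3))   #  0.00 -> a fair game at the best classical rigging
        print(expected_value(1 / 4))   # -0.25 -> at the actual 25% win rate, you leak a quarter per hand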
  • A first cause is logically necessary
    Yes, you've nailed it.Philosophim
    So to me, it sounds like your notion of causality is similar to that of "reason" in the Principle of Sufficient Reason, with the exception that I've yet to hear a commitment to sufficiency. I'd now like to explore sufficiency.

    We have an atom that can, in a duration of time x, decay with 50% probability. Between times t0 and t1=t0+x, it did not decay. Between times t1 and t2=t1+x, it decayed. Let's call the time from t0 to t1 time span 1, and from t1 to t2 time span 2. Can we describe the cause of the decay in time span 2, as opposed to the lack of decay in time span 1? Can we say the cause in time span 2 is attributed to the properties contributing to the 50% decay rate, and also that the cause of it not decaying in time span 1 is attributed to the properties contributing to the same 50% decay rate?
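
    In case numbers help, a throwaway simulation (my own sketch, treating x as one half-life) of what that same 50%-per-span property looks like across the two spans:

        import random

        random.seed(0)
        N = 100_000                                  # atoms; each decays with probability 0.5 per span x

        # Span 1: which atoms do NOT decay?
        survivors = sum(random.random() >= 0.5 for _ in range(N))
        # Span 2: of those survivors, which DO decay?
        decays = sum(random.random() < 0.5 for _ in range(survivors))

        print(survivors / N)         # ~0.5 of the atoms sit out span 1...
        print(decays / survivors)    # ...and ~0.5 of those survivors decay in span 2, same property both times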
  • A first cause is logically necessary
    I've been on a computer chips kick in my posts, so I suppose I'll continue with them.Philosophim
    I'm still not sure you answered my question.
    A transistor can either be on, or off. If it is on, the electricity will travel through the gate. When it is off, the electricity is cut off. Imagine that we have power constantly running to the transistor. Now imagine that the circuit is complete. We have electricity traveling that circuit. What caused electricity to travel the entirety of the circuit? At a particular scale we can say, "The gate was on". Or we could be more detailed and say, "And the electricity was on."Philosophim
    So let's go the other way. There's no electricity flowing out of the transistor. Can we ask what caused no electricity to flow out of the circuit? Can the answer be, "The gate was off" and/or "the electricity was off"?
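
    A toy model of the circuit in question (purely illustrative; the only ingredients are the two switches named in your description):

        def current_flows(power_on: bool, gate_on: bool) -> bool:
            """Electricity makes it out of the transistor only if both conditions hold."""
            return power_on and gate_on

        # The question: when the output is False, which input do we cite as "the cause"?
        for power_on in (False, True):
            for gate_on in (False, True):
                print(power_on, gate_on, "->", current_flows(power_on, gate_on))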
  • A first cause is logically necessary
    I feel at this point you have something you want to say. Feel free to. Once I understand the larger point, I think we can get all of your questions out of the way at oncePhilosophim
    Honestly, no, I'm still trying to analyze this. I can still see what you possibly mean branching off in a few different directions, and I don't quite know which one you'll take. I reserve the right to make a point later, if I have one to make; but for now, I'm just trying to figure out where you're coming from.

    The question I just asked is similar to a question a couple of posts ago. You're talking about an explanation for a "different" state. I'm trying to figure out if this is some counterfactual difference you're talking about, or just a change.
  • A first cause is logically necessary
    Its about things being a state captured in time, another state captured later in time, and an explanation for why the state of the later is different from the former.Philosophim
    Different from the former as opposed to same as the former?
  • A first cause is logically necessary
    I am not trying to put my own spin on force here. Yes. All of these are forces in physics.Philosophim
    I'm just trying to capture what you mean by causing something to exist. It sounds like it would be less confusing to just drop the exists part... at this point I'm not sure what the difference is between "cause things to exist" and just "cause things".
  • A first cause is logically necessary
    My apologies if I've been confusing. The state of the cue ball in its new velocity is not the same as the cue ball without velocity. This is a "new" state caused by the cue ball's collision. Without the cue balls collision, or an equally placed force, the 8 ball would not be in its new state of velocity.Philosophim
    Would gravity be a force? Magnetism? The Higgs Mechanism?
  • A first cause is logically necessary
    Yes, the 8 ball in a state of velocity is different from the 8 ball in a state of zero velocity.Philosophim
    I'm not clear how this is answering the question. Are you comparing the 8 ball before the cue ball hits it to the 8 ball after it hits it, or the 8 ball after the cue hits it to what would be the 8 ball were the cue ball not to hit it? And how does this relate to my question... what new thing was caused to exist?
    The reason it is in the state of velocityPhilosophim
    This means nothing to me until you tell me what new thing was caused to exist.
    Depending on the scale of measurement,Philosophim
    You're a bit ahead of yourself here. I'm trying to figure out what you mean by causing something to exist, and you're having me pick scales for some reason or another.
    I believe the argument isn't concerned with scale,Philosophim
    Curious language... isn't this your argument? I would have thought you would be the authority on what was meant.
  • A first cause is logically necessary
    Sure, the usual example in philosophy is a cue ball hitting an 8 ball.Philosophim
    Example of what? This sounds like a typical example of causality per se. My question is about what you mean by causing something to exist.
    The 8 ball exists in a new velocity statePhilosophim
    Is there a new thing that exists when the 8 ball exists in a new velocity state?
    You could go plot the life of the entire ball up to its creation in the factory if you wanted.Philosophim
    Sure... would that be a new thing existing?
  • A first cause is logically necessary
    1. Either all things have a prior cause for their existence, or there is at least one first cause of existence from which a chain of events follows.Philosophim
    Could I get an example of a thing causing something to exist?
  • The important question of what understanding is.
    This may be off topic , but that’s one definition of intentionality, but not the phenomenological one.Joshs
    I don't think it's off topic, but just to clarify, I am not intending to give this as a definition of intentionality. I'm simply saying there's aboutness here.
  • Do Chalmers' Zombies beg the question?
    Don't or can't?noAxioms
    Irrelevant. This is ostensive.
    Would it make it not the same toaster if the name got scratched offnoAxioms
    Yes.
    I'm talking about it actually being the toaster in question or not.noAxioms
    If you say so, but all I can talk about is what I mean by "being the same".
    That's not the story being pushednoAxioms
    That's David Chalmers' story. I'm not David.
    Not how I'm using it when I make a distinction.noAxioms
    You said the toaster feels warm. It doesn't matter how you're using the word "I"... the toaster doesn't feel warm. It lacks the parts.
    Age of five eh?noAxioms
    Yes; by that age, most humans learn theory of mind.
    Does that imply you were a zombie until some sufficient age?noAxioms
    No, it implies that you can for example pass the Sally Anne test. Theory of Mind has nothing to do with p-zombies.
  • Do Chalmers' Zombies beg the question?
    A computer can't tell you it's conscious?frank
    I'm a bit lost. This is what a zombie is according to Chalmers:
    A zombie is physically identical to a normal human being, but completely lacks conscious experience. — Chalmers
    http://consc.net/zombies-on-the-web/

    ...that doesn't sound like a computer. So what's the objection here?
  • The important question of what understanding is.
    That's broadly what I meant when I said that it's experience that makes you an individual, but you seem to think we disagree.Daemon
    My use of mind here is metaphorical (a reference to the idiom "of one mind").

    Incidentally, I think we do indeed agree on a whole lot of stuff... our conceptualization of these subjects is remarkably similar. I'm just not discussing those pieces ;)... it's the disagreements that are interesting here.
    Think about the world before life developed: there were no entities or individuals then.Daemon
    I don't think this is quite our point of disagreement. You and I would agree that we are entities. You also experience your individuality. I'm the same in this regard; I experience my individuality as well. Where we differ is that you think your experience of individuality is what makes you an individual. I disagree. I am an individual for other reasons; I experience my individuality because I sense myself being one. I experience my individuality like I see an apple; the experience doesn't make the apple, it just makes me aware of the apple.

    Were "I" to have AHS, and my right hand were the alien hand, I would experience this as another entity moving my arm. In particular, the movements would be clearly goal oriented, and I would pick up on this as a sense of agency behind the movement. I would not in this particular condition have a sense of control over the arm. I would not sense the intention of the arm through "mental" means; only indirectly through observation in a similar manner that I sense other people's intentions. I would not have a sense of ownership of the movement.
    Suppose instead of buying bananas we asked the robot to control the temperature of your central heating: would you say the thermostat is only metaphorically trying to control the temperature, but the robot is literally trying?Daemon
    Yes; the thermostat is only metaphorically trying; the robot is literally trying.
    Could you say why, or why not?Daemon
    Sure.

    Consider a particular thermostat. It has a bimetallic coil in it, and there's a low knob and a high knob. We adjust the low knob to 70F, and the high to 75F. Within range nothing happens. As the coil expands, it engages the heating system. As the coil contracts, it engages the cooling system.

    Now introduce the robot and/or a human into a different environment. There is a thermometer on the wall, and a three way switch with positions A, B, and C. A is labeled "cool", B "neutral", and C "heat".

    So we're using the thermostat to maintain a temperature range of 70F to 75F, and it can operate automatically after being set. The thermostat should maintain our desired temperature range. But alas, I have been a bit sneaky. The thermostat should maintain that range, but it won't... if you read my description carefully you might spot the problem. It's miswired. Heat causes the coil to expand, which then turns on the heating system. Cold causes the coil to contract, which then turns on the cooling system. Oops! The thermostat in this case is a disaster waiting to happen; when the temperature goes out of range, it will either max out your heating system, or max out your cooling system.

    Of course I'm going for an apples to apples comparison, so just as the thermostat is miswired, the switches are mislabeled. A actually turns on the heating system. C engages the cooling system. The human and the robot are not guaranteed to find the flaw here, but they have a huge leg up over the thermostat. There's a fair probability that either of these will at least return the switch to the neutral position before the heater/cooler maxes out; and there's at least a chance that both will discover the reversed wiring.

    This is the "things go wrong" case, which highlights the difference. If the switches were labeled and the thermostat were wired correctly, all three systems would control the temperature. It's a key feature of agentive action to not just act, but to select the action being enacted from some set of schemas according to their predicted consequences in accordance with a teleos; to monitor the enacted actions; to compare the predicted consequences to the actual consequences; and to make adjustments as necessary according to the actuals in concordance with attainment. This feature makes agents tolerant against the unpredicted. The thermostat is missing this.

    ETA:
    The word "meaning" comes from the same Indo-European root as the word "mind". Meaning takes place in minds.Daemon
    The word "atom" comes from the Latin atomus, which is an indivisible particle, which traces to the Greek atomos meaning indivisible. But we've split the thing. The word "oxygen" derives from the Greek "oxys", meaning sharp, and "genes", meaning formation; in reference to the acidic principle of oxygen (formation of sharpness aka acidity)... which has been abandoned.

    Meaning is about intentionality. In regard to external world states, intentionality can be thought of as deferring to the actual. This is related to the part of agentive action which not only develops the model of world states from observation, and uses that model to align actions to attain a goal according to the predictions the model gives, but observes the results as the actions take place and defers to the observations in contrast to the model. In this sense the model isn't merely "about" itself, but "about" the observed thing. That is intentionality. Meaning takes place in agents.
  • Do Chalmers' Zombies beg the question?
    Bad analogy. In the case in question, nobody is ostensively using a term.noAxioms
    Of course they are. This is why they tend to say we have these properties, but these things over here, they don't. They are ostensively pointing to the properties, and they are formulating an incomplete theory in an attempt to explain the properties they are pointing to. And I even agree it's a bad theory about what they're ostensively including.
    The only way I can parse it, it is the followers of Chalmers that are making the error you point out, where a human is privileged in being allowed to call something water/cold/wet, but anything else (a sump pump moving the stuff) doing the exact same thing is not allowed to use such privileged language (the pump moves a substance which could be interpreted as water).noAxioms
    The notion that either we have an immaterial driver in the driver's seat experiencing things or the toaster feels warmth sounds like a false dichotomy to me.
    Is it the same rock,noAxioms
    Yes, it's the same rock...
    or merely a different arrangement of matter in the universenoAxioms
    ...and it's probably that too. Most of the toaster's mass is in gluons. They're constantly obliterating and reforming. And the next grand TOE may even do something more weird with the ontologies.

    Nevertheless, if you had your name scratched onto the toaster when I stole it, it will tend to still be scratched on there unless I scratched it off.

    Identity need not be fundamental; it can be "soft"... emergent, pragmatic versus universal by necessity, constructed from invariances, and the like.
    You can't point to your subjective feeling of warmth and assert the toaster with thermostat doesn't feel anything analogous. Sure, it's a different mechanism, but not demonstrably fundamentally different.noAxioms
    Actually, yes, I can. The toaster reacts to warmth. "Legal me" reacts to warmth as well. But "legal me" also reacts to an increase in blood acidity.

    But there's a difference between how I react to warmth and how I react to an increase in blood acidity. I can subjectively report on my feeling of warmth; I cannot subjectively report on my feeling of high blood acidity. Ostensively speaking, warmth is an example of something I subjectively feel; acidity is an example of something I react to but do not subjectively feel. There is no good reason for me to suspect that because the toaster reacts to warmth like I react to warmth, that it is subjectively feeling warmth like I subjectively feel warmth in contrast to how I do not subjectively feel blood acidity.
    This seems to be an example of the privileged language mentioned above. What I see as the 'bad theory' asserts privileged status to humans, raising them above a mere physical arrangement of matter, and assigns language reserved only for objects with this privileged status.noAxioms
    Your sales pitch here is a dud. I can play Doom on this computer. I might could even play Doom on my Keurig. But I cannot play Doom on this bottle of allergy pills.

    Different physical objects have different physical arrangements, and some arrangements have properties other arrangements don't have. We might could even say certain arrangements of physical objects have privileged status, raising them above other arrangements, and that we are justified in assigning language reserved for some classes of objects.

    The "I" I accused you of having is simply a unit of theory of mind as it applies to the linguistic aspect of your posts. Humans can be thought of as objects susceptible to be described in terms of units of theory of mind, at least in the typical sense.
    My son has one of those 'hey google' devices sitting on its table, and it might reply to a query with "I cannot find that song" or some such.noAxioms
    There are particular arrangements of physical matter that come in individual "toaster"-like bodies, which are embedded in their environments and must navigate them, and which regularly participate in conversations of various sorts with other entities. "Hey google" is not one of these things. But noAxioms is one of these things.

    TOM can probably be extended to work in some version on "hey google", but it's distinct enough to reassess how we want to discuss its identity. An automaton of the right type might work better (SDC's are a bit out... the networked trend confuses the information-complex-to-body relation, so it requires the reassessment). You, OTOH, meet the requirements to apply theory of mind to, as humans above the age of five regularly do.

    But for some reason, instead of asking me what I mean by "I", or getting this plain reference to the notion that you as a unit consistently type out the same themed argument throughout single posts and across time, you keep going to this "experiencing the device" thing. I understand you're rejecting a bad theory of "I"; I too reject it. But I cannot play Doom on my bottle of allergy pills, and I cannot play debate-the-zombies with my toaster (at least yet).
  • Do Chalmers' Zombies beg the question?
    I am not sure how to take this. Is this just a generic putdown, or did you mean something more specific? What am I missing?SophistiCat
    It was not a put-down. I'm not just generically using braggart language here; you're literally one step behind. The water example is a response to the response you just gave, and it does not negate it. We did not discard the notion of water when we discarded classical elements, and there is a good reason we did not do so. That we discarded phlogiston on replacing it with a better theory, does not negate this good reason not to discard water when dropping classical element theory.
    Well, referring to the phlogiston theory as a theory of heat transfer was perhaps clumsy, but you have ignored the substance of my response in favor of capitalizing on this nitpick.SophistiCat
    That's not quite the clumsiness I was referring to. "X is a bad theory of Y" is to be understood in the sense of X being an explanans and Y an explanandum. In this sense, phlogiston theory is not a theory of phlogiston because phlogiston is an explanans. The explanandum here is combustion; so phlogiston in this sense is a theory of combustion. When we got rid of phlogiston theory, we did get rid of phlogiston (explanans), but we did not get rid of combustion (explanandum).
  • Do Chalmers' Zombies beg the question?
    An eliminativist about personal identity could hold the phlogiston as a counterexample.SophistiCat
    I am pretty sure you're at least one step behind the post you just replied to, not ahead of it.
    But the preferred solution, at least in the case of the phlogiston, was not to come up with a better theory of the phlogiston, but to drop the stuff altogether as part of a better theory that accounts for the manifest reality of heat transfer.SophistiCat
    This is clumsily phrased. Phlogiston theory is a theory about combustion. It was replaced by oxidation theory, a better theory about combustion. We dropped the notion of phlogiston, but not the notion of combustion.
  • Do Chalmers' Zombies beg the question?
    The "I" on the other hand refers to the experiencer of a conscious thing, something which gives it a true identity that doesn't supervene on the physical.noAxioms
    I cry foul here. Imagine a believer of the classical elements telling you that he just fetched a pail of water from the well. When you ask the guy what water is, he explains that it is the element that is cold and wet. Analogously, you object... there is no "water"; for "water" refers to an element that is cold and wet, and we don't have such things. The problem is, the guy did in fact fetch the stuff from the well. This I believe is your error.

    Slightly more analytical, the guy has a bad theory of water. When asked to describe what water is, the guy would give you an intensional definition of water that is based on the bad theory. It's proper to correct the guy and to say that there is no such thing as he described in this case; however, the guy is also ostensively using the term... the stuff in the well is an example of what he means by water. His bad theory doesn't make the stuff in the well not exist. So the guy is in a sense wrong about what water is, but is not wrong to have the concept of water. The stuff the guy goes out to fetch from the well really is there.

    You're objecting to an intensional definition of "I", which is simply based on a questionable theory of self... but you still have the extension to which "I" refers.
    No, that's the legal 'me' doing that. Any toaster has one of those. Any automaton can type a similar response in a thread such as this.noAxioms
    I've no idea what you mean by legal me, but the ostensive I to which humans refer is not something a toaster has. I can't comment on the automaton... the term's too flexible.
  • The important question of what understanding is.
    It's experience that makes you an individual.Daemon
    No, being agentively integrated is what makes me (and you) an individual. We might could say you're an individual because you are "of one mind".

    For biological agents such as ourselves, it is dysfunctional not to be an individual. We would starve to death as Buridan's asses; we would waste energy if we all had Alien Hand Syndrome. Incidentally, a person with AHS is illustrative of an entity where the one-mindedness breaks down... the "alien", so to speak, in AHS is not the same individual. Nevertheless, an alien hand illustrates very clear agentive actions, and suggests experiencing, which in turn draws a question mark over your notion that it's the experiencing that makes you an individual.
    In ordinary everyday talk we all anthropomorphise. The thermostat is trying to maintain a temperature of 20 degrees. The hypothalamus tries to maintain a body temperature around 37 degrees. The modem is trying to connect to the internet. The robot is trying to buy bananas. But this is metaphorical language.Daemon
    Not in the robot case. This is no mere metaphor; it is literally the case that the robot is trying to buy bananas.

    Imagine doing this with a wind up doll (the rules are, the wind up doll can do any choreography you want, but it only does that one thing when you wind it up... so you have to plan out all movements). If you try to build a doll to get the bananas, you would never pull it off. The slightest breeze turning it the slightest angle would make it miss the bananas by a mile; it'd be lucky to even make it to the store... not to mention the fact that other shoppers are grabbing random bananas while stockers are restocking with bananas in random places, shoppers are constantly walking in the way, and whatnot.

    Now imagine all of the possible ways the environment can be rearranged to thwart the wind up doll... the numbers here are staggeringly huge. Among all of these possible ways not to get a banana, the world is, and will evolve to be during the act of shopping, some particular way. There does exist some choreography of the wind up doll for this particular way that would manage to make it in, and out, of the store with the banana in hand (never mind that we expect a legal transaction to occur at the checkout). But there is effectively no way you can predict the world beforehand to build your wind up doll.

    So if you're going to build a machine that makes it out of the store with bananas in hand with any efficacy, it must represent the telos of doing this; it must discriminate relevant pieces of the environment as they unpredictably change; it must weigh this against the telos representation; and it must use this to drive the behaviors being enacted in order to attain the telos. A machine that is doing this is doing exactly what the phrase "trying to buy bananas" conveys.
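
    If it helps to see that contrast mechanically, here's a toy one-dimensional "store" (entirely my own construction, not anything previously posted): the wind-up doll executes a choreography scripted from a snapshot of where the banana started, while the agentive machine re-senses the banana every step and steers toward wherever it actually is:

        import random

        random.seed(1)
        SIZE, P_MOVE, TRIALS = 10, 0.3, 10_000

        def world_step(banana):
            """Restocking: with some probability the banana ends up somewhere else."""
            return random.randint(1, SIZE - 1) if random.random() < P_MOVE else banana

        def windup_trial():
            """The doll's whole choreography is fixed up front: walk to where the banana
            was at planning time, then grab. It never looks at the world again."""
            pos, banana = 0, random.randint(1, SIZE - 1)
            target = banana                          # scripted from the initial snapshot
            while pos < target:
                banana = world_step(banana)
                pos += 1
            return pos == banana                     # the blind grab

        def agent_trial(max_steps=30):
            """The machine re-checks the banana's actual position before every move."""
            pos, banana = 0, random.randint(1, SIZE - 1)
            for _ in range(max_steps):
                banana = world_step(banana)
                if banana != pos:
                    pos += 1 if banana > pos else -1
                if pos == banana:
                    return True                      # grabs it where it actually is
            return False

        print(sum(windup_trial() for _ in range(TRIALS)) / TRIALS)   # mostly fails: the world moved under the script
        print(sum(agent_trial() for _ in range(TRIALS)) / TRIALS)    # nearly always succeeds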

    I'm not projecting anything onto the robot that isn't there. The robot isn't conscious. It's not experiencing. What I'm saying is there is something that has to genuinely be there; it's virtually a derived requirement of the problem spec. You're not going to get bananas out of a store by using a coil of two metals.
    My contention is that a computer or a robot cannot understand language, because understanding requires experience, which computers and robots lack.Daemon
    And my contention has been throughout that you're just adding baggage on.

    Let's grant that understanding requires experience, and grant that the robot I described doesn't experience. And let's take that for a test run.

    Suppose I ask Joe (human) if he can get some bananas on the way home, and he does. Joe understands my English request, and Joe gets me some real bananas. But if I ask Joe to do this in Dutch, Joe does not understand, so he would not get me real bananas. If I ask my robot, my robot doesn't understand, but can get me some real bananas. But if I ask my robot to do this in Dutch, my robot doesn't understand, so cannot get me real bananas. So Joe real-understands my English request, and can real-comply. The robot fake-understands it but can real-comply. Joe doesn't real-understand my Dutch request, so cannot real-comply. The robot doesn't fake-understand my Dutch request but this time cannot real-comply. Incidentally, nothing will happen if I ask my cable modem or my thermostat to get me bananas in English or Dutch.

    So... this is the nature of what you're trying to pitch to me, and I see something really weird about it. Your experience theory is doing no work here. I just have to say the robot doesn't understand because it's definitionally required to say it, but somehow I still get bananas if it doesn't-understand-in-English but not if it doesn't-understand-in-Dutch, just like with Joe; but that similarity doesn't count because I just have to acknowledge Joe experiences whereas the robot doesn't, even though I'm asking neither of them to experience, just to get bananas. It's as if meaning isn't about meaning any more; it's about meaning with experiences. Meaning without experiences cannot be meaning, even though it's exactly like meaning with experiences save for the definitive having-the-experiences part.

    Daemon! Can you not see how this just sounds like some epicycle theory? Sure, the earth being still and the center of the universe works just fine if you add enough epicycles to the heavens, but what's this experience doing for me other than just mucking up the story of what understanding and agency means?

    That is what I mean by baggage, and you've never justified this baggage.
  • Do Chalmers' Zombies beg the question?
    There's no 'I' (a thing with an identity say) that's being me.noAxioms
    This phrase sounds suspicious. There's a me, but there's no I being me?

    Also, there's definitely an "I" there. Something typed an entire grammatically correct, if not coherent, response in this thread with a unified theme conveying some particular form of skepticism to zombies.
  • Do Chalmers' Zombies beg the question?
    My brain hurts now. I'll admit to having difficulties with the p-zombie argument when it comes time for the zombies to talk about consciousness.Marchesk
    Yeah, that's the real problem here. If qualia are epiphenomenal, how can we talk about them?
  • The important question of what understanding is.
    The cat wants something.Daemon
    I'm not sure what "want" means to the precision you're asking. The implication here is that every agentive action involves an agent that wants something. Give me some examples... my cat sits down and starts licking his paw. What does my cat want that drives him to lick his paw? It sounds a bit anthropomorphic to say he "wants to groom" or "wants to clean himself".

    But it sounds True Scotsmanny to say my cat wants to lick his paws in any sense other than that he enacted this behavior and is now trying to lick his paws. If there is such another sense, what is it?

    And why should I care about it in terms of agency? Are cats, people, dragonflies, or anything else capable of "trying to do" something without "wanting" to do something? If so, why should that not count as agentive? If not, then apparently the robot can do something we "agents" cannot... "try to do things" without "wanting" (whatever that means), and I still ask why it should not count as agentive.
    The robot is not capable of wanting.Daemon
    The robot had better be capable of "trying to shop and get bananas", or it's never going to pull it off.
  • The important question of what understanding is.
    You're wrong because the robot doesn't have a goal.Daemon
    Ah, finally... the right question. But why not?

    Be precise... it's really the same question both ways. What makes the robot not have a goal, and what by contrast makes my cat have a goal?
  • The important question of what understanding is.
    But a robot buying bananas is?Daemon
    Why not?

    But I want you to really answer the question, so I'm going to carve out a criterion. Why am I wrong to say the robot is being agentive? And the same goes in the other direction... why are you not wrong about the cat being agentive? Incidentally, it's kind of the same question. I think it's erroneous to say my cat's goal of following me around the house was based on thought.

    Incidentally, let's be honest... you're at a disadvantage here. You keep making contended points... like the robot doesn't see (in the sense that it doesn't experience seeing); I've never confused the robot for having experiences, so I cannot be wrong by a confusion I do not have. But you also make wrong points... like that agentive goals require thought (what was my cat thinking, and why do we care about it?)
  • The important question of what understanding is.
    But where do the goals come from, if not from "mere thought"?Daemon
    In terms of explaining agentive acts, I don't think we care. I don't have to answer the question of what my cat is thinking when he's following me around the house. It suffices that his movements home in on where I'm going. That is agentive action. Now, I don't think all directed actions are agentive... a heat seeking missile isn't really trying to attain a goal in an agentive way... but the proper question to address is what constitutes a goal, not what my cat is thinking that leads him to follow me.

    My cat is an agent; his eyes and ears are attuned to the environment in real time, from which he is making a world model to select his actions from schemas, and he is using said world models to modulate his actions in a goal directed way (he is following me around the house). I wouldn't exactly say my cat is following me because he is rationally deliberating about the world... he's probably just hungry. I'm not sure if what my cat is doing when setting the goal can be described as thought; maybe it can. But I don't really have to worry about that when calling my cat an agent.
  • The important question of what understanding is.
    It might be better to take a clearer case, as you drinking the coffee is agentive, which muddies the water a little.Daemon
    I've no idea why you think it muddies the water... I think it's much clearer to explain why shaking after drinking coffee isn't agentive yet shaking while I dance is. Such an explanation gets closer to the core of what agency is. Here (shaking because I'm dancing vs shaking because I drank too much coffee) we have the same action, or at least the same descriptive for actions; but in one case it is agentive, and in the other case it is not.
    Agency is the capacity of an actor to act. Agency is contrasted to objects reacting to natural forces involving only unthinking deterministic processes.Daemon
    Agentive action is better thought of IMO as goal directed than merely as "thought". In a typical case an agent's goal, or intention, is a world state that the agent strives to attain. When acting intentionally, the agent is enacting behaviors selected from schemas based on said agent's self models; as the act is carried out, the agent utilizes world models to monitor the action and tends to accommodate the behaviors in real time to changes in the world models, which implies that the agent is constantly updating the world models including when the agent is acting.

    This sort of thing is involved when I shake while I'm dancing. It is not involved when I shake after having drunk too much coffee. Though in the latter case I may still know I'm shaking, by updating world models, I'm not in that case enacting the behavior of shaking by selecting schemas based on my self models in order to attain goals of shaking. In contrast, in the former case (shaking because I'm dancing), I am enacting behaviors by selecting schemas based on my self model in order to attain the goal of shaking.

    So, does this sound like a fair description of agency to you? I am specifically describing why shaking because I've had too much coffee isn't agentive while shaking because I'm dancing is.
  • The important question of what understanding is.
    That isn't a difficult question.Daemon
    So answer it.

    The question is, why is it agentive to shake when I dance, but not to shake when I drink too much coffee? And this:
    Only conscious entities can be agentive, but not everything conscious entities do is agentive.Daemon
    ...doesn't answer this question.

    ETA: This isn't meant as a gotcha btw... I've been asking you for several posts to explain why you think consciousness and experience are required. This is precisely the place where we disagree, and where I "seem to" be contradicting myself (your words). The crux of this contradiction, btw, is that I'm not being "careful" as is "required" by such things. I'm trying to dig into what you're doing a bit deeper than this hand waving.

    I'm interpreting your "do"... that was your point... as being a reference to individuality and/or agency. So tell me what you think agency is (or correct me about this "do" thing).
  • The important question of what understanding is.
    But neither does a robot.Daemon
    You seem to be contradicting yourself.Daemon
    Just to remind you what you said exactly one post prior. Of course the robot interacts with bananas. It went to the store and got bananas.

    What you really mean isn't that the robot didn't interact with bananas, but that it "didn't count". You think I should consider it as not counting because this requires more caution. But I think you're being "cautious" in the wrong direction... your notions of agency fail. To wit, you didn't even appear to see the question I was asking (at the very least, you didn't reply to it) because you were too busy "being careful"... odd that?

    I'm not contradicting myself, Daemon. I'm just not laden with your baggage.
    What is this "do" you're talking about? I program computers, I go to the store and buy bananas, I generate a particular body temperature, I radiate in the infrared, I tug the planet Jupiter using a tiny force, I shake when I have too much coffee, I shake when I dance... are you talking about all of this, or just some of it?InPitzotl
    ...this is what you quoted. This was what the question actually was. But you didn't answer it. You were too busy "not counting" the robot:
    They aren't agents because they aren't conscious, in other words they don't have experience.Daemon
    I'm conscious. I experience... but I do not agentively do any of those underlined things.

    I do not agentively generate a particular body temperature, but I'm conscious, and I experience. I do not agentively radiate in the infrared... but I'm conscious, and experience. I do not agentively shake when I have too much coffee (despite agentively drinking too much coffee), but I'm conscious, and experience. I even am an agent, but I do not agentively do those things.

    There's something that makes what I agentively do agentive. It's not being conscious or having experiences... else why aren't all of these underlined things agentive? You're missing something, Daemon. The notion that agency is about being conscious and having experience doesn't work; it fails to explain agency.
    When your robot runs amok in the supermarket and tears somebody's head off, it won't be the robot that goes to jail.Daemon
    Ah, how human-centric... if a tiger runs amok in the supermarket and tears someone's head off, we won't send the tiger to jail. Don't confuse agency with personhood.
    If some code you write causes damage, it won't be any good saying "it wasn't me, it was this computer I programmed"Daemon
    If I let the tiger into the shop, I'm morally culpable for doing so, not the tiger. Nevertheless, the tiger isn't acting involuntarily. Don't confuse agency with moral culpability.
    I think you know this, really.Daemon
    I think you're dragging a lot of baggage into this that doesn't belong.
  • Do Chalmers' Zombies beg the question?
    Would Chalmer's P-zombie twin also have the same evolutionary history as Chalmer?RogueAI
    Zombies are functionally equivalent to conscious entities. Generically, different entities have different evolutionary histories (because "you count to two when you count them"), but given the functional equivalence clause in the definition, any treatment of p-Chalmers as saying something Chalmers says is by definition fair game.
  • Do You Believe In Fate or In Free-Will?
    Now, Fate is defined as: “the development of events beyond a person’s control, regarded as determined by a supernatural power.” And, as for Free-Will, this is defined as: “the power of acting without the constraint of necessity or fate; the ability to act at one’s own discretion.”Lindsay
    Generically, I reject fate outright. I'm agnostic about determinism. And I'm agnostic about free will. This definition of fate roughly fits the concept of fate that I reject. This particular definition of free will I have conceptual issues with, as required to fit my free will agnosticism. So I don't quite mesh well with the fate/free will yin/yang concept here.

    Roughly speaking, I view fate as the idea that some future event will occur regardless of what happens; this in stark contrast to determinism, which is the idea that some future event will occur because of what happens. Determinism is perfectly viable for me; fate just seems silly (to me).

    Regarding the notion of free will here, I'm not much of a "principle of alternate possibilities" type of guy... this kind of makes the "necessity" angle tough to speak to (if I act at my own discretion, I would still necessarily do what I do; think from a time perspective... if I act of my free will tomorrow at this exact time, regardless of what "free will" means; then two days from now there's only one thing I could have done at that time, and tomorrow that is the thing I necessarily would do, since there's no such thing as an actual event other than the event that occurs... like I said, I'm not much of a PAP guy).
  • The important question of what understanding is.
    You seem to be contradicting yourself.Daemon
    I'm pretty sure if you understood what I was saying, you would see there's no contradiction. So if you are under the impression there's a contradiction, you're missing something.
    The other day you had a robot understanding things, now you say a computer doesn't know what a banana is.Daemon
    the CAT tool still wouldn't know what a banana is.InPitzotl
    Your CAT tool doesn't interact with bananas.
    I've been saying from the start that computers don't do things (like calculate, translate), we use them to do those things.Daemon
    What is this "do" you're talking about? I program computers, I go to the store and buy bananas, I generate a particular body temperature, I radiate in the infrared, I tug the planet Jupiter using a tiny force, I shake when I have too much coffee, I shake when I dance... are you talking about all of this, or just some of it?

    I assume you're just talking about some of these things... so what makes the stuff I "do" when I do it, what I'm "doing", versus stuff I'm "not doing"? (ETA: Note that experience cannot be the difference; I experience shaking when I have coffee just as I experience shaking when I dance).
  • The important question of what understanding is.
    I thought some examples of Gricean Implicature might amusingly illustrate what computers can't understand (and why)Daemon
    I think you're running down the garden path.

    I'm a human. I experience things. I also understand things. I can do things like play perfect tic tac toe, go to the store and buy bananas, and solve implicature puzzles.

    I'm also a programmer. I have the ability to "tell a computer what to do". I can easily write a program to play perfect tic tac toe. Not only can I do this, but I can specifically write said program by self reflecting on how I myself would play perfect tic tac toe; that is, I can appeal to my own intuitive understanding of tic tac toe, using self reflection, and emit this in the form of a formal language that results in a computer playing perfect tic tac toe.
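
    To make that concrete, here's roughly the kind of thing I mean: a minimal sketch of a perfect player via plain negamax search (the details here are mine, written for this post, not anything previously shared):

        from functools import lru_cache

        LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),
                 (0, 4, 8), (2, 4, 6)]

        def winner(board):
            for a, b, c in LINES:
                if board[a] != " " and board[a] == board[b] == board[c]:
                    return board[a]
            return None

        @lru_cache(maxsize=None)
        def best_move(board, player):
            """Score the position for the player to move: +1 win, 0 draw, -1 loss."""
            opponent = "O" if player == "X" else "X"
            if winner(board) == opponent:                # the previous move just won
                return -1, None
            moves = [i for i, cell in enumerate(board) if cell == " "]
            if not moves:
                return 0, None                           # full board, no winner: draw
            best = (-2, None)
            for move in moves:
                child = board[:move] + player + board[move + 1:]
                score = -best_move(child, opponent)[0]   # what's good for them is bad for me
                if score > best[0]:
                    best = (score, move)
            return best

        print(best_move(" " * 9, "X"))   # (0, 0): perfect play from an empty board is a draw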

    But by contrast, to write a program that drives a bot to go to the store and buy bananas, or to solve implicature puzzles, is incredibly difficult. Mind you, these are easy tasks for me to do, but that tic tac toe trick I pulled to write the perfect tic tac toe player just isn't going to cut it here.

    I don't think you're grasping the implication of this here. It sounds as if you're positing that you, a human, can easily do something... like go to the store and buy bananas, or solve implicatures... and a computer, which isn't a human, cannot. And that this implies that computers are missing something that humans have. That is the garden path I think you're running down... you have a bad impression. It's us humans that are building these computers that have, or don't have as the case may be, these capabilities. So when I show you my perfect tic tac toe playing program, that is evidence that humans understand tic tac toe. When I show you my CAT tool that can't even solve an implicature problem, this is evidence that humans have not solved the problem of implicature.

    And maybe they will; maybe in 15 years you'll be surprised. Your CAT tool will suddenly solve these implicatures like there's no tomorrow. But that just indicates that programmers solved implicatures... the CAT tool still wouldn't know what a banana is. How could it?

    The whole experience thing is a non-sequitur. I have just as much "experiencing" when I write tic tac toe as I do when I fail to make a CAT tool that solves implicatures. I don't think that if I knew how to put experiences into the CAT tool, this would do anything to help it solve implicatures. I certainly don't make that perfect tic tac toe player by coding in experiences. It's really easy to say humans have experiences, humans can do x, and computers cannot do x, therefore x requires experiences. But I don't grasp how this can actually be a justified theory. I don't get what "work" putting experiences in is theorized to do, such that it would pull off what it's being claimed to be critical for.