Comments

  • on the matter of epistemology and ontology
    That was known, at one point in history, as 'metanoia', although that is now usually translated simply as 'repentance', thereby blurring the distinction between insight and belief. Originally it meant 'mental transformation' or something like a cognitive shift.Wayfarer
    Yes. Christianity has a similar trope. So does Islam. My point is that in Buddhism the shift is not merely cognitive. It's very complicated.
  • What is Simulation Hypothesis, and How Likely is it?
    Most of the opponents of machine consciousness simply refuse to use the word to describe a machine doing the same thing a human is doing.noAxioms
    I don't think this is Lewis Carroll's tortoise arguing with Achilles. Understanding this is at the heart of the problem. We need to be much more careful about what "doing" means in the context of planets and the weather and in the context of people. People and inanimate objects are not in the same category, which means that understanding planets or the weather and understanding people involve different language-games. Machines have a foot in both camps. The answers are not obvious.

    Ditto for the thermostat. It doesn't react any more to the sensory input other than to convey a signal. So maybe my boiler is crudely conscious because it processes the input of its senses.noAxioms
    My boiler, on its own, is clearly not conscious, even if it contains a thermostat to switch it off when the water is sufficiently hot. Neither is the thermostat that switches it on. Neither keeps the house warm. What keeps the house warm (not too hot and not too cold) is the entire system, including the water, the pump and the radiators, with its feedback loops, and not any one component. You can call the system "crudely conscious" if you like, but I think few people will follow you. But you are right that it is in some ways like a conscious being. (There is a toy sketch of the loop below.)
    A computer is arguably more like a conscious being, though it is probably too rational to count as one. AI is more like one. There's no simple, once-for-all distinction.
    One reason why it is so hard is that it is not just a question of fact about the machine (putative person) but also of how we treat them. So there's a circularity in the debate.
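    Here, for what it's worth, is the toy sketch - entirely my own invention, with made-up numbers: the thermostat only flips a switch and the boiler only heats, yet the loop as a whole holds the temperature near the setpoint.

    ```python
    # A toy model (my own, not anything from the thread): thermostat + boiler + room.
    # No single component keeps the house warm; the feedback loop does.

    def step(temp, heater_on, setpoint=20.0, hysteresis=0.5):
        """One time-step of the loop."""
        # The thermostat does nothing but convey a signal.
        if temp < setpoint - hysteresis:
            heater_on = True
        elif temp > setpoint + hysteresis:
            heater_on = False
        # Crude house physics: heat in from the boiler, heat out to the street.
        temp += (1.0 if heater_on else 0.0) - 0.3
        return temp, heater_on

    temp, heater_on = 15.0, False
    for _ in range(50):
        temp, heater_on = step(temp, heater_on)
    print(round(temp, 1))  # hovers near the setpoint - a property of the whole loop
    ```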

    If I could experience the real world, then be hooked up to a machine that simulates the same thing I have experienced, seamlessly, that I would not be able to tell the difference, then the theory has made its point.L'éléphant
    If that's the point, we don't need the theory. We all experience dreams from time to time. And we know how to tell the difference. But we can't tell the difference while we are dreaming. What's so exciting about the theory?
  • on the matter of epistemology and ontology
    I believe the important philosophical perspective they bring is that of non-dualism.Wayfarer
    Yes, but there is also the idea that understanding requires training the mind - or maybe even reconstructing it. (I mean, by meditation, of course) Christianity, it seems to me, talks a great deal about belief and so presents itself as primarily a matter of doctrine. (Judaism emphasizes law, Islam acceptance, and so on.) This is complicated and not a sharp distinction, but the emphasis is there and sets these views apart from Western empiricism and rationalism.

    Encounter a bank teller and think of all that comes to mind in terms of what a bank teller qua bank teller is, and you will have a list of all a bank teller Does.Astrophel
    This is the difference between what a bank teller IS and what a bank teller DOES. Popper, in the Open Society, identifies this difference as part of the difference between science and (some kinds of) philosophy. (Maybe in other places as well - I just don't know.) It seems to me a very important difference.
  • Bugs: When the Rules are Wrong
    I agree that its feedback, but isn't feedback a sort of circular causality?Count Timothy von Icarus
    Yes. It's like the difference between parasitism and symbiosis. To put it this way, they are the same phenomenon, except that parasitism damages the host, and symbiosis benefits (or at least does not damage) the host. Or think of the difference between murder and execution, which both mean killing, but in different contexts, or with different evaluations.

    More helpful than what? A systems view of language? But then it's always been obvious that language is a social practice and this alone doesn't really elucidate any of the big questions in philosophy of language.Count Timothy von Icarus
    Yes, more helpful than a systems view of language. I'm not sure what was obvious before Philosophical Investigations and How to Do Things with Words. Part of the point of such views is that they encourage us to consider the possibility that (some of) the big questions in the philosophy of language are the result of the systems view of language. We could call them bugs.

    Attempts to unpack what "social practices" are seem to lead to more questions. E.g., if rule following is just based on "the expectations of others," what are we to make of apparent rule following in animal behavior, biology, and "law-like" behavior in nature? Are these different sorts of rules?Count Timothy von Icarus
    Wittgenstein took us only so far, leaving us to take the idea further. These are really interesting questions. I'm not dogmatic about the answers. But surely the "higher" mammals, at least, are capable of responding to the expectations of others, because of the way they interact with us. Whether the same applies to, for example, the social insects or schools of fish is another question. Lots of different cases, no expectation of a tidy distinction.
    The short answer to your last question is Yes, of course.

    I think this is a good classification. Although, they can also blend together a bit. E.g., the black swan causes us to discover the Type 2 problem, or the Type 2 problem opens up the possibility of exploiting incoherencies in a system.Count Timothy von Icarus
    :smile: Of course they blend and interact. I regard that as a feature, not a bug.

    Kripke, unlike the later Wittgenstein, could not accept the non-existence of a universal and shared semantic foundation.sime
    I have the impression that Wittgenstein did think that "way of life" and "human practices" gave a shared context. If they don't, how could he think they explained how come we agree about the interpretation (application) of a rule?

    For Wittgenstein, any assertibility criteria can be used for defining the meaning of 'grasping' a rule, and not necessarily the same criteria on each and every occasion that the rule is said to be 'used'. And a speaker is in his rights to provide his own assertibility criteria for decoding what he says, even if his listeners insist on using different assertibility criteria when trying to understanding the speaker's words.sime
    On the other hand, you are quite right that human life is as much the stage for divergence and disagreement as a shared basis of consensus. The importance of the idea is that human life is both a basis for agreement and the common ground that is necessary for divergence and disagreement to develop.
  • on the matter of epistemology and ontology
    I feel like too much is dismissed as unknowable because it can't be formalized in static systems, as if the limit of current modeling abilities is the limit of knowledge. Sort of like how many in physics say the universe must be computable because we lack an understanding of how things would be "decidable" otherwise.Count Timothy von Icarus
    I am pretty confident that the first sentence is right. As to the second sentence, I find myself considering the possibility that the two concepts of decidability and computability may be defined in terms of each other. If they are not, then I'm rather unclear what they mean.
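    For what it's worth, the textbook recursion-theory definitions do run the one concept through the other (this is the standard formulation, not anything from the thread): a set of natural numbers is decidable just in case its characteristic function is computable by some Turing machine (or any equivalent model),

    $$ S \subseteq \mathbb{N} \text{ is decidable} \iff \chi_S \text{ is computable, where } \chi_S(n) = \begin{cases} 1 & \text{if } n \in S \\ 0 & \text{if } n \notin S \end{cases} $$

    and computability is in turn cashed out by a chosen model of computation, which is where the Church-Turing thesis comes in. So if they were not inter-defined in something like this way, it would indeed be unclear what they mean.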

    Once we locate the proximate source of meaning in social practices, the obvious next question is "what causes those practices to be what they are?" I find some phenomenological explanations of how predication arises quite plausible, but then these lead to the question: "why is human phenomenology this way?"Count Timothy von Icarus
    Yes, that's part of what I'm saying. Any proposed foundation will generate a question why that is so. There are only two ways to stop the regress - first, find an indubitable, self-evident, axiomatic starting-point or second, turn the regress into a loop. Neither is very satisfactory. On the other hand, I don't find it troubling that there will always be unanswered questions, or that our explanations are incomplete and, no matter how fast we run, we will never arrive at the Grand Theory of Everything. None of that means that what we call following a rule is not the result of human practices and way of life.

    If we say, "well the natural world is involved in meanings, as well as human cognitive architecture, the phenomenology of human experience, intentionality, and purpose," though, which I think we must, then the role of social practices seems to slide back towards the merely obvious.Count Timothy von Icarus
    I'm not sure that this is much of an objection to what Wittgenstein is trying to do - assembling reminders to enable us to find the way out of the bottle. Like the fly, once we've seen the way out, it is obvious. He starts on the basis that everything is in plain sight. Actually, this sounds like the well-worn "trivial or false" dilemmas that analytic philosophers used to be so fond of.

    It's easier to have destructive certainties when you allow them to sit apart from one another, and so to selectively decide where reason applies. So, yes we should be skeptical of certainties, but we should also not be terrified of them.Count Timothy von Icarus
    Yes. I think that Hume is very sensible when he distinguishes between judicious or moderate scepticism and radical or Pyrrhonic scepticism. (He thinks the former is necessary and wise and the latter is unhinged; he recommends a month in the country for anyone suffering from it.)
    I think that a parallel critique of certainty is entirely appropriate. A judicious and moderate certainty is indeed wise, but a radical and dogmatic certainty is not only divisive (though, let's be blunt about this, people love a fight, especially when they can join in) but also unlikely to stand up to the test of debate.
    When I asked whether we have a sound basis for making large-scale judgements about movements of ideas in the past - especially the distant past - I did intend the question as a reminder of the complications involved in reading those texts and the need for caution in evaluating them. I was particularly exercised by what appeared to be Heidegger's nostalgia for scholastic philosophy and by doubts about how far it is reasonable to apply modern philosophical ideas to what are much more like religious texts than what we would think of as philosophy. I know we think we can separate the two, but I'm not sure how appropriate that is. It depends, I suppose, on what the project is.
    I have to admit, however, that I have a prejudice about any pronouncement about History or Culture (Ancient or Modern). The grand and large scale too often sweeps aside nuance and detail and creates distortions in doing so.
  • What is Simulation Hypothesis, and How Likely is it?
    Keep in mind that I am not supporting the simulation hypothesis in any form. I'm looking for likely ways to debunk it, but in the end, there can be no proof.noAxioms
    Thank you for telling me that. It helps a lot.
    I think that sounds like magic, but everyone else is taking it seriously,RogueAI
    I agree with you, though I would describe it as hand-waving. I agree also that sometimes it is best to roll with the punch if someone takes an idea seriously and I don't. I've done it myself. It may not result in them changing their mind, but it does allow some exploration and clarification.

    You and the frog both make your own decisions, not some AI trying to fool the subject by making a frog shape behave like a frog.noAxioms
    So if I miniaturized the AI hardware and grafted it into the frog, it becomes a simulation instead of a VR?

    Conway's Game-of-Life (GoL) is not in any way derived from the world in which we live, so there's a counterexample to that assertion.noAxioms
    What made the game? Though I grant you, it is quite different from the kinds of simulation we have been talking about, and far from a VR. But it is an abstraction from the world in which Conway - and you and I - live.
    There's an ambiguity here. There's a sense of "world" in which it comprises everything that exists. There are other similar words that aim to capture the same or similar ideas - "universe", "cosmos" and in philosophy "Reality", "Existence". There is another sense in which we speak of "my world" and "your world" and "the lived world" or "the world of physics" or "the world of politics". I thought we were using "world" in the first sense.
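    To make concrete how spare the game is, here is a minimal sketch of Conway's rules in code - my own toy version of the standard rules (a live cell survives with 2 or 3 neighbours, a dead cell is born with exactly 3):

    ```python
    # Conway's Game of Life in a few lines (my own sketch of the standard rules).
    from collections import Counter

    def step(alive):
        """alive: a set of (x, y) live cells. Returns the next generation."""
        neighbour_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in alive
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        return {cell for cell, n in neighbour_counts.items()
                if n == 3 or (n == 2 and cell in alive)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    print(step(glider))  # the pattern goes on its way; no cars, planets or weather in it
    ```

    The only things of ours in it are the abstractions themselves - a grid, counting, a rule - which is rather the point at issue.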

    The intent in that example (sc. the simulation of a car crash) is to find an optimal design based on the simulation results. Not so under GoL.noAxioms
    I agree. I can't answer for Conway's intent, but it looks to me as if the intent is to explore and play with the possibilities of a particular kind of system. In which it has definitely succeeded, in most interesting ways.

    Those (sc. Star Trek and Star Wars) are not simulations. Heck, the physics of those worlds are both quite different than our ownnoAxioms
    Well, I would say that those films are simulations of a fantasy scenario/world. But I'm not fussed about the vocabulary here. I am fussed about the idea that they have no connection with the actual world. That is simply false. For a start, there are human beings in them, not to mention space ships, planets and suns. As to the physics being different, that doesn't seem to bother people like Hume ("the sun might not rise tomorrow morning") or Putnam ("Twin Earth"). We can, after all, imagine that physics is different from our current one, and, believe it or not, there have been people who did not believe in our physics, but something quite different. Perhaps there still are.

    Yes, that's the idea (one of them) (sc. the idea that VR might become good enough to deceive people) under consideration here. How do you know it's false? Just asserting it false is beyond weak.noAxioms
    Yes, there may be a need to say more. But the idea that VR might be used to deceive people itself presupposes that what is presented by the VR is not real. What might be more troublesome is a VR that re-presented the actual world around the wearer. Pointless, though there might well be a use for it in some medical situations. On the other hand, it couldn't work unless it was possible for the wearer to actually (really) act.

    Clearly, we know that human beings are persons without knowing (in any detail) about their internal physics. - Ludwig V
    The idealists for one would disagree with this.noAxioms
    I have the impression that idealists do not think that human beings have any internal physics. (Do they even think there is any such thing as physics?) I was not taking that issue into account, but was assuming a shared background assumption that we could call common sense. Are you an idealist?
  • What is Simulation Hypothesis, and How Likely is it?
    if not all people/creatures are conscious in the same way (a process running the same physics).noAxioms
    I'm not sure about whether or in what way the actual physics of the person/computer are relevant. Clearly, we know that human beings are persons without knowing (in any detail) about their internal physics. On the other hand, the commentary on the current AIs seems unanimous in thinking that the details of the software are relevant.

    OK, if not all the people are simulated the same, then the ones that are not (the NPC's) would be fake, not conscious,noAxioms
    One needs to specify what "the same" means here. Otherwise, any difference between people (such as brain weight or skin colour) could lead to classifying them as not conscious, not people. I'm sorry, what are NPCs?

    Sort of. Yes, they have a model. No, it isn't a model of something that exists. There isn't a 'real thing' to it.noAxioms
    Yes, there is an issue here. We can, of course, construct imaginary worlds, and most of the time we don't bother to point out that they are always derived from the world we live in. As here, we know about real cars that really crash and what happens afterwards (roughly). That's the basis that enables us to construct and recognize simulations of them. "Star Trek" and "Star Wars" are extensions of that ability.

    The skull-vat view does not feed the mind a set of artificially generated lies. VR does.noAxioms
    That's a bit unfair, isn't it? We know quite well what is VR and what is not, so it is clearly distinguishable from reality. Nobody pretends otherwise. Of course, we can frighten ourselves with the idea that a VR (in some unimaginably advanced form) could be used to deceive people; "The Matrix" is one version of this. But, unless we are straightforward positivists or followers of George Berkeley, the difference between VR and reality is perfectly clear, and the problem is no different from the problem of how we tell dreams from reality.
  • on the matter of epistemology and ontology
    Quite. I'm not pushing for a return to a golden past. It's more along the lines of a forgotten wisdom.Wayfarer
    It's one thing to retrieve the wisdom. It's quite another not to bring back the fool's gold with it. Effective panning is essential. And then I wonder whether you can have one without the other.

    he'd be lionizedWayfarer
    That would explain why he's so hard to understand.
  • on the matter of epistemology and ontology
    Wittgenstein said in his Lecture on Ethics that, say a man's head turns suddenly into a lion's head. We would all be shocked, suspect a miracle; that is, until science got a hold of it and a discovery, perhaps something completely new, was measured, compared, tested in different environments, etc. And if this were simply not explainable because the results defied the repeatability requirement of science, then this, too, would be admitted and normalized. We would call this "chaos". There are many things called chaos by science.Astrophel
    Thank you very much. I didn't know that Wittgenstein articulated this thought.
  • What is Simulation Hypothesis, and How Likely is it?
    The simulation hypothesis has nothing to do with an imitation of a person, which would be an android or some other 'fake' human.noAxioms
    The "simulation hypothesis" is indeed quite different from the hypothesis that there are imitations of people around. I'm not quite sure that it has "nothing to do" with fake people.

    What if this is all a simulation and everyone you think is conscious are really NPC's? — RogueAI
    RogueAI is probably not suggesting an imitation person here.noAxioms
    The simulation hypothesis proposes that what humans experience as the world is actually a simulated reality, such as a computer simulation in which humans themselves are constructs."noAxioms
    On the face of it, this looks like a generalization from "there are some fake, imitation, simulated people around" to "everything is a simulation".
    One complication is that we have a forest of similar concepts that work in the same space. Teasing out the differences between an imitation, a fake, a forgery, a pretence, a simulation, etc. would be very complicated. But I think that some general remarks can be made.

    It is undoubtedly true that any money in your pocket could be forged. But it does not follow that all money everywhere at all times might be forged. On the contrary, a forgery can only be a forgery if there is such a thing as the real thing.

    In all of these cases, there is always a question what is being imitated or forged or whatever. We should never use these words on their own. We should always specify what a simulation or imitation is a simulation of..., which means specifying what a real example is of the thing you are simulating.

    Simulating or imitating a reality is simulating everything. So what is it a simulation of? To put it another way, what is the reality that is being simulated? Reality is a totalizing concept and so not an object like a picture or a tree or a planet. "Simulate" does not apply here.

    mathematical universe hypothesis,noAxioms
    What empirical evidence could possibly confirm or refute this? I don't see this as a hypothesis at all, but as a methodological decision. In the 17th century, physicists decided to eject anything that seemed incapable of mathematical treatment, so colours and sounds were banished to the mind, placed beyond the scope of science. Science did not need those hypotheses.

    simulation is simply an explicit execution of an approximation of those laws, on a closed or open system.noAxioms
    So how does a simulation differ from reality?
    They perform for instance simulations of car crashes at the design phase, the result of which eventually generates a design that is safer.noAxioms
    Fair enough. But in those cases, it is clear what the simulation is a simulation of. We know what the real thing is. As you say, this has nothing to do with a simulation of everything.

    I'm afraid I don't have the time to respond in detail to what you say about actual simulation and virtual reality. Perhaps later. I'll just say that, so far as I can see, the BIV hypothesis either presupposes the existence of normal reality or describes all of us right now. (The skull is a vat.)
  • on the matter of epistemology and ontology
    Metaphysics, in that context, is not a dry textbook of scholastic definitions and dogmas, but a grounding vision, a way of being-in-the-world, but one that has been long forgotten, on the whole.Wayfarer
    something goes dreadfully wrong when man reaches for a divided and false certainty, and this was certainly as true for Counter Reformation figures as well.Count Timothy von Icarus
    In this context, do we really have a basis for making these judgements? I've no problem with the idea that the Enlightenment is not perfect, and perhaps it has run its course. But when I think about what preceded it, I do not find myself longing to return to the Good Old Days. So we find ourselves trying to work out the Next Thing, avoiding the mistakes of the Last Thing.

    But
    something goes dreadfully wrong when man reaches for a divided and false certaintyCount Timothy von Icarus
    So perhaps we should be very careful, and sceptical of certainties.

    When you look for the causes of practices, there is nothing concrete to point to behind them, no essences to inform what it is that rules might be used to point out.Count Timothy von Icarus
    Aren't practices and ways of life ("This is what I do") foundations for Wittgenstein at least? If they are, your question does arise, as it always does for any foundation. For some, it leads us to a change of discourse, to naturalistic ideas about human beings, social animals finding their way through the "real" world. But that seems to be where we came in!

    I think it might be more useful to say that there are general principles that are essential to making the scientific method work that are also relevant to statistics, probability theory, perception, Hebbian "fire-together-wire-together" neuronal activity, and how physical information works at a basic level.Count Timothy von Icarus
    Do there have to be general principles as such? Should we not change the model and think of something more dynamic, more evolutionary?
  • Bugs: When the Rules are Wrong
    I suppose my next question is what is flawless without bugs? Even concepts of the most perfect thing "God," are riddled with bugs. "everything being a quick and dirty fix," I like that for a little comedy routine I've been toying with, it Starts off with "Y'all are some Naaaassty motherfuckas..." Ty for the inspiration.Vaskane

    Well, perhaps, "without bugs" is an ideal, a target that may well not be achievable.

    Wittgenstein identifies what could be considered a bug with that idea - that we cannot think in such a world. "Back to the rough ground".
  • Bugs: When the Rules are Wrong
    Our brains seem bugged when it comes to communication. Or perhaps it's both language and brain?Vaskane

    Maybe the expectation that either is a coherent system is a mistake. Evolution only requires that the systems work in normal circumstances. So quick and dirty ways of arriving at answers will survive so long as they work for the creature on which they are running.

    I sometimes wonder whether "This sentence is false" or "the set of all sets that are not members of themselves" are bugs in logic, or language or in our brains. Or possibly whether they are not bugs in the sense that they need to be fixed - more like "0" and divisibility, we just need a rule to give us (or maybe refuse to give us) the answer.
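    The comparison with "0" can be made concrete in code (my own illustration, nothing more): what to do about dividing by zero is settled by a stipulated rule, and different rules are in use - one refuses to give an answer, another stipulates one.

    ```python
    # Two stipulated rules for 1/0 - neither is a discovery about arithmetic.
    x, y = 1, 0

    try:
        print(x / y)                # Python's rule: refuse to give an answer
    except ZeroDivisionError:
        print("no answer, by rule")

    # An IEEE-754-style rule instead stipulates an answer: call it infinite.
    print(float("inf") if y == 0 else x / y)
    ```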
  • Bugs: When the Rules are Wrong
    I also think the phenomena of "wrong" rules is a reminder that rules ultimately are also defined and refined in terms of their purposes, which is easy to recall with games, but harder with natural language and mathematics.Count Timothy von Icarus

    "Purposes" has to be interpreted liberally here. It will work in some cases, but not in all. For example, I don't think it works very well in the context of a game. (An "exercise" in the context of teaching or training or practising is different.) I prefer to think of the "point" of the rules. The problem with the work-around you identify is that if only some people know about it, it is an unfair advantage, and if everyone knows about it, it make the "stun" feature pointless.

    This ends up circular, but not in a bad way. Words have their meanings because of how people use them, but then how people use words ends up being driven (in at least some cases) by what people want/intend them to mean.Count Timothy von Icarus
    This isn't circularity. It's feedback.

    Also, video games have the benefit of having canonical rules that are faithfully executed by a computer.Count Timothy von Icarus
    In a sense, that's true. But both the software and hardware are designed and built to produce certain results which are meaningful in the context of human life and practices. So the ultimate foundation that Wittgenstein arrives at - "This is what I do" - does apply.

    Natural language bugs will necessarily be hard to define due to the lack of canonical rules and faithful execution.Count Timothy von Icarus
    I like your example here. As a matter of style, there's a lot to be said for avoiding the passive, just because it is vague, unless it is unavoidable. It's to be expected that natural language will be messy and complicated, and also that people will find and adopt ways of using it in problematic ways. Although one might argue that ambiguity is often useful or desirable in pragmatic ways. Of course, whether those uses are desirable or not will depend on your point of view.

    The most profound consequence of all of this is that it tells against the approach to language as a complete, consistent structure with its own metaphysical existence. That model is not wrong - we all learn language as a pre-existing practice, and the model can be useful, for example in logic or linguistics - but it cannot be fully achieved. The view that language is a practice amongst human beings and part of the human way of life is more helpful in many ways.

    I'm not a fan of systematic analyses, but perhaps we could distinguish between three different kinds of problem here.
    1. One is issues caused when a difficult or anomalous case turns up in the world. The discovery of black swans or of platypuses.
    2. Another is the kind of discovery that has been so much in evidence in mathematics - irrational numbers, etc. The problem of what to do about "0" is perhaps not quite the same, but shares the feature that the standard rules don't apply. But it is the rules themselves (given the standard interpretation of them) that produce the result.
    3. A third is where people take advantage of (misuse) the rules to achieve something that is not strictly relevant to them. The passive voice is one example, and the "fix" for your bug seems to me to be another.
  • on the matter of epistemology and ontology
    I'm sorry I haven't been able to reply to you, but it seems that the moment has passed and the discussion has moved on - in interesting ways.

    Whatever existence is is bound analytically to the saying it is.Astrophel
    Yes, that's the point that one keeps coming back to - even if one thinks about different ways of using language.

    But knowing does not give one the kind of "ontological" intimacy you seem to be suggesting. To knowledge, the world will remain transcendental. There is my cat, that lamp, that fence post over there, and here am I. Nothing is going tp bridge that distance, no matter how one theorizes epistemic relations. I know that they exist, but I don't know what that means. This is because language is pragmatic: in perceptual events I DEAL with the world, and meaning is bound up in this.Astrophel
    This seems to me the right way to approach the problem. Is it too brutal to observe that the description of the cat is not the cat? Why should it be? It would be pointless if it were. But when we are dealing with the cat, interacting with it, it is the cat we are interacting with, and not a description of it. Is describing the cat inter-acting with it? Clearly not in the sense required to state the problem. To accept a sense of interaction that includes description as interaction is to dissolve the problem by definition and will satisfy no-one.
  • on the matter of epistemology and ontology
    I am using Language as broadly as one can imagine, to include all images, representations, signifiers etc., if there are ceteras, stored in memory/History and structuring what we--philosophers and laity alike--think of as human experience.ENOAH
    It sounds as if Language is a real rag-bag. But I'm guessing that you are relying on the structure of signifier and signified as the common element. But, in this use, it doesn't help: the effect of the way you use these concepts is to smother differences that seem important to me. Pictures are very different from descriptions, just because they are representations of something; descriptions, in my unorthodox view, are not representations at all; maps and diagrams are half-way houses between the two; signalling flags are a code; they are more like words, but not the same. Words are not all of a piece either; the numeral "1" stands in a very different relationship to its signified from "horse"; "walking" signifies something very different from either - and so on. You may think the differences don't matter. We'll see.
    An important point for me is that "language" (and "logic") can seem to be something that exists in its own right, in some way, and when we learn it, it does seem so. But though sentences may exist independently of speakers, in the sense that they can be written down, they are, like propositions, lifeless until they are asserted or denied. The life of language is in its use by speakers, and their use is what maintains or modifies it.

    I am using History to refer to the collective of these Signifiers operating on the Natural World beyond the individual body, and constructing Narratives beyond individual personalities, all of which moves autonomously in accordance with evolved Laws and Dynamics, is inter-permeable or accessible to Itself in spite of embodiment, is ultimately Fictional, and though it affects Realty via embodiment and the manipulation of resources into Culture, it has no access whatsoever to knowing Reality, despite all of our (Its own) efforts to prove it wrong.ENOAH
    Like language, history is a mixed bag. But that's not my main problem here. My problem is that I simply don't follow what you say and in any case, I'm not at all sure that there are laws of history. Certainly, since it normally takes the form of a narrative, which does not present us with any laws, the idea must be problematic. But the biggest issue is that much history is about people. You seem to regard it as an independent actor. It's as if you were telling me about the army going to war, rather than people going to war.

    Both these responses to you are well summarized by Astrophel when he says:
    It's Kierkegaard who complained that Hegel had "forgotten that we exist."Astrophel

    However, the opening of your paragraph - "I am using History to refer to the collective of these Signifiers operating on the Natural World beyond the individual body, and constructing Narratives beyond individual personalities" - is very promising. But then you conclude with "it has no access whatsoever to knowing Reality, despite all of our (Its own) efforts to prove it wrong." Either you mean by "reality" what you mean by "natural world" or you don't. If you do mean the same, you are contradicting yourself; if you don't, I have no idea what you mean.

    If its Reality you want, just breathe.ENOAH
    This is a jewel. I know I could argue that if I obediently breathe, language has put me in touch with reality. But you remind me of the Zen masters who will reply to questions like "what is reality?" by offering you a cup of tea. Perhaps we should share one and stop worrying so much. Or am I misunderstanding you?

    All we can say regarding the Truth of this hypothetical in Reality is the Organism seeing. It is in the Organism do-ing, be-iing, see-ing , is-ing, all of which "exists" in presence, in is-ing/be-ing, which is True.ENOAH
    I can just about get my head around this. But you said earlier:-
    But with the advent of uniquely human Consciousness or Mind, "seeing" is immediately displaced by "perceiving." That is, it is displaced by the Signifiers re-constructing the sensation with its Narrative.ENOAH
    I don't see why you can't count perceiving as just one of the activities of human beings. Good, bad or indifferent as signifiers may be, they are also real and part of reality.
    The key point to grasp, I think, is this. Language is part of the world. The world was there first and language developed in it. It is as real and natural as anything else in the world.
  • on the matter of epistemology and ontology
    I am not necessarily using any philosophical dialectic,ENOAH
    When I wrote "dialect", I did not mean "dialectic". But maybe you are pointing to the same issue - mutual comprehension. There's only one philosophy that seriously tried not to use specialized philosophical dialect/language/dialectic - "ordinary language philosophy" - and that didn't end well. (I say that it turned out that ordinary language was just another speciality.) I think we have to look at some sort of translation between philosophies if there is to be any kind of dialogue. You are clearly succeeding in that, because I at least have the impression that I can partly understand what you are saying.

    This was an over simplification. But, alas, oversimplifying, I find, is unavoidable in a forum like this.ENOAH
    Everything is an over-simplification. There's no final statement of a philosophical doctrine. What matters is relevance to the matter at hand. I need to think over what you say, but I will respond - as briefly as I can.

    Your question 'how is knowledge that you see a lamp possible' follows from the assumption that you never see the lamp, only something prior to the seeing, in your own seeing.jkop
    It is worse than that. If you know that you never see the lamp, you must know what it would be like to see the lamp. That means it is possible to see the lamp (under some circumstances). This "assumption" involves changing the meaning of "see". But the idea of hallucinating that you see a lamp (etc.) assumes that "hallucinating" is like seeing, but different. So even the conclusion that when we think we see a lamp we are hallucinating seeing the lamp still assumes that it is possible to see the lamp.

    That something is, is found in a proposition. Quantification or domain of discourse.Banno
    Surely, more accurately, that something is, is found in a true proposition (but not in a false one). But I would agree that a (meaningful) domain of discourse includes criteria for distinguishing between truth and falsity. But discourse is not, as formal logic is supposed to be, a structure fixed for all circumstances - the rules can break down, but they can be revised. That seems to me to address, at least partly, the fundamental concerns here.
  • on the matter of epistemology and ontology
    Isn't that exactly what eventually but (almost?) inevitably happens when there are gaps in the Language structures.ENOAH
    In one way, I agree with you. However, I have great difficulty in understanding the philosophical dialect you are speaking after that. One problem (which does not occur here) is that I suspect that the term "language" is often taken to mean a single structure; that is reinforced when you give it a capital letter, "Language". I don't think language has a single, overall structure. (I wonder if Platonism is not at the back of that idea.) Wittgenstein compares language to an ancient city with many overlapping and interacting structures, and that seems more helpful to me.
    I don't think that "gap" is a helpful metaphor to describe the places where development happens - though it may be useful in some cases. In others "fault" or "extended (stretched) application" is better. I have a similar problem with "History". But we seem to be agreed that the possibility of novelty is inherent in language. It is not a closed system (a grid). On the contrary, we respond to challenges, difficulties and inadequacies in a linguistic structure in all sorts of different ways. Sometimes we adapt, sometimes we invent, sometimes we just forget. (And yes, no language is an abstract structure, though it is convenient to think of it that way. But in the end (or rather, in the beginning) it is inescapably realized in how human beings live their lives in the world.)

    We dont use a concept to establish a world without concepts, we find ourselves thrown into a world ( we ‘are’ a self by continually transcending toward the world) and speak from amidst the beings ( things, concepts, uses) that are actualized from out of that world which projects itselfJoshs
    There is an idea that I like in this, if I've understood it. It is the idea that we need to start with the world, rather than with language. Then we can see language as part of the world and as developing within it. So the question is not, "how does language reach the world?" but rather "how does language develop within the world?". Whether it involves transcendence or not, the starting-point must be our lives as actual physical human beings.
  • on the matter of epistemology and ontology
    That’s right, but because novelty is not a neutral in-itself, the world will inflict novelty within the boundaries of specifically organized discursive structures of intelligibility.Joshs
    I'm not sure exactly what you mean. If it is within those boundaries, it is new in an old sense, already catered for. The points where the boundaries break down or are transcended are where the world might be said to show itself. There is another, surprising, possibility. The rules of language may themselves lead to incomprehensible conclusions; irrational numbers, imaginary numbers, calculus &c. These are places where we don't know what to say. We may be driven to develop new ways to speak, or stretch the boundaries by means of metaphors or poetry or pictures - even, possibly, music and dancing.
  • on the matter of epistemology and ontology
    The statement on the left is about language. The statement on the right is about how things are. T-sentences show that truth concerns how language links to how things are.Banno
    I'm not sure whether you are saying that the T-sentences resolve the problem or not. I'm reminded of Wittgenstein asking himself how he can possibly use language to get beyond language. Isn't that where he starts talking about saying and showing?

    If your statement belongs to a certain language game, then the game is always already in play the moment recognition of the state of affairs comes about. And what are facts if not IN the game? Or ON the grid of language possibilities? None of these establishes a knowledge that can allow the world to be posited in this stand alone way.Astrophel
    So perhaps the project of positing the world in a stand-alone way is a mistake?

    Heidegger’s concept of being-in-the -world determines that language and world are precisely not at a distance from each other. On the contrary, language discloses self and world together, as our always already being thrown into worldly possibilities. Merleau-Ponty and Wittgenstein make related points. The distance is not between language and the world, it is between our self and our self, due to the fact that, through language, we always come to ourselves from the world.Joshs
    So there is a concept that resolves the problem how to establish a world without concepts?

    A good paradox tempts us to find a resolution, but ensures that no solution can be found. This is a good paradox. The paradox is formulated in language. So it is itself included in the problem. So "language in itself" transcends our concept of language, the "world in itself" transcends our concept of the world and the relationship or link between the two will always transcend anything we can articulate in language.

    Unless that link shows itself in our embodied existence in the world, that is, in human life and practices.
  • on the matter of epistemology and ontology
    So truth is a monadic predicate, while belief is dyadic.Banno
    I see why belief is dyadic. But I don't see that truth is monadic. Surely truth has an (often suppressed) object - "true of" or "true to". A true right angle looks monadic, but is not typical.

    This superficial structure serves to show that a belief is always both about a proposition and about some agent. ....... It might be misleading as the proposition is not the object of the belief but constitutes the belief.Banno
    These two sentences look contradictory to me.
    .... a belief is always both about a proposition and about some agent.Banno
    But I agree with this.

    It's very hard to give an account of knowledge that transcends the nature/mind, subjective/objective divide.Count Timothy von Icarus
    Why do we want to?

    I think you get at a confusion that comes up with correspondence definitions of truth. We say a belief is true if it corresponds to reality. No problem here, beliefs can be true or false - same for statements.Count Timothy von Icarus
    I'm afraid there is a big problem. What "correspond" means is completely unclear. Consequently, this theory - paradoxically - is the basis of some very strange ideas, such as the idea that reality is, in some mysterious way, beyond our ken.

    Forward looking toward anticipated results, and this is an event of recognition that is localized in the perceiving agency, you or me. The object over there, the cow, "outside" of this is entirely transcendental because outside in this context means removed from the anticipatory temporality of the event.Astrophel
    Thanks for the explanation. I understand from what you say that the cow that I recognize exists independently of my recognition of it. Less exciting than I hoped.
  • Can a computer think? Artificial Intelligence and the mind-body problem
    I cannot see how anything else in the universe can be conscious apart from humans, and some of the mammals (having lesser consciousness due to their lack of language uses).Corvus
    Well, it is certainly true that the only kinds of beings that are conscious in our universe are humans and animals. Humans are our paradigm of a conscious being. Consciousness in animals is more complicated, partly because different animals have different levels of consciousness and partly because there are different opinions about how conscious they are. Whether it is possible that there are conscious beings apart from these is another question. There's no doubt that it is possible to imagine that there are, and the future no doubt holds many things that we do not yet know. So I think your claim is over-confident.

    I still believe that everyone (with common sense) knows / agrees that consciousness emerges from the brain.Corvus
    It used to be common sense that the earth is flat and that there is no land between Europe and China.
    "Everyone knows that p" is one statement and "Someone does not know that p" is incompatible with it. "Everyone with common sense knows that p" is a different statement and "Someone does not know that p" is compatible with it.
  • Can a computer think? Artificial Intelligence and the mind-body problem
    It is not the main point of the OP worthy to quibble about, because the OP is not a High-Order Logic topicCorvus
    I put my point badly. I only wanted to say that dualists might find it somewhat problematic to say that the brain generates the mind - even if you expand it to the body creates the mind. Dualism may be less popular than it was, but it still has philosophical adherents. I have to acknowledge that fact even though I think they are mistaken.

    That is the hard problem of mind-body issue. No one seems to know. The biologists and neurologists were suppose to find about it.Corvus
    It may be that they need to relax and concentrate on how the system works. If you ask what part of the central heating system keeps the house warm, you'll find yourself endlessly searching. If you ask where the self is that moves the car, you may discard some parts, but you'll never narrow it down to one part.
  • on the matter of epistemology and ontology
    "S knows P iff S believes P, is justified in believing P and P is true"Astrophel
    This is a much contested theory. But what's the alternative? A logician can simply decide that "know" is primitive; but that's just abandoning the idea of defining it.
    And so I see that "P is true" entails the existence of P in an way that is supposed to be independent of justification which is an altogether nonsensical assumption. Can't be done. And this is because existence is part and parcel of justification itself.Astrophel
    I take the point in the first sentence. I don't really understand the last sentence. Do you mean that only true statements can act as justification (where "p is false" is true iff p is false)?
    The hidden additional necessary justification for claiming that S knows that P is that S is competent to assess whether P - and being competent to assess whether P is not just a matter of knowing that certain propositions are true.
    This focuses on first-hand knowledge. But a great deal, even most, of what we know is known at second-hand. Yet first-hand knowledge needs to be the basis of second-hand knowledge. One could insist that only first-hand knowledge counts as knowledge, but that seems unduly strict, unless you are happy to develop a specialized philosophical dialect. This needs a good deal of disentangling.

    Discoveries are events of constructing a truth.Astrophel
    Discovering something is revealing it, and that makes perfect sense when applied to truths. One would need to explain what "constructing a truth" means in a good deal more detail for it to make sense.

    The "is true" in the JTB account simple rules out knowing things that are not true. It is distinct from the justification.Banno
    It seems to me rather like a ceteris paribus clause, requiring us to withdraw our claim to know that p if it turns out that p is false.
    But it does have an important additional consequence. It means that I cannot pass on something that I have learnt from someone else without endorsing it. This makes knowledge quite different from belief.
    In logic, we can simply stipulate a definition, which means that someone else can stipulate a different definition and there is no basis for argument.
    So I like to argue that "fallible knowledge" undermines the place of knowledge in the language-game. It becomes a fancy variety of belief. But it is useful to distinguish between what is established as true and what may be true, but is not fully established. The latter is the role of belief. (But I don't mean to apply some impossible-to-attain standard of proof here. We can always withdraw our claims if we need to.)

    But having said that, there is indeed a close relation between epistemology and ontology. Statements being true or false is indeed dependent on what there is in the world.Banno
    Yes, that's true. And, as your articulation of the point demonstrates, the possibility is built in to our language. Our language allows us - even requires us - to distinguish between language and the world.

    It seems pretty clear that conditions in the world are really impossible to speak of outside of the grid of logic and language.Astrophel
    That's true. But the grid of language (including logic and mathematics) does allow us to speak of conditions in the world. Truth would not be possible if it didn't. It is true that sometimes we need to develop or change the concepts that we apply to the world, and that seems difficult if you think of language as a grid - i.e. fixed and limited. But language is a hugely complex system which can be developed and changed - as is logic (as opposed to individual logical systems).

    The idea of a transcendental object is the best we can do when we leave the logical grid and try to talk about things.Astrophel
    Can you explain this idea in a bit more detail? I don't quite get it.
  • Can a computer think? Artificial Intelligence and the mind-body problem
    I was shocked to read the post by ↪flannel jesus claiming that there are still many folks who believe minds are not generated from physical brains. If mind is not in brain, where would it be?Corvus
    It just goes to show how easy it is to mistake "the people that I know" for "everyone". It happens all the time. One issue is whether the mind is located in time and space. Another is the nature of the relationship between mind and brain. Descartes believed that the mind interacts with the body through the pineal gland. But he did not believe that the mind was generated from it. But see my reply to Pantagruel below.

    If a person memorizes the "times tables", and uses them to work out the result of a multiplication, are they actually doing a calculation?Agree-to-Disagree
    No. The times tables are a short cut. They are the results of calculation. We memorize them because it makes it easier to do more complex multiplications. (I'm sure you know that 2 x 3 = 2+2+2). Some (perhaps all?) primary school children are introduced to multiplication in that way. Once they understand that multiplication reduces to addition, they are moved on to memorizing their tables.
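    A toy contrast in code (my own example) between working the answer out by repeated addition and simply remembering it from a table:

    ```python
    # Multiplication worked out versus multiplication remembered.

    def multiply_by_addition(a, b):
        """Compute a * b the way it is first taught: b copies of a, added up."""
        total = 0
        for _ in range(b):
            total += a
        return total

    # A memorized "times table": the answers are stored, not worked out.
    times_table = {(a, b): a * b for a in range(1, 13) for b in range(1, 13)}

    print(multiply_by_addition(2, 3))   # 6, reached by calculating
    print(times_table[(2, 3)])          # 6, reached by remembering
    ```

    Whether the second counts as calculating is, I take it, exactly the question.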

    There are many ways that people use to solve a mathematical multiplication. Most involve either using their memory, using a calculator, or using an algorithm. Computers normally use an algorithm. Doesn't that mean that computers calculate in a similar way to humans?Agree-to-Disagree
    Perhaps at the software level it does mean that. But in this case, I think the "different way" is based on the machine coding of the process. (However, the AIs are a different case. The difference is clearly at the software level.)

    In the complex system wherein and whereby the embodied brain operatesPantagruel
    Yes, I agree with that. My understanding is that once you get into details, the spine is deeply involved in what the brain is doing, so we should not think of the brain alone, but of the brain + spine - and the entire nervous system. Then we have to recognize the hormonal system in the emotions and the heart and muscles in action. In the end, I actually prefer to say that the connection is between the mind and the whole body. But I am too lazy to always be correcting people, so in most circumstances I just let the difference go.
    That gives a basis for thinking that a machine constructed from silicon could never be a person or behave as a person does. I'm not at all sure about that conclusion, though.
  • Can a computer think? Artificial Intelligence and the mind-body problem
    At the moment humans are hoisting AI up. It is not hoisting itself up by its own bootstraps. If humans hoist AI up high enough then AI may gain the ability to hoist itself further without human intervention.Agree-to-Disagree
    The fundamental problem is to understand when we can say that the machine is doing anything, in the sense that humans do things. Can they be said to calculate, for example? Do they check our spelling and grammar? Searle says not, because it is we who attribute significance to their results. But that means that their results are significant; we treat what they do as calculation or spell-checking. It isn't straightforward either way.
  • Can a computer think? Artificial Intelligence and the mind-body problem
    everyone knows the mind emerges from the physical brain.Corvus
    I see that a lot of people have jumped on this. There's a lot of disagreement. But I agree that most people think that there is a close connection between the mind and the brain. But there is a good deal less agreement about what that connection is. It is a hard problem indeed.
  • Can a computer think? Artificial Intelligence and the mind-body problem
    I think that some people believe that AI is hoisting itself up by its own bootstraps, programming itself, perhaps in some sense that is a precursor to sentience. In fact, AI is parasitically dependent on human intervention.Pantagruel
    This is a really useful way to think about these issues - particularly when we are thinking about how AI might develop. It seems to me that it can be applied very widely to technology in general. Darwin applied evolution to living things because they are self-replicating. However, that idea depends on how you look at things. Some parasites are dependent on another species to replicate. (I have in mind the fungi that replicate by hi-jacking ants - Wikipedia: Ant-parasitic fungus.) Viruses hi-jack the cells in their host to replicate - though they are border-line alive. Lichens are another interesting case.
    The key point here is that once a parasitical relationship is established, evolution ensures that the development of parasite and host are inter-linked. This is a helpful model for understanding AI. But I don't see that it enables us to make predictions about how it will go or not go.

    Specifically, if human beings rely too heavily on AI then essentially we are back to the self-consumption of AI and model collapse, yes.Pantagruel
    Maybe this also applies to human beings. Too much recycling of the same ideas without evaluation or criticism of them is harmful to thinking. Full stop.

    You claim that YOU don't need an external observer to know that YOU are thinking. But YOU are a special case. You are making an observation about yourself. Other people need to observe YOU to try and determine if YOU are thinking. And people need to observe a computer to try and determine if the computer is thinking.Agree-to-Disagree
    If I don't know the difference between "I" and "you" (and "they"), how can I articulate my observation that I am thinking? If I can't articulate the observation, is it meaningful to say that I can observe it? I think not. So the thinker's awareness that they are thinking may be a special case, but it is not independent of other people's observation that they are thinking and the thinker's awareness that other people are thinking.

    Humans can be considered to be biological machines.Agree-to-Disagree
    Quite so. That's why the short argument about whether machines can be conscious etc. is that there are already conscious machines in existence. There are plenty of questions about what would persuade us that something is a conscious or living machine, so that argument is not very helpful. But for what it is worth, I think it stands up.
  • Can a computer think? Artificial Intelligence and the mind-body problem
    This has been a very interesting and stimulating discussion, and I regret that I can't contribute more to it. There are lots of interesting ideas at play here.

    Why do you need information about the physiological state of the subject? Unless you are a medical doctor or neurologist, it seems to be a remote area which wouldn't reveal a lot in terms of one's state of consciousness in analytic and metaphysical level.Corvus

    I'm afraid I wasn't very clear about this. It is quite true that we don't need information about the physiological state of subjects to attribute states of mind to them. However, states of mind are said to be internal. We are able to make connections between those states of mind and the physiologically internal states of subjects, but those states are very different from the internal states of computing machines. There is a sort of parallel between mental/physiological states and software/hardware states, but also important differences.
    It is clear to me that once we appeal to the internal states of an AI machine we have gone beyond any Turing-type test, so the fact that we find such information significant is important. This is obscured by the standard description that AI constructs its texts in a way that is very different from the way that humans construct theirs. Quite how human beings construct their texts is pretty much unknown at present, but the difference is identified in the remark that humans "understand" their texts, whereas AI doesn't (as is shown, for example, in the fact that it sometimes "hallucinates" data, especially references, and seems to lack any capacity to critically evaluate its sources). (Or that's the impression the reports that I have read give me.)
    I take the point about states of consciousness at the analytic or metaphysical level, except that I don't have a clear grasp of what those things mean. My view is that "internal" states - beliefs, knowledge, desires and intentions - are attributed by interpreting a given action in the context of other actions and responses.
  • Can a computer think? Artificial Intelligence and the mind-body problem
    They would make great Christmas presentsAgree-to-Disagree

    That's an interesting idea. Perhaps someone will design artificial birds and deer - even big game - so that hunters can kill them without anyone getting upset.
  • Can a computer think? Artificial Intelligence and the mind-body problem
    they can't think creatively.Relativist

    Well, some people claim that they can't think at all! Are you conceding that they can think, just not creatively? Can you give a definition of "creative thinking" that could be used in a Turing-type test?

    There's an inherent risk in trying to draw a clear, single line here. If you identify something that machines can't do, some whizzkid will set to work to devise a machine that does it. It may be a simulation, but then again it may not be.

    Let's suppose they do finally develop a machine that can drive a car or lorry or bus as well as or better than humans can, but in a different way. Suppose they are sold and people use them every day. What would be the point in denying that they are self-driving just because they do it in a different way?
  • Can a computer think? Artificial Intelligence and the mind-body problem
    it should be fine to produce some rudimentary intentionality, at the levels of some low level animals like cockroaches. Terminating it would then be a pleasure.Relativist
    Yes, I guess so. So long as you make quite sure that they cannot reproduce themselves.

    It seems safe to predict that, on the whole, we will prefer our machines to do something better than we can, rather than doing everything as badly as we do. Who would want a machine that needs as much care and attention and takes as long to make (20 years start to finish) as a human being? It wouldn't make economic sense.
  • Can a computer think? Artificial Intelligence and the mind-body problem
    Yes, I meant "construe" to mean interpretation for other people's minds. I feel it is the right way of description, because there are many cases that we cannot have clear and obvious unequivocal signs and evidences in real life human to human communications.Corvus
    Exactly - though I would have put it a bit differently. It doesn't matter here.

    Inference can be made in more involving situations, if we are in a position to investigate further into the situations. In this case, you would be looking for more evidences and even psychological analysis in certain cases.Corvus
    Yes. Further information can be very helpful. For example, the wider context is often crucial, as is information about the physiological state of the subject. That also shows up in the fact that, faced with the new AIs, we take into account the internal workings of the machinery.

    But you don't comment on what I think is the fundamental problem here:
    I think the fundamental problem is that neither Turing nor the commentators since then have (so far as I know) distinguished between the way that we talk about (language-game or category) machines and the way that we talk about (language-game or category) people.Ludwig V
    I don't think there is any specific behaviour (verbal or non-verbal) that will distinguish clearly between these machines and people. We do not explain human actions in the same way as we explain what machines do. In the latter case, we apply causal explanations. In the former case, we usually apply explanations in terms of purposes and rationales. How do we decide which framework is applicable?

    Scrutinizing the machines that we have is not going to get us very far, but it seems to me that we can get some clues from the half-way houses.

    If these are the criteria for intelligence and maybe even self-consciousness, then AI certainly is sentient.Pez
    The next question is whether we can tease out why we attribute sentience and intelligence to the parrot and not to the AI. Is it just that the parrot is alive and the AI is not? Is that perhaps begging the question?

    The possibly insurmountable challenge is to build a machine that has a sense of self, with motivations.Relativist
    Do we really want to? (Somebody else suggested that we might not even try)
  • Can a computer think? Artificial Intelligence and the mind-body problem


    I agree with every word of that! :smile:

    I think the fundamental problem is that neither Turing nor the commentators since then have (so far as I know) distinguished between the way that we talk about (language-game or category) machines and the way that we talk about (language-game or category) people. It is easy to agree that what the machine does is the only way that we can even imagine tackling the question, and yet mean completely different things by it.

    For example, one can't even formulate the question. "Could a machine be a (not necessarily human) person?" By definition, no. But that's very unhelpful.

    But then we can think of a human being as a machine (for certain purposes) and even think of a machine as a person (in certain circumstances).

    My preferred strategy would be to start from the concept of a human person and consider what versions or half-way houses we already recognize so as to get a handle on what a machine person would look like. We would need to think about animals, which some people seem to be doing, but personification and anthropomorphization and empathy would need to figure as well. It would even help to consider fictional representations.
  • Can a computer think? Artificial Intelligence and the mind-body problem
    We're not getting anywhere like this. Time to try something different.
    Your saying the AI operation is simulation was a real over-simplification. My analysis on that claim with the implications was realistic and objective.Corvus
    I did put my point badly. I've tried to find the analysis you refer to. I couldn't identify it. If you could point me in the right direction, I would be grateful.

    I've tried to clarify exactly where our disagreements lie, and what we seem to agree about. One source of trouble is that you seem to hold what I think of as the traditional view of other minds.
    Problem with all the mental operations and events is its privateness to the owners of the minds. No one will ever access what the other minds owners think, feel, intent ... etc. Mental events can only be construed with the actions of the agents and languages they speak by the other minds.
    .....To know what the AI machines think, and feel, one must be an AI machine himself. The possibility of that happening in the real world sounds like as unrealistic and impossible as the futile ramblings on time travel fictions.
    Corvus
    That's a high bar. I agree that it is impossible to meet. But it proves too much, since it implies that we can never even know that human beings have/are minds.
    On the other hand, you seem to allow some level of knowledge of other minds when you say "Mental events can only be construed with the actions of the agents and languages they speak by the other minds". It is striking that you use the word "construe", which suggests to me a process of interpretation rather than inference from evidence to conclusion. I think it is true that what we know of other minds, we know by interpreting what we see and hear of other people.
    You also say:-
    AI is unlikely to be sentient like humans without the human biological body. Without 2x hands AI cannot prove the existence of the external world, for instance. Without being able to drink, AI wouldn't know what a cup of coffee tastes like.Corvus
    I'm not sure of the significance of "sentient" in this context, but I agree whole-heartedly with your point that without the ability to act in the world, we could not be sentient because, to put it this way, our brains would not learn to interpret the data properly. The implication is that the machine in a box with no more than an input and output of language could not approximate a human mind. A related point that I remember you pointing out is that the machines that we currently have do not have emotions or desires. Without them, to act as a human person is impossible. Yet, they could be simulated, couldn't they?

    There is not yet an understanding of what, for me, is a key point in all of this. The framework (language game) which we apply to human persons is different from the framework (language game) that we apply to machines. It is not an inference to anything hidden, but a different category. If a flag waves, we do not wonder what its purpose is - why it is waving. But we do ask why that guy over there is waving. Actions by people are explained by reasons and purposes. This isn't a bullet-proof statement of a thesis, but an attempt to open up what I think the crucial question is.

    Yes, I do have ideas about how such a discussion might develop and progress. But the first step is to ask why we attribute what philosophy calls actions to human beings and not to machines, and I want to say it is not a matter of any specific evidence, but of how the evidence is interpreted. We see human beings as people and we see computers as machines. That's the difference we need to understand.


    Yes, animals have a way of surprising us. They are perfectly capable of learning and one wonders where the limits are.

    But even without Alex's achievements, I would have said that Alex is sentient. Animals are contested territory because they are like us in some (relevant) respects and unlike us in others. In other words, they are not machines. To put it another way, we can relate to them and they can relate to us, but the relationships are not exactly the same as the relationships between human beings. It's really complicated, and it is important to pay attention to the details of each case.
  • Can a computer think? Artificial Intelligence and the mind-body problem
    It is called Inductive Reasoning, on which all scientific knowledge has been based. It is a type of reasoning opposed to the miracle and magical predictions.Corvus
    I see. But then there's the traditional point that induction doesn't rule out the possibility that the conclusion is false, as in "the sun might not rise tomorrow morning".

    I don't know what you know. You don't know what I know. We think we know what the others know, but is it verified knowledge or just mere guess work?Corvus
    There are two different questions here. If you know that p, I might also know that p, but not that you know that p. But I can also know (and not just guess) that you know that p. For example, you might tell me that you know that p. And I can tell whether you are lying.

    They seem to just want to be called as "the useful assistance" to human needs.Corvus
    Yes. It sounds positively cosy, doesn't it? Watch out! Assistants have been known to take over.

    Imitation means not real, which can imply being bogus, cheat, deceit and copycat. AI guys wouldn't be happy to be called as 'imitation', if they had feelings.Corvus
    You over-simplify. A forged painting is nonetheless a painting; it just wasn't painted by Rembrandt. An imitation of a painting by Rembrandt is also a painting (a real painting). It just wasn't painted by Rembrandt.
    But I wouldn't call the AI guys an imitation. I would call their work in programming a machine to do something that people do (e.g. talking) the creation of an imitation. In the same way, a parrot is a real parrot and not an imitation; when I teach it to say "Good morning" I am not imitating anything; but when the parrot says "Good morning" it is imitating human speech and not really talking.

    AI is comparable to a sophisticated parrot being able to say more than "Hello" and "Good morning". But in the end it just mindlessly spews out what has been fed into it without actually knowing what it says.Pez
    Yes. But what would you say if it mindlessly spews out what has been fed into it, but only when it is appropriate to do so? (I have in mind those little things an EPOS says from time to time - "Unexpected item in the bagging area", for example - or the message "You are not connected to the internet" that my screen displays now and then.) It's a kind of half-way house between parroting and talking; there is a toy sketch of what I mean below.
    More seriously, Searle argues that computers don't calculate, because it is we who attribute the significance to the results. But we attribute that significance to them because of the way that they were arrived at, so I think it is perfectly appropriate to say that they do calculate. Of course it doesn't follow that they are people or sentient or even rational.
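
    Here is the kind of thing I have in mind - a minimal sketch of the "half-way house", with the events and messages made up for illustration (they are not drawn from any real EPOS software):

```python
# A minimal sketch of canned phrases emitted only when a condition holds.
# Nothing is "understood" here; the machine just looks the event up.
from typing import Optional

CANNED = {
    "unexpected_item": "Unexpected item in the bagging area",
    "offline": "You are not connected to the internet",
}

def respond(event: str) -> Optional[str]:
    # return the stored phrase for this event, or None if there isn't one
    return CANNED.get(event)

print(respond("offline"))           # -> You are not connected to the internet
print(respond("sunny_afternoon"))   # -> None (no canned phrase for this event)
```

    The interest, such as it is, lies in the fact that the output is appropriate to the occasion without anything that looks like understanding - which is why I call it a half-way house rather than talking.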

    If I can't tell that other people are sentient, then I don't know what it is to be sentient.
    — Ludwig V
    Exactly.
    Corvus
    But I can tell that other people are sentient. I don't say it follows that I know what sentience is. Do you?
  • Can a computer think? Artificial Intelligence and the mind-body problem
    Simulation = Imitation?Corvus
    Yes. Do you disagree?

    What is the ground for your saying that there was no ground?Corvus
    What is your ground for moving from "it hasn't happened" to "it will never happen"?

    We don't know that for sure, unless we become one of them in real.Corvus
    I know that other people are sentient, so I assume that I can tell whether insects, bats, etc. are sentient and that rocks and rivers are not. Though I admit there may be cases when I can't tell. If I can't tell that other people are sentient, then I don't know what it is to be sentient.
  • Can a computer think? Artificial Intelligence and the mind-body problem
    I think a simulation scenario could be otherwise. Maybe we are all AI, and the programmer of the simulation just chose this kind of physical body out of nowhere. Maybe there were many different attempts at different physical parameters. Maybe the programmer is trying to do something as far removed from its own physical structure as possible.Patterner
    I'm really puzzled. I thought your reply to @RogueAI meant that you thought we should not take such fantasies seriously. But you are now saying that you think they are possible (or perhaps not impossible) nonetheless. I do think you are giving them too much credit. In brief, my answer is that we already accept that reality is very different from what we think it is, what with quanta and relativity. But there is evidence and argument to back those theories up. The wilder fantasies (such as Descartes' evil demon) have no evidence whatever to back them up. Taking them seriously is just a waste of time and effort.

    My point was that due to the structure, origin and nature of human minds (the long history of evolutionary nature, the minds having emerged from the biological brain and body, and the cultural and social upbringings and lived experience in the communities) and the AI reasonings (designed and assembled of the electrical parts and processors installed with the customised software packages), they will never be the same type of sentience no matter what.Corvus
    Oh, well, that's different. Insects with multiple lenses have a different type of sentience from ours. Spiders detect sounds in their legs. Perhaps bats' near-total dependence on sound would count as well. Different types of sentience are, obviously, sentience. I would also accept that anything running the kind of software we currently use seems to me incapable of producing spontaneous behaviour, so those machines could only count as simulations.

    Do you have any evidence or supporting arguments for the prediction that AI will possess the same sentience as the human's in the future? In which area and in what sense will AI have human sentience?Corvus
    There is exactly the same amount of evidence for the prediction that AI will possess the same sentience as humans in the future as for the prediction that it will not: none. But I wouldn't want to actually predict that it will happen. I meant to say that it might - or rather, that there is no ground for ruling it out.
  • Can a computer think? Artificial Intelligence and the mind-body problem
    But just think of the film "Matrix". In principle we could connect a computer to all the nerves of a human brain and thus simulate a "real" world. Virtual reality is just a first step towards this "goal" and so is creating artificial limbs a person can activate with his brain.Pez
    Yes, that's exactly my point. In the world of "Matrix", not everything is a simulation.
    As to virtual reality, it is a representation of reality even when it is a simulation of some fictional events/things.
    An artificial limb activated by the brain wouldn't be a simulation of a limb, but a (more or less perfect) replacement limb.

    Descates' argument, that I cannot even trust my memories,Pez
    But there are ways of sorting out the reliable memories from the unreliable ones. I'm only objecting to the idea that all my memories might be false. Any one of my memories might be false, but if none of them were true, I wouldn't have any memories to distrust.

    AIs can be intelligent, powerful, versatile therefore useful. But I wouldn't say they are sentient. Sentience sounds like it must include the intelligence, emotions and experience of lived life of a person i.e. the totality of one's mental contents and operations. AI cannot have that.
    Also AI can never be versatile as human minds in capabilities i.e. if you have AI machine for cutting the grass, then it would be highly unlikely for it to come into your kitchen and make you coffees, or cook the dinners for you.
    Corvus
    Everyone will agree that current AIs are limited. But I don't see why you are so confident that those limitations will never be overcome, to the point where we would accept that they are sentient.

    Is sentience a yes or no issue, or are there degrees of sentience?Agree-to-Disagree
    There's plenty of evidence from biology that the latter is the case. As a starter, is phototropism sentience or not? I think not, because no sense-organ is involved and the response is very simple.
    In biology, phototropism is the growth of an organism in response to a light stimulus. Phototropism is most often observed in plants, but can also occur in other organisms such as fungi. The cells on the plant that are farthest from the light contain a hormone called auxin that reacts when phototropism occurs. This causes the plant to have elongated cells on the furthest side from the light.
    Wikipedia - Phototropism