Yes. Christianity has a similar trope. So does Islam. My point is that in Buddhism the shift is not merely cognitive. It's very complicated.
That was known, at one point in history, as 'metanoia', although that is now usually translated simply as 'repentance', thereby blurring the distinction between insight and belief. Originally it meant 'mental transformation' or something like a cognitive shift. — Wayfarer
I don't think this is Lewis Carroll's tortoise arguing with Achilles. Understanding this is the heart of the problem. We need to be much more careful about what "doing" means in the context of planets and the weather and in the context of people. People and inanimate objects are not in the same category, which means that understanding planets or the weather and understanding people involve different language-games. Machines have a foot in both camps. The answers are not obvious.
Most of the opponents of machine consciousness simply refuse to use the word to describe a machine doing the same thing a human is doing. — noAxioms
My boiler, on its own, is clearly not conscious, even if it contains a thermostat to switch it off when the water is sufficiently hot. Neither is the thermostat that switches it on. Neither keeps the house warm. What keeps the house warm (not too hot and not too cold) is the entire system including the water, the pump and the radiators, with its feedback loops, and not any one component. You can call the system "crudely conscious" if you like, but I think few people will follow you. But you are right that it is in some ways like a conscious being.
Ditto for the thermostat. It doesn't react any more to the sensory input other than to convey a signal. So maybe my boiler is crudely conscious because it processes the input of its senses. — noAxioms
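To make the feedback point concrete, here is a toy sketch, entirely my own illustration with made-up numbers rather than a model of any actual heating system. The thermostat on its own only conveys a signal; it is the whole loop of heat in, heat loss and switching that holds the temperature near the setpoint.

```python
# Toy sketch (hypothetical numbers): the thermostat alone only signals;
# the steady temperature is a property of the whole feedback loop.

def thermostat(temp, setpoint=20.0, band=0.5):
    """Signal only: True means 'boiler on', False means 'boiler off'."""
    return temp < setpoint - band

def simulate(minutes=720, outside=5.0, temp=15.0):
    history = []
    for _ in range(minutes):
        boiler_on = thermostat(temp)            # the thermostat conveys a signal
        heat_in = 0.2 if boiler_on else 0.0     # boiler, pump and radiators add heat
        heat_out = 0.005 * (temp - outside)     # the house loses heat to the outside
        temp += heat_in - heat_out
        history.append(temp)
    return history

if __name__ == "__main__":
    temps = simulate()
    print(f"after 12 hours the room sits at about {temps[-1]:.1f} degrees")
```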
If that's the point, we don't need the theory. We all experience dreams from time to time. And we know how to tell the difference. But we can't tell the difference while we are dreaming. What's so exciting about the theory?If I could experience the real world, then be hooked up to a machine that simulates the same thing I have experienced, seamlessly, that I would not be able to tell the difference, then the theory has made its point. — L'éléphant
Yes, but there is also the idea that understanding requires training the mind - or maybe even reconstructing it. (I mean, by meditation, of course) Christianity, it seems to me, talks a great deal about belief and so presents itself as primarily a matter of doctrine. (Judaism emphasizes law, Islam acceptance, and so on.) This is complicated and not a sharp distinction, but the emphasis is there and sets these views apart from Western empiricism and rationalism.I believe the important philosophical perspective they bring is that of non-dualism. — Wayfarer
This is the difference between what a bank teller IS and what a bank teller DOES. Popper, in the Open Society, identifies this difference as part of the difference between science and (some kinds of) philosophy. (Maybe in other places as well - I just don't know.) It seems to me a very important difference.Encounter a bank teller and think of all that comes to mind in terms of what a bank teller qua bank teller is, and you will have a list of all a bank teller Does. — Astrophel
Yes. It's like the difference between parasitism and symbiosis. To put it this way, they are the same phenomenon, except that parasitism damages the host, and symbiosis benefits (or at least does not damage) the host. Or think of the difference between murder and execution, which both mean killing, but in different contexts, or with different evaluations.I agree that its feedback, but isn't feedback a sort of circular causality? — Count Timothy von Icarus
Yes, more helpful than a systems view of language. I'm not sure what was obvious before Philosophical Investigations and How to do things with words. Part of the point of such views is that they encourage us to consider the possibility that (some of) the big questions in the philosophy of language are the result of the systems view of language. We could call them bugs.More helpful than what? A systems view of language? But then it's always been obvious that language is a social practice and this alone doesn't really elucidate any of the big questions in philosophy of language. — Count Timothy von Icarus
Wittgenstein took us only so far, leaving us to take the idea further. These are really interesting questions. I'm not dogmatic about the answers. But surely the "higher" mammals, at least, are capable of responding to the expectations of others, because of the way they interact with us. Whether the same applies to, for example, the social insects or schools of fish is another question. Lots of different cases, no expectation of a tidy distinction.
Attempts to unpack what "social practices" are seem to lead to more questions. E.g., if rule following is just based on "the expectations of others," what are we to make of apparent rule following in animal behavior, biology, and "law-like" behavior in nature? Are these different sorts of rules? — Count Timothy von Icarus
:smile: Of course they blend and interact. I regard that as a feature, not a bug.I think this is a good classification. Although, they can also blend together a bit. E.g., the black swan causes us to discover the Type 2 problem, or the Type 2 problem opens up the possibility of exploiting incoherencies in a system. — Count Timothy von Icarus
I have the impression that Wittgenstein did think that "way of life" and "human practices" gave a shared context. If they don't, how could he think they explained how come we agree about the interpretation (application) of a rule?Kripke, unlike the later Wittgenstein, could not accept the non-existence of a universal and shared semantic foundation. — sime
On the other hand, you are quite right that human life is as much the stage for divergence and disagreement as a shared basis of consensus. The importance of the idea is that human life is both a basis for agreement and the common ground that is necessary for divergence and disagreement to develop.For Wittgenstein, any assertibility criteria can be used for defining the meaning of 'grasping' a rule, and not necessarily the same criteria on each and every occasion that the rule is said to be 'used'. And a speaker is in his rights to provide his own assertibility criteria for decoding what he says, even if his listeners insist on using different assertibility criteria when trying to understanding the speaker's words. — sime
I am pretty confident that the first sentence is right. As to the second sentence, I find myself considering the possibility that the two concepts of decidability and computability may be defined in terms of each other. If they are not, then I'm rather unclear what they mean.I feel like too much is dismissed as unknowable because it can't be formalized in static systems, as if the limit of current modeling abilities is the limit of knowledge. Sort of like how many in physics say the universe must be computable because we lack an understanding of how things would be "decidable" otherwise. — Count Timothy von Icarus
Yes, that's part of what I'm saying. Any proposed foundation will generate a question why that is so. There are only two ways to stop the regress - first, find an indubitable, self-evident, axiomatic starting-point or, second, turn the regress into a loop. Neither is very satisfactory. On the other hand, I don't find it troubling that there will always be unanswered questions, or that our explanations are incomplete and that, no matter how fast we run, we will never arrive at the Grand Theory of Everything. None of that means that what we call following a rule is not the result of human practices and way of life.
Once we locate the proximate source of meaning in social practices, the obvious next question is "what causes those practices to be what they are?" I find some phenomenological explanations of how predication arises quite plausible, but then these lead to the question: "why is human phenomenology this way?" — Count Timothy von Icarus
I'm not sure that this is much of an objection to what Wittgenstein is trying to do - assembling reminders to enable us to find the way out of the bottle. Like the fly, once we've seen the way out, it is obvious. He starts on the basis that everything is in plain sight. Actually, this sounds like the well-worn "trivial or false" dilemmas that analytic philosophers used to be so fond of.If we say, "well the natural world is involved in meanings, as well as human cognitive architecture, the phenomenology of human experience, intentionality, and purpose," though, which I think we must, then the role of social practices seems to slide back towards the merely obvious. — Count Timothy von Icarus
Yes. I think that Hume is very sensible when he distinguishes between judicious or moderate scepticism and radical or Pyrrhonic scepticism. (He thinks the former is necessary and wise and the latter is unhinged; he recommends a month in the country for anyone suffering from it.)It's easier to have destructive certainties when you allow them to sit apart from one another, and so to selectively decide where reason applies. So, yes we should be skeptical of certainties, but we should also not be terrified of them. — Count Timothy von Icarus
Thank you for telling me that. It helps a lot.Keep in mind that I am not supporting the simulation hypothesis in any form. I'm looking for likely ways to debunk it, but in the end, there can be no proof. — noAxioms
I agree with you, though I would describe it as hand-waving. I agree also that sometimes it is best to roll with the punch if someone takes an idea seriously and I don't. I've done it myself. It may not result in them changing their mind, but it does allow some exploration and clarification.I think that sounds like magic, but everyone else is taking it seriously, — RogueAI
So if I miniaturized the AI hardware and grafted it into the frog, it becomes a simulation instead of a VR?You and the frog both make your own decisions, not some AI trying to fool the subject by making a frog shape behave like a frog. — noAxioms
What made the game? Though I grant you, it is quite different from the kinds of simulation we have been talking about, and far from a VR. But it is an abstraction from the world in which Conway - and you and I - live.Conway's Game-of-Life (GoL) is not in any way derived from the world in which we live, so there's a counterexample to that assertion. — noAxioms
I agree. I can't answer for Conway's intent, but it looks to me as if the intent is to explore and play with the possibilities of a particular kind of system. In which it has definitely succeeded, in most interesting ways.
The intent in that example (sc. the simulation of a car crash) is to find an optimal design based on the simulation results. Not so under GoL. — noAxioms
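For what it's worth, the whole "world" of the Game of Life is just a grid and one rule, which is part of why it reads as the exploration of a kind of system rather than a simulation of anything in particular. A minimal sketch of the standard rules (a live cell survives with 2 or 3 live neighbours; a dead cell is born with exactly 3):

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life; 'live' is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # birth on exactly 3 neighbours, survival on 2 or 3
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

# A glider: after four generations the same shape reappears, shifted diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))
```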
Well, I would say that those films are simulations of a fantasy scenario/world. But I'm not fussed about the vocabulary here. I am fussed about the idea that they have no connection with the actual world. That is simply false. For a start, there are human beings in it, not to mention space ships, planets and suns. As to the physics being different, that doesn't seem to bother people like Hume ("the sun might not rise tomorrow morning") or Putnam ("Twin Earth"). We can, after all, imagine that physics is different from our current one, and, believe it or not, there have been people who did not believe in our physics, but something quite different. Perhaps there still are.Those (sc. Star Trek and Star Wars) are not simulations. Heck, the physics of those worlds are both quite different than our own — noAxioms
Yes, there may be a need to say more. But the idea that VR might be used to deceive people itself presupposes that what is presented by the VR is not real. What might be more troublesome is a VR that re-presented the actual world around the wearer. Pointless, though there might well be a use for it in some medical situations. On the other hand, it couldn't work unless it was possible for the wearer to actually (really) act.Yes, that's the idea (one of them) (sc. the idea that VR might become good enough to deceive people) under consideration here. How do you know it's false? Just asserting it false is beyond weak. — noAxioms
I have the impression that idealists do not think that human beings have any internal physics. (Do they even think there is any such thing as physics?) I was not taking that issue into account, but was assuming a shared background assumption that we could call common sense. Are you an idealist?
I'm not sure about whether or in what way the actual physics of the person/computer are relevant. Clearly, we know that human beings are persons without knowing (in any detail) about their internal physics. On the other hand, the commentary on the current AIs seems unanimous in thinking that the details of the software are relevant.
if not all people/creatures are conscious in the same way (a process running the same physics). — noAxioms
One needs to specify what "the same" means here. Otherwise, any difference between people (such as brain weight or skin colour) could lead to classifying them as not conscious, not people. I'm sorry, what are NPCs?
OK, if not all the people are simulated the same, then the ones that are not (the NPC's) would be fake, not conscious, — noAxioms
Yes, there is an issue here. We can, of course, construct imaginary worlds, and most of the time we don't bother to point out that they are always derived from the world we live in. As here, we know about real cars that really crash and what happens afterwards (roughly). That's the basis that enables us to construct and recognize simulations of them. "Star Trek" and "Star Wars" are extensions of that ability.
Sort of. Yes, they have a model. No, it isn't a model of something that exists. There isn't a 'real thing' to it. — noAxioms
That's a bit unfair, isn't it? We know quite well what is VR and what is not, so it is clearly distinguishable from reality. Nobody pretends otherwise. Of course, we can frighten ourselves with the idea that a VR (in some unimaginably advanced form) could be used to deceive people; "Matrix" is one version of this. But, unless we are straightforward positivists or followers of George Berkeley, the difference between VR and reality is perfectly clear, and the problem is no different from the problem of how we tell dreams from reality.
The skull-vat view does not feed the mind a set of artificially generated lies. VR does. — noAxioms
It's one thing to retrieve the wisdom. It's quite another not to bring back the fool's gold as well. Effective panning is essential. And then I wonder whether you can have one without the other.
Quite. I'm not pushing for a return to a golden past. It's more along the lines of a forgotten wisdom. — Wayfarer
That would explain why he's so hard to understand.he'd be lionized — Wayfarer
Thank you very much. I didn't know that Wittgenstein articulated this thought.Wittgenstein said in his Lecture on Ethics that, say a man's head turns suddenly into a lion's head. We would all be shocked, suspect a miracle; that is, until science got a hold of it and a discovery, perhaps something completely new, was measured, compared, tested in different environments, etc. And if this were simply not explainable because the results defied the repeatability requirement of science, then this, too, would be admitted and normalized. We would call this "chaos". There are many things called chaos by science. — Astrophel
The "simulation hypothesis" is indeed quite different from the hypothesis that there are imitations of people around. I'm not quite sure that it has "nothing to do" with fake people.The simulation hypothesis has nothing to do with an imitation of a person, which would be an android or some other 'fake' human. — noAxioms
What if this is all a simulation and everyone you think is conscious are really NPC's?
— RogueAI
RogueAI is probably not suggesting an imitation person here. — noAxioms
On the face of it, this looks like a generalization from "there are some fake, imitation, simulated people around" to "everything is a simulation".
The simulation hypothesis proposes that what humans experience as the world is actually a simulated reality, such as a computer simulation in which humans themselves are constructs." — noAxioms
What empirical evidence could possibly confirm or refute this? I don't see this as a hypothesis at all, but as a methodological decision. In the 17th century, physicists decided to eject anything that seemed incapable of mathematical treatment, so colours and sounds were banished to the mind, placed beyond the scope of science. Science did not need those hypotheses.mathematical universe hypothesis, — noAxioms
So how does a simulation differ from reality?simulation is simply an explicit execution of an approximation of those laws, on a closed or open system. — noAxioms
Fair enough. But in those cases, it is clear what the simulation is a simulation of. We know what the real thing is. As you say, this has nothing to do with a simulation of everything.They perform for instance simulations of car crashes at the design phase, the result of which eventually generates a design that is safer. — noAxioms
Metaphysics, in that context, is not a dry textbook of scholastic definitions and dogmas, but a grounding vision, a way of being-in-the-world, but one that has been long forgotten, on the whole. — Wayfarer
In this context, do we really have a basis for making these judgements? I've no problem with the idea that the Enlightenment is not perfect, and perhaps it has run its course. But when I think about what preceded it, I do not find myself longing to return to the Good Old Days. So we find ourselves trying to work out the Next Thing, avoiding the mistakes of the Last Thing.something goes dreadfully wrong when man reaches for a divided and false certainty, and this was certainly as true for Counter Reformation figures as well. — Count Timothy von Icarus
So perhaps we should be very careful, and sceptical of certainties.something goes dreadfully wrong when man reaches for a divided and false certainty — Count Timothy von Icarus
Aren't practices and ways of life ("This is what I do") foundations for Wittgenstein at least? If they are, your question does arise, as it always does for any foundation. For some, it leads us to a change of discourse, to naturalistic ideas about human beings, social animals finding their way through the "real" world. But that seems to be where we came in!When you look for the causes of practices, there is nothing concrete to point to behind them, no essences to inform what it is that rules might be used to point out. — Count Timothy von Icarus
Do there have to be general principles as such? Should we not change the model and think of something more dynamic, more evolutionary?
I think it might be more useful to say that there are general principles that are essential to making the scientific method work that are also relevant to statistics, probability theory, perception, Hebbian "fire-together-wire-together" neuronal activity, and how physical information works at a basic level. — Count Timothy von Icarus
I suppose my next question is what is flawless without bugs? Even concepts of the most perfect thing "God," are riddled with bugs. "everything being a quick and dirty fix," I like that for a little comedy routine I've been toying with, it Starts off with "Y'all are some Naaaassty motherfuckas..." Ty for the inspiration. — Vaskane
Our brains seem bugged when it comes to communication. Or perhaps it's both language and brain? — Vaskane
I also think the phenomena of "wrong" rules is a reminder that rules ultimately are also defined and refined in terms of their purposes, which is easy to recall with games, but harder with natural language and mathematics. — Count Timothy von Icarus
This isn't circularity. It's feedback.This ends up circular, but not in a bad way. Words have their meanings because of how people use them, but then how people use words ends up being driven (in at least some cases) by what people want/intend them to mean. — Count Timothy von Icarus
In a sense, that's true. But both the software and hardware are designed and built to produce certain results which are meaningful in the context of human life and practices. So the ultimate foundation that Wittgenstein arrives at "This is what I do" does apply.Also, video games have the benefit of having canonical rules that are faithfully executed by a computer. — Count Timothy von Icarus
I like your example here. As a matter of style, there's a lot to be said for avoiding the passive, just because it is vague, unless it is unavoidable. It's to be expected that natural language will be messy and complicated, and also that people will find and adopt problematic ways of using it. Although one might argue that ambiguity is often useful or desirable in pragmatic ways. Of course, whether they are desirable or not will depend on your point of view.
Natural language bugs will necessarily be hard to define due to the lack of canonical rules and faithful execution. — Count Timothy von Icarus
Yes, that's the point that one keeps coming back to - even if one thinks about different ways of using language.Whatever existence is is bound analytically to the saying it is. — Astrophel
This seems to me the right way to approach the problem. Is it too brutal to observe that the description of the cat is not the cat? Why should it be? It would be pointless if it were. But when we are dealing with the cat, interacting with it, it is the cat we are interacting with, and not a description of it. Is describing the cat inter-acting with it? Clearly not in the sense required to state the problem. To accept a sense of interaction that includes description as interaction is to dissolve the problem by definition and will satisfy no-one.
But knowing does not give one the kind of "ontological" intimacy you seem to be suggesting. To knowledge, the world will remain transcendental. There is my cat, that lamp, that fence post over there, and here am I. Nothing is going tp bridge that distance, no matter how one theorizes epistemic relations. I know that they exist, but I don't know what that means. This is because language is pragmatic: in perceptual events I DEAL with the world, and meaning is bound up in this. — Astrophel
It sounds as if Language is a real rag-bag. But I'm guessing that you are relying on the structure of signifier and signified as the common element. But, in this use, it doesn't help; the effect of the way you use these concepts is to smother differences that seem important to me. Pictures are very different from descriptions, just because they are representations of something; descriptions, in my unorthodox view, are not representations at all; maps and diagrams are half-way houses between the two; signalling flags are a code; they are more like words, but not the same. Words are not all of a piece either; the numeral "1" stands in a very different relationship to its signified from "horse"; "walking" signifies something very different from either - and so on. You may think the differences don't matter. We'll see.
I am using Language as broadly as one can imagine, to include all images, representations, signifiers etc., if there are ceteras, stored in memory/History and structuring what we--philosophers and laity alike--think of as human experience. — ENOAH
Like language, history is a mixed bag. But that's not my main problem here. My problem is that I simply don't follow what you say and in any case, I'm not at all sure that there are laws of history. Certainly, since it normally takes the form of a narrative, which does not present us with any laws, the idea must be problematic. But the biggest issue is that much history is about people. You seem to regard it as an independent actor. It's as if you were telling me about the army going to war, rather than people going to war.I am using History to refer to the collective of these Signifiers operating on the Natural World beyond the individual body, and constructing Narratives beyond individual personalities, all of which moves autonomously in accordance with evolved Laws and Dynamics, is inter-permeable or accessible to Itself in spite of embodiment, is ultimately Fictional, and though it affects Realty via embodiment and the manipulation of resources into Culture, it has no access whatsoever to knowing Reality, despite all of our (Its own) efforts to prove it wrong. — ENOAH
It's Kierkegaard who complained that Hegel had "forgotten that we exist." — Astrophel
This is a jewel. I know I could argue that if I obediently breathe, language has put me in touch with reality. But you remind me of the Zen masters who will reply to questions like "what is reality?" by offering you a cup of tea. Perhaps we should share one and stop worrying so much. Or am I misunderstanding you?If its Reality you want, just breathe. — ENOAH
I can just about get my head around this. But you said earlier:-All we can say regarding the Truth of this hypothetical in Reality is the Organism seeing. It is in the Organism do-ing, be-iing, see-ing , is-ing, all of which "exists" in presence, in is-ing/be-ing, which is True. — ENOAH
I don't see why you can't count perceiving as just one of the activities of human beings. Good, bad or indifferent as signifiers may be, they are also real and part of reality.But with the advent of uniquely human Consciousness or Mind, "seeing" is immediately displaced by "perceiving." That is, it is displaced by the Signifiers re-constructing the sensation with its Narrative. — ENOAH
When I wrote "dialect", I did not mean "dialectic". But maybe you are pointing to the same issue - mutual comprehension. There's only one philosophy that seriously tried not to use specialized philosophical dialect/language/dialectic - "ordinary language philosophy" - and that didn't end well. (I say that it turned out that ordinary language was just another speciality.) I think we have to look at some sort of translation between philosophies if there is to be any kind of dialogue. You are clearly succeeding in that, because I at least have the impression that I can partly understand what you are saying.I am not necessarily using any philosophical dialectic, — ENOAH
Everything is an over-simplification. There's no final statement of a philosophical doctrine. What matters is relevance to the matter at hand. I need to think over what you say, but I will respond - as briefly as I can.This was an over simplification. But, alas, oversimplifying, I find, is unavoidable in a forum like this. — ENOAH
It is worse than that. If you know that you never see the lamp, you must know what it would be like to see the lamp. That means it is possible to see the lamp (under some circumstances). This "assumption" involves changing the meaning of "see". But the idea of hallucinating that you see a lamp (etc.) assumes that "hallucinating" is like seeing, but different. So even the conclusion that, when we think we see a lamp, we are really hallucinating seeing the lamp still assumes that it is possible to see the lamp.
Your question 'how is knowledge that you see a lamp possible' follows from the assumption that you never see the lamp, only something prior to the seeing, in your own seeing. — jkop
Surely, more accurately, that something is, is found in a true proposition (but not in a false one). But I would agree that a (meaningful) domain of discourse includes criteria for distinguishing between truth and falsity. But discourse is not, as formal logic is supposed to be, a structure fixed for all circumstances - the rules can break down, but they can be revised. That seems to me to address, at least partly, the fundamental concerns here.That something is, is found in a proposition. Quantification or domain of discourse. — Banno
In one way, I agree with you. However, I have great difficulty in understanding the philosophical dialect you are speaking after that. One problem (which does not occur here) is that I suspect that the term "language" is often taken to mean a single structure; that is reinforced when you give it a capital letter "Language". I don't think language has a single, overall, structure. (I wonder if Platonism is not at the back of that idea.) Wittgenstein compares language to an ancient city with many overlapping and interacting structures, and that seems more helpful to me.
Isn't that exactly what eventually but (almost?) inevitably happens when there are gaps in the Language structures. — ENOAH
There is an idea that I like in this, if I've understood it. It is the idea that we need to start with the world, rather than with language. Then we can see language as part of the world and as developing within it. So the question is not, "how does language reach the world?" but rather "how does language develop within the world?". Whether it involves transcendence or not, the starting-point must be our lives as actual physical human beings.We dont use a concept to establish a world without concepts, we find ourselves thrown into a world ( we ‘are’ a self by continually transcending toward the world) and speak from amidst the beings ( things, concepts, uses) that are actualized from out of that world which projects itself — Joshs
I'm not sure exactly what you mean. If it is within those boundaries, it is new in an old sense, already catered for. The points where the boundaries break down or are transcended are where the world might be said to show itself. There is another, surprising, possibility. The rules of language may themselves lead to incomprehensible conclusions: irrational numbers, imaginary numbers, calculus &c. These are places where we don't know what to say. We may be driven to develop new ways to speak, or stretch the boundaries by means of metaphors or poetry or pictures - even, possibly, music and dancing.
That's right, but because novelty is not a neutral in-itself, the world will inflict novelty within the boundaries of specifically organized discursive structures of intelligibility. — Joshs
I'm not sure whether you are saying that the T-sentences resolve the problem or not. I'm reminded of Wittgenstein asking himself how he can possibly use language to get beyond language. Isn't that where he starts talking about saying and showing?The statement on the left is about language. The statement on the right is about how things are. T-sentences show that truth concerns how language links to how things are. — Banno
So perhaps the project of positing the world in a stand-alone way is a mistake?If your statement belongs to a certain language game, then the game is always already in play the moment recognition of the state of affairs comes about. And what are facts if not IN the game? Or ON the grid of language possibilities? None of these establishes a knowledge that can allow the world to be posited in this stand alone way. — Astrophel
So there is a concept that resolves the problem how to establish a world without concepts?Heidegger’s concept of being-in-the -world determines that language and world are precisely not at a distance from each other. On the contrary, language discloses self and world together, as our always already being thrown into worldly possibilities. Merleau-Ponty and Wittgenstein make related points. The distance is not between language and the world, it is between our self and our self, due to the fact that, through language, we always come to ourselves from the world. — Joshs
I see why belief is dyadic. But I don't see that truth is monadic. Surely truth has an (often suppressed) object - "true of" or "true to". A true right angle looks monadic, but is not typical.So truth is a monadic predicate, while belief is dyadic. — Banno
These two sentences look contradictory to me.This superficial structure serves to show that a belief is always both about a proposition and about some agent. ....... It might be misleading as the proposition is not the object of the belief but constitutes the belief. — Banno
But I agree with this..... a belief is always both about a proposition and about some agent. — Banno
Why do we want to?It's very hard to give an account of knowledge that transcends the nature/mind, subjective/objective divide. — Count Timothy von Icarus
I'm afraid there is a big problem. What "correspond" means is completely unclear. Consequently, this theory - paradoxically - is the basis of some very strange ideas, such as the idea that reality is, in some mysterious way, beyond our ken.I think you get at a confusion that comes up with correspondence definitions of truth. We say a belief is true if it corresponds to reality. No problem here, beliefs can be true or false - same for statements. — Count Timothy von Icarus
Thanks for the explanation. I understand from what you say that the cow that I recognize exists independently of my recognition of it. Less exciting than I hoped.Forward looking toward anticipated results, and this is an event of recognition that is localized in the perceiving agency, you or me. The object over there, the cow, "outside" of this is entirely transcendental because outside in this context means removed from the anticipatory temporality of the event. — Astrophel
Well, it is certainly true that the only kinds of beings that are conscious in our universe are humans and animals. Humans are our paradigm of a conscious being. Consciousness in animals is more complicated, partly because different animals have different levels of consciousness and partly because there are different opinions about how conscious they are. Whether it is possible that there are conscious beings apart from these is another question. There's no doubt that it is possible to imagine that there are, and the future no doubt holds many things that we do not yet know. So I think your claim is over-confident.
I cannot see how anything else in the universe can be conscious apart from humans, and some of the mammals (having lesser consciousness due to their lack of language uses). — Corvus
It used to be common sense that the earth is flat and that there is no land between Europe and China.I still believe that everyone (with common sense) knows / agrees that consciousness emerges from the brain. — Corvus
I put my point badly. I only wanted to say that dualists might find it somewhat problematic to say that the brain generates the mind - even if you expand it to the body creates the mind. Dualism may be less popular than it was, but it still has philosophical adherents. I have to acknowledge that fact even though I think they are mistaken.It is not the main point of the OP worthy to quibble about, because the OP is not a High-Order Logic topic — Corvus
It may be that they need to relax and concentrate on how the system works. If you ask what part of the central heating system keeps the house warm, you'll find yourself endlessly searching. If you ask where the self is that moves the car, you may discard some parts, but you'll never narrow it down to one part.That is the hard problem of mind-body issue. No one seems to know. The biologists and neurologists were suppose to find about it. — Corvus
This is a much contested theory. But what's the alternative? A logician can simply decide that "know" is primitive; but that's just abandoning the idea of defining it."S knows P iff S believes P, is justified in believing P and P is true" — Astrophel
I take the point in the first sentence. I don't really understand the last sentence. Do you mean that only true statements can act as justification (where "p is false" is true iff p is false)?
And so I see that "P is true" entails the existence of P in an way that is supposed to be independent of justification which is an altogether nonsensical assumption. Can't be done. And this is because existence is part and parcel of justification itself. — Astrophel
Discovering something is revealing it, and makes perfect sense when applied to truths. One would need to explain what "constructing a truth" means in a good deal more detail for it to make sense.
Discoveries are events of constructing a truth. — Astrophel
It seems to me rather like a ceteris paribus clause, requiring us to withdraw our claim to know that p if it turns out that p is false.The "is true" in the JTB account simple rules out knowing things that are not true. It is distinct from the justification. — Banno
Yes, that's true. And, as your articulation of the point demonstrates, the possibility is built in to our language. Our language allows us - even requires us - to distinguish between language and the world.
But having said that, there is indeed a close relation between epistemology and ontology. Statements being true or false is indeed dependent on what there is in the world. — Banno
That's true. But the grid of language (including logic and mathematics) does allow us to speak of conditions in the world. Truth would not be possible if it didn't. It is true that sometimes we need to develop or change the concepts that we apply to the world, and that seems difficult if you think of language as a grid - i.e. fixed and limited. But language is a hugely complex system which can be developed and changed - as is logic (as opposed to individual logical systems).It seems pretty clear that conditions in the world are really impossible to speak of outside of the grid of logic and language. — Astrophel
It just goes to show how easy it is to mistake "the people that I know" for "everyone". It happens all the time. One issue is whether the mind is located in time and space. Another is the nature of the relationship between mind and brain. Descartes believed that the mind interacts with the body through the pineal gland. But he did not believe that the mind was generated from it. But see my reply to Pantagruel below.
I was shocked to read the post by ↪flannel jesus claiming that there are still many folks who believe minds are not generated from physical brains. If mind is not in brain, where would it be? — Corvus
No. The times tables are a short cut. They are the results of calculation. We memorize them because it makes it easier to do more complex multiplications. (I'm sure you know that 2 x 3 = 2+2+2). Some (perhaps all?) primary school children are introduced to multiplication in that way. Once they understand that multiplication reduces to addition, they are moved on to memorizing their tables.If a person memorizes the "times tables", and uses them to work out the result of a multiplication, are they actually doing a calculation? — Agree-to-Disagree
Perhaps at the software level it does mean that. But in this case, I think the "different way" is based on the machine coding of the process. (However, the AIs are a different case. The difference is clearly at the software level.)
There are many ways that people use to solve a mathematical multiplication. Most involve either using their memory, using a calculator, or using an algorithm. Computers normally use an algorithm. Doesn't that mean that computers calculate in a similar way to humans? — Agree-to-Disagree
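For illustration only (my own sketch, not anyone's actual implementation), here are the three routes side by side: a memorized table is a lookup, repeated addition is what the table summarizes, and a digit-shifting algorithm of the kind hardware uses is a third way to the same answer.

```python
# Hypothetical sketch: three ways of "multiplying", all agreeing on the answer.

TIMES_TABLE = {(a, b): a * b for a in range(1, 13) for b in range(1, 13)}

def by_lookup(a, b):
    """Knowing your tables: no calculation at the moment of use."""
    return TIMES_TABLE[(a, b)]

def by_repeated_addition(a, b):
    """2 x 3 = 2 + 2 + 2: multiplication reduced to addition."""
    total = 0
    for _ in range(b):
        total += a
    return total

def by_shift_and_add(a, b):
    """Roughly what binary hardware does under the hood."""
    total = 0
    while b:
        if b & 1:
            total += a
        a <<= 1
        b >>= 1
    return total

assert by_lookup(2, 3) == by_repeated_addition(2, 3) == by_shift_and_add(2, 3) == 6
```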
Yes, I agree with that. My understanding is that once you get into details, the spine is deeply involved in what the brain is doing, so we should not think of the brain alone, but of the brain + spine - and the entire nervous system. Then we have to recognize the hormonal system in the emotions and the heart and muscles in action. In the end, I actually prefer to say that the connection is between the mind and the whole body. But I am too lazy to always be correcting people, so in most circumstances I just let the difference go.In the complex system wherein and whereby the embodied brain operates — Pantagruel
The fundamental problem is to understand when we can say that the machine is doing anything, in the sense that humans do things. Can they be said to calculate, for example? Do they check our spelling and grammar? Searle says not because it is we who attribute significance to their results. But that means that their results are significant; we treat what they do as calculation or spell-checking. It isn't straightforward either way.At the moment humans are hoisting AI up. It is not hoisting itself up by its own bootstraps. If humans hoist AI up high enough then AI may gain the ability to hoist itself further without human intervention. — Agree-to-Disagree
I see that a lot of people have jumped on this. There's a lot of disagreement. But I agree that most people think that there is a close connection between the mind and the brain. But there is a good deal less agreement about what that connection is. It is a hard problem indeed.everyone knows the mind emerges from the physical brain. — Corvus
This is a really useful way to think about these issues - particularly when we are thinking about how AI might develop. It seems to me that it can be applied very widely to technology in general. Darwin applied evolution to living things because they are self-replicating. However, that idea depends on how you look at things. Some parasites are dependent on another species to replicate. (I have in mind the fungi that replicate by hi-jacking ants - Wikipedia - Ant-parasitic fungus.) Viruses hi-jack the cells in their host to replicate - though they are border-line alive. Lichens are another interesting case.
I think that some people believe that AI is hoisting itself up by its own bootstraps, programming itself, perhaps in some sense that is a precursor to sentience. In fact, AI is parasitically dependent on human intervention. — Pantagruel
Maybe this also applies to human beings. Too much recycling of the same ideas without evaluation or criticism of them is harmful to thinking. Full stop.Specifically, if human beings rely too heavily on AI then essentially we are back to the self-consumption of AI and model collapse, yes. — Pantagruel
If I don't know the difference between "I" and "you" (and "they"), how can I articulate my observation that I am thinking? If I can't articulate the observation, is it meaningful to say that I can observe it? I think not. So the thinker's awareness that they are thinking may be a special case, but it is not independent of other people's observation that they are thinking and the thinker's awareness that other people are thinking.You claim that YOU don't need an external observer to know that YOU are thinking. But YOU are a special case. You are making an observation about yourself. Other people need to observe YOU to try and determine if YOU are thinking. And people need to observe a computer to try and determine if the computer is thinking. — Agree-to-Disagree
Quite so. That's why the short argument about whether machines can be conscious etc. is that there are already conscious machines in existence. There are plenty of questions about what would persuade us that something is a conscious or living machine, so that argument is not very helpful. But for what it is worth, I think it stands up.Humans can be considered to be biological machines. — Agree-to-Disagree
Why do you need information about the physiological state of the subject? Unless you are a medical doctor or neurologist, it seems to be a remote area which wouldn't reveal a lot in terms of one's state of consciousness in analytic and metaphysical level. — Corvus
They would make great Christmas presents — Agree-to-Disagree
they can't think creatively. — Relativist
Yes, I guess so. So long as you make quite sure that they cannot reproduce themselves.it should be fine to produce some rudimentary intentionality, at the levels of some low level animals like cockroaches. Terminating it would then be a pleasure. — Relativist
Exactly - though I would have put it a bit differently. It doesn't matter here.Yes, I meant "construe" to mean interpretation for other people's minds. I feel it is the right way of description, because there are many cases that we cannot have clear and obvious unequivocal signs and evidences in real life human to human communications. — Corvus
Yes. Further information can be very helpful. For example, the wider context is often crucial. In addition, information about the physiological state of the subject. That also shows up in the fact that, faced with the new AIs, we take into account the internal workings of the machinery.Inference can be made in more involving situations, if we are in a position to investigate further into the situations. In this case, you would be looking for more evidences and even psychological analysis in certain cases. — Corvus
I don't think there is any specific behaviour (verbal or non-verbal) that will distinguish clearly between these machines and people. We do not explain human actions in the same way as we explain what machines do. In the latter case, we apply causal explanations. In the former case, we usually apply explanations in terms of purposes and rationales. How do we decide which framework is applicable?
I think the fundamental problem is that neither Turing nor the commentators since then have (so far as I know) distinguished between the way that we talk about (language-game or category) machines and the way that we talk about (language-game or category) people. — Ludwig V
The next question is whether we can tease out why we attribute sentience and intelligence to the parrot and not to the AI. Is it just that the parrot is alive and the AI is not? Is that perhaps begging the question?
If these are the criteria for intelligence and maybe even self-consciousness, then AI certainly is sentient. — Pez
Do we really want to? (Somebody else suggested that we might not even try)The possibly insurmountable challenge is to build a machine that has a sense of self, with motivations. — Relativist
I did put my point badly. I've tried to find the analysis you refer to. I couldn't identify it. If you could point me in the right direction, I would be grateful.Your saying the AI operation is simulation was a real over-simplification. My analysis on that claim with the implications was realistic and objective. — Corvus
That's a high bar. I agree that it is impossible to meet. But it proves too much since it also proves that we can never even know that human beings have/are minds.Problem with all the mental operations and events is its privateness to the owners of the minds. No one will ever access what the other minds owners think, feel, intent ... etc. Mental events can only be construed with the actions of the agents and languages they speak by the other minds.
.....To know what the AI machines think, and feel, one must be an AI machine himself. The possibility of that happening in the real world sounds like as unrealistic and impossible as the futile ramblings on time travel fictions. — Corvus
I'm not sure of the significance of "sentient" in this context, but I agree whole-heartedly with your point that without the ability to act in the world, we could not be sentient because, to put it this way, our brains would not learn to interpret the data properly. The implication is that the machine in a box with no more than an input and output of language could not approximate a human mind. A related point that I remember you pointing out is that the machines that we currently have do not have emotions or desires. Without them, to act as a human person is impossible. Yet, they could be simulated, couldn't they?AI is unlikely to be sentient like humans without the human biological body. Without 2x hands AI cannot prove the existence of the external world, for instance. Without being able to drink, AI wouldn't know what a cup of coffee tastes like. — Corvus
I see. But then, there's the traditional point that induction doesn't rule out that it might be false, as in "the sun might not rise tomorrow morning".It is called Inductive Reasoning, on which all scientific knowledge has been based. It is a type of reasoning opposed to the miracle and magical predictions. — Corvus
There are two different questions here. If you know that p, I might also know that p, but not that you know that p. But I can also know (and not just guess) that you know that p. For example, you might tell me that you know that p. And I can tell whether you are lying.I don't know what you know. You don't know what I know. We think we know what the others know, but is it verified knowledge or just mere guess work? — Corvus
Yes. It sounds positively cosy, doesn't it? Watch out! Assistants have been known to take over.They seem to just want to be called as "the useful assistance" to human needs. — Corvus
You over-simplify. A forged painting is nonetheless a painting; it just wasn't painted by Rembrandt. An imitation of a painting by Rembrandt is also a painting (a real painting). It just wasn't painted by Rembrandt.Imitation means not real, which can imply being bogus, cheat, deceit and copycat. AI guys wouldn't be happy to be called as 'imitation', if they had feelings. — Corvus
Yes. But what would you say if it mindlessly spews out what has been fed in to it, but only when it is appropriate to do so? (I have in mind those little things an EPOS says from time to time. "Unexpected item in the bagging area", for example. Or the message "You are not connected to the internet" that my screen displays from time to time.) It's a kind of half-way house between parroting and talking.AI is comparable to a sophisticated parrot being able to say more than "Hello" and "Good morning". But in the end it just mindlessly spews out what has been fed into it without actually knowing what it says. — Pez
But I can tell that other people are sentient. I don't say it follows that I know what sentience is. Do you?If I can't tell that other people are sentient, then I don't know what it is to be sentient.
— Ludwig V
Exactly. — Corvus
Yes. Do you disagree?Simulation = Imitation? — Corvus
What is your ground for moving from "it hasn't happened" to "it will never happen"?What is the ground for your saying that there was no ground? — Corvus
I know that other people are sentient, so I assume that I can tell whether insects, bats, etc. are sentient and that rocks and rivers are not. Though I admit there may be cases when I can't tell. If I can't tell that other people are sentient, then I don't know what it is to be sentient.We don't know that for sure, unless we become one of them in real. — Corvus
I'm really puzzled. I thought your reply to @RogueAI meant that you thought we should not take such fantasies seriously. But you are now saying that you think they are possible (or perhaps not impossible) nonetheless. I do think you are giving them too much credit. In brief, my answer is that we already accept that reality is very different from what we think it is, what with quanta and relativity. But there is evidence and argument to back the theories up. The wilder fantasies (such as Descartes' evil demon) have no evidence whatever to back them up. Taking them seriously is just a waste of time and effort.
I think a simulation scenario could be otherwise. Maybe we are all AI, and the programmer of the simulation just chose this kind of physical body out of nowhere. Maybe there were many different attempts at different physical parameters. Maybe the programmer is trying to do something as far removed from its own physical structure as possible. — Patterner
Oh, well, that's different. Insects with multiple lenses have a different type of sentience from us. Spiders detect sounds in their legs. Perhaps bats' near total dependence on sound would count as well. Different types of sentience are, obviously, sentience. I also would accept that anything that's running the kind of software we currently use seems to me incapable of producing spontaneous behaviour, so those machines could only count as simulations.My point was that due to the structure, origin and nature of human minds (the long history of evolutionary nature, the minds having emerged from the biological brain and body, and the cultural and social upbringings and lived experience in the communities) and the AI reasonings (designed and assembled of the electrical parts and processors installed with the customised software packages), they will never be the same type of sentience no matter what. — Corvus
There is exactly the same amount of evidence for the prediction that AI will possess the same sentience as the humans in the future as for the prediction that they/it will not. None. But I wouldn't want to actually predict that it will happen. I meant to say that it might - or rather, that there was no ground for ruling it out.Do you have any evidence or supporting arguments for the prediction that AI will possess the same sentience as the human's in the future? In which area and in what sense will AI have human sentience? — Corvus
Yes, that's exactly my point. In the world of "Matrix", not everything is a simulation.But just think of the film "Matrix". In principle we could connect a computer to all the nerves of a human brain and thus simulate a "real" world. Virtual reality is just a first step towards this "goal" and so is creating artificial limbs a person can activate with his brain. — Pez
But there are ways of sorting out the reliable memories from the unreliable ones. I'm only objecting to the idea that all my memories might be false. Any one of my memories might be false, but if none of them were true, I wouldn't have any memories to distrust.Descates' argument, that I cannot even trust my memories, — Pez
Everyone will agree that current AIs are limited. But I don't see why you are so confident that those limitations will not be extended to the point where we would accept that they are sentient.AIs can be intelligent, powerful, versatile therefore useful. But I wouldn't say they are sentient. Sentience sounds like it must include the intelligence, emotions and experience of lived life of a person i.e. the totality of one's mental contents and operations. AI cannot have that.
Also AI can never be versatile as human minds in capabilities i.e. if you have AI machine for cutting the grass, then it would be highly unlikely for it to come into your kitchen and make you coffees, or cook the dinners for you. — Corvus
There's plenty of evidence from biology that the latter is the case. As a starter, is phototropism sentience or not? I think not, because no sense-organ is involved and the response is very simple.Is sentience a yes or no issue, or are there degrees of sentience? — Agree-to-Disagree
Wikipedia - Phototropism
In biology, phototropism is the growth of an organism in response to a light stimulus. Phototropism is most often observed in plants, but can also occur in other organisms such as fungi. The cells on the plant that are farthest from the light contain a hormone called auxin that reacts when phototropism occurs. This causes the plant to have elongated cells on the furthest side from the light.
