I would suggest that the limitations of LLMs could be the feature and not the bug that helps ensure AI alignment. — apokrisis
Memory stores information - whether it be who won the Super Bowl last year or what habits work best in which conditions (the past). All you are doing is making it more complicated than is necessary, or we are both saying the same thing just using different words (yours is more complicated whereas mine is succinct).

On the memory point, human neurobiology is based on anticipatory processing. So that is how the mammalian brain is designed. It is not evolved to be a memory bank that preserves the past but as a generalisation platform for accumulating useful habits of world prediction. — apokrisis
How can you expect any of these things without referring to memory? What does it mean to "expect" if not referencing memories of similar situations to make predictions?

If I see a house from the front, I already expect it to have a back. And inside a toilet, a sofa, a microwave. My expectations in this regard are as specific as they can be in any particular instance. — apokrisis
Instincts are a form of memory that resides in the genetic code rather than the brain. Instincts are a general-purpose response to a wide range of similar stimuli. Consciousness allows one to fine-tune one's behaviors, even overriding instinctual responses, because it allows an organism to change its behavior in real-time rather than waiting for the species to evolve a valid response to a change in the environment.

If we step back to consider brains in their raw evolved state, we can see how animals exist in the present and project themselves into their immediate future. That is "memory" as it evolved as a basic capacity in the animal brain before language came along to completely transform the human use of this capacity. — apokrisis
Cats and dogs, and I would be willing to bet any animal with a sufficiently large cerebral cortex, dream. The key distinction between human and other animal minds is that we can turn our minds back upon themselves in an act of self-awareness beyond what other animals are capable of - to see our minds as another part of the world (realism) instead of the world (solipsism). I think we are all born solipsists, and we convert to realists when, as infants, we obtain the cognitive skill of object permanence. Animals, except for maybe chimps and gorillas, never convert. Chimps and gorillas even seem to be realists when it comes to other minds, as they seem to understand that another's view can be different from their own.

My cats don't laze around in the sunshine daydreaming about the events of yesterday, the events of their distant kitten-hood, the events that might be occurring out of immediate sight or a few days hence. They just live in the moment, every day just adding new data to generalise and apply to the predicting of their immediate world in terms of their immediate concerns. There is nothing narrated and autobiographical going on. Nothing that could lift them out of the moment and transport them to reconstructions of other moments, past or future; other places, either in the real world they could have experienced, or in the possible worlds of imaginary places. — apokrisis
It seems to me that to get there would simply require a different program, not a different substance. Would an LLM be self-aware if we programmed it to distinguish between its own input and the user's, to use its own output as input (creating a sensory feedback loop), and then to include the procedural loop itself in its input? Self-awareness is simply a nested sensory feedback loop.

So if we were thinking of LLMs as a step towards the "real thing" – a neurobiological level of functioning in the world – then this would be one way the architecture is just completely wrong. It is never going to get there. — apokrisis
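As a toy illustration of the nested feedback loop proposed above, here is a minimal sketch in Python. It is only a sketch of the proposal, not any real system's architecture, and `generate` is a hypothetical placeholder for an actual LLM call. The point is just that the loop is a matter of program structure: the model's output is tagged as its own, fed back as input, and the looping procedure itself is described within that input.

```python
# Toy sketch of a "nested sensory feedback loop" for an LLM.
# `generate` is a hypothetical placeholder, not a real LLM API.

def generate(prompt: str) -> str:
    # A real implementation would call a language model here.
    return f"(model response to ...{prompt[-40:]!r})"

def run_loop(user_input: str, steps: int = 3) -> str:
    # Tag the user's input so the model can distinguish it from its own output.
    transcript = [f"USER: {user_input}"]
    for _ in range(steps):
        # First-order loop: the model's own output becomes part of its next input.
        reply = generate("\n".join(transcript))
        transcript.append(f"SELF: {reply}")
        # Nested (second-order) loop: a description of the procedure itself
        # is also fed back, so the loop is part of what the model "senses".
        transcript.append("META: the SELF line above is my own prior output, fed back to me as input.")
    return "\n".join(transcript)

print(run_loop("Are you aware of your own output?"))
```

Whether such a loop would amount to self-awareness is of course exactly what is in dispute; the sketch only shows that the proposal concerns program structure rather than substance.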
It seems to me that for any of this to be true and factual, you must be referring to a faithful representation of your memories of what is actually the case. In other words, you are either contradicting yourself, or showing everyone in this thread that we should be skeptical of what you are proposing. You can't have your cake and eat it too.

I might be right or I might be completely inventing a plausible scene. How can I tell? But that only tells me that as a human, I'm built to generalise my past so as to create a brain that can operate largely unconsciously on the basis of ingrained useful habits. And if humans now live in a society that instead values a faithful recall of all past events, all past information, then I can see how AI could be integrated into that collective social desire. — apokrisis
Brains evolved to improve an organism's chances to survive and procreate. We should also recognize that natural selection has selected for the repurposing of organs as well, so we could say that humans have repurposed their brains for goals other than survival or procreation. But is it an actual repurposing, or just a more complex nested arrangement of goals in which survival and procreation remain the fundamental goals?

And to make predictions about that dynamic, one really has to start with a clear view of what brains are evolved to do, and how technology can add value to that. — apokrisis
Philosophy tends to do that - leading you to question things you took for granted, only to find out that the reason you take them for granted is that the issue was already solved long ago: "taking it for granted" is you having relegated the process to unconscious thinking. Later in life you participate in runaway philosophical skepticism to bring it back to conscious processing - Why do I believe 5+7 = 12? What proof is there that 5+7 is 12? You end up discovering that these are actually silly questions precisely because you are trying to solve a problem that was already solved in your grade-school years.

So what went wrong? Why are you trying to lead me to things I think are basic? Where's the misunderstanding? What's the problem? I don't know how to reply. I'm confused. — Dawnstorm
It seems to me that we might already be where you don't want society to go. We already have subservient agents in the animals we have domesticated and put to work. For a robot to mow the grass means that it must be able to distinguish between itself, the grass, and the lawnmower. Would they not be autonomous or conscious to some degree?

I think the first route is the most practical and also the one that is the most likely to be taken, if it is. But while I think we could create somewhat sentient (that is, capable of grasping affordances for bodily action) autonomous robots, providing them with what it takes to develop concerns for themselves (autonomic/endocrine integration + socially instituted personhood) would be a mistake. We would then have the option of granting them full autonomy (politically, ethically, etc.) or making them slaves. I don't see any reason why we shouldn't stop short of that and create robots that are as conatively "inert" (subservient) as LLM-based AI-assistants currently are. They would just differ from current LLMs in that in addition to outputting knock-knock jokes on demand they would also go out to mow the grass. — Pierre-Normand
Thank you for patiently clarifying.

You're misreading me. I was merely saying (clarifying for apo) that no mysterious emergence process had to be invoked to account for the abilities that LLMs manifestly demonstrate. I was not claiming that a mysterious something-something was needed to account for whatever it is that similar human abilities have that makes them unique. — Pierre-Normand
Agreed. Now, how would we go about deploying these properties in a machine composed of electric circuits that process inputs (sensory information) and produce outputs (human-like behaviors)? Could we simply add more structure and function to what is already there (put the LLM in the head of a humanoid robot), or do we have to throw the baby out with the bath water and start fresh with different material?

However there are plenty of non-mysterious things that already account for features of human mindedness that manifestly (not speculatively) haven't yet emerged in LLMs, and that, by their very nature (read "architecture/design"), are unlikely to ever emerge through scaling alone (i.e. more data and more compute/training). Those non-mysterious things are, for instance, sensorimotor abilities, a personal history, autonomous motivations, a grounded sense of self, etc. — Pierre-Normand
That is my point - that it is only in some mind that they are identifiable as different thoughts. The world independent of thoughts does not make any distinctions. It is just a wave of probability, according to some interpretations of QM. Think about our minds as stretching all causal relations into what we refer to as the medium of space-time.

I'm not following. In whose mind are my father ordering a chef salad in a restaurant and my wife's boss's desire to get her to eat at their monthly meetings not easily identifiable as different thoughts? — Patterner
Well, now you're talking about different minds, not thoughts in the same mind. So yes, I would consider thoughts in different heads different thoughts, but this could just be an outcome of my goal to treat each person as an individual. Are we all separate individuals, or are we only individuals and part of a group when it suits some goal?

My father ordering a chef salad in a restaurant is obviously a different thought than my wife's boss's desire to get her to eat at their monthly meetings. We can focus on, as you said, whatever interests us. — Patterner
This is just more of throwing our hands up in the air and saying, "I don't know how human beings obtain their unique, inspirational and novel ideas, but I know AI can't have unique, inspirational and novel ideas."

I'm with you. Whenever I mention emergent properties of LLMs, it's never part of an argument that the phenomenon is real as contrasted with it being merely apparent. I always mean to refer to the acquisition of a new capability that didn't appear by design (e.g. that wasn't programmed, or sometimes wasn't even intended, by the AI researchers) but that rather arose from the constraints furnished by the system's architecture, training process and training data, and some process of self-organization (not always mysterious but generally unpredictable in its effects). The questions regarding this new capability being a true instantiation of a similar human capability, or it merely being an ersatz, or comparatively lacking in some important respect, are separate, being more theoretical and philosophical (yet important!). — Pierre-Normand
Look a little deeper and you might find that the boundaries of any thought or process are determined by the present goal in the mind. The boundaries are what make some thought relevant and all the rest irrelevant to the goal, but that does not mean that those other thoughts or processes would not be relevant to some other goal if you had it.

That wasn't well-phrased by me. If "a thought" causes another "thought" (countable: one thought, two thoughts...) and it's all "thought", an ongoing process, then we need to divvy up the stream of thought into distinct pieces, each of which is "a thought".
Since I came into this thread saying that "sentences" aren't clear expressions of thoughts and thus "I wonder how Ann is doing," isn't a 1:1 expression of thought, it's up to me to say what a thought is and how it's related to its sentence. I tried in this thread, but... it's hard. — Dawnstorm
The point is that you have a reason to second-guess yourself, and I'd be willing to bet it's the same reason I do the same: that we have been wrong in the past. Don't worry. This is healthy behavior, unlike that of many others on this forum who think they know everything and that it is their feelings, or some authority, that determines truth rather than logic.

Why do I think this? Am I right? How would I tell the difference? (I actually second-guess myself like that all the time.) — Dawnstorm
Then maybe you should lay out how you came to know what the following scribbles mean: "5+7=". Why would you ever return the scribble "12" when there is nothing inherent in the scribbles themselves as to what they mean, or why there is even a relationship between 5+7 and 12?

You have to already have learned what the relationship is. Your recognition that 5+7 and 12 mean the same thing is an effect of your prior experiences. If you had never seen those scribbles before your thoughts about them would be different.
— Harry Hindu
Obviously.

I'm not sure what to make of this whole paragraph. We're talking past each other. — Dawnstorm
You were the one who used the word "isolation"; I was simply trying to get at what you meant by it.

I don't really know what you had in mind with the word "isolation". But, unless we say we have only one thought per day, spanning the entirety of the time we're awake and thinking, then, whatever it means, we isolate thoughts all the time. I just ate a salad. You don't need, and surely don't want, to hear all the thoughts surrounding it. My wife gave it to me. She got it last night at a late meeting for her job. Her boss has these meetings every month. He always gets food, but my wife only eats one meal a day, and it is keto, so she never eats at these meetings. For some reason, that bothers her boss. He always wants her to eat, and actually you could say he pressures her to eat. I don't know why he feels so strongly about it. Anyway, it's usually pizza or something, and she's not gonna eat it under any circumstances. But last night he got her this nice chef salad, and asked her how that was. She said she would eat it today. She gave it to me instead. My father absolutely loves chef salads. He always says, "That was good! It had everything!" It cracks all of us up. We can go to any restaurant, with the most amazing food in it, and he's darned likely to ask if they have a chef salad. :rofl:
I just ate a salad. — Patterner
But why is schizophrenia a mental illness? Why would anyone link trans to mental illness if there were not some type of similarity between being trans and being schizophrenic (as in they are both a type of delusion)? Maybe we should stop with the labels and just get at the symptoms of what we are talking about.

"Schizophrenics are mentally ill" is not a substantive claim; it proceeds from the definition of "schizophrenic". To know the word is to know that "mental illness" and "schizophrenia" stand in a genus-species relationship. It offers nothing new to the competent language user.
This is not at all the case with "all Chinese are mentally disabled" or "all trans people are mentally ill". — hypericin
But if you had a family member who was anorexic and they were told that their condition means that they have a distorted view of their own body, why would they be more accepting of this fact than trans people are of their condition being called a delusion?

This is really just basic decency. If I were trans, or had loved ones who were, I wouldn't want to come here and have to deal with threads claiming that I or my loved ones were immoral and mentally ill based merely on group identification. — hypericin
Early symptoms of delusional disorder may include:
Feelings of being exploited.
Preoccupation with the loyalty or trustworthiness of friends.
A tendency to read threatening meanings into benign remarks or events.
Persistently holding grudges.
A readiness to respond and react to perceived slights. — Cleveland Clinic
Exactly. I have always said that the trans movement is like a religion. They are both mass delusions. This is just being consistent. Aristotle (or the input of any long-dead philosopher) isn't needed. We don't need to refer to long-dead philosophers to determine if an argument is logically sound or not.

Actually I take all that back. I have an idea for a new OP: "Conservative Christians are immoral and mentally ill". I'm positive I can make a better case than Bob Ross, without appealing to a questionable reading of Aristotle. — hypericin
But did they really occur in isolation? What do you mean by isolated? It seems to me that the isolation is a mental projection onto the thinking process, just as we project our categorical boundaries onto other natural processes. And each thought shares a property with the thought before it.

I don't know about being able to isolate a thought from the process of thinking, but we can clearly talk about different thoughts in isolation. I can think of my door that needs work to keep the cold out. I don't know what to do, so I need to find a carpenter. I really like the music of The Carpenters, and Karen had an amazing voice. Karen died because, even though she was recovering from anorexia, it had already caused damage to her heart.
We can talk about many separate thoughts in all that.
-My door letting in the cold
-carpenters
-The Carpenters
-Karen's death
-anorexia — Patterner
We can agree that thinking and recalling are both mental processes and causally related (why would you recall something if not to think about it?).

But I don't think all thoughts caused by another are the result of reasoning. Sometimes it's just an association, which means memory. — Patterner
Yes. And thoughts can be the cause of things that are not thoughts.

And not all thoughts are caused by other thoughts. For example, sensory input often causes thoughts. — Patterner
My definition might be something like:
Thought B was caused by Thought A if B would not have come into existence at the time it did had A not existed first.
As for how it works, I'm thinking of this:
B came into existence because of an association with A (meaning A triggered a memory); because it was the conclusion of a line of reasoning that led from A to B; (other "mental mechanisms"?). — Patterner

Yes, the effect always seems to retain some property of the cause.

B came into existence because of an association with A (meaning A triggered a memory); because it was the conclusion of a line of reasoning that led from A to B; (other "mental mechanisms"?). — Patterner
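For what it's worth, the quoted definition reads as standard counterfactual dependence. In Lewis-style notation (my gloss, not Patterner's):

```latex
% "Thought A caused thought B" as counterfactual dependence:
% had A not existed, B would not have come into existence when it did.
A \text{ caused } B \;\iff\; \big(\neg A \;\square\!\!\rightarrow\; \neg B_t\big)
% where \square\!\!\rightarrow is the counterfactual conditional
% and B_t is B occurring at the time it actually did.
```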
Give me a break. That is not what I'm doing. I'm sorry, but I thought you were critically looking at what I am saying. That is the point of me posting - exposing my idea to criticism, and doing a decent job of defending it reasonably. I don't see how bringing another philosopher in is going to make a difference. It is either logically valid or it isn't.

My point was that your position is not tenable, evidenced by the fact that it is not held by anyone who has critically looked at the matter. It's just a naive sort of view that all words have a referent to have meaning. If there is someone who holds it (maybe Aquinas, but not really), then let's elevate the conversation by borrowing their arguments and starting from there, as opposed to your just insisting it must. — Hanover
Isn't that what I've been asking you - why does someone say or write anything? Why would someone use scribbles? I've asked you several questions about the position you are defending and you are not even attempting to answer them, yet you accuse me of insisting on my position being the case? I was really hoping for a better outcome here.

Consider this sentence: "I am in the house." What does "house" refer to? My house? Your house? A Platonic house form? The image of the house in my head? Suppose I have no such image (and I don't)? So the referent is my understanding of the sentence? It refers to electrical activity in my brain? How do I know that my electrical activity is the same as your electrical activity when we say the word "house"? Do we compare electrical wave activity? Suppose the wave activity is different, but we use the term the same; do we ignore the electrical wave activity and admit it's use that determines meaning? — Hanover
If the string of scribbles does not refer to some actual state of affairs in which my position is not tenable because it isn't shared by anyone who has critically looked at it, then essentially what you said isn't true, and the state of affairs exists only as an idea in your head and not as actual fact outside of your head.

Take a look at my first sentence as well, "My point was that your position is not tenable, evidenced by the fact that it is not held by anyone who has critically looked at the matter," and break this down word by word into referents for me. — Hanover
Maybe you're not getting the meaning of "morning" and "evening" here. What do you think those terms are referring to, and then what is "star" referring to? "Star" refers to the way Venus appears to the human eye, and "morning" and "evening" refer to the time of day it appears in the sky. That was easy. Got any more?

What of words of different meaning yet the same referent, as in "the morning star" and "the evening star," having different meanings but being of the same planet? — Hanover
The conversation has stalled because you aren't curious enough to get at what I mean when I say things like "effects carry information about their causes" and "effects inform us of their causes". Abandon the labels so that you might actually see past these two positions (and an either-or mentality) to other possible explanations.

If - like Harry Hindu - you don't get the difference between the Cartesian representational notion of mind and the Peircean enactive and semiotic one, then the conversation has stalled already. — apokrisis
Sure, I wouldn't want to engage AI on how to show someone I love them, or who to vote for in the next election, but I don't see any reason why it wouldn't provide the same type of engagement as a human in discussions about metaphysics and science - and that is the point, isn't it? It seems to me that any meaningful discourse is one that informs another of (about) something else, whether it be the state of Paris when you vacationed there last week or the state of your mind at this moment reading my post and conceiving a response - which is what your scribbles on the screen will be about when I look at them. You seem to have admitted that you might not necessarily be talking about what Witt meant, which would mean that you are talking about what you think Witt said - meaning your use is still a referent, not to what Witt actually meant (that would be Witt's beetle) but to your beetle. The scribbles refer to your thoughts. The question is, as I have said before, whether your thoughts are, in turn, about the world (that is the reason why there is still a debate on realism, right?).

There are plenty of reasons not to engage a bot even if the bot fully passed the Turing test. — Hanover
Why does any major philosopher need to hold some position for it to be true? I never said words can't exist without referents - just that they lack meaning when not used as a referent. If you aren't referring to anything with your scribbles, then what are you talking about? What knowledge am I supposed to glean from your use of scribbles? What use would your scribbles be to me?

Which major philosopher holds to the position that every word has a referent? Are we about to start arguing theology or something? The position that words can exist without referents is widely held across the board, not just some odd Wittgensteinian result. — Hanover
Sounds circular to me. The problem is thinking that all of language is a game and not just part of it - metaphysics, poetry, musical lyrics, etc.

Because it's a language game, not a metaphysical game. — Hanover
You are free to interpret the line how you want and to respond in any tone you wish. All that matters to me is whether your response is sensible or not.

Given your final line, do you expect a good-faith response? Or would it be more reasonable to simply not be a dickhead, and then expect to not have a dickhead respond? Consider that. — AmadeusD
None of your articles use the phrase "levels of aggression", and they all seem to support that aggression is biological, not social - that males are more aggressive because of their levels of testosterone.

it is the level of aggression typical of males on average. This is not rocket science. This is uncontroversial, and well-known in the psychological literature. — AmadeusD
It's not upsetting to hear about the typical differences. What is upsetting is to equate these differences to differences in gender and not sex.

I cannot conceive of how it's upsetting to hear about the typical differences in aggression between males and females. — AmadeusD
I don't see how one isolates a thought from the process of thinking. It would be like trying to isolate the stomach from digestion, and I don't see how that would get us any closer to how thoughts are caused.

As I replied to Patterner, I'm not concerned with "thought"; I'm concerned with how to isolate "a thought" from the process of thinking such that one can say that "thing" is caused. And I need to be concerned with this because I'm denying that thought corresponds either with words or propositions. The problem is that I have no clear alternative.
If I engage with other people on this topic, I can't just assume we mean the same concepts just because we use the same words. I'll go into examples when replying to Patterner. — Dawnstorm
This doesn't make any sense. How did you know that there is a relationship between the scribbles "5+7" and the scribble "12", or even what that relationship is? WHY does 5+7=12? These are just scribbles on the screen in which the relationship is not obvious with a simple observation. You have to already have learned what the relationship is. Your recognition that 5+7 and 12 mean the same thing is an effect of your prior experiences. If you had never seen those scribbles before, your thoughts about them would be different.

To be precise, at no point did I retrieve the word "12". That is a fact, if my memory is reliable, which it might not be. The choice would have been subconscious, if it's a choice at all, and not just me being busy with other things. One of my interpretations is that - on account of me having made a strong connection between "5+7" and "12" - thinking of "5+7" already is thinking of "12". Me recognising your intention is me foregrounding your intention, and thus actualising the connection between "5+7" and "12" was not necessary. This is not a fact. This is me guessing what went on in my mind. — Dawnstorm
Fair enough. So my argument simply stands for those who recently made the argument that AI's responses are not valid responses while also having taken the position that meaning is use. I'm fine with that.

Even if you think this all inconsistent, the best you can conclude is that it is all inconsistent, but not that it entails some other official declaration. — Hanover
Right. What exactly is the limitation imposed on our knowledge by language if not that the language we are using has no referent (evidence)? In other words, we have no way of knowing if our language captures reality until we make an observation (the scribbles refer to the observation). Metaphysical talk is simply patterns of scribbles on the screen if there is no referent. Just because you've followed the rules of grammar does not mean you used language. All you've done is draw scribbles - the same as AI. One might say that human metaphysical language-use is akin to all AI language-use in that it has no way of knowing what it is talking about.

The limitation imposed by Witt is to knowledge of the metaphysical, not the physical. Some words have referents. I'm not arguing idealism. — Hanover
But if a cat is in my box and a beetle in yours, then how exactly are we playing the same game? It would only appear that we are from our limited perspectives, just as it appears that AI is human because of the way it talks.

We can assume that our perceptions are similar for all the reasons you say. That doesn't mean we need refer to the private state for our use of language. What fixes language under his theory is the publicly available. That is, even if my beetle isn't your beetle, our use of "beetle" is what determines what beetle means. However, if a beetle is running around on the ground and you call it a cat and I call it a beetle, then we're not engaging in the same language game, because the public confirmation is different. — Hanover
But it's not at all irrelevant. You and I must be able to distinguish between the beetle and the rest of the environment - the ground, the trees, myself, yourself, the scribbles we are using. So it seems critical that we make the same kind of distinctions and perceive the boundaries of the things we speak of in the same way.

In example A, if we consistently call this object a beetle, it is irrelevant what my internal state is. We live our lives never knowing what goes on in our heads, but we engage in the same language game. What happens in my head is irrelevant for this analysis. It does not suggest I don't have things going on in my head. It just says for the purposes of language it is irrelevant. — Hanover
True enough. But the idea is that the gathering of people at that time and place is not the cause of the train's arrival. If nobody showed up when they needed to in order to catch the train, the train still would have shown up. It wasn't even the purchase of those particular tickets that caused the train to show up. Ticket purchases for that particular day of the week and time would have to stop for some time before they stopped having the train stop there. At which point, no number of people gathering there would cause the train to stop. — Patterner
It could also simply be that we are wrong about the causes. Train stations are built where there are towns, or close to interesting locations that humans might want to visit. It's not necessarily about where humans are, but where they might want to go. A locomotive company might make a bad investment building tracks to somewhere people are not interested in going, or are no longer interested in going.

Got that, I was joking, but also kind of highlighting the contrastive character of causal explanation. Claims that event A caused event B are always ambiguous if one doesn't specify (or rely on shared assumptions) regarding what counts relevantly as event A happening: is it its happening in general, its happening once, its happening in some particular way, etc. — Pierre-Normand
I wasn't trying to disprove Witt here - just pointing out the contradiction of those on this forum who align with "meaning-is-use" and also claim that AI's responses are not as valid as a human's. And if the forum's official position is that the output of AI is not valid content on the forums, then the owners of the forum have officially taken the stance that meaning is not use.

So, to your claim whether AI genuinely uses language, the answer is probably that it does under a meaning-is-use analysis, but what damage does that do to Witt's theory generally? — Hanover
That's not the way I interpreted it. If this were so, then how could we obtain scientific knowledge? Science starts with hypothesizing and theorizing. If we only ever start with a limited framework for explaining reality, then how is it that we humans have become the shapers of the landscape rather than just fixtures in it?

It is his [Witt's] position that metaphysical questions cannot be addressed through language because of the limitations inherent in the enterprise. — Hanover
I think that such an argument just opens another can of worms, because now you'd have to explain why our beetles would be so different given the similarities of our physiology and our having developed within a similar culture. Similar causes lead to similar effects. There is no reason to believe that my beetle is different from yours given the similarities between us, just as there is no reason for me not to believe you have a mind because of our similarities. But is my beetle the same as my cat's or a bat's?

Take it another step. One could say (and I'd suggest incorrectly) that Witt's reference to the box itself is a metaphysical claim. Witt says you have a box and I have a box and we both say we have beetles, but the inability to reveal the contents eliminates our ability to argue we have the same referent. My box might contain a chicken and yours a hammer, but as long as we both refer consistently to whatever we internally perceive, then our language game holds. — Hanover
Maybe the issue is classifying causation as "physical" or "mental" rather than simply "procedural"?

And, in reverse, all the muddle-making issues about physical cause show up when we try to understand mental causation! — J
This is the way it is for you now, but what about when you were in grade school learning arithmetic? Are you saying that we only think when we are learning something new, and that when it becomes reflexive it is no longer a thought?

Take "7+5". In what way is that even thought? If I read "7+5" and think "12" then I might just cover this with a stimulus-response model without ever invoking the concept of "a thought".
Another problem: 5+7=12 is usually just memorised, so what happens is that we're completing a cultural template. In a manner of speaking, we're completing a default thought: filling a gap we automatically perceive. So "5+7" might be an incomplete thought where we automatically fill the gap in the proper way. — Dawnstorm
Don't you first need to solve the problem of why you can poke your head into someone else's brain and not see any consciousness or sentience at all - only the "remarkably coordinated behavior of neurons"? Your comments have way too many assumptions built into them. What makes neurons capable of thinking but silicon circuits not?

But what kind of consciousness or sentience would you expect to discover if you could poke your own head into an LLM's world? Perhaps about the same as thrusting your head into an ant colony with all its busyness and remarkably coordinated behaviour, but little actual thinking, feeling, imagining or whatever we would consider being the phenomenology one might expect as a human-scale subject living in our neural models of the world as we expect it to be and how we would wish it to become. — apokrisis
Well, yeah, P-Zombies will act differently than a human being because the causes of their behavior are different (no internal model of the world as the cause of one's behavior). AI acts differently not because it cannot think, but because it cannot act. It's just a language model in your computer, not a humanoid robot with senses like our own that interacts directly with the world and stores sensory information for future use (instructions for interpreting sensory data, or "understanding").

Your error is conflating behavior and consciousness. Your argument is that if a machine acts like a human, it thinks like a human. The pragmatic Turing argument. — apokrisis
AI already does just that. ChatGPT typically ends by asking the user if they would like more information or an example of what was just said. It anticipates the needs of the user given the context of the conversation.

Again, this is about cognition being anticipation-based processing. Forming expectancies that intercept the unfolding of the world even before it happens. We know it is us thinking our thoughts because we form the motor patterns that already prime our sensory circuits that we should be hearing exactly these words in our heads. But when someone else speaks, it feels different, as we are having to guess what might be said and assimilate that to what actually gets said.
So that is the goal for AI that goes beyond just LLMs. Switch to an anticipatory-processing architecture that lives in the world in real time. — apokrisis
This is just another way of saying that we have a set of instructions for interpreting sensory data. Else what is an anticipation or expectation? How can we anticipate or expect anything if we do not have some information stored internally?

From the neurocognitive view, understanding means anticipation. Forming the right expectations. So if not meaning as demonstrated by use, then meaning demonstrated by preparedness.
I hear “apple”, I get ready to react accordingly. My attention is oriented in that particular direction. — apokrisis
It does create a referent to the cause of your utterance. Why did you utter anything? Effects carry information about their causes. Words carry information about the idea of the speaker and their intent to reference it with utterances.

Under this understanding, then so is the cat. That is, the cat is out there, the image is in here, and the reference is to the image in your head. And that is your metaphysical account, but that's not Wittgenstein's, because his isn't a metaphysical account. His is a grammatical account, describing how language operates within our forms of life, and attempts to use language to explain the metaphysical misunderstand the role of language.
If you want to refer to mental objects and qualia and whatnot, you're not forbidden from it, but I'd think he'd just assert that "qualia" is however you use the word. Your position seems to be that the utterance of any word creates a referent. — Hanover
Sure you did, or else there is no aboutness (intentionality) to the scribbles.

I'm not disputing that you learned some words through watching an interaction with its referent. What I am disputing is that you learned the word "freedom," "aboutness," "the [non-existent] present king of France," or "omphaloskepsis" by having had a referent pointed out to you. — Hanover
"Public usage" as in using scribbles to point to objects and events in the world. If you are not pointing to anything with your scribbles that do not ultimately resolve down to things that are not scribbles (as in the case of "freedom" and "aboutness"), then it no longer qualifies as "public usage". It is "private usage".But, what Wittgenstein is saying (as I don't want to say "I am saying" because I'm not fully adopting anything right now) is that you always have public usage available to determine meaning, and if you don't, you don't have meaning. When you point to the cat, it is not the cat, nor the pointing, that defines the cat, but it is your ability to use that term in a consistent manner within the language you are using. To the extent the pointing is a way to communicate about cats, then that is a move within a practice (meaning it's its use). — Hanover
To speak of the cat in a metaphysical way is to confuse the map with the territory. Science updates your map with the relevant information about the cat. Anything else is just conjecture (metaphysics) with no evidence (no referent).

But understand, this says nothing of the cat in some metaphysical way, not because there isn't such a thing, but because the theory specifically avoids such conversation as impossible. — Hanover
Did you? Because it seems that for you to be able to say that you did (and for it to be true), you actually did, and there is some internal representation linking the scribbles "I came, I chimed, I conquered" and the act of someone coming, chiming in, and conquering the discussion - which is not just more scribbles, unless you are an AI.

I came, I chimed, I conquered. — Jamal
Can you provide an actual example of this?

One does not go from one to the other. One holds a first person view while interacting with a third person view. — noAxioms
An example of first/third person held at once would be useful as well.

Haven't really figured that out, despite your seeming to drive at it. First/Third person can both be held at once. They're not the same thing, so I don't see it as a false dichotomy. — noAxioms
Sure, but that would also get us out of the third person view, so I haven't seen you make a meaningful distinction between them (doesn't mean you haven't - just that I haven't seen it).

Do we ever get out of our first-person view?
Anesthesia? — noAxioms
It appears to be a false dichotomy because we appear to have direct access to our own minds and indirect access to the rest of the world, so both are the case and it merely depends on what it is we are talking about. I wonder if the same is true of the first/third person dichotomy.

Direct/indirect realism seem to be opposed to each other (so a true dichotomy?), and both opposed of course to not-realism (not to be confused with anti-realism, which seems to posit mind being fundamental). — noAxioms
Isn't that the point, though? If the scribble "apple" were to be used in a way that does not refer to the very apple we know, then what is the speaker/writer talking/writing about? What would be the point in communicating something that we cannot share in some way? Isn't aboutness an integral part of intentionality? Are you saying that in instances where some scribble is not used to refer to a shared event or object there is no intentionality? Isn't that what they are saying is missing when AI uses words - intentionality (aboutness)?

That might be an overstatement. Words can refer to things. "Apple" can in fact mean the very apple we know, but that's only if that's how it's used. My push back on "understanding" was that I don't think it necessary that for the word to be used in a consistent manner within the game it be understood. — Hanover
It would seem to me that in order for one to understand the word "cat", they must have an internal representation of the relationship between the scribble "cat" and an image of the animal, cat. If they never used the scribble "cat" but retained this mental relationship between the scribble and the animal, could it not be said they understand the word "cat", even if they never used it themselves but have watched others use it to refer to the animal? I don't necessarily need to use the words to understand their use.

The Wittgensteinian approach (and I could be very wrong here, so please anyone chime in) does not suggest there is not an internally recognized understanding of the word when the user uses it, but it only suggests that whatever that is is beyond what can be addressed in language. That would mean that whatever "understanding" is amounts to our public criteria for it. — Hanover
What does it mean to be "meaningful" if not having some causal relation to the past or future? When an image does not appear on the screen, doesn't that mean that the screen may be broken? Doesn't that mean that for images to appear on the screen, the screen needs to be repaired?

I think a snappy way of putting it is that when you turn on your TV, an image appears. But do you believe the TV is seeing anything as a result?
LLMs are just displays that generate images humans can find meaningful. Nothing more.
Biosemiosis is a theory of meaningfulness. And it boils down to systems that can exist in their worlds as they become models of that world. This modelling has to start at the level of organising the chemistry that builds some kind of self sustaining metabolic structure. An organism devoted to the business of being alive. — apokrisis
Exactly. It merely "uses" the scribble "understanding" in certain patterns with other scribbles. That is the issue with meaning-is-use - the scribbles don't refer to anything.

I don't think a meaning is use theory references understanding. — Hanover
Define a duck.

We then have to figure out how we know a duck from not a duck. — Hanover
Then all you are doing is using words with nebulous meanings, or choosing to move the goalposts (in the example of the duck) to make the argument that AI's output isn't the same as a human's.

I think my answer is that AI has no soul and that's why it's not a person. I'm satisfied going mystical. — Hanover
Exactly. Meaning/information exists wherever causes leave effects. Knowledge, or awareness, of these causal relations is not the cause of the relations, but an effect of those relations. We know about them after they have occurred. But we can predict future relations; we just don't know them (do we really know the sun will rise tomorrow, or do we just predict that it will?).

Is mind a necessary condition for meaning?
— RogueAI
Maybe not? For instance, the earth's electromagnetic field means that the earth's core is an electromagnetic dynamo. According to realism, there wouldn't need to be any recognition of this meaning for it to exist.
Recognition of the meaning, on the other hand, requires awareness, and the idea of truth. Maybe we could add the recognition of the idea of being also. I don't think we have to get specific about what a mind is, whether concepts of inner and outer are pertinent, just that there's awareness of certain concepts. — frank

Aren't I? What type of map is the third person one? How does one go from a first person view to a third person view? Do we ever get out of our first-person view?

My mental map (the first person one) rarely extends beyond my pragmatic needs of the moment. I hold other mental maps, different scales, different points of view, but you're not talking about those. — noAxioms
How is talk about first and third person views related to talk about direct and indirect realism? If one is a false dichotomy, would that make the other one as well?

And I must ask again, where is this all leading in terms of the thread topic? — noAxioms
I never said, or implied, there was.

You are confusing utility with teleology: there is nothing random about evolution. — Bob Ross
This makes no sense. Determinism can be the case and everything you do still be by the will of your own nature - which includes your past experiences and learned behaviors. Determinism does not mean that you are forced to make decisions you don't want to. It means that you will always make the same decision given the same information/choices, and that it will be a natural choice - one that you want given the options you have at any given moment. We can only ever do what is natural for each of us.

Not everything that is done is natural. By 'natural', in natural law theory, we mean that it flows from the nature of a given being.
Agency allows beings to freely will against their nature; so it can’t be true that every act is natural. — Bob Ross
I'm not. You completely missed the point. There can be misuses of language by a large number of people who simply repeat what they hear rather than integrating it with the rest of what they know (if Socialists are "liberals", then what does that make Libertarians?). Both sides are liberal on some issues (social vs economic). It is only Libertarians who are liberal on all issues, so why call either side "liberal" when we have a group that fits the term better than either the left or the right?

You are splitting hairs here. Everyone knows that liberalism as a popular movement in America has agendas just like conservatives do. — Bob Ross
Can you come up with examples of liberal agendas? There are liberals, there are agendas, but "liberal agenda" paints a unified conspiracy when political agendas always have to do with money and power. — ProtagoranSocratist
Liberalism in America tends to want the social and legal acceptance of:
1. Sexually deviant, homosexual, and transgender behaviors and practices;
2. The treatment of people relative to what they want to be as opposed to what they are (e.g., gender affirmation, putting the preferred gender on driver’s licenses, allowing men to enter female bathrooms, allowing men to play in female sports, etc.);
3. No enforceable immigration policies;
4. Murdering of children in the womb;
Etc. — Bob Ross
That was a response to Banno's quote, not yours.

Yes I have: what's your point? — Bob Ross
Yes, I was agreeing with you - at least with the part of the OP I was responding to - and just elaborating on the confusion of transgenderism as stemming from a misuse of terms.

Not necessarily; but I am not interested in defending gender theory. My position was against gender theory; and your role as a critic would be to defend it (unless you are agreeing with me or have an alternative theory). — Bob Ross
