Banno
Ludwig V
No. But here's the catch. Once you have pointed that out, somebody will set out to imitate the doing of those things. We may say that the AI is not "really" doing those things, but if we can interpret those responses as doing them, we have to explain why the question of real or not is important. If the AI is producing diagnoses more accurately and faster than humans can, we don't care much whether it can be said to be "really" diagnosing them or not.

AI doesn’t have to, or cannot, do all of that in order to do what it does. — Fire Ologist
I think that you and/or Ramsey are missing something important here. It might well not make a difference whether you water or not, but if it doesn't rain and you don't water, it might make a big difference. Admittedly, you don't escape from the probability, so probability alone gives no rational basis for your decision. Probability only (rationally) affects action if you combine it with risk and reward. If you care about the plants, you will decide to be cautious and water them. If you don't, you won't. But there's another kind of response. If you are going out and there's a risk of rain, you could decide to stay in, or go ahead. But there's a third way, which is to take an umbrella. The insurance response is yet another kind, where you paradoxically bet on the outcome you do not desire.

Ramsey then looks for the points of indifference; the point of inaction. That's the "zero" from which his statistical approach takes off. Perhaps there's a fifty percent chance of rain today, so watering may or may not be needed. It won't make a difference whether you water or not. — Banno
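The combination of probability with risk and reward, and Ramsey's point of indifference, can be made concrete with a small expected-utility calculation. The sketch below is merely illustrative; the payoff numbers are invented for the example and are not Ramsey's.

```python
# Illustrative expected-utility sketch of Ramsey's "point of indifference".
# The payoff numbers are invented for the example.

def expected_utility(p_rain: float, u_rain: float, u_no_rain: float) -> float:
    """Expected utility of an action whose payoff depends on whether it rains."""
    return p_rain * u_rain + (1 - p_rain) * u_no_rain

# Hypothetical payoffs: watering costs a little effort whatever happens;
# not watering is free if it rains but costly for the plants if it doesn't.
WATER = {"rain": -1.0, "no_rain": -1.0}
NO_WATER = {"rain": 0.0, "no_rain": -4.0}

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    eu_water = expected_utility(p, WATER["rain"], WATER["no_rain"])
    eu_skip = expected_utility(p, NO_WATER["rain"], NO_WATER["no_rain"])
    print(f"p(rain)={p:.2f}  water={eu_water:+.2f}  don't water={eu_skip:+.2f}")

# Indifference is where the two expected utilities cross:
# p*0 + (1-p)*(-4) = -1, i.e. p = 0.75 with these numbers.
# Below that probability of rain, watering has the higher expected utility;
# above it, not watering does.
```

With these invented payoffs the point of inaction falls at p = 0.75 rather than at fifty percent; where it falls depends entirely on how much you care about the plants, which is the point about combining probability with risk and reward.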
Yes, but go carefully. If you hook that AI up to suitable inputs and outputs, it can respond as if it believes.

The second is to note that if a belief is manifest in an action, then since the AI is impotent, it again has no beliefs. — Banno
Sure, we can make that judgement. But what does the AI think of its efforts?

Many of the responses were quite poetic, if somewhat solipsistic: — Banno
Fire Ologist
we have to explain why the question of real or not is important. — Ludwig V
Ludwig V
Yes. Curiously enough, the vision of a purely rational being is very attractive in some ways - we so often find the emotional, value-laden sides of life problematic. An impartial, well-informed referee.

Because when it is real, what it says affects the speaker (the LLM) as much as the listener. — Fire Ologist
Banno
Tom Storm
Leontiskos
The bottom-up reductive explanations of the LLMs' (generative pre-trained neural networks based on the transformer architecture) emergent abilities don't work very well, since the emergence of those abilities is better explained in light of the top-down constraints that they develop under. — Pierre-Normand
This is similar to the explanation of human behavior that, likewise, exhibits forms that stem from the high-level constraints of natural evolution, behavioral learning, niche construction, cultural evolution and the process of acculturation. Considerations of neurophysiology provide enabling causes for those processes (in the case of rational animals like us), but don't explain (and are largely irrelevant to) which specific forms of behavioral abilities get actualized. — Pierre-Normand
Likewise, in the case of LLMs, processes like gradient descent find their enabling causes in the underlying neural network architecture (that has indeed been designed in view of enabling the learning process) but what features and capabilities emerge from the actual training is the largely unpredictable outcome of top-down constraints furnished by high-level semantically significant patterns in the training data. — Pierre-Normand
The main upshot is that whatever mental attributes or skills you are willing to ascribe to LLMs are more a matter of their having learned those skills from us (the authors of the texts in the training data) than a realization of the plans of the machine's designers. — Pierre-Normand
If you're interested, this interview with a leading figure in the field (Andrej Karpathy) by a well-informed interviewer (Dwarkesh Patel) testifies to the modesty of AI builders in that respect. It's rather long and technical so, when time permits, I may extract relevant snippets from the transcript. — Pierre-Normand
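For readers who want the mechanism named in the quoted passage, here is a minimal sketch of gradient descent. The one-parameter quadratic loss is an invented stand-in for a real network's loss surface; the point is only that the procedure itself is generic, while what it converges to depends on what it is pointed at (here, the constant 3.0 plays the role of the "data").

```python
# Minimal sketch of gradient descent, the learning procedure named above.
# A one-parameter quadratic loss stands in for a real network's loss surface.

def loss(w: float) -> float:
    return (w - 3.0) ** 2         # minimised at w = 3.0 (our stand-in "data")

def grad(w: float) -> float:
    return 2.0 * (w - 3.0)        # derivative of the loss with respect to w

w = 0.0                           # arbitrary initial weight
learning_rate = 0.1
for _ in range(100):
    w -= learning_rate * grad(w)  # step against the gradient

print(f"w = {w:.4f}, loss = {loss(w):.8f}")  # w has converged near 3.0
```

The same loop, pointed at a different target, learns a different weight, which is one way of seeing why the architecture is an enabling cause rather than an explanation of what gets learned.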
baker
By that same principle, most people are not real, or what they say isn't real, because they are for a large part completely unaffected by what they themselves say.

Because when it is real, what it says affects the speaker (the LLM) as much as the listener. — Fire Ologist
Leontiskos
By that same principle, most people are not real, or what they say isn't real, because they are for a large part completely unaffected by what they themselves say. — baker
baker
Leontiskos
a religious preacher or a boss who are completely unaffected by what they say — baker
Pierre-Normand
I think this is a false equivalence. Drawing conclusions about AI based on its code is not the same as drawing conclusions about humans based on theories of neurophysiology. The theories of neurophysiology simply do not provide the deductive rigor that computer code does. It is incorrect to presume that drawing conclusions about a computer program based on its code is the same as drawing conclusions about a human based on its neurophysiology. Indeed, the whole point here is that we wrote the code and built the computer program, whereas we did not write nor build the neurophysiology—we do not even know whether neurophysiology and code are truly analogous. Art and science seem to be being conflated, or at least this is the prima facie conclusion until it can be shown why AI has somehow gone beyond artifice. — Leontiskos
So an example of the sort of answer I would want would be something like this: "We build the code, but the output of that code builds on itself insofar as it is incorporating inputs that we did not explicitly provide and we do not fully comprehend (such as the geography that a map-making AI surveys)." So apparently in some sense the domain of inputs is unspecified, and because of this the output is in some sense unpredictable.
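One way to picture that sort of answer: the toy sketch below (hypothetical, and nothing like a real LLM) is fully inspectable code whose behaviour is nonetheless fixed by whatever corpus it is fed, which its author may never have read.

```python
# Toy illustration: the code is fully transparent, yet what the function
# "knows" comes entirely from the corpus it is given. Reading the code
# alone does not predict the output.

from collections import Counter

def most_common_next_word(corpus: str, word: str) -> str:
    """Return the word that most often follows `word` in the corpus."""
    tokens = corpus.lower().split()
    followers = Counter(
        tokens[i + 1] for i in range(len(tokens) - 1) if tokens[i] == word
    )
    return followers.most_common(1)[0][0] if followers else "<none>"

corpus_a = "the cat sat on the mat and the cat slept"
corpus_b = "the storm broke and the storm passed over the hills"

print(most_common_next_word(corpus_a, "the"))  # -> "cat"
print(most_common_next_word(corpus_b, "the"))  # -> "storm"
```

Reading the function tells you the rule; it does not tell you whether the answer will be "cat" or "storm". That gap between the code and its data-dependent behaviour is the sense in which the output is unpredictable from the code alone.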
Fire Ologist
a religious preacher or a boss who are completely unaffected by what they say — baker
No such person exists. At best you are speaking hyperbolically. — Leontiskos
Harry Hindu
AI can adapt to the conversation, remembering the context of the conversation and making new judgements when provided new information or a different way of looking at a topic.

I agree. AI doesn’t have the ability to be affected by its own statements in the way we are describing. The effect of words I’m referencing is their effect on judgment, not merely their internal coherence (which is all AI can reference). — Fire Ologist
Harry Hindu
Fire Ologist
The ability that we have and AI does not is the ability to go out and confirm or reject some idea with consistent observations. But if it did have eyes (cameras) and ears (microphones) it could then test its own ideas (output). — Harry Hindu
AI doesn't have the ability to intentionally lie, spin or misinform — Harry Hindu
AI does not seek "Likes" or praise, or become defensive when what it says is challenged. It doesn't abandon the conversation when the questions get difficult. — Harry Hindu
Jamal
Fire Ologist
Are you attempting to address the questions in the OP? Are you helping to work out how to use AI effectively to do philosophy? It doesn't look like it to me, so you'd better find somewhere else for your chat. — Jamal
Verification & Accuracy:
Always verify AI output and treat it as potentially unreliable
Check and validate all sources (as AI can fabricate references)
Guard against confabulation by requesting sources
Treat AI as an "over-confident assistant" requiring scrutiny
Intellectual Ownership:
Maintain ability to defend any AI-generated argument in your own terms
Internalize AI-generated knowledge as you would any source
Ensure you genuinely understand what you're presenting
Remain the ultimate director and arbiter of ideas — Banno
Transparency & Disclosure — Banno
Jamal
Harry Hindu
So we need to know every time we are dealing with AI and not a person, so that, however the words printed by AI might affect us, we know the speaker has no stake in that effect. We have to know we are on our own with those words to judge what they mean, and to determine what to do now that we’ve read them. There is no one and nothing there with any interest or stake in the effect those words might have.
ADDED:
A sociopath does not connect with the person he is speaking with. So a sociopath can say something that has no effect on himself. But for a sociopath, there is a problem with connection; there are still two people there, it's just that the sociopath only recognizes himself as a person. For AI, there is a problem with connection because there is nothing there for the listener to connect with. — Fire Ologist