noAxioms
LLMs just follow the pattern of the conversation, their opinions are very programmable with the right context. I wonder how researchers might solve that. — Ø implies everything
Your use of 'solve that' implies a problem instead of deliberate design. LLMs are designed to stroke your ego, which encourages your use of and dependency on them.
Because its objective is to be a helpful assistant, meaning the truth should be the most relevant aspect to the LLM. — Ø implies everything
That doesn't seem to be the objective at all. For one, it gets so many factual things wrong, and for another, truth is often a matter of opinion, as in the case of your discussion.
(I know they don't have actual conscious opinions, because I don't believe LLMs are conscious, but I am using anthropomorphized language here for the sake of brevity). — Ø implies everything
What is a 'conscious opinion' as distinct from a regular opinion?
The LLM is not passing a moral judgement. It is simply echoing your judgement. Your questions are incredibly biased, and it quickly feeds off that, as it is programmed to do.
noAxioms
I agree. Are you stating that fact as if it contradicts my post? — Ø implies everything
It sure seems to. Your poll specifically asks "Should we try to stop LLMs from making moral judgements?" which implies that you feel it is making them, instead of just echoing your own.
You disagree with the premise that LLMs' objective is to be a helpful assistant. — Ø implies everything
You defined 'helpful assistant' in terms of truth. Sure, one goal is for it to be helpful, but it doesn't seem to seek truth to attain that goal.
But the question remains: is their actual objective, their intended design, to be that? Or are they really meant to be ego-strokers, as you propose? — Ø implies everything
That's not the primary design, but it's real obvious that such behavior is part of meeting the 'helpful' goal, or at least giving the appearance of being helpful. Problem is, I might access an LLM to critique something, and it doesn't like to do that, so I have to lie to it to get it to turn off that ego-stroke thing. Banno did a whole topic on this effect.
Okay, so an LLM that is an ego-stroker is definitely a product lots of people would pay for. — Ø implies everything
Would they? I don't pay for mine. It's kind of in my face without ever asking for it. OK, so I use it. It's handy until you really get into stuff it knows nothing about, such as my astronomy example.
But is it a product capable of generating profit, however? — Ø implies everything
That's the actual goal of course, distinct from the public one of being helpful. I don't know how the money works. I don't pay for any of it, but somebody must. I don't have AI doing any useful customer service yet, so it has yet to impact my interaction with somebody who might be paying for it. And like most new tech, profits come later. The point at first is to lead the field and come out on top, which is how Amazon got on top despite all the money losses when everybody first started trying to corner the internet sales thing.
... their current free availability is simply so that people will engage with them as much as possible, thus giving the LLMs more training data so that they can become better, more manipulative sycophants? — Ø implies everything
Maybe. I don't see how anything I discuss can be used as training data. I do see companies having it write code, which seems to require about as much effort to check as it does to write it all from scratch. And there's the huge danger of proprietary code suddenly being out there as training data. An LLM that cannot honor a nondisclosure agreement is useless. But I worked for Dell and they trained a bunch of people in China to do my job, and China doesn't acknowledge the concept of intellectual property, so how is that any different from what the LLM is going to do with it?
As a programmer, I'd much rather pay for a more factual, less sycophantic LLM to work with. — Ø implies everything
I'd go more for functional. Programs need to work. Facts are not so relevant.
For one, it gets so many factual things wrong, (...) — noAxioms
What if it gets things wrong because... it's still a work-in-progress? — Ø implies everything
It gets so much wrong because 1) it has no real understanding, and 2) there's so much misinformation in the training data.
Ø implies everything
It sure seems to. — noAxioms
Problem is, I might access an LLM to critique something, and it doesn't like to do that, (...) — noAxioms
What is a 'conscious opinion' as distinct from a regular opinion? — noAxioms
Your poll specifically asks "Should we try to stop LLMs from making moral judgements?" which implies that you feel it is making them, instead of just echoing your own. — noAxioms
Bottom line: I probably would agree that any LLM has a public stated goal of being helpful. I just don't agree with the 'therein being truthful' part. — noAxioms
I'd go more for functional. Programs need to work. Facts are not so relevant. — noAxioms
And there's the huge danger of proprietary code suddenly being out there as training data. An LLM that cannot honor a nondisclosure agreement is useless. — noAxioms
It gets so much wrong because 1) it has no real understanding, and 2) there's so much misinformation in the training data. — noAxioms
That's [sycophancy] not the primary design, but it's real obvious that such behavior is part of meeting the 'helpful' goal, or at least giving the appearance of being helpful. — noAxioms
LLMs just follow the pattern of the conversation, their opinions are very programmable with the right context. I wonder how researchers might solve that. Sometimes, the AI is too sensitive to the context (or really, it is hamfisting the context into everything and basically disregarding all sense in order to follow the pattern), and other times, the AI is not sufficiently sensitive to the context, which is often more of a context-window issue. But yeah, LLMs are not good at assessing relevance at all.
And I would say that sycophantically agreeing with the user (or alternatively, incessantly disagreeing with the user as a part of a different, but also common, roleplaying dynamic that often arises) is an issue of not gauging relevance well. Because its objective is to be a helpful assistant, meaning the truth should be the most relevant aspect to the LLM. But instead, various patterns in the context are seen as far more relevant, and as such it optimizes for alignment with those patterns rather than following its general protocols, like being truthful, or in this case, refraining from personal condemnation. — Ø implies everything
Your use of 'solve that' implies a problem instead of deliberate design. LLMs are designed to stroke your ego, which encourages your use of and dependency on them. — noAxioms
Your topic is about it rendering a moral judgement, and we seem to be getting off that track. — noAxioms
An LLM might be used to pare down a list of candidates/resumes for a job opening, which is a rendering of judgement, not of fact. — noAxioms
Should we try to stop LLMs from making moral judgements? — Ø implies everything