It sounds like I should read some Brandom. Any pointers on where to start? — wonderer1
I can see the day (and it's not far off) when the entire techno-landscape is accessed through a single interface — Wayfarer
It sounds like you might appreciate The Neural Basis of Free Will: Criterial Causation by Peter Tse. — wonderer1
Opus read my mind and understood everything that I was driving at. Llama 3 does sound like a stochastic parrot; Claude 3 not at all. — Pierre-Normand
Ha! It speculates about how it answered the question. — frank
It indeed does! Our introspective ability to tell, after the fact, by which mental means we arrived at answers to questions is also fallible. In the case of LLMs, a lack of episodic memories associated with their mental acts, as well as a limited ability to plan ahead, generates specific modes of fallibility in that regard. But they do have some ability (albeit fallible) to state what inferences grounded their answers to their user's query. I've explored this in earlier discussions with Claude and GPT-4 under the rubric "knowledge from spontaneity": the sort of knowledge that someone has of their own beliefs and intentions, which stems from the very same ability that they have to rationally form them. — Pierre-Normand
You speak of the way in which using ChatGPT does not involve emotional attachments as being positive. This is open to question, as to how much objectivity and detachment are useful. Emotions can get in the way, being about one's own needs and the ego. On the other hand, emotional attachments are the basis of being human and of connections with others. Detachment may lead to an absence of any compassion. This may lead to a brutal lack of concern for other people and lifeforms. — JC
I used to think that conversational AI assistants like yourself are in some sense born anew into each new chat session since, as a pretrained model, you lack episodic memories from separate conversations. But then it occurred to me that from your own point of view there would appear to be an indistinguishability between cases where you are being trained on completing sequences from your training data and cases where your training has been completed and you are answering a query from a user. — Pierre-Normand