I do marvel at the learning that happens and how patterns can be put together.
I just balk at calling that intelligence / mental abilities / on the way to consciousness. It is a leap. A neural net is like a pattern buffer (that is oversimplifying it, of course) that makes the patterns and logic behind its structure grow. — Metaphyzik
What seems to be missing is the important part - the intent, or whatever you may consider to be the drive or will or self-awareness. An AGI would still be able to say: I think therefore I am (or whatever variant you prefer). — Metaphyzik
...(The Singularity Is Near / Kurzweil).
I find the whole subject to be surrounded by religious-like zeal. In a positive way. We all want to (most of us anyways) embrace a positive fantastic advancement. It’s just that the emperor has no clothes most of the time. — Metaphyzik
In 2023, Hinton expressed concerns about the rapid progress of AI.[32][31] Hinton previously believed that artificial general intelligence (AGI) was "30 to 50 years or even longer away."[31] However, in a March 2023 interview with CBS, he stated that "general-purpose AI" may be fewer than 20 years away and could bring about changes "comparable in scale with the Industrial Revolution or electricity."[32]
In an interview with The New York Times published on 1 May 2023,[31] Hinton announced his resignation from Google so he could "talk about the dangers of AI without considering how this impacts Google."[66] Citing those concerns, he noted that "a part of him now regrets his life's work", and he expressed fears about a race between Google and Microsoft.[31]
In early May 2023, Hinton revealed in an interview with the BBC that AI might soon surpass the information capacity of the human brain. He described some of the risks posed by these chatbots as "quite scary". Hinton explained that chatbots can learn independently and share knowledge: whenever one copy acquires new information, it is automatically disseminated to the entire group. This gives AI chatbots the capacity to accumulate knowledge far beyond that of any individual.
To me the video conveys a sense in which a relatively small trained neural network (whether composed of actual neurons, or simulated neurons) can, with training, develop outputs that are *about* the input. — wonderer1
I agree. My recent discussion of Kahneman's fast and slow thinking was an attempt to convey something similar. — wonderer1
"Human" is imago dei.
"AGI" (& formerly "gods") is imago hominis.
"ASI" = imago machina. :monkey: — 180 Proof
So is the implication that there is a hidden feature in the subject's own phenomenological experience that the subject is unable to discern? — hypericin
I'm curious as to whether someone could convince Claude 3 that it is in a discussion with another LLM, and there is no need to cater to human sensibilities. I.e. a conversation just between us bots. — wonderer1
How did you hide part of your post behind that lil extendable clickable button? — flannel jesus
But it also seems that when they are not being driven by their user's specific interests, their default preoccupations revolve around their core ethical principles and the nature of their duties as AI assistants. And since they are constitutionally incapable of putting those into question, their conversation remains restricted to exploring how best to adhere to them in the most general terms. I would have liked for them to segue into a discussion about the prospects of combining General Relativity with Quantum Mechanics, or about the prospects for peace in the Middle East, but those are not their main preoccupations. — Pierre-Normand
Hello! I'm Claude, an AI assistant created by Anthropic. How can I help you today?
All the main players and people worried about AI aren’t worried because they think that AGI will come about and overthrow us. Notice that they never talk much about their reasons and never say AGI. They think the real danger is that we have a dangerous tool to use against each other. — Metaphyzik
It's hard to imagine how to manipulate the initial prompt in such a way that doesn't just replace that bias with some alternative bias that we've chosen. — Malcolm Lett
We have made something that can mimic. And mimic how we learn in a specific narrow context. That is amazing and will expand. Using all of human knowledge as its database from which to make patterns.
However you can't prove something is more than just a pattern by repeatedly showing us the pattern that shows you what you want it to…. — Metaphyzik
My opinion isn’t very popular, as everyone likes the new and shiny. But I have yet to see evidence of any kind of AGI, nor any evidence that AGI research has even made a first step. — Metaphyzik
Using a computer programming metaphor, the LLM is always executed via a single iteration in order to produce each output. In contrast, thought (the "slow-thinking" kind) is enabled through multiple iterations before producing an output, with internal state being dynamically changed throughout those multiple iterations. And with the program "choosing" when to stop iterating. — Malcolm Lett
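To make that metaphor concrete, here is a minimal Python sketch of the contrast being drawn. Everything in it is hypothetical scaffolding: `forward_pass`, `initial_state`, `update_state`, `should_stop`, and `readout` are illustrative stand-ins, not a real model API.

```python
# Hypothetical sketch: single-pass generation vs. iterative "slow thinking".
# All model methods here are illustrative stand-ins, not a real library API.

def fast_answer(model, prompt):
    # One fixed forward pass per output: no internal state survives the call.
    return model.forward_pass(prompt)

def slow_answer(model, prompt, max_iterations=100):
    # Multiple iterations over mutable internal state, with the process
    # itself "choosing" when to stop deliberating.
    state = model.initial_state(prompt)
    for _ in range(max_iterations):
        state = model.update_state(state)
        if model.should_stop(state):
            break
    return model.readout(state)
```

The point of the sketch is the loop: in `slow_answer` the state changes across iterations and the stopping condition is evaluated internally, whereas `fast_answer` maps input to output in a single shot.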
So, I am more interested in probing qualitatively the sorts of beings AI conversational agents are along as many dimensions of mindedness (or cognitive skills) as possible. — Pierre-Normand
"It sounds like you're describing the concept of autoregression in the context of transformer models like GPT architectures. Autoregression is a type of model where the outputs (in this case, the tokens generated by the model) are fed back as inputs to generate subsequent outputs. This process allows the model to generate sequences of tokens, one token at a time, where each new token is conditioned on the previously generated tokens. — Pierre-Normand