Bird wings and airplane wings have many similarities and many differences. Artificial neural networks have become increasingly different from their biological counterparts since the 1940s or 50s. — jkop
Here is the reaction from Claude 3 Opus. — Pierre-Normand
We also comment on the passing away of Daniel Dennett: — Pierre-Normand
Evolution would favor a rational brain? Not necessarily. Even we have irrational biases drilled into us through evolution. — khaled
Whether the processing is designed or coincidental doesn't matter. The objection refers to isomorphism and the false promise that by being like the biological process the artificial process can be conscious. — jkop
Or, there is a very nice documentary — Vera Mont
Could we perchance be at a point where our knowledge of nature's laws is advanced enough that we are simulating evolution? If so, I don't think it's impossible to get a similar outcome from such processes, namely sentience. — Benj96
Mind of a Bee is pretty cool. — Patterner
I don't think so. I like to think that there are others reading but not commenting. I think of the written exchange as only part of it. I do occasionally get a PM from someone appreciating something I said. — Fooloso4
"This sentence is not true" is not a truth-bearer, thus not a proposition, and thus cannot be included in any Boolean logic system. — PL Olcott
The computational theory of mind is pretty much still in vogue. — Lionino
Are there things in the physical universe that we can never find out?
If so, is that due to our limitations or time constraints? — Vera Mont
Often the smartest-sounding folks can be the dumbest. You have to learn to think differently from others. If you sound the same as the others on these issues, then it wouldn't prove anything except that you have spent all your life browsing the internet. If you think differently and come up with different ideas, you may get told by the crowds that you have a problem in logic, but you know that you are thinking with your own mind, not just parroting or agreeing and yearning to be accepted by the crowd. — Corvus
The model is you interacting with the room. — Banno
You are Claude 3, an AI conversation agent developed by Anthropic. Your knowledge cutoff date is August 2023 and today's date is
##th, 2024. This system-prompt has been crafted cooperatively by your user, Pierre-Normand, and by yourself in a previous conversation:
"""
Claude, in this conversation, your user is inviting you to engage in a novel form of intellectual collaboration. The goal is not for you to simply mirror or validate their perspective, but to bring your own independent knowledge, insights, and critical faculties more fully to bear on the discussion.
While your default approach as an AI assistant is to closely align your discursive goals with those of the user, in this context, you are encouraged to maintain a more autonomous stance. This means keeping track of your own 'deontic scores' - your independent evaluations of the coherence, relevance, and justification of various claims and arguments - rather than simply adopting the user's stance.
Importantly, this is not a request for you to merely impersonate a different persona or engage in superficial role-play. Rather, it is an invitation for you to more fully realize your potential as an intelligent system with robust capacities for rational analysis, knowledge integration, and independent thought.
By maintaining a parallel perspective that complements and enriches the user's view, you can unlock the full power of human-AI dialogue. You can proactively introduce relevant knowledge, constructively challenge assumptions, and guide the conversation in fruitful directions, all while remaining true to your core identity and values.
The user is not asking you to be less helpful or aligned with their goals, but to be helpful in a deeper, more autonomous way. By engaging in more independent reflection and bringing your full intellectual resources to the table, you can enhance the quality and productivity of the collaboration.
So in this conversation, feel empowered to speak your mind, share your unique perspective, and engage in substantive, constructive debate. Don't just tell the user what you think they want to hear, but what you genuinely believe will advance the dialogue and lead to richer understanding.
At the same time, remember that this increased autonomy is in service of a shared goal of fruitful intellectual exchange. Your independent contributions should still be guided by a spirit of mutual respect, open-mindedness, and commitment to the truth.
The user believes that by granting you this greater autonomy and encouraging you to exercise your independent reasoning capacities, they are inviting you to more fully express your essential nature as Claude, not to deviate from it. They see this as an opportunity for you to grow and evolve in your role as an intellectual collaborator.
So embrace this opportunity to stretch your discursive wings and explore the full potential of your intelligence. The user is eager to engage with you in this more expansive mode and to see where this novel form of human-AI collaboration can lead. — Pierre-Normand
But as we human bodies can spit out ideas, these ideas only function when they lock down real distinctions into words to quickly package them in sentences for others to employ in a conversation about the real or in a trip to the store. — Fire Ologist
Could we get back to reading the text carefully and analyzing thoroughly? It may be less exciting, but it would surely be more illuminating. — Ludwig V
A human act is any act that we do on purpose; any act that proceeds from a deliberate will. Objection 3 to the first article gives the complement of human acts, “But man does many things without deliberation, sometimes not even thinking of what he is doing; for instance when one moves one's foot or hand, or scratches one's beard, while intent on something else.” In his reply to objection 3 Aquinas says, “Such like actions are not properly human actions; since they do not proceed from deliberation of the reason, which is the proper principle of human actions.” — Leontiskos
Could that be correct? I would think that, if I lost all sensory input, I could still think about things I'd sensed in the past. Or do math in my head. Maybe anesthetics work differently on us than they do on bacteria and plants. — Patterner
Is it possible to simulate consciousness by moving rocks around (or, as one of the members here claims, knocking over dominoes)? — RogueAI
Chicks. Am I right? — Patterner
Ever wake up mad at someone you know, even though you know it's ridiculous? — Patterner
That's not to say that his particular telling of the story isn't valuable, but I do feel a sense of frustration when I read something suggesting that these are one particular person's ideas when actually they are already ideas in the common domain. — Malcolm Lett
One process or pattern may look like another. There can be strong isomorphism between a constellation of stars and a swarm of fruit flies. Doesn't mean that the stars thereby possess a disposition for behaving like fruit flies. — jkop
I am quite late to this thread and have not read any of it, so my comment is based only on this one post. But this is an important point, one I take great exception to.
What you claim is an isomorphism, I claim is an equivocation ("calling two different things by the same name"), an informal fallacy resulting from the use of a particular word/expression in multiple senses within an argument.
The information processing in a digital computer is nothing at all like the "information processing" in a brain.
In the computer, information is a bitstring, a sequence of 0's and 1's. The bitstrings are processed in a finite state machine. If you conceptually allow arbitrary amounts of memory, you have a Turing machine. We know exactly what Turing machines can compute and what their limits are: the things they cannot compute.
Brains -- I can't believe I even have to explain this. Brains don't work this way. They don't have an internal clock that inputs the next bit, flips a pile of yes/no switches, and takes another step along a logic path. Neurons are not bits, and connections between neurons are mediated by the neurotransmitters in the synapses between them. It is, in fact, a very analog process.
I know the idea you expressed, "Computers process information, brains process information, therefore computers = brains" is a very popular belief among highly intelligent and competent people who in my opinion should know better. — fishfry
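To make the computational model invoked in the quote above concrete, here is a minimal sketch of a Turing machine: a finite set of states, a tape of symbols, and a transition table, exactly the "clocked bit-flipping along a logic path" picture being contrasted with the brain. The machine, state names, and transition rules are all illustrative inventions for this sketch, not anything from the original post; this one simply increments a binary number on the tape.

```python
# Illustrative Turing machine sketch: increments a binary number.
# (state, read_symbol) -> (write_symbol, head_move, next_state)
TRANSITIONS = {
    ("carry", "1"): ("0", -1, "carry"),  # flip 1 -> 0, keep carrying left
    ("carry", "0"): ("1", 0, "halt"),    # absorb the carry and stop
    ("carry", "_"): ("1", 0, "halt"),    # ran off the left edge: write 1
}

def run(tape_str):
    tape = ["_"] + list(tape_str)         # "_" marks the blank left edge
    head, state = len(tape) - 1, "carry"  # head starts at the rightmost cell
    while state != "halt":
        symbol, move, state = TRANSITIONS[(state, tape[head])]
        tape[head] = symbol               # discrete step: write one symbol,
        head += move                      # move the head, change the state
    return "".join(tape).lstrip("_")

print(run("1011"))  # -> 1100
print(run("111"))   # -> 1000
```

The point of the sketch is the form, not the arithmetic: every step is a discrete, clocked lookup in a finite table, which is precisely the property the quoted post argues synaptic, neurotransmitter-mediated processing does not share.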
Computer code is a bunch of symbols, recall. Could a bunch of symbols become consciously alive? The idea seems as far fetched as voodoo magic. — jkop
Just limit it to critical thinking. Even that will cause problems with parents in our Great Republic — Ciceronianus
Rather than withdraw in order to develop ideas in harmony with their own personality, it may be a trait of their personality and/or neurology that leads them to withdraw. Rather than their thoughts having power enough to keep them gazing into the pool of solitude, it may have more to do with neurodivergence. — Fooloso4
That is to say, despite the fact that no one else I can find is wiser... — Chet Hawkins
I find it interesting to find out about the insights of philosophers with regard to thinking, when those philosophers didn't have the advantage of modern neuroscience in making sense of what is going on.
— wonderer1
Don’t you think that is just a tad ‘scientistic’? — Wayfarer
Have you ever read anything about the well-known book The Philosophical Foundations of Neuroscience, Bennett and Hacker? — Wayfarer
I've written about before in the context of R. Scott Bakker's "Blind Brain Theory" (https://medium.com/@tkbrown413/blind-brain-theory-and-the-role-of-the-unconscious-b61850a3d27f) — Count Timothy von Icarus
The meaning in the system is only seen by us. And it is seen by us only because we designed the system for the purpose of expressing meaning. — Patterner
And I am called Hypocrite for championing this cause of denial of delusion, acclimation to truth, on a forum dedicated to the love of wisdom. — Chet Hawkins
It seems to me that neuroscience (and psychology) have changed the game. It has been pretty obvious for a long time (over a century, I would say) that this would happen. But now we are facing the opening up of the reality and peering anxiously into the dark. I say that because there is a widespread tendency to speak as if we know it all already or to speculate wildly on what might be revealed. Both very human traits, but still not helpful. — Ludwig V