I'll stipulate that intelligent and highly educated and credentialed people wrote things that I think are bullsh*t. — fishfry
This is anti-intellectualism. You're just proving yourself to be an uneducated person who clearly takes pride in holding radical, uneducated opinions. You're not cool or edgy, and you add no worth to these discussions; you're just a tragic example of the worst of how people act today: ignoring actual knowledge and simply holding opinions, regardless of their merits. Not only does it not contribute to knowledge, it actively works against it. A product of how the internet self-radicalizes people into believing they are knowledgeable while taking zero epistemic responsibility for the body of knowledge the world should be built on. I have nothing but contempt for this kind of behavior and for how it is transforming the world today.
Yes. It means "We don't understand but if we say that we won't get our grant renewed, so let's call it emergence. Hell, let's call it exponential emergence, then we'll get a bigger grant."
Can't we at this point recognize each other's positions? You're not going to get me to agree with you if you just say emergence one more time. — fishfry
I'm not going to recognize a position of anti-intellectualism. You show no understanding of, or insight into, the topic I raise, a topic that is broader than just AI research. Your position is worth nothing if you base it on influencers and bloggers and ignore actual research papers. It's lazy and arrogant.
You will never be able to agree on anything because your knowledge isn't based on actual science or on what constitutes how humanity forms a body of knowledge. You're operating on online conflict methods, in which a position should be "agreed" upon based on nothing but fallacious arguments and uneducated reasoning. I'm not responsible for your inability to comprehend a topic, and I'm not accepting fallacious arguments rooted in that lack of comprehension. Your entire position is based on a lack of knowledge, a lack of understanding, and a lack of engagement with the source material. As I said at the beginning, if you build arguments on fallacious and erroneous premises, then everything falls down.
And then there are the over-educated buzzword spouters. Emergence. Exponential. It's a black box. But no it's not really a black box, but it's an inner black box. And it's multimodal. Here, have some academic links. — fishfry
You continue to parrot yourself based on a core inability to understand anything about this. You don't know what emergence is and you don't know what the black box problem is because you don't understand how the system actually works.
Can you explain how we're supposed to peer into that black box of neural operation? Explain how we can peer into the decision-making of the trained models. NOT the overarching instruction-code, but the core engine, the trained model, the neural map that forms the decisions. If you just say one more time that "the programmers can do it, I know they can" as an answer to a request for "how", then you don't know what the fuck you're talking about. Period.
Surface level is all you've got. Academic buzzwords. I am not the grant approval committee. Your jargon is wasted on me. — fishfry
You're saying the same thing over and over with zero substance as a counterargument. What's your actual argument beyond your fallacies? There's nothing at the root of anything you say here. I can't argue with someone providing zero philosophical engagement. You belong on Reddit and Twitter; what are you doing on this forum with this level of engagement?
Is there anything I've written that leads you to think that I want to read more about emergence? — fishfry
No, your anti-intellectualism is loud and clear, and I know exactly what level you're at. If you refuse to engage in the discussion honestly, then you're just a dishonest interlocutor, simple as that. If you refuse to actually understand a scientific field at the core of this topic when someone brings it up, only to dismiss it as buzzwords, then you're not worth much as a part of the discussion. I have other people to engage with who can actually form real arguments. Your ignorance just underscores who's coming out on top here. No one in a philosophy discussion views the ignorant and anti-intellectual as anything other than irrelevant, so I'm not sure what you're hoping for.
Forgive me, I will probably not do that. But I don't want you to think I haven't read these arguments over the years. I have, and I find them wanting. — fishfry
You show no sign of understanding any of it. It's basically just "I'm an expert, trust me." The difference between you and me is that I don't make "trust me" arguments. I explain my point, I provide sources if needed, and if the person I'm discussing with just utters "I'm an expert, trust me," I know they're full of shit. So far, you've made no actual arguments beyond essentially that, so the evidence of exactly how little you know about all of this just keeps piling up. And it's impossible to engage in further on-topic arguments when the core of your position is these low-quality responses.
My point exactly. In this context, emergence means "We don't effing know." That's all it means. — fishfry
No it doesn't. But how would you know when you don't care?
I was reading about the McCulloch-Pitts neuron while you were still working on your first buzzwords. — fishfry
The McCulloch-Pitts neuron does not include mechanisms for adapting weights. Since weight adaptation is a critical feature of biological neurons and of modern neural networks, I'm not sure how it applies to either emergence theories or modern neural networks. Or are you just regurgitating part of the history of AI, thinking it has some relevance to what I'm writing?
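That difference can be made concrete with a minimal sketch: a McCulloch-Pitts unit with a fixed threshold next to a perceptron-style unit that adjusts its weights from error. The function names, learning rate, and threshold values are my own illustrative choices, not drawn from any particular paper.

```python
# Contrast: a fixed McCulloch-Pitts unit vs. a weight-adapting unit.

def mcculloch_pitts(inputs, threshold):
    """Fixed unit: fires iff the sum of binary inputs reaches the
    threshold. Nothing in this unit ever changes with experience."""
    return 1 if sum(inputs) >= threshold else 0

def perceptron_step(weights, bias, inputs, target, lr=0.1):
    """One perceptron learning step: nudge weights toward the target.
    This adaptivity is exactly what the MP unit lacks."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    output = 1 if activation >= 0 else 0
    error = target - output
    new_weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    new_bias = bias + lr * error
    return output, new_weights, new_bias

# An MP "AND" gate with threshold 2 behaves the same forever:
print(mcculloch_pitts([1, 1], 2))  # 1 (fires)
print(mcculloch_pitts([1, 0], 2))  # 0 (silent)
```

The point of the sketch is only that the 1943 model is a static logic gate; learning rules like the perceptron update came later and are what modern networks build on.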
You write, "may simply arise out of the tendency of the brain to self-organize towards criticality" as if you think that means anything. — fishfry
It means you're uneducated and don't care to research before commenting.
I'm expressing the opinion that neural nets are not, in the end, going to get us to AGI or a theory of mind.
I have no objection to neuroscience research. Just the hype, buzzwords, and exponentially emergent multimodal nonsense that often accompanies it. — fishfry
Who cares about your opinion? Your opinion is meaningless without foundational premises for an argument. This forum is about making arguments; it's in the fundamental rules of the forum. If you're here just to state opinions, you're in the wrong place.
I have to apologize to you for making you think you need to expend so much energy on me. I'm a lost cause. It must be frustrating to you. I'm only expressing my opinions, which for what it's worth have been formed by several decades of casual awareness of the AI hype wars, the development of neural nets, and progress in neuroscience.
It would be easier for you to just write me off as a lost cause. I don't mean to bait you. It's just that when you try to convince me with meaningless jargon, you weaken your own case. — fishfry
Why are you even on this forum?
I wrote, "I'll take the other side of that bet," and that apparently pushed your buttons hard. I did not mean to incite you so, and I apologize for any of my worse excesses of snarkiness in this post. — fishfry
You're making truth statements based on nothing but personal opinion and what you feel like. Again, why are you on this forum with this kind of attitude? This is low quality; maybe look up the forum rules.
But exponential emergence and multimodality, as substitutes for clear thinking -- You are the one stuck with this nonsense in your mind. You give the impression that perhaps you are involved with some of these fields professionally. If so, I can only urge to you get some clarity in your thinking. Stop using buzzwords and try to think clearly. Emergence does not explain anything. On the contrary, it's an admission that we don't understand something. Start there. — fishfry
I've shown clarity in this and I've provided further reading. But if you don't have the intellectual capacity to engage with it, a capacity you've clearly shown, in written form, that you neither have nor care to have, then it doesn't matter how much someone tries to explain something to you. Your stance is that if you don't understand or comprehend something, then you are, for some weird reason, correct, and the one you don't understand is wrong, and it's their fault for not being clear enough. What kind of disrespectful attitude is that? Your lack of understanding, your lack of engagement, your dismissal of sources, your fallacious arguments, and your failure to provide any actual counterarguments just make you an arrogant, uneducated, and dishonest interlocutor, nothing more. How could a person even have a proper philosophical discussion with someone like you?
Ah. The first good question you've posed to me. Note how jargon-free it was. — fishfry
Note the attitude you pose.
But one statement I've made is that neural nets only know what's happened. Human minds are able to see what's happening. Humans can figure out what to do in entirely novel situations outside our training data. — fishfry
Define "what's happening". Define what constitutes "now".
If "what is happening" only constitutes a constant stream of sensory data, then that stream of data always points to something that happened in the "past", i.e., "what's happened". There's no "now" in this regard.
And because of this, the operation of our mind is simply streaming sensory data as an influence on our already stored neural structure, with hormones and chemicals exerting further influence at strengths determined by pre-existing genetic information and signals from other organs.
In essence, the difference you're trying to aim for is simply one that revolves around the speed of analysis of that constant stream of new data, and an ability to use a fluid neural structure that changes based on that data. But the underlying operation is the same: both the system and the brain operate on "past events", because there is no "now".
Just the fact that the brain needs to process sensory data before we comprehend it means that what we view as "now" is simply the past. This is the foundation of the theory of predictive coding. The theory suggests that the human brain compensates for the delay in sensory processing by using predictive models based on past experiences. These models enable rapid, automatic responses to familiar situations. Sensory data continually updates these predictions, refining the brain's responses for future interactions. Essentially, the brain uses sensory input both to make immediate decisions and to improve its predictive model for subsequent actions.
https://arxiv.org/pdf/2107.12979
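The predict-then-correct loop described above can be sketched as a toy filter: the system never acts on "now", only on a prediction built from past inputs and corrected by each step's prediction error. The signal values and learning rate here are arbitrary assumptions for the demo, not from the linked paper.

```python
# Toy predictive-coding loop: prediction updated by a fraction of
# the prediction error at each step.

def predictive_step(prediction, observation, lr=0.5):
    """Move the internal prediction toward the observation by a
    fraction (lr) of the prediction error."""
    error = observation - prediction
    return prediction + lr * error

# Sensory stream that jumps from 1.0 to 5.0 partway through:
stream = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
prediction = 0.0
for obs in stream:
    # By the time the error is computed, the observation is already
    # in the past; the model only ever tracks, never sees, "now".
    prediction = predictive_step(prediction, obs)

print(round(prediction, 3))  # 4.484: converging toward 5, but lagging
```

The lag after the jump is the point: a predictive model responds quickly to familiar input and only gradually re-converges when the world changes, which is the behavior the theory attributes to the brain.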
But I can't give you proof. If tomorrow morning someone proves that humans are neural nets, or neural nets are conscious, I'll come back here and retract every word I've written. I don't happen to think there's much chance of that happening. — fishfry
The clearest sign of the uneducated is that they treat science as a binary "true" or "not true" rather than as a process. In both computer science and neuroscience there is ongoing research, and adhering to that research and its partial findings is much more valid in an argument than demanding "proof" in the way you do. The theory of predictive coding (don't confuse it with computer coding, which it isn't about) is at the frontlines of neuroscience. What that research implies will, for anyone with an ability to make inductive arguments, point towards the similarities between neural systems and the brain in terms of how both act upon input, generation, and output of behavior and actions. That one system is, at this time and in comparison, rudimentary, simplistic, and lacking similar operating speed does not render the underlying similarities it does have moot. It rather prompts further research into whether the behaviors match up further the closer the systems come to each other, which is what current research is exploring.
Not that this will go anywhere but over your head.
Nobody knows what the secret sauce of human minds is. — fishfry
While you look at the end of the rainbow, guided by the bloggers and influencers, I'm gonna continue following the actual research.
Now THAT, I'd appreciate some links for. No more emergence please. But a neural net that updates its node weights in real time is an interesting idea. — fishfry
You don't know the difference between what emergence is and what this is. They are two different aspects of this topic. One has to do with self-awareness and qualia; this has to do with adaptive operation. One is about the nature of subjectivity, the other about mechanical, non-subjective AGI. What we don't know is whether emergence occurs the closer the base system gets to its biological counterpart. But again, that's too complex for you.
https://arxiv.org/pdf/1705.08690
https://www.mdpi.com/1099-4300/26/1/93
https://www.mdpi.com/2076-3417/11/24/12078
https://www.mdpi.com/1424-8220/23/16/7167
As the research is ongoing, there are no "answers" or "proofs" for it yet in the binary way you require these things to be framed. Rather, it's the continuation of merging knowledge between computer science and neuroscience that has been going on for a few years now, ever since the similarities were first noted.
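For what "a neural net that updates its node weights in real time" could look like at its simplest, here is an online (per-sample) gradient-descent sketch: weights change as each observation arrives, instead of being frozen after offline training. The data, model, and learning rate are invented for illustration and stand in for the continual-learning setups in the linked papers.

```python
# Minimal online learning: one SGD step per incoming sample.

def online_update(w, x, y, lr=0.1):
    """One gradient step on squared error for the 1-D model y ~ w*x."""
    error = w * x - y
    return w - lr * error * x

w = 0.0  # the model starts knowing nothing
# A stream of (input, target) pairs generated by y = 2x:
stream = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)] * 20
for x, y in stream:
    w = online_update(w, x, y)  # the weight adapts as data arrives

print(round(w, 3))  # 2.0: the weight has converged on the stream
```

The contrast with a deployed, frozen model is the design point: here there is no separate training phase, only a running system whose parameters track the incoming stream.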
How can you say that? Reasoning our way through novel situations and environments is exactly what humans do. — fishfry
I can say that because "novel situations" are not some incoherently complex thing. We're seeing reasoning capabilities within the models right now. Not at the level of human capacity, pretty rudimentary, but still there. Ignoring that is just dishonest. And with the ongoing research, we don't yet know how complex this reasoning capability will become, simply because we haven't yet had a multifunction system running that utilizes real-time processing and acts across different functions. To claim that they won't be able to do so is not valid, as the current behavior and evidence point in the other direction. Relying on a fallacy of composition as the sole reason why they won't be able to reason is not valid.
That's the trouble with the machine intelligence folks. Rather than uplift their machines, they need to downgrade humans. It's not that programs can't be human, it's that humans are computer programs. — fishfry
No, they're not; they're researching AI, or they're researching neuroscience. Of course they're breaking down the building blocks in order to decode consciousness, the mind, and behavior. The problem is that there are too many spiritualist and religious nutcases who would rather arbitrarily uplift humans to a position composed of arrogance and hubris: that we are far more than part of the physical reality we were formed within. I don't care about spiritual and religious hogwash when it comes to actual research; that's something the uneducated, in their existential crises, can dwell on in their futile search for meaning. I'm interested in what is, nothing more, nothing less.
How can you, a human with life experiences, claim that people don't reason their way through novel situations all the time? — fishfry
Why do you interpret it this way? It's like you interpret things backwards. What I'm saying is that the operation of our brain and consciousness, through concepts like the theory of predictive coding, seems to rest on rather rudimentary functions that could be replicated with current machine learning in new configurations. What you don't like to hear is the link between such functions generating extreme complexity and the possibility that concepts like subjectivity and qualia may form as emergent phenomena out of that resulting complexity. Probably because you don't give a shit about reading up on any of this and instead just operate on "just not liking it" as the foundation for your argument.
Humans are not "probability systems in math or physics." — fishfry
Are you disagreeing that our reality fundamentally acts on probability functions? That's what I mean. Humans are part of this reality, and this reality operates on probability. That we show behavior of operating on predictions of probability when navigating reality follows from this fact:
Predictive Coding Theory, the Bayesian Brain Hypothesis, Prospect Theory, Reinforcement Learning Models, etc.
Why wouldn't our psychology be based on the same underlying function as the rest of nature? Evolution itself acts along predictive functions based on probabilistic "data" that arise out of complex ecological systems.
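The Bayesian-brain point reduces to a small, concrete update rule: a belief is revised as each piece of evidence streams in, the same predict-then-correct loop again. The prior and likelihood numbers below are invented purely for the demo.

```python
# Sequential Bayesian belief update, done in exact fractions.
from fractions import Fraction

def bayes_update(prior, likelihood_h, likelihood_not_h):
    """P(H|E) from prior P(H) and the likelihoods P(E|H), P(E|~H)."""
    numerator = prior * likelihood_h
    evidence = numerator + (1 - prior) * likelihood_not_h
    return numerator / evidence

# Start undecided; each observation is 3x as likely if H is true:
belief = Fraction(1, 2)
for _ in range(3):
    belief = bayes_update(belief, Fraction(3, 4), Fraction(1, 4))

print(belief)  # 27/28: three observations push near-certainty
```

Each observation multiplies the odds on H by the likelihood ratio (3 here), so three observations take even odds to 27:1, which is the kind of cumulative, probabilistic inference these theories attribute to the brain.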
I don't deal in religious hogwash to put humans on a pedestal against the rest of reality.
Credentialism? That's your last and best argument? I could point at you and disprove credentialism based on the lack of clarity in your own thinking. — fishfry
It's not credentialism; I'm fucking asking you for evidence that it's impossible, as you clearly just regurgitate the same notion of "impossibility" over and over without any sources or rationally deduced argument for it. The problem here isn't clarity; it's that you actively ignore the information given and never demonstrate even a shallow understanding of this topic. Saying that you do does not change that fact. As in storytelling: show, don't tell.
Show that you understand; show that you have a basis for your claims that AGI can never happen with these models as they are integrated with each other. So far you show nothing but attempts to ridicule the one you argue against, as if that were any kind of foundation for a solid argument. It's downright stupid.
Yes, but apparently you can't see that. — fishfry
Oh, so now you agree with my description that you earlier denied?
What about this?
But one statement I've made is that neural nets only know what's happened. Human minds are able to see what's happening. Humans can figure out what to do in entirely novel situations outside our training data. — fishfry
So when I say this:
Again, how does a brain work? Is it using anything other than a rear view mirror for knowledge and past experiences? — Christoffer
You suddenly agree with this:
Yes, but apparently you can't see that. — fishfry
This is just another level of stupid and it shows that you're just ranting all over the place without actually understanding what the hell this is about, all while trying to mock me for lacking clarity.
:lol: Seriously.
I'm not the grant committee. But I am not opposed to scientific research. Only hype, mysterianism, and buzzwords as a substitute for clarity. — fishfry
But the source of your knowledge, as mentioned by yourself, is still not research papers but only bloggers and influencers, the very ones who actually use buzzwords and hype. All while what I've mentioned are actual fields of study and terminology derived from research papers. That's the most ridiculous thing I've ever heard. And you seem totally blind to any capacity for self-reflection on this dissonance in your reasoning.
:lol:
Is that the standard? The ones I read do. Eric Hoel and Gary Marcus come to mind, also Michael Harris. They don't know shit? You sure about that? Why so dismissive? Why so crabby about all this? All I said was, "I'll take the other side of that bet." When you're at the racetrack you don't pick arguments with the people who bet differently than you, do you — fishfry
Yes, they do, but based on how you write, I don't think you really understand them, as you clearly seem neither to understand the concepts that have been mentioned nor able to formulate actual arguments for your claims. Reading blogs is not the same as reading the actual research, and actual comprehension of a topic requires more sources of knowledge than brief summaries. Saying that you read stuff means nothing if you can't show a comprehension of the body of knowledge required. All of the concepts I've talked about should be things you already know about, but since you don't, I only have your word that you "know stuff".
You're right, I lack exponential emergent multimodality. — fishfry
You lack the basics of how people are supposed to form arguments on this forum. You're writing Twitter/Reddit posts. Throughout your answer to me, you've not once demonstrated actual insight into the topic or made any actual counterarguments; even in that lengthy answer, you still weren't able to. It's like you want to exemplify the opposite of philosophical scrutiny.
I've spent several decades observing the field of AI and I have academic and professional experience in adjacent fields. What is, this credential day? What is your deal? — fishfry
Once again you just say that you "know shit" without ever showing it in your arguments. It's the appeal-to-authority fallacy, as it's your sole explanation of why you "know shit". If you had academic and professional experience, you would know how problematic it is to make experience like that the sole premise of an argument. What it rather tells me is either that you have such experience but are simply among the academics at the bottom of the barrel (there are plenty of academics who are worse than non-academics at conducting proper arguments and research), or that your academic fields are not actually relevant to the specific topic discussed, or that you just say it as a desperate attempt to increase validity. But being an academic or having professional experience (whatever that even means without context) means absolutely nothing if you can't show the knowledge that's come out of it. I know plenty of academics who are everything from religious zealots to vaccine deniers; it doesn't mean shit. Academia is about education and building knowledge, and if you can't show that you learned or built any such knowledge, then it means nothing here.
You've convinced me to stop listening to you. — fishfry
More convincing evidence that you're acting in accordance with proper academic praxis in discourse? As with everything else, ridiculous.