Just like with alchemy, people could forge metals well and make tools, weapons and armour, but we aren't reading those antique or medieval alchemical scriptures to get any actual insights today. Yes, you can have the attitude of an engineer who is totally satisfied if the contraption made simply works. It works, so who cares how it works. — ssu
I'd rather look at it as science being in its infancy as a proper procedure. Back then, experiments more often led to theories; now we put forth theories that we then put to the test.
The foundation is the same: not everything goes from thought to manifested reality through experiment or engineering. Sometimes, indeed many times, it goes from "huh, that's odd" to later answers as to why.
Well, this is a site for philosophy, so people aren't satisfied if you just throw various things together and have no idea why it works. — ssu
Even as philosophers we have to acknowledge when there's too little information and data to conclude anything with solid premises. We can speculate and we can point to what's partially experimentally validated and go from there. However, the problem I see is that most thinkers aren't really looking at the partial pieces and including them in their ideas.
As far as I see it, emergence has enough support to be relevant, and predictive coding enough validity, that both have to be addressed by any philosophical concept of the mind.
In essence, we have enough of a scientific foundation that it can't be ignored when forming philosophical arguments about how our mind works.
What other way could consciousness come to exist than from emergence? I think our logical system here is one problem, as we start from a definition and duality of "being conscious" and "unconscious". There's no reason why something like consciousness could or should be defined in a simple on/off way. Materialism also still has a stranglehold on the way we think about existence, hence it's very difficult for us to model consciousness. If we just think of the world as particles in movement, it's not easy to go from that to a scientific theory and an accurate model of consciousness. — ssu
Emergence is fundamentally a materialistic concept, but it differs in that you cannot simply see the cogs working towards a direct deterministic result. Instead, emergence models consciousness on the same basis as everything else in nature and reality: an increase of chaos that at a certain point reaches a tipping point, where emergent complexity appears. All over nature, we see systems that can be categorized in defined ways up to a certain scale, where they become something else that expresses singular behaviors. States of matter, for instance.
How advanced a complexity is depends on the underlying levels. A state of matter is not as complex as an ecosystem of different states of matter forming a new emergent system, as an example.
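A toy illustration of that idea, making no claim about consciousness itself: Conway's Game of Life. Its rules mention only a cell and its eight neighbours, yet a five-cell "glider" travels across the grid forever, a behavior stated nowhere in the rules.

```python
from collections import Counter

def step(cells):
    """One Game of Life generation on an unbounded grid of live-cell coordinates."""
    # Count how many live neighbours every candidate cell has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation with exactly 3 live neighbours,
    # or with 2 if it was already alive. That is the entire physics.
    return {c for c, n in neighbour_counts.items()
            if n == 3 or (n == 2 and c in cells)}

# The "glider": a pattern whose movement is an emergent property.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# After 4 generations the same shape reappears, shifted one cell diagonally.
assert state == {(x + 1, y + 1) for (x, y) in glider}
```

The point is only the shape of the argument: nothing in the local rules "contains" a glider, just as nothing in a neuron would have to "contain" consciousness.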
Fundamentally, I'm approaching this topic by detaching any assumed ideas about consciousness that we've formed in culture and language, and instead looking at consciousness as part of the same natural system as everything else, without any bias towards ourselves and our human ego in experiencing it. We tend to attribute a mythical status to consciousness, tainting our ability to look at it like any other system in nature. But when we realize that consciousness is just part of the same physical reality as everything else, and really accept the "banality" of that, it frees us up to think more clearly about what it entails.
The questions then become: why does this system produce this effect? What pushed our evolution towards this ability? What was the evolutionary purpose that formed it, and is the way we use consciousness part of that evolutionary drive or a byproduct of a more fundamental cognitive purpose?
My idea is the latter. Evolution has gifted us a system that was supposed to be only a highly advanced predictive "algorithm", for the purpose of navigating nature in more adaptable ways than having to wait generations to reprogram instinctual reactions and behaviors.
It may be that mostly mammals have shown signs of higher cognitive abilities because adaptability became an evolutionary necessity after the asteroid killed the dinosaurs. For animals to survive, evolution leaned towards organisms that could adapt not just over generations, but to the day-to-day dangers of the post-asteroid environment. The evolutionary branches of these cognitive abilities continued because the more advanced the ability to predict became, the better the species survived. Evolution formed a new bias that survivability gravitated towards.
Eventually the predictive function became so advanced that it layered many predictions on top of each other, forming a foundation for advanced planning and navigation: hunting, finding shelter, procreation and expanding tribes.
This spread of different levels of cognitive behavior checks out when comparing our basic psychology with the rest of the animal kingdom. Even fungi show signs of rudimentary cognitive abilities, so there is enough evidence to say that advanced cognitive abilities are evolutionarily preferable to develop.
But the way Homo sapiens use consciousness is a byproduct of the basic functional reason we have it. The level of predictive complexity it reached made us able to correlate different predictions and regulate emotion around them. So we began to conceptualize highly advanced and expanded predictive models of our surroundings, for navigation and social structures, and it drove our need to understand how things function in order to predict advanced systems. Our drive to explain why something happens formed extremely complicated internalized scenarios in the form of religious beliefs, which then transformed into scientific thinking as we matured as an intelligent species.
Our consciousness, and how we use it, is basically a fundamental system that produced highly complex consequences while remaining fundamentally basic in its function. Like any other system in the universe that is fundamentally simple, but whose results are extremely varied and systematically complex in themselves.
A form of rare emergent behavior of fundamental organic physical processes.
It therefore stands to reason that consciousness is hard to model: it's not one single thing, but a process over different levels of emergent complexity that in turn creates byproduct results that seemingly do not directly correlate with the basic function.
So the fault might be that we view consciousness from the high-level complexity down, or try to view it materialistically from the bottom up. It may instead require a much more holistic view: many seemingly unrelated parts and systems that as a whole produce this result.
I think our (present) view of mathematics is the real problem: we focus on the computable. Yet not everything in mathematics is computable. This limited view is, in my view, best seen in how we take the natural numbers, a number system, as the basis for everything. Thus we immediately have the problem of infinity (and the infinitely small). Hence we take infinity as an axiom and declare Cauchy sequences the solution to our philosophical problems. Math is likely far more than this. — ssu
I don't really see the problem you describe. Mathematics functions to describe nature and our reality. Infinity is even included in mathematics and is a computable part of equations. We can also see how infinity usually turns up in physical reality where spacetime essentially breaks down. So in essence our math works within the reality in which it is calculated, and can in some cases even extend to models that deal with reality beyond our own. More often than not, problems in computation are due not to math being incomplete, but to us lacking the means to compute. As our minds have reached their limits, we use computers, but those are limited by their computing power. We are essentially limited by the speed of our systems, not by math itself.
But the machines we've built haven't emerged as living organisms have, even if they are made from materials from nature. A notable difference. — ssu
Our machines still operate on physical laws; we build them to operate on these laws. Living organisms, in comparison, formed and evolved to operate on the same physical laws. The only difference is that one grows into being while the other is calculated into existence. I see no notable difference, other than our machines being rudimentary in comparison to the organic, since we're trying to understand all the parts, while the organism forms from a system in which every part develops in reaction to the previous.
Actually, in engineering today it's common to use the same methods as evolution rather than making things from scratch. In aerodynamics, iterative designs form out of the operating conditions rather than being drawn up by a human, and that way they reach optimal function within their dedicated space of operation. So we are already using emergent complexity in building machines.
And with the AI models we have, we're doing it with software as well. The reason robotics has taken a giant leap today is evolutionary iteration of behaviors rather than attempts to program movement directly.
It becomes obvious that "growing" forth knowledge in evolutionary ways is much more powerful than simply trying to design something.
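The "grow rather than design" loop can be sketched minimally as a (1+1) evolution strategy: mutate a candidate at random and keep it only if it performs better. The objective function, starting point and parameters below are arbitrary illustrations, not any particular engineering workflow.

```python
import random

def evolve(fitness, start, sigma=0.5, generations=500, seed=0):
    """(1+1) evolution strategy: random variation plus selection."""
    rng = random.Random(seed)
    best = start
    for _ in range(generations):
        child = best + rng.gauss(0, sigma)   # blind random variation
        if fitness(child) < fitness(best):   # selection: keep only improvements
            best = child
    return best

# Toy objective: minimize f(x) = (x - 3)^2, starting far from the optimum.
f = lambda x: (x - 3) ** 2
best = evolve(f, start=20.0)
# Nobody "designed" the answer; it accumulated through variation and selection.
assert f(best) < f(20.0)
```

No step of this loop understands the problem, yet the solution converges, which is exactly the contrast with designing from scratch that the paragraph above draws.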
In doing so... aren't we transitioning into "emergent machines" as our need for complex operation increases? Much like all other systems in nature and the universe?
A big if. That "if" can remain an "if", like the alchemists with their attempts to make gold, which basically comes down to mimicking supernova nucleosynthesis (and that would have to be less costly than conventional mining, or mining the bottom of the sea, or asteroids, etc.). — ssu
It is not impossible. Our brain isn't detached from natural processes; everything in our skull is a composition of matter and biased functions that produce the result that is our consciousness. If we replicated all of that perfectly, perhaps even needing to "grow" it into existence, or simulate that growth process, we would eventually end up with a perfect replica.
The philosophical question is not whether we can do it today... it's whether we can do it at all. And there's nothing that says we can't. We've already replicated much of what exists in the universe, even producing matter that might be impossible to form elsewhere, just because we know how neutrons, protons and the atom work.
The only thing I see is that we attribute the mystical onto our consciousness again, assuming it to be so complex that we will never be able to see its composition and function. But in modern times we've done things with physics and nature that previous generations in history would have considered magic.
All I see is a defense mechanism. People don't want to know how we work, because once we do, we dispel the notion of a divine soul. Just as people have suffered existentially from the loss of religious belief in favor of scientific explanations, so will they suffer, maybe even more, from the knowledge of how we function. So people defend against it and need the comfort of us never being able to explain our consciousness.
Whether it happens consciously or unconsciously, it is a vast abyss for people, and staring into it drives some mad, because it becomes a feedback loop of ideas: a being understanding itself fully. That process can break it.
Exactly. It cannot do anything outside the basics of operation, as you put it. That's the problem. An entity understanding and conscious of its operating rules can do something else. A Turing Machine following algorithms cannot do this. — ssu
I don't think you understood how I explained algorithms. The "algorithms" are no different, in what they essentially mean, from the parts guiding our own consciousness. As I described above about how consciousness probably formed, our consciousness is fundamentally basic, operating on basic algorithms of prediction models. Very similar to our current AI models, but much more advanced in how it changes during operation.
My point is that you don't need a complex system at the foundation. You need a chaotic system guided by simple rules, and complex emergent behaviors can form out of it. How we see our consciousness today is more likely only a byproduct of these basic functions and operations, so if an AI model operates on similar basics, it may form similar emergent byproduct operations.
We do have free will. Laplacian determinism is logically false. We are part of the universe, hence the idea of Laplacian determinism is wrong even if the universe is deterministic and Einstein's block-universe model is correct. — ssu
No, we do not have free will. The properties of our universe and the non-deterministic properties of quantum mechanics do not change the operation of our consciousness. Even random pulls of quantum randomness within our brains are not enough to affect our deterministic choices. Humans have a tendency to attribute to our ego more abilities than it has. We are still a rudimentary consciousness that operates on prediction, and thus we choose based on deterministic events in nature. It is a human arrogance, akin to religious belief, that drives us to attribute free will to ourselves in the sense it's usually used.

The randomness we see in quantum mechanics does not counteract deterministic macro events. Everything gravitates towards deterministic outcomes, in which any deviant random quantum event ends up too weak to affect the macro. Quantum mechanics is probabilistic, but it's false to think that this probability enables novel random events outside of the most probable outcomes; the scales at which such deviant events happen are so small that even the slightest interaction erases them and forms a bias towards the most probable.
Our consciousness isn't "hacking" our choices beyond this probabilistic behavior, and even if it were, that would not be enough to form large-scale conscious decisions unrelated to the events that affect our consciousness.
Good description. Being a good prediction machine makes one fit, but being fit isn't necessarily critical to a successful AI, at least not in the short term. Should development of AI be guided by a principle of creating a better prediction machine? — noAxioms
I think the way to successful AI, or rather to an AI that can think for itself and experience self-reflection, requires it to "grow" into existence. We're tapping into this with our training procedures, but we need more guidelines for it to follow in order to create the same feedback loop that our consciousness has to control our hallucination of experience. We essentially hallucinate reality, and our senses verify it, in a constant loop that grounds us. We need to replicate that in a way that constantly updates the system.
Other than that, we are seeing seeds of consciousness, as these models already operate on prediction. A model tries to predict information based on memory and training data, but it has no guiding principles for why it should predict something in a certain way. If we need to predict a possible danger on the other side of a hill when out hunting, that need guides us: we use memory and stored information to predict the likelihood of danger beyond the hill. The guiding principles are forces like survival, driving our emotional reaction to start predicting, and experience, driving the confidence in that prediction. The question is how we can give machines similar guiding principles for their predictions.
Right now we are the ones guiding them with our prompts, so there is neither an internal input for that predictive reasoning nor an external consequence after it.
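The predict-verify-correct loop described above can be sketched in a few lines: an internal model keeps "hallucinating" a guess about the world, and sense data corrects it through the prediction error. The signal value and learning rate are arbitrary illustrations of the error-correction idea, not a model of any actual architecture.

```python
def predictive_loop(signal, steps=50, learning_rate=0.2):
    """Minimal predictive-coding-style loop: the internal model predicts
    the input, and the mismatch with the senses updates the model."""
    belief = 0.0                          # internal model's current guess
    for _ in range(steps):
        prediction = belief               # top-down: "hallucinate" the input
        observation = signal              # bottom-up: what the senses report
        error = observation - prediction  # prediction error ("surprise")
        belief += learning_rate * error   # correct the model by the error
    return belief

# The belief converges on the sensory signal: the loop grounds the model.
belief = predictive_loop(signal=5.0)
assert abs(belief - 5.0) < 0.01
```

What current prompted models lack, in the terms of the paragraph above, is that `signal`: there is no continuous sensory stream feeding errors back between our prompts.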
I'd say we already have the foundation of predictive thinking built into these models. For instance, the o1 model already shows significant reasoning ability compared to previous models, but only because of the guiding principles it's built around; it still uses the same basic predictive model as 4o.
The two are not mutually exclusive. It can be both. — noAxioms
Yes. The only thing that truly separates the organic entity from the mechanical replica is how we as humans categorize them. In the eye of the universe, they're the same thing.