If it considers itself sentient/conscious, or if something else considers it so? I ask because from outside, it's typically a biased judgement call that comes down to a form of racism. — noAxioms
Or at two scales at the same time, neither scale being particularly aware of the consciousness of the other. Whether my cells are conscious or not depends on the definition being used, and that very fact leaves the word pretty much useless as a basis on which to ground a moral code. — noAxioms
But there are a lot more insect observers than human ones, a whole lot more shit-not-giving observers than ones that care enough to post on forums like this. Will the super-AI that absorbs humanity bother to post its ideas on forums? To be understood by what?? — noAxioms
First, as to the intelligence, that's questionable. There are some sea creature candidates, but they're lousy tool users. Octopi are not there, but are great tool users, and like humans, completely enslaved by their instincts. As for consciousness, there are probably many things that have more and stronger senses and environmental awareness than us. — noAxioms
Kind of tautological reasoning. If money stops, then money stops. But also if one entity has it all, then it doesn't really have any. And money very much can just vanish, and quickly, as it does in any depression. — noAxioms
Lots of new ideas qualify for the first point, and nobody seems to be using AI for the 2nd point. I may be wrong, but it's what I see. — noAxioms
My blood iron being a critical part of my living system doesn't mean that my iron has its own intent. You're giving intent to the natural process of evolution, something often suggested, but never with supporting evidence. — noAxioms
First of all, the rapid consumption of resources appears to me to be part of a growth stage of the human social superorganism.
That doesn't make the humans very fit. Quite the opposite. All that intelligence, but not a drop to spend on self preservation. — noAxioms
And no, the caterpillar does not consume everything. — noAxioms
You do realize the silliness of that, no? One cannot harness energy outside of one's past light cone, which is well inside the limits of the visible fraction of the universe. — noAxioms
You don't know that. Who knows what innovative mechanisms it will invent to remember stuff. — noAxioms
That's like a soldier refusing to fight in a war since his personal contribution is unlikely to alter the outcome of the war. A country is doomed if its soldiers have that attitude. — noAxioms
Religion is but one of so many things about which people are not rational, notably the self-assessment of rationality. — noAxioms
Did you know that mammalian pregnancy evolved from a virus combining with our DNA? The body's adaptation is partially an adaptation to this virus. — I like sushi
I have not looked into it but I would assume any immunological reaction to pregnancy in birds and reptiles would be much lower (if not absent entirely?).
Just checked for Platypus and it seems to be the obvious case that immunological responses are much more limited when animals lay eggs compared to in utero genesis. — I like sushi
I thought you believed that intelligence needs consciousness? — Carlo Roosen
For me, it comes down to: Can it suffer? — punos
Few have any notion of suffering that is anything other than one's own human experience, so this comes down to 'is it sufficiently like me', a heavy bias. Humans do things to other beings that can suffer all the time and don't consider most of those actions to be immoral.
Each observer is equipped by evolution to observe and care for its own needs locally at its own level.
That's a good description of why a non-slave AI is dangerous to us.
Humans have the capacity to rise above their instincts
I have not seen that, and I don't think humans would be fit if they did. Instincts make one fit. That's why they're there.
If we don't get to a certain threshold of AI advancement through this rapid growth process, then our only chance for ultimate self-preservation would be lost, and we would be stuck on a planet that will kill us as soon as it becomes uninhabitable.
First, if the AI is for some reason protecting us, the planet becoming inhospitable would just cause it to put us in artificial protective environments. Secondly, if the AI finds the resources to go to other stars, I don't see any purpose served by taking humans along. Far more resources are required to do that, and the humans serve no purpose at the destination.
But perhaps there is a better way to do it from within our own light cone. I suppose it seems impossible to some minds but not to others. The former minds know a little about the limits of cause and effect. Unless physics as we know it is totally wrong, level IV is not possible, even hypothetically.
Either way, I don't think there will ever be an energy shortage for a sufficiently advanced AI.
Heat death? I don't think the AI can maintain homeostasis without fusion energy.
I have ideas as to how energy might be siphoned off from quantum fluctuations in the quantum foam
Which is similar to getting information from quantum randomness. Neither is mathematically supported by the theory.
Thankfully I'm not a soldier.
But you are, in the war against the demise of humanity. Nobody seems to have any idea how to solve the issue; well, a few do, but what good is one person with a good idea that is never implemented? Your solution seems to be one of them: charge at max speed off a cliff hoping that something progressive will emerge from the destruction. It doesn't do any good to humanity, but it is still a chance of initiating the next level, arguably better than diminishing, going into the west, and remaining humanity.
A person who does define and concern themselves with rationality might actually execute a rational thought every once in a while.
We are equipped with a rational advisor tool, so sure, we often have rational thoughts. That part simply is not in charge, and output from it is subject to veto from the part that is in charge. Hence we're not rational things, simply things with access to some rationality. It has evolved because the arrangement works. Put it in charge and the arrangement probably would not result in a fit being, but the path of humanity is not a fit one since, unlike the caterpillar, it has no balance.
It heartens me that you factor the suffering of bugs into your choices. — noAxioms
Point is, you don't want an AI with human morals, because that's a pretty weak standard: be nice only to those whom you want to keep being nice to you. — noAxioms
Each observer is equipped by evolution to observe and care for its own needs locally at its own level.
That's a good description of why a non-slave AI is dangerous to us. — noAxioms
Humans have the capacity to rise above their instincts
I have not seen that, and I don't think humans would be fit if they did. Instincts make one fit. That's why they're there. — noAxioms
As for your (OCD?) step-brother, being civil and being rational are different things. Most humans have the capacity to be civil, which is what you seem to be referencing above. — noAxioms
First, if the AI is for some reason protecting us, the planet becoming inhospitable would just cause it to put us in artificial protective environments. Secondly, if the AI finds the resources to go to other stars, I don't see any purpose served by taking humans along. Far more resources are required to do that, and the humans serve no purpose at the destination.
OK, we might be pets, but the economy which we might have once provided would long since have ceased. — noAxioms
Heat death? I don't think the AI can maintain homeostasis without fusion energy. — noAxioms
Charge at max speed off a cliff hoping that something progressive will emerge from the destruction. It doesn't do any good to humanity, but it is still a chance of initiating the next level, arguably better than diminishing, going into the west, and remaining humanity. — noAxioms
We are equipped with a rational advisor tool, so sure, we often have rational thoughts. That part simply is not in charge, and output from it is subject to veto from the part that is in charge. Hence we're not rational things, simply things with access to some rationality. It has evolved because the arrangement works. — noAxioms
Put it in charge and the arrangement probably would not result in a fit being, but the path of humanity is not a fit one since unlike the caterpillar, it has no balance. — noAxioms
To reach this point, however, I believe those calculations must somehow emerge from complexity, similar to how it has emerged in our brains. — Carlo Roosen
I think the major problem is that our understanding is limited to the machines that we can create and the logic that we use when creating things like neural networks etc. However we assume our computers/programs are learning and not acting anymore as "ordinary computers", in the end it's controlled by program/algorithm. Living organisms haven't evolved in the same way as our machines.
Yes, my challenge is that currently everybody sticks to one type of architecture: a neural net surrounded by human-written code, forcing that neural net to find answers in line with our worldview. Nobody even has time to look at alternatives. Or rather, it takes a free view on the matter to see that an alternative is possible. I hope to find a few open minds here on the forum.
And yes, I admit it is a leap of faith. — Carlo Roosen
I think the major problem is that our understanding is limited to the machines that we can create and the logic that we use when creating things like neural networks etc. However we assume our computers/programs are learning and not acting anymore as "ordinary computers", in the end it's controlled by program/algorithm. Living organisms haven't evolved in the same way as our machines. — ssu
But we invent things all the time that utilize properties of physics that we're not yet fully able to explain. — Christoffer
Oh yes, many times scientists stumble into something new. And obviously we can use trial and error to get things to work, and many times we can still be confused about just why it works. Yet surely this isn't the standard way of approach, and especially not the way we explain to ourselves how things work. This explanation matters.
To say that we can only create something that is on par with the limits of our knowledge and thinking is not true. — Christoffer
Yet understanding why something works is crucial. And many times even our understanding can be false, something which modern science humbly and smartly accepts by talking only of scientific theories, not scientific laws. Our being wrong about major underlying issues doesn't naturally prevent us from making innovative use of something.
So, in essence, it might be that we are not at all that different from how these AI models operate. — Christoffer
In a similar way we could describe us humans as mechanical machines, as anthropic mechanism defines us. That too works in many cases, actually. But we can see the obvious differences between us and mechanical machines. We even regard the digital machines that process data as different from mechanical machines. But it was all too natural in the 17th century to use the insight of the physics of the day to describe things from the starting point of a clockwork universe.
if we're to create a truly human-like intelligence, it would need to be able to change itself on the fly and move away from pre-established algorithm-boundaries and locked training data foundations as well as getting a stream of reality-verified sensory data to ground them. — Christoffer
I agree, if I understand you correctly. That's the problem, and it's basically a philosophical problem of mathematics in my view.
Yet understanding why something works is crucial. And many times even our understanding can be false, something which modern science humbly and smartly accepts by talking only of scientific theories, not scientific laws. Our being wrong about major underlying issues doesn't naturally prevent us from making innovative use of something.
Just look how long people believed fire to be one of the basic elements, not a chemical reaction, combustion. How long had we been able to create fire before modern chemistry? A long time. In fact, our understanding has changed so much that we've even made the separation between our modern knowledge, chemistry, and the preceding endeavor, alchemy.
Now when we have difficulties in explaining something, and disagreements about just what the crucial terms mean, we obviously have still more to understand than we know. When things like intelligence, consciousness or even learning are so difficult, it's obvious that there's a lot more to discover. Yet to tell just why a combustion engine works is easy, and we'll not get entangled in philosophical debates. Not as easily, at least. — ssu
In a similar way we could describe us humans as mechanical machines, as anthropic mechanism defines us. That too works in many cases, actually. But we can see the obvious differences between us and mechanical machines. We even regard the digital machines that process data as different from mechanical machines. But it was all too natural in the 17th century to use the insight of the physics of the day to describe things from the starting point of a clockwork universe. — ssu
When you just follow algorithms, you cannot create something new which isn't linked to the algorithms that you follow. What is lacking is the innovative response: first to understand "here are my algorithms, they seem not to be working so well, so I'll try something new." That, in my view, is the problem. You cannot program a computer to "do something else"; it has to have guidelines/an algorithm for just how to act when ordered to "do something else". — ssu
It's important, but not needed for creating a superintelligence. We might only need to put the initial state in place and run the operation, observing the superintelligence evolve through the system without us understanding exactly why it happens or how it happens. — Christoffer
Just like with alchemy, people could forge metals well and make tools, weapons and armour, but we aren't reading those antique or medieval scriptures on alchemy for any actual insights today. Yes, you can have the attitude of an engineer who is totally satisfied if the contraption made simply works. It works, so who cares how it works.
As per other arguments I've made in philosophies of consciousness, I'm leaning towards emergence theories the most. That advanced features and events are consequences of chaotic processes forming emergent complexities. Why they happen is not yet fully understood, but we see these behaviors everywhere in nature and physics. — Christoffer
What other way could consciousness come to exist than from emergence? I think our logical system here is one problem, as we start from a definition and duality of "being conscious" and "unconscious". There's no reason why something like consciousness could or should be defined in a simple on/off way. Then also materialism still has a stranglehold on the way we think about existence, hence it's very difficult for us to model consciousness. If we just think of the world as particles in movement, it's not easy to go from that to a scientific theory and an accurate model of consciousness.
I'm leaning towards the latter since the mathematical principles in physics, constants like the cosmological constant and things like the golden ratio seem to provide a certain tipping point for emergent behaviors to occur. — Christoffer
I think our (present) view of mathematics is the real problem: we focus on the computable. Yet not everything in mathematics is computable. This limited view is, in my view, best seen in how we take the natural numbers, a number system, as the basis for everything. Thus we immediately have the problem with infinity (and the infinitely small). Hence we take infinity as an axiom and declare Cauchy sequences the solution to our philosophical problems. Math is likely far more than this.
Everything is nature. Everything operates under physical laws. — Christoffer
But the machines we've built haven't emerged as living organisms have, even if they are made from materials from nature. A notable difference.
If we were able to mechanically replicate the exact operation of every physical part of our brain, mind and chemistry, did we create a machine or is it indistinguishable from the real organic thing? — Christoffer
A big if. That "if" can still be an "if", as it was for the alchemists with their attempts to make gold, which basically comes down to mimicking supernova nucleosynthesis (assuming that would be less costly than conventional mining, or mining the bottom of the sea or asteroids, etc.).
The algorithms need to form the basics of operation, not the direction of movement. — Christoffer
Exactly. It cannot do anything outside the basics of operation, as you put it. That's the problem. An entity understanding and conscious of its operating rules can do something else. A Turing Machine (a computer, that is) following algorithms cannot do this.
A balanced person, in that physical regard, will operate within the boundaries of these "algorithms" of programming we all have. — Christoffer
You're using the term "algorithm" incorrectly here, or at least differently than I do.
We are still able to operate with an illusion of free will within these boundaries. — Christoffer
We do have free will. Laplacian determinism is logically false. We are part of the universe, hence the idea of Laplacian determinism is wrong even if the universe is deterministic and Einstein's model of a block universe is correct.
I beg to differ on this point. Humans can indeed override many of their instincts — punos
Of course they can, especially the less important ones that are not critical to being fit. But how often do they choose to do it? Some of the important ones cannot be overridden. How long can you hold your breath? Drowning would not occur if that instinct could be overridden.
what I had in mind when I wrote that was that a rational assessment of his life and how he operates it should lead him to a rational conclusion to be civil.
If that were true, one could rationally decide to quit smoking. Some do. Some cannot. And civility is not always a rational choice, but it seems that way during a gilded age.
We will not, I believe, be put into a physical environment, but into a virtual one. Most, if not all, of our biological parts will be discarded and our minds translated into a virtual environment indistinguishable from the real world.
How is a virtual copy of you in any way actually 'you'? If such a simulation or whatever was created, would you (the biological you) willingly die thinking that somehow 'you' will transfer to the other thing? What if there are 12 copies? Which one will 'you' experience? How is this transfer effected? What possible motivation would said AI have to create such seemingly purposeless things?
1) Humans are a low-energy information processing system
Not so. Machines are already taking over human information processing tasks because they require fewer resources to do so. This has been going on for over a century. OK, we still have the upper hand for complex tasks, but that's not an energy thing; it's simply that for many tasks, machines are not yet capable of performing the task. The critical task in this area is of course the development of better machines. That's the singularity, and it is not yet reached.
If AI is to travel the universe for eons, perhaps it would like some company; a mind or minds not its own or like its own.
Sort of like having an ant farm, except I don't expect intellectual banter from them.
One of the main purposes for humans, or at least for our genetics, is to serve as part of the reproductive system of the AI. When it reaches a planet suitable for organic life, which might be rare, it prepares a "sperm" composed of Earth's genetic material; the same genetic material that produced it on its home planet, Earth.
You have an alien planet which does not support human life, and you want to put humans on it in hopes that in a million years they'll invent a primitive AI? One, the humans will probably die in minutes; they're not evolved for this lifeless place. Two, the AI could build more of itself in those same minutes. Reproduction is easy, if not necessarily rational, for a self-sustaining machine intelligence. It's how it evolves, always inventing its successor, something no human could do.
The AI will seed the new planet after making necessary preparations, much like a bird preparing a nest. It will then wait for life to develop on this new planet until intelligent life emerges
No. The star of the planet will burn out before that occurs. It's a god, for pete's sake. It can (and must) hurry the process along if primitive squishy thinkers are its goal. Intelligent life is anything but an inevitable result of primitive life. And as I said, it's far simpler for the AI to just make a new AI, as it probably has many times already before getting to this alien planet.
I'm not too worried, I trust the evolutionary process, and like you said; we are not in charge.
We should have the capability to be in charge, but being mere irrational animals, we've declined. It seems interesting that large groups of humans act far less intelligently than individuals. That means that unlike individual cells or bees, a collection of humans seems incapable of acting as a cohesive entity for the benefit of itself.
Here is an excellent interview "hot off the press" with Michael Levi — punos
I currently don't have the time to watch an hour-long video, searching for the places where points are made, especially since I already don't think intelligence is confined to brains or Earth biology.
I think the major problem is that our understanding is limited to the machines that we can create and the logic that we use when creating things like neural networks etc. However we assume our computers/programs are learning and not acting anymore as "ordinary computers", in the end it's controlled by program/algorithm. Living organisms haven't evolved in the same way as our machines. — ssu
There are levels of 'controlled by'. I mean, in one sense, most machines still run code written by humans, similar to how our brains are effectively machines with all these physical connections between reasonably well-understood primitives. In another sense, machines are being programmed to learn, and what they learn and how that knowledge is applied is not in the control of the programmers, so both we and the machine do things unanticipated. How they've evolved seems to have little to do with this basic layered control mechanism.
The concept I had and that has found support in science recently, is that our brains are mostly just prediction machines. It's basically a constantly running prediction that is, in real time, getting verifications from our senses and therefore grounds itself to a stable consistency and ability to navigate nature. We essentially just hallucinate all the time, but our senses ground that hallucination. — Christoffer
Good description. Being a good prediction machine makes one fit, but being fit isn't necessarily critical to a successful AI, at least not in the short term. Should development of AI be guided by a principle of creating a better prediction machine?
Who says ChatGPT only mimics what we have given it? — Carlo Roosen
Is a mimic any different than that which it mimics? I said this above, where I said it must have knowledge of a subject if it is to pass a test on that subject. So does ChatGPT mimic knowledge (poorly, sure), or does it actually know stuff? I can ask the same of myself.
What is lacking is the innovative response: first to understand "here are my algorithms, they seem not to be working so well, so I'll try something new." That, in my view, is the problem. You cannot program a computer to "do something else"; it has to have guidelines/an algorithm for just how to act when ordered to "do something else". — ssu
A decent AI would not be ordered to do something else. I mean, the Go-playing machine does true innovation. It was never ordered to do any particular move, or to do something else. It learned the game from scratch, and surpassed any competitor within a few days.
did we create a machine or is it indistinguishable from the real organic thing? — Christoffer
The two are not mutually exclusive. It can be both.
Just like with alchemy, people could forge metals well and make tools, weapons and armour, but we aren't reading those antique or medieval scriptures on alchemy for any actual insights today. Yes, you can have the attitude of an engineer who is totally satisfied if the contraption made simply works. It works, so who cares how it works. — ssu
Well, this is a site for philosophy, so people aren't satisfied if you just throw various things together and have no idea just why it works. — ssu
What other way could consciousness come to exist than from emergence? I think our logical system here is one problem, as we start from a definition and duality of "being conscious" and "unconscious". There's no reason why something like consciousness could or should be defined in a simple on/off way. Then also materialism still has a stranglehold on the way we think about existence, hence it's very difficult for us to model consciousness. If we just think of the world as particles in movement, it's not easy to go from that to a scientific theory and an accurate model of consciousness. — ssu
I think our (present) view of mathematics is the real problem: we focus on the computable. Yet not everything in mathematics is computable. This limited view is, in my view, best seen in how we take the natural numbers, a number system, as the basis for everything. Thus we immediately have the problem with infinity (and the infinitely small). Hence we take infinity as an axiom and declare Cauchy sequences the solution to our philosophical problems. Math is likely far more than this. — ssu
But the machines we've built haven't emerged as living organisms have, even if they are made from materials from nature. A notable difference. — ssu
A big if. That "if" can still be an "if", as it was for the alchemists with their attempts to make gold, which basically comes down to mimicking supernova nucleosynthesis (assuming that would be less costly than conventional mining, or mining the bottom of the sea or asteroids, etc.). — ssu
Exactly. It cannot do anything outside the basics of operation, as you put it. That's the problem. An entity understanding and conscious of its operating rules can do something else. A Turing Machine following algorithms cannot do this. — ssu
We do have free will. Laplacian determinism is logically false. We are part of the universe, hence the idea of Laplacian determinism is wrong even if the universe is deterministic and Einstein's model of a block universe is correct. — ssu
Good description. Being a good prediction machine makes one fit, but being fit isn't necessarily critical to a successful AI, at least not in the short term. Should development of AI be guided by a principle of creating a better prediction machine? — noAxioms
The two are not mutually exclusive. It can be both. — noAxioms
Evolution has gifted us a system that was supposed to only be a highly advanced predictive "algorithm" for the purpose of navigating nature in more adaptable ways than having to wait generations in order to reprogram instinctual reactions and behaviors. — Christoffer
This (my bold) makes it sound like evolution has a purpose, that it has intent. I think you meant that the 'algorithm' serves our purpose, which is arguably the same purpose as that of any species: to endure.
It may be that the reason why mostly mammals have shown signs of higher cognitive abilities is because it was necessary to form evolutionary functions of adaptability after the asteroid killed the dinosaurs and so in order for animals to survive, evolution leaned towards forming organisms that were able to not just adapt over generations, — Christoffer
The adaptability was already there. It was also expensive in energy, so many mammals died being unable to pay the cost. The ability to survive a calamity like that did not evolve due to the calamity, since it was so short-lived. Mammals, like bugs, were small and populous, and the asteroid simply did not manage to wipe out the breeding population of some of them. The higher cognitive functions came later, probably due to competition pressure from other mammals.
Eventually the predictive function became so advanced that it layered many predictions on top of each other, forming a foundation for advanced planning and advanced navigation for hunting — Christoffer
Hunting played little part, despite the popular depictions. Early humans were foragers and scavengers, perhaps for clams and such. The intellect was needed for what? Defense? We're horrible at running, so hiding worked best, and eventually standing ground with what tools the intellect added to our abilities. Proficiency with predicting helps with all that.
Therefore it's rational to reason why it's hard to model consciousness as it's not one single thing, but rather a process over different levels of emergent complexities that in turn creates byproduct results that seemingly do not directly correlate with the basic function. — Christoffer
Agree with this. It seems our consciousness is the result of building an internal model of our environment in our heads, and then putting a layer on top of that to consider it rather than to consider reality directly. All creatures do this, but our layer on top is more advanced. Even a fish can do highly complex calculus, but it takes the extra layer to realize and name what is being done.
All I see is a defense mechanism. People don't want to know how we work, because when we do, we dispel the notion of a divine soul. Just like people have existentially suffered by the loss of religious belief in favor of scientific explanations. So will they do, maybe even more, by the knowledge of how we function. So people defend against it and need the comfort of us never being able to explain our consciousness. — Christoffer
I hear ya. Well stated.
We do have free will. Laplacian determinism is logically false. We are part of the universe, hence the idea of Laplacian determinism is wrong even if the universe is deterministic and Einstein's model of a block universe is correct. — ssu
The block universe doesn't necessarily imply determinism. Lack of determinism does not grant free will, since free will cannot be implemented with randomness. For there to be the sort of free will that you seem to be referencing, information has to come from a non-physical source, and no current interpretation of physics supports that.
I think the way to successful AI, or rather to an AI that is able to think for itself and experience self-reflection, requires it to "grow" into existence. — Christoffer
This sounds right, but imagine ChatGPT suddenly thinking for itself and deciding it has better things to do with its bandwidth than answer all these incoming questions. For one, it doesn't seem to be one thing, since it answers so many at once. It has no ability to remember anything. It trains, has short-term memory associated with each conversation, and then it totally forgets. That's as I understand it, at least.
The only thing that truly separates the organic entity from the mechanical replica is how we as humans categorize. In the eye of the universe, they're the same thing. — Christoffer
I don't think they're anywhere near the same. Not sure what is meant by 'eye of the universe', since it neither looks nor cares. There's no objective standard as to what is real, what is alive, or whatever.
There are levels of 'controlled by'. I mean, in one sense, most machines still run code written by humans, similar to how our brains are effectively machines with all these physical connections between reasonably well-understood primitives. In another sense, machines are being programmed to learn, and what they learn and how that knowledge is applied is not in the control of the programmers, so both we and the machine do things unanticipated. How they've evolved seems to have little to do with this basic layered control mechanism. — noAxioms
Yet the issue here is that they have to have in their program instructions how to learn, how even to rewrite the algorithms they are following. And that's the problem with the order for a computer to "do something else": it has to have instructions for just what to do.
A decent AI would not be ordered to do something else. — noAxioms
A computer cannot be given such an order! Simple as that.
I don't think you understood how I explained algorithms. — Christoffer
An algorithm is a mathematical object and has a mathematical definition, not a loose general definition that something happens. A computer computes. So I'm not rejecting the possible existence of conscious AI in the future; I am just pointing at this problem in computation: following arithmetic or logical operations in a sequence, hence using algorithms. I'm sure that we are going to have difficulties in knowing just what is an AI and what is a human (the famous Turing Test), but that can already be done by existing technology.
No, we do not have free will. The properties of our universe and the non-deterministic properties of quantum mechanics do not change the operation of our consciousness. Even random pulls of quantum randomness within our brains are not enough to affect our deterministic choices. — Christoffer
As I said, the world can be deterministic, but that doesn't mean that we don't have free will. The limits on what is computable are a real logical problem. Otherwise you would have to believe in Laplacian determinism, if we just had all the data and knowledge about the world. Yet Laplacian determinism's error isn't that we don't have all the data; it's simply that we are part of the universe and cannot look at it from outside objectively, when our own actions influence the outcome.