Neurotransmitters all work together. But I was referring specifically to the "pleasure & reward" system, which lets you know that what you did was good for you. Or, rather, for your genes. Sometimes, what's good for your amoral genes is not so good for your moral "self". I suspect that most criminals feel good about themselves, until they face the legal consequences. :smile:

Dopamine works to create refreshment, calibration, etc. To appease the side effect of calibration as a reward is criminal-ish, no(petty)? — Varde
I doubt that the subconscious mind "allows" you to think rationally. Instead, the executive Conscious mind must occasionally overrule the default motivations of the Subconscious. If your worldview is somewhat Fatalistic, you may not believe that you have Freewill to choose a conscious logical method, instead of being driven by the animal-like, automatic, subconscious, instinctive reaction to every situation.

In my opinion the true reason/motivation why your subconscious allows you to think in some situations and not in others could be key to understanding if logic is of any value at all. — FalseIdentity
He's the place we're trying to avoid ending up when talking about active inference. — Isaac
Ok, here's a little something to ponder upon. I'd love it if 180 Proof weighed in.
1. Epistemic responsibility is, well, a really good idea. Beliefs have moral consequences - they can either be fabulously great for our collective welfare or cause a lot of hurt.
2. Epistemic responsibility seems married to rationality for good; there's little doubt about that. Rationality is about obeying the rules of logic and, over and above that, having a good handle on how to make a case.
So far so good.
3. Now, just imagine - it sends chills down my spine - that rationality proves beyond the shadow of a doubt that immoralities of all kinds are justified, e.g. that slavery is justified, racism is justified, you get the idea. This isn't as crazy as it sounds - a lot of atrocities in the world have been, for the perps, completely logical.
Here we have a dilemma: Either be rational or be good. If you're rational, you end up as a bad person. If you're good, you're irrational.
As you can see this messes up the clear and distinct notion of epistemic responsibility as simultaneously endorsing rationality AND goodness.
Thoughts... — TheMadFool
He doesn't say anything definite. :yawn:

I strongly recommend watching the video in the link to understand this better — FalseIdentity
Well, yes and no. That's the difficulty which gives Hoffman the space in which he can introduce this theoretical 'veil' without abandoning all credibility. The problem is that the result of our prediction (the response of the hidden states) is just going to be another perception, the cause of which we have to infer. Now if we use, as priors for this second inference, the model which produced the first inference (the one whose surprise reduction is being tested), then there's going to be a suppressive action against possible inferences which conflict with the first model. String enough of these together, says Hoffman, and you can accumulate sufficient small biases in favour of model 1 that the constraints set by the actual properties of the hidden causal states pale into insignificance behind the constraints set by model 1's assumptions.
The counter arguments are either that the constraints set by the hidden causal states are too narrow to allow for any significant diversity (Seth), or that there's never a sufficiently long chain of inference models without too much regression to means (which can only be mean values of hidden states). I subscribe to a combination of both. — Isaac
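The mechanism Isaac describes can be made concrete with a toy simulation. The sketch below is my own construction, not Hoffman's or Isaac's actual model: a hidden cause (a fair coin) is observed repeatedly, each round's belief is carried forward as the prior for the next inference, and some fraction of model-conflicting observations is suppressed. The `suppression` parameter and the moving-average update standing in for a full Bayesian update are both assumptions made purely for illustration.

```python
import random

random.seed(0)
p_true = 0.5       # hidden causal state: a fair coin
belief = 0.6       # model 1's initial assumption: "heads is likelier"
suppression = 0.3  # fraction of model-conflicting evidence discarded (assumed)

for _ in range(10_000):
    heads = random.random() < p_true  # new observation of the hidden state
    conflicts = (not heads and belief > 0.5) or (heads and belief < 0.5)
    if conflicts and random.random() < suppression:
        continue  # the suppressive action against conflicting inferences
    # carry the old belief forward as the prior for the next inference
    # (a simple moving average stands in for a full Bayesian update)
    belief = 0.99 * belief + 0.01 * (1.0 if heads else 0.0)

print(f"true P(heads) = {p_true}, accumulated belief = {belief:.2f}")
```

Run long enough, the belief settles noticeably above 0.5 (around 0.59): the constraints set by the hidden state haven't changed, but the chain of self-seeded inferences has accumulated a bias toward model 1's assumption, which is the accumulation Isaac attributes to Hoffman's argument.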
To be able to occasionally overrule the motivation, it must always have first access to information, and decide about such information, or the occasional overruling would not work reliably - this is what I wanted to explain with my primary survival reflex example. This first access should give it absolute power over what happens with the information it lets through. If I had first access and hence full control over all information you ever receive in your life, you might call yourself my manager, and I might find it convenient to let you think you are my manager to avoid unnecessary quarrels, but if I decide what your reality is by controlling all the information, I should have absolute control over you. Let me say that I am neither an opponent of free will nor convinced that the mind really works like materialists think it works. I just tried to think this philosophy through to its conclusion.

I doubt that the subconscious mind "allows" you to think rationally. Instead, the executive Conscious mind must occasionally overrule the default motivations of the Subconscious.
That is nice, I will try to remember it :) In fact, evolution could have selected some people in a way that they try to solve their problems in more intellectual ways, and some people to solve them more with physical violence, like the witch hunters did. That evolution can positively select people who willfully ignore scientific facts is already a bad sign for what evolution is doing when training our brains. Clearly, truth is at least not always the main concern of evolution when you look at such people. If you are interested in the subject: there is a measure called "social dominance orientation", and it has been shown that 1. this trait is genetic, and 2. such people despise science and scientists.

As David Hume asserted "reason is . . . a slave to the passions"
I think you have a very good point here. With all the dispute about Hoffman, the focus was lost that my complaint is that logic is evil. If Hoffman were right (I start to have some doubts now too, but I will need more time to think this through), this would not have made logic really evil; it would have just made it into an intellectual disappointment. What could make it evil is rather its origin in predation. — FalseIdentity
SophistiCat had mentioned here that the ultimate test of the truth of our mental models is whether we can predict events correctly. At this point it makes sense to look more closely at how a brain actually learns to make predictions. Any neural network has to be trained, over several rounds, on which of its predictions were wrong and which were correct. When a prediction was false, some neural connections are cut, in the hope that the error they produced will not reoccur. When a prediction was right, the connections that made it are reinforced, so that the network gets better with every training round. Now I am afraid that the ultimate measure for the brain of what was a wrong prediction and what was a correct prediction is whether it gained or lost energy through this prediction. But you can gain energy only by stealing (either directly or indirectly) life energy from other life forms. And since other life forms don't want their life energy to be stolen, the only way to train your brain is by constantly breaking the golden rule. The idea that we are morally allowed to take the life of so-called "inferior" species is highly dubious and sounds like an excuse. I think there would be strong opposition if "intellectually superior" aliens came here and harvested us, justifying this with the same line of reasoning. — FalseIdentity
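The cut-or-reinforce training loop described here corresponds roughly to error-driven learning. Below is a minimal sketch, assuming a single-layer perceptron rather than a biological network (the toy task, learning rate, and round count are illustrative choices of mine): connections that contributed to a wrong prediction are adjusted, while those behind a correct prediction are left to stand.

```python
import random

random.seed(0)  # for reproducibility of the toy run

def predict(weights, bias, inputs):
    # fire (1) if the weighted evidence crosses the threshold
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def train(examples, rounds=1000, lr=0.1):
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(rounds):
        inputs, target = random.choice(examples)
        error = target - predict(weights, bias, inputs)  # 0 when correct
        # wrong prediction: adjust the connections that contributed to it;
        # correct prediction: error is 0, so the connections stay as they are
        for i, x in enumerate(inputs):
            weights[i] += lr * error * x
        bias += lr * error
    return weights, bias

# toy task: learn logical AND purely from right/wrong feedback
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(examples)
print([predict(weights, bias, x) for x, _ in examples])  # expect [0, 0, 0, 1]
```

Whether the feedback signal is prediction error, as here, or gained and lost energy, as the post worries, is exactly the point in question; the mechanics of reinforcement are the same either way.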
Now, a neural network that is trained for this purpose, and in this way, should have strong limitations in what kinds of intellectual problems it can solve. For example, in computer science you can't use a neural network that was trained for language recognition to recognize images. — FalseIdentity
In the case of the human brain, which was in effect trained on how to break the golden rule most efficiently, I am for example sure that it cannot know what evil is in the metaphysical sense. I hence agree with the critique that - if I am really only that network - I cannot know what evil is either. But if I am unable to recognize what evil is, I could commit very evil deeds all the time without noticing, and that alone bothers me.

A second unexpected but straightforward conclusion is that if it's impossible for humans to understand what evil is, they should stay away from building counterproofs of God based on that term (I mean the problem of evil). — FalseIdentity
Independent of whether or not Hoffman is right - it should be hard to impossible for this network to understand anything that is not either food or a tool for obtaining food. If you see everything just as food, or as a tool to get food, this will preclude you from understanding its deeper nature. Understanding that deeper nature would just waste energy, and maybe it would even degrade the strength of the network over time, at least in relation to the task of gathering energy efficiently. — FalseIdentity
In reality, prey animals prey too: they take away the food of other animals. That might look peaceful as long as there is plenty of food, but if there is not, you can see herbivorous animals fighting in quite unfriendly ways over it.

Why the asymmetry - predators being more intelligent than prey animals - you think?
Is it really logic alone that makes the case for the golden rule? I think that the golden rule is true; this is either a brain error of mine, or someone outside evolution is telling me that it is true :)

Yes but as I said, again it's logic (intelligence) that made the case for the golden rule.
In the business model example, there are different levels of "access to information". The workers on the front lines (physical senses) typically receive new information first. They then pass it up the hierarchy, where it is sorted based on the need to know. So the CEO at the top is usually unaware of the bulk of information flow. He/she only receives the most important or urgent data, after it is filtered up through the system. However, an alert CEO may also have his/her own "spies" to actively look for relevant unfiltered information, before it is affected by the mundane priorities of lower levels.

To be able to occasionally overrule the motivation it must always have the first access to information and decision about such information or the occasional overruling would not work reliably — FalseIdentity
But the arguments for why logic should not be evil are not yet convincing. — FalseIdentity
I had never heard of "predatory logic" before. But, after a brief review, I see it's not talking about capital "L" Logic at all. Instead, it refers to the innate evolutionary motives that allow animals at the top of the food chain to survive and thrive. PL is more of an inherited hierarchical motivation system than a mathematical logical pattern. Logic is merely a tool that can be used for good or bad purposes. To call the "logic" of an automobile "evil" is to miss the point that a car without a driver is also lacking a moral value system. It could be used as a bulldozer to ram a crowd of pedestrians, or as an ambulance to carry the wounded to a hospital. The evil motives are in the moral agent controller, not the amoral vehicle.

predatory logic — FalseIdentity
When we develop models analytically, such as in science or in everyday reasoning, it is certainly possible - and seductive - to come up with a model that is resistant to falsification. But it seems to me that such a modelling system would be difficult to evolve in the first place, because the selective pressure would be weak to non-existent. — SophistiCat
The "good" that you mention is nothing more than a human invention!! There is no good or bad in universe. — dimosthenis9
Even if it is, aren't people part of the universe and thus good and bad? — GraveItty
Is there any good or bad in the universe apart from human societies? Aren't these simply things that people try to define so as to make our societies and our living together function?? — dimosthenis9
Can an a priori human skill like Logic ever be bad or evil? Especially logic, which is our "searching for truth engine", which helped us the most to evolve? — dimosthenis9
"Human societies", like "a country", are abstract concepts and can as such not be good or bad. A country or society has no mind of its own. Nor has a society. Good and bad are not "defined", they are just human qualities — GraveItty
To answer your first question, it can. And in science-based societies it is even doing evil, though with no bad intentions. Look at the state of the world. Look at the harm done to Nature. — GraveItty
Searching for a truth engine (whatever that may mean...) helping us most to evolve? If you wanna evolve into a truth engine, then maybe it's handy. I surely don't! — GraveItty
Human societies aren't abstract concepts. — dimosthenis9
I don't want to evolve into anything. We humans have that mental ability already. Logic is our mind's searching-for-truth mechanism, and you still haven't mentioned even one way in which Logic brings harm. — dimosthenis9