• Absolute truth
    Something has to exist to be accepted.
    ovdtogt

    Or one can simply be mistaken in that evaluation.
  • Absolute truth
    If fallibility is accepted then fallibility exists and that is something.
    ovdtogt

    Don't misrepresent. I said: "if fallibility is accepted as a possibility". Therefore nothing is accepted as true or existing.
  • Why aliens will never learn to speak our language
    I agree with you that we lack a good definition of general intelligence. But as my example demonstrates - a thing that is clearly as intelligent as us but can't predict all of our associations - even our intuition doesn't agree with the Turing test about what is intelligent. We need to keep working to understand what intelligence is, and as I currently see it, the way the Turing test is used in this work and in things like AI development diverts us onto a harmful path. It is quite obvious that a transistor-based general intelligence doesn't need to be able to speak any language indistinguishably from humans, and that doing so would be an inefficient and unnecessarily complex way to program general intelligence - yet people tend to see that as an important goal right now. Harmful, I say!
  • Absolute truth
    The way I see it, the first two absolute, fundamental truths are:

    1. Something exists; which leads us to also be certain that
    2. Something is aware of existence.
    Possibility

    If fallibility is accepted as a possibility, then even "something exists" is not absolutely necessarily true, since one could just be failing to understand what those words even mean. You can never prove that you have evaluated your proofs correctly.
  • Why aliens will never learn to speak our language
    Mirroring is anything where the way you are is used to predict the way something else is. For example, in our language we just assume that our associations have something to do with the thing someone else said, just because we have those associations. It doesn't work all the time, and we do modify our idea of what someone meant by what we know of him, but at its basis our language simply uses mirroring to predict what others mean. Very fast - it doesn't need definitions, but it does require everyone to be programmed in a very similar way.
  • Why aliens will never learn to speak our language
    Recall that in the Turing Test, a human evaluator has to decide purely on the basis of reading or hearing a natural language dialogue between two participants, which of the participants is a machine. If he cannot determine the identities of the participants, the machine is said to have passed the test. Understood narrowly as referring to a particular experimental situation, yes the Turing Test fails to capture the broader essence of intelligence. But understood more broadly as an approach to the identification of intelligence, the Turing test identifies and defines intelligence pragmatically and directly in terms of behavioural propensities that satisfy human intuition. The test therefore avoids metaphysical speculation as to what intelligence is or is not in an absolute sense.
    sime

    If the "natural language" is specifically defined not to use mirroring, I might agree with the broader definition of the Turing test. Mirroring would always give an advantage to the human participant since the evaluator would understand his words better since the evaluator is programmed in such a similar way.

    But no - even then the test simply doesn't work for anything but finding things that can reproduce the particular way humans are programmed. It is much harder to replicate the behavior of a thing that is on your level or lower than it is to just be on its level or higher. The test can't be defined in any way where a human evaluator decides which participant is human: mirroring makes that too easy no matter how intelligent the other participant is, and no matter what language is used, since every kind of expression causes associations in the human mind.

    With this test, literally a system which can do everything a human can - except predict some particular associations humans get from specific phrases in specific contexts, for reasons even they don't know - would not pass. Even if it solved every big problem we humans have not yet solved and explained the reasons for its own goals, it would not pass the Turing test, since the evaluator could identify it as the machine.
  • Why aliens will never learn to speak our language
    However, if there's anything in favor of communication still being possible, it is the shared environment. Arguably hydrogen on Earth would be identical to hydrogen anywhere else in the universe. In fact this assumption has been used for an attempt at alien communication - the golden record on the Voyager spacecraft.
    TheMadFool

    Yes, and that is exactly a form of communication that doesn't use mirroring - a logical language which is based on definitions. Definitions don't need mirroring since they are defined the same regardless of what you associate with them. And that's what our communication with aliens and AIs will be like - making definitions and saying things simply by those definitions. It's much slower, and the things we don't know how to define by purely logical means become nearly impossible to talk about.
  • Why aliens will never learn to speak our language
    The very definition of 'alien' is in terms of the respective entity's tendency or capacity to mirror and predict our stimulus-responses for its own survival. The Turing 'Test' is a misnomer; for the test constitutes our natural definition of intelligence. If we cannot interpret an entity's stimulus-responses as acting in accordance with the basic organising principles of human culture, then as far as we are concerned, the entity isn't worthy of consideration. So to a large extent, the ability of aliens to speak 'our language' is presupposed in our definitional criteria.
    sime

    I very much disagree that our definition of general intelligence should be associated with the Turing test. That would be the same as defining "a car" to be only those things that are nearly exactly like a Volkswagen Beetle, since fluent human speech requires one to be able to reproduce human programming almost exactly. Even a human with all of our quirks and flaws removed could not speak with us fluently - he would be confused by most of the associations we make with our language, and would only be capable of using it through definitions and logic for the most part, which is not mirroring. Are you saying that a human without our quirks and flaws is not intelligent?

    If a system is capable of gathering information about its environment and making predictions based on it, and if it is capable of independently creating complex technologies and solutions based on that information and just generally doing the things we humans are capable of with our "intelligence", then it is intelligent whether or not it can speak a human language.
  • Why aliens will never learn to speak our language
    If I understand correctly then your "mirroring" argument depends on the multitude of ways information may be transmitted through any given medium of communication. I'm not qualified to comment on that but if evolution is true then there must be some logic to how our senses, input/output devices, evolved. We can look at the communication systems in humans, presumably the most intelligent lifeform, and examine how they evolved. A fair estimate would be that such systems evolved to maximize information carrying capacity e.g. color discerning ability gives us access to more information than just light-shade contrast vision.

    If that's the case then, evolution on other planets would also evolve in a similar enough way that would make communication systems of all life in the universe converge rather than diverge. This would mean that, contrary to your argument, "mirroring" ability among lifeforms in the universe may not be so radically different to each other to render communication impossible.
    TheMadFool

    The problem isn't that evolution doesn't cause things to converge on large scales. The problem is that evolution never creates any kind of "ultimate" or "perfection". Different ways of processing information can work better in different environments. They can work differently if things defined very early in evolution, like the replication mechanisms of cells, are different. They can simply be non-optimal vestiges from earlier evolution. And many times different systems can all work just as well - making no difference for evolution, but still changing the particularities of how associations work in your species (assuming that the species even uses association-based communication).

    Combine this with the fact that our language requires extremely similar associations to happen. When basic things like "shape" start to mean fundamentally different things simply because one species uses colour to define contrasting lines and another uses brightness (which isn't an inferior method in many cases), it doesn't require most of the programming to be different. Even a fraction of a percent difference in programming can reliably cause enormous changes in the end result. This is the reason why complex mirroring requires such precise similarity from the systems that use it.
  • Why aliens will never learn to speak our language
    Perhaps of some relevance is our ability to "understand" animals. I don't know how much we've progressed in the field of animal communication, but there are some clearly unambiguous expressions, e.g. a dog's growl, that we seem to have understood. As to whether we can extrapolate animal-human communication to human-alien exchanges is an open question.
    TheMadFool

    A good point. But we have to understand that our evolutionary history with animals is not just similar - it is for the most part the exact same history. And it is also a history in which we share a common environment, where evolution has simply created ways for different species to communicate things like "danger", "no threat" or acceptance to each other.

    Because of this, we can "understand" animals and communicate certain simple things with them through this simple interspecies mirroring. (I'm not even sure if this is mirroring, since the same associations don't seem to come from similar programming, but from having been learned from other sources.) Nothing as complex as our language could be used between things with such major differences in programming, though.
  • Why aliens will never learn to speak our language
    If I understand what you mean by "mirroring", it plays an important part when the subject of discussion is privileged in some sense, i.e. there exists a certain association that isn't common knowledge and it's that particular link you want to convey. Under such circumstances communication can break down, but these are rare occasions; otherwise how on earth are people able to make sense of each other? Civilization would collapse if this problem were just a tad more common.
    TheMadFool

    The mirroring isn't just about associations which can be learned. It's also about the way things are just processed by the brain of your species. For example, if your brain processes visual information by prioritizing colours first and then using that information to find lines of contrast, the resulting associations in your system will differ from systems which use brightness to find lines of contrast. And when the millions of these systems that create a human mind end up creating our particular associations, the probability that an alien will have a system capable of learning similar enough associations for the mirroring we use in our language is almost zero.

    This is why we can't just define the correct associations for a given word or phrase and expect an AI or another species to be able to use it. The underlying programming that ends up choosing those associations in any given context in humans is as complex as the human mind itself, and therefore we don't even know it ourselves. And therefore we can't teach it.
  • Why aliens will never learn to speak our language
    I'm actually talking about fluent conversation here - like what would pass a Turing test. But I do agree that, while it would always be slow and awkward, we could use the pre-existing words and phrases to communicate about things common to us. A lot of time would be spent dealing with all the extra wrong associations and unmeant ways of approaching the common subjects, but some of our associations would be common and useful. For anything complex, it would be much more useful to use something without mirroring.
  • Why aliens will never learn to speak our language
    Well put, but I don't see why all aliens must lack the ability of human-like mirroring. Some aliens may have had experiences and developments in their evolutionary past that are similar to human experiences and developments. This is what you need to show is impossible. I don't think this can be shown in an a priori manner.
    god must be atheist

    I'm not actually saying that no aliens are similar enough to use mirroring with us - just that us coming into contact with those particular, very rare aliens would be so improbable that in practice it will never happen. Although, I guess we would never have to be in live contact in order for them to learn our language. Still, I think those aliens would be so rare that even the recordings we leave will never be discovered by them.

    The text even specifies that "unless the aliens are programmed almost exactly like us".
  • Why aliens will never learn to speak our language
    This also means that the Turing test is a bad test for general intelligence. It just tests whether or not something is programmed in a way that can closely replicate human programming.

    It's probably an inefficient and unnecessarily complex way to achieve general intelligence in a transistor-based system to try to replicate the programming of a particular neuron-based system that we know has a huge number of unnecessary quirks and flaws. This is why we should remove human speech from the list of things we are trying to make our general-intelligence AIs able to do.
  • Emotions and Ethics based on Logical Necessity
    This system is not about outright solving the is-ought gap, since I think that it is unsolvable. This system is about bypassing it by providing a functional equivalent to an objective moral system: a system that gives everyone a necessary personal goal, the choice of which doesn't need to be justified since it is not a choice. It is all about whether this goal of "stability" is choosable.

    If you have a goal, you do have a functional equivalent to a moral system, since you can choose all your actions according to that goal. To me, the biggest problem of ethics has always been the justification of the choice of goals. No other system I have encountered has solved or justifiably bypassed the problem of justifying the choice of goals.
  • Emotions and Ethics based on Logical Necessity
    Yes, he would botch both
    khaled

    I guess that is true of anything one lacks expertise in, regardless of how simple the subject is, but I would still give an advantage to personal satisfaction, since we naturally have some expertise in it.

    What if my personal satisfaction requires shooting people?
    khaled

    Then you would be in a difficult situation with this system. Since achieving such a desire would be insanely hard without huge retaliations against your long-term satisfaction, your best bet would be to change your desires. This is a good example of how other goals and desires can be derived from this one goal, since they themselves affect how efficiently satisfaction can be achieved in practice.

    You can't just say: "According to this system I should do this simply because it's my desire!" The optimal solution according to this system can always be to change one's desire, or to ignore a desire that would hurt your general long-term satisfaction, instead of acting on it.

    Because of this, this system usually ends up with intuitively moral behaviour. Realistic exceptions are usually not cases of overwhelmingly strong insane desires, because these are rare and changeable. The exceptions happen in situations like: my own survival versus others' when resources are low. Or: I already am a dictator, and the people are going to rebel and kill me if I don't force them to be passive and unfree. Both are situations where long-term satisfaction is very uncertain, and therefore they should be avoided in this system.
  • Emotions and Ethics based on Logical Necessity
    This doesn't do so either. It says it's a necessity. That's all it does. That's different from "justifying".
    khaled

    I never said it justifies anything. It solves the problem of justifying one's choice of goals by bypassing it. Something that is not a choice doesn't need to be justified as a choice.

    That question is LITERALLY what a utilitarian would ask though
    khaled

    "Utilitarianism is a family of consequentialist ethical theories that promotes actions that maximize happiness and well-being for the majority of a population." - Wikipedia. My system is about personal satisfaction - similar, but still majorly different.

    Not anymore than
    khaled

    Seeking larger things like capitalism does require much more expertise than seeking one's own personal satisfaction, since one naturally has orders of magnitude more information about himself and the things that affect him than about larger things like capitalism. Not to mention that a single person and his satisfaction are much simpler than anything that has to do with large groups, like capitalism. Are you really saying that a non-expert could make decisions about capitalism as good as his decisions about what makes him satisfied? People try to achieve personal satisfaction all their lives. They are relatively trained in it even if they never thought of it. Therefore this system is very easy to turn into practical solutions.
  • Emotions and Ethics based on Logical Necessity
    That is the same with your system. I understand that you begin from a premise that's true by definition, but the problem with moral systems is rarely that the system is unjustified but more so that it's hard to go from ANY vague premise to concrete reality.
    khaled

    This I disagree with. At least I haven't encountered any other system that solves the problem of justifying your choice of goals. And I also disagree that this system is even that hard to use for making concrete decisions. In practice this system simply makes people ask the question: "What actions would make me the most satisfied in the long run?" Since people have much more information about themselves than about the world as a whole, such a question is much easier to turn into concrete actions than something like utilitarianism. Even a moral system with a very specific goal like "increase capitalism" would be more difficult for the average person to implement, since what would actually increase capitalism is a question that needs expertise.

    And if you are talking about moral systems that give rules not based on circumstance, I don't even know where to find those these days. People seem to accept that our moral intuition makes mistakes from time to time, and that even religious teachings should be applied based on the circumstance. So while the exact nature of human "stability" and the absolute optimal way to achieve it are complex questions to answer, this system is very simple for any single person to turn into somewhat functional concrete decisions. And I'm not aware of any moral system which does anything better than that. Optimal solutions need expertise and effort; somewhat working solutions are what is expected from the average person.

    "What would make me the most satisfied in the long run?" is not even a new difficult thing to teach to people. People are doing it already. This system simply solves the problem of justifying that choice of a goal.
  • Emotions and Ethics based on Logical Necessity
    khaled is absolutely right: your "system" doesn't help us make decisions, it just claims to make an objective statement about decision-making in general. It is not a system of ethics, because it cannot prescribe any course of action.
    SophistiCat

    In that very same response, khaled says that my system prescribes a course of action for every circumstance - just that it does not give simple universal courses of action like "be charitable" regardless of circumstance.

    But this "derivation" will be different from person to person and from circumstance to circumstance...that people seek stability will not tell you anything beyond that. It won't tell you whether or not the best way to achieve stability is through self centered behavior, charity, communism, capitalism or whatkhaled

    The fact that the optimal course of action is different depending on circumstance is true about every consequentialist moral system. This system is not abnormal in that regard. The two things this system is abnormal in are:
    1. it gives a personal goal for everyone, not universal goals
    2. it avoids the problem of justifying the choice of this goal by showing that it is unchoosable and therefore doesn't need to be justified as a choice.

    But it seems that you will not accept that people have this unchoosable, logically necessary goal. That's all right. I hope that you at least understand what this system is trying to say, even if you're not convinced.
  • Relative Information Model: An argument for life after death
    For me the interesting question is this: is the form preserving the information, or is the information preserving the form? Bearing in mind that the same information can be preserved, probably in an infinite variety of ways.
    Pantagruel

    In this model, the closest equivalent to a "form" is a possibility, which is a logically possible state that can be. In this model, information limits off/makes untrue certain "forms", and randomness allows/makes possible certain "forms". Therefore, in this model, untrue forms are contained/preserved in information and possible forms are contained/preserved in randomness.

    And while the "information" we use in our practical lives can be contained in variety of ways (computers, books, our memory, etc), this is the "relative information" this model describes. It is information that is derived from a form and is thus relative to that form and is thus contained in that form. The non-relative information and randomness are the fundamental substances of this model that contain everything else.
  • Relative Information Model: An argument for life after death
    Hmmhhh... I must admit that I don't really understand what you are trying to say. This model doesn't really deal with the nature of truth itself. It deals with the nature of existence.
  • Emotions and Ethics based on Logical Necessity
    But this "derivation" will be different from person to person and from circumstance to circumstance. Without some guidance or rules (which you can't justify) this system will end up with unjustifiable conclusions as well.khaled

    Well, the derivations can be justified from circumstance to circumstance. It's just complicated, not undoable. Nothing forces this system to make generalizations without acknowledging them to be generalizations, and thus not always true. It's still better, in my opinion, than an arbitrary goal system or a moral system which is just based on intuition or on some "moral shoulds" which are chosen but not justified. Especially since very simple but not vague moral rules have been shown by history not to work very well. There are always exceptions where even the most moral-sounding rules actually cause more misery. For example, "give to the poor" might be a good moral principle in certain situations, but not all. And the only rules that always end up causing nice things are very vague, like "increase happiness".

    Actually, even intuitive morality is just as complex as mankind itself. What we call "moral" is pretty much always dependent on the context and on what actually causes things like suffering and happiness in any given situation. Therefore the only thing that makes this system more complicated is the extra layer that the optimal solution is dependent on the point of view. It's the same kind of jump we made in physics when we moved to a relative theory of time.

    Sure it has the "functional equivalence" of objective moral systems in that it tells you what to do but it's so vague it doesn't actually help. It's like trying to extract some morality out of cogito ergo sum for example.khaled

    The details of what this goal system gives to any person are an empirical, scientific question, since by definition they are not a logical necessity: they depend on the person and the circumstance. But since I can't demonstrate the empirical evidence for every circumstance in this forum, I only try to demonstrate the logically necessary starting point, which I can demonstrate.

    And with that starting point, combined with one's knowledge of one's circumstance, I at least have been able to create for myself a pretty complete set of values without too much effort.

    Just because a system is very complicated doesn't mean that it is unhelpful. Politics is complicated, yet we have been able to make useful simplifications and generalizations for it, and for pretty much every other complicated thing we have encountered - including well-established moral systems like utilitarianism, which is almost as complicated and vague as my goal system, but you are not complaining about that, are you?
  • Emotions and Ethics based on Logical Necessity
    "Choosing goals to achieve goals is an unarbitrary way of choosing ones values and desires based on a logically necessary goal of achieving ones goals which does not have to be justified since it's not a choice and it is a functional equivalent to objective moral systems in that it allows an unarbitrary way of making value judgements."

    Is that an understandable way of explaining this goal-system of mine?
  • Emotions and Ethics based on Logical Necessity
    "Act such that you ensure you consume the largest amount of cheese possible" is another system that does that. I don't think that would pass as a moral system thoughkhaled

    No, that doesn't make unarbitrary value judgements, since the whole premise is arbitrary. The whole point of my system is that its premise is not arbitrary. It is based on a goal we have no matter what we choose. That cheese system is a perfect example of an arbitrary goal you simply chose. Therefore you have to justify your choice, which you can't do.

    No it isn't. This isn't a normative statement. Check this: http://www.philosophy-index.com/terms/normative.php . This is a statement of fact. Some things are indeed desirable to person A.... So what? An answer to the "So what" is a normative statement. Ex: Thus A should seek those desirable things. "Some things are desirable to A" is akin to "The sky is blue", it is a statement about a property of an object
    khaled

    Well, then we disagree about what a subjective normative statement means, but that is alright... probably my fault, since I'm not very familiar with that term. It is still irrelevant to my point, though. If you agree that we have a logically necessary goal, then you should also agree that it does not need to be justified like other goals. No matter how obvious and trivial you say it is, the fact that it does not need to be justified as a choice is not obvious to most people. And the fact that you can derive all your other goals and desires by choosing them, insofar as they are choosable, to serve it and its optimal achievement is also not obvious to most people. Therefore it is not an unhelpful realization, and it does serve the function of making unarbitrary value judgements for a person. It still makes objective morality functionally unnecessary.
  • Emotions and Ethics based on Logical Necessity
    By your reasoning, our willful actions can never be wrong. If you do something in fulfillment of your desires, that moves you closer to a state in which you will no longer have those desires and thus no motive to perform any further action - a stable state.
    SophistiCat

    Except your willful actions can still be wrong. If you make an action that makes you temporarily more stable but decreases your stability in the long run, you have objectively made an error according to this system. For example: you hurt the group, but now the group makes sure you face more severe consequences. Or: you being secretly a thief caused your social system to lose trust in one another, and now you have to live in an environment where everything social is harder and more complicated. Even your personal desire might be wrong if it is too hard to achieve, or otherwise sabotages your ability to achieve stability. That's why I think this system does promote intuitively moral choices in most situations.

    And since we know right and wrong, we know that your theory has to be wrong just for that reason alone.
    SophistiCat

    No one has ever demonstrated any objective right and wrong to be a thing. We know that we have those feelings and intuitions, and we know evolutionary reasons for having them. For example: acting too selfishly in an intelligent group/tribe makes the group band together against you, making you lose no matter how powerful you are; therefore having unselfish feelings, and feelings that follow the group's norms, gives an evolutionary advantage.

    To me, Hume's guillotine demonstrates that objective morality is a non-thing. But we still have a functional need for a system by which we can make unarbitrary value judgements. So, from a previous post:

    So, let's explain it from a completely different perspective. Let's start with a subjective value system which does not have a logically necessary subjective goal:

    1. person A has a goal
    2. therefore some things are desirable to person A (subjective normative statement)

    The only problem with that system is that there is no way to choose one goal over another, since the goals themselves define what is better than what. The desirability of things is based on the person's arbitrary choices.

    The only new thing this system adds to that is a logically necessary goal.

    1. person A has a logically necessary goal
    2. therefore some things are necessarily desirable to person A (subjective normative statement)

    In this system the desirability of things is not based on the person's choices, and therefore the desirability of those things for him does not need to be justified.
    Qmeri

    It's irrelevant whether this is called a moral system. It is simply a system with which one can make unarbitrary value judgements. Therefore it is at least a functional equivalent to the most important function of a moral system.
  • Emotions and Ethics based on Logical Necessity
    Okay, you are trying to make me either create objective goals (things you use the word "should" for) or say that this is not a moral system.

    As I have said many times, it's irrelevant whether this is called a moral system. It is simply a system with which one can make unarbitrary value judgements. Therefore it is at least a functional equivalent to the most important function of a moral system.

    So, let's explain it from a completely different perspective. Let's start with a subjective value system which does not have a logically necessary subjective goal:

    1. person A has a goal
    2. therefore some things are desirable to person A (subjective normative statement)

    The only problem with that system is that there is no way to choose one goal over another, since the goals themselves define what is better than what. The desirability of things is based on the person's arbitrary choices.

    The only new thing this system adds to that is a logically necessary goal.

    1. person A has a logically necessary goal
    2. therefore some things are necessarily desirable to person A (subjective normative statement)

    In this system the desirability of things is not based on the person's choices, and therefore the desirability of those things for him does not need to be justified.

    I'm not trying to make objective normative statements. I don't think that's possible, since I think Hume's guillotine demonstrates that impossibility. I think objective morality is demonstrably a non-thing, and that's why we are incapable of making logical arguments for it. We still have a functional need for an unarbitrary system to make value judgements. This system provides that.
  • Emotions and Ethics based on Logical Necessity
    The conclusions don't change but I never agreed with the conclusion in the first place
    Your argument as I understand it is:

    1- People seek change until they achieve stability
    2- Therefore people should seek change until they achieve stability (which I think is a non sequitur)
    3- Therefore we have a system of morality that bypasses Hume's law

    You can't reach 3 if 2 is a non sequitur
    khaled

    So it still seems that we disagree on the nature of the word "should". To me, your "moral should" is the same as "according to this objective goal, so and so should". And the "should" in my system is "according to this subjective goal, so and so should". Therefore my argument is:

    1- People try to achieve change until they achieve stability by logical necessity
    2- Therefore according to this goal people should achieve change until they achieve stability
    3- Therefore we have a system of morality that bypasses Hume's law

    But even if we grant you that there is some kind of "moral should" that is not just the same as an objective goal, this system still makes "moral should systems" functionally unnecessary, since it gives an unarbitrary way to make all possible choices with just subjective normative statements.

    At least to me, the only philosophical problem in just going with your personal goals was that you couldn't justify any goal better than another, which made your choice of goals arbitrary. With this system, one subjective normative statement becomes not a choice, and therefore not arbitrary, and all other subjective normative statements can be derived from that one.

    Objective morality was a nice idea that solved a functional need to evaluate choices. Hume realized that it couldn't be justified, and now it doesn't need to be, since with this system there is no longer any functional need for it.
  • Emotions and Ethics based on Logical Necessity
    We are physically and mentally programmed to react to internal and external signals: hunger, pain, thirst, cold, fear, desire (internal) and light, sound, taste... the 5 senses (external).
    These signals are stimuli. If we receive too little stimuli we get bored.
    People in solitary confinement can go crazy through the lack of stimuli.
    People can go crazy in super quiet rooms.
    People go crazy if they don't get enough human contact. All these stimuli are necessary to stay sane.
    ovdtogt

    I agree... the human mind is programmed to work in a very specific environment. Lack of stimuli would be such a huge change to that environment that it would be weird if we didn't go crazy.
  • Emotions and Ethics based on Logical Necessity
    And being in a state you don't want to change doesn't mean that you are just lying in your bed doing nothing. When having sex many people are in a state they don't want to change. Even if their body is doing something, their mind is not trying to change what the body is doing and is just happy with what is happening.
  • Why mainstream science works
    And what happens to the alternative sources of information that actually start creating anti-error and anti-corruption systems for themselves? They become like science, and eventually they become part of mainstream science.
  • Why mainstream science works
    Most of the sources of information that present themselves as alternatives to mainstream science are all about creating a community and marketing. And most of their marketing is simply about "how mainstream science is so bad" and "how there is so much more beyond that", instead of actually creating reliable anti-mistake and anti-corruption systems for themselves to make themselves more reliable.
  • Emotions and Ethics based on Logical Necessity
    For me achieving everything I want and being in a state I don't want to change has meant boredom, not happiness. Happiness requires change.
    ovdtogt

    Boredom is a goal evolution created for us (probably) so that we don't stop trying to do things and thereby waste our resources. You are not in a stable state if you are bored. You have an internal conflict in you. And usually boredom eventually wins and makes you behave with intent.
  • Why mainstream science works
    Well, I guess we disagree about the degree to which mainstream science is corrupt. But from the practical point of view, where the important question is "What is the most reliable source of information available to a non-expert?", mainstream science wins hands down. Not because it's anywhere near perfect, but because every other large source of information does things so much worse.
  • Emotions and Ethics based on Logical Necessity
    Well, that is not exactly what I meant. To me, achieving everything you want and being in a state you don't try to change is still happiness. Human psychology seems to be very bad at this, though, since it usually just comes up with new goals when the old ones have been achieved. I guess evolution makes systems that never stop trying, because those kinds of systems usually win even if they are not the happiest.
  • Emotions and Ethics based on Logical Necessity
    So, a being that has achieved everything it tried to achieve, and therefore has nothing that makes it behave with intent since everything is already the way it desires, doesn't have goals? That would mean achieving a goal is losing a goal. Well, I guess it's no longer a goal if it has already been achieved.
  • Emotions and Ethics based on Logical Necessity
    Although, we probably should define what a goal is. (I think we pretty much have defined everything else pretty precisely.)

    We seem to agree that a system which tries to achieve something has a goal. "Trying to achieve something" can be divided into two categories: "trying to achieve change" and "trying to achieve no change". To me, trying to achieve change is not having achieved your goal, and trying to achieve no change is having achieved one's goal. At least with that definition, stability, which is the state you have achieved when you try to achieve no change, is the only possible state where one has achieved his goal.

    Do you have a good definition for a goal? The traditional way of defining it as something "desirable" is insanely vague.
  • Emotions and Ethics based on Logical Necessity
    I also showed how "going to stability" isn't a logical necessity either. "Unstable things try to change their state" =/= "Unstable things try to be stable"
    khaled

    Okay, let's acknowledge that at any given time, the only logically necessary goal an unstable system has is the goal of achieving a change in its current state. But because our intelligence allows us to extrapolate - to see that in any unstable state we have not achieved our goals - we can show that the achievement of stability is the optimal solution to this problem, although not a logically necessary goal itself.

    But since this is a moral system whose purpose is to show that there is a logically necessary goal (unstable systems are trying to achieve a change in their state), and that the optimal way of solving the problem of not having achieved one's logically necessary goal in any unstable state is achieving stability, all the conclusions stay the same. Although, I do agree that there is a nuanced difference.
  • Emotions and Ethics based on Logical Necessity
    You say that the goal of every person is to achieve stability. But when we unpack this sentence, it turns out that by "stability" you mean nothing other than fulfillment of a goal. So once the obscure language is peeled away, it turns out that what you said was a simple tautology: your goal is your goal is your goal. Great! Thanks for making that clear.
    SophistiCat

    Stability was defined precisely, although I do agree that the text has other things in it that are open to interpretation. A stable state is simply a state of a system that the system doesn't try to change, i.e. a state that doesn't change without outside influence. Instability is the opposite of that. And by those precise definitions, an unstable system is trying to achieve a change in its current state by logical necessity, which is a goal by most definitions and therefore a logically necessary one. Not just "your goal is your goal".

    At least for most people "your goal is your goal" does not give the same ideas as "trying to achieve a change in an unstable state is a logically necessary goal that isn't a choice". "Your goal is your goal" does not demonstrate any logically necessary goals for anyone, which is the main point of this theory.
  • Why mainstream science works
    Then I know the kind of reply I’ll get, “it’s the best thing we have” or “the best we can do”, no it’s not, these flaws could be fixed if only people cared to listen more and idolize Science less. So I’ll make a thread about that, until then I should probably stop replying to these kinds of posts venerating Science.
    leo

    While I do agree that we could always do better, I disagree that the people who do science idolize science. Not that this has been studied, but all the scientists and potential scientists I know are fully aware of the flaws in the system and consider it a high priority to solve them. Removing corruption completely is just a very hard thing to do. Scientists are working hard on it all the time, which is why mainstream science keeps getting better, why it is the best we have, and why it will continue to be the best we have for the foreseeable future.
  • Emotions and Ethics based on Logical Necessity
    I also showed how "going to stability" isn't a logical necessity either. "Unstable things try to change their state" =/= "Unstable things try to be stable"
    khaled

    Yes yes, and I'll get back to that after a good night's sleep and some thinking.