Something has to exist to be accepted. — ovdtogt
If fallibility is accepted then fallibility exists and that is something. — ovdtogt
The way I see it, the first two absolute, fundamental truths are:
1. Something exists; which leads us to also be certain that
2. Something is aware of existence. — Possibility
Recall that in the Turing Test, a human evaluator has to decide, purely on the basis of reading or hearing a natural-language dialogue between two participants, which of the participants is a machine. If he cannot determine the identities of the participants, the machine is said to have passed the test. Understood narrowly as referring to a particular experimental situation, yes, the Turing Test fails to capture the broader essence of intelligence. But understood more broadly as an approach to the identification of intelligence, the Turing Test identifies and defines intelligence pragmatically and directly in terms of behavioural propensities that satisfy human intuition. The test therefore avoids metaphysical speculation as to what intelligence is or is not in an absolute sense. — sime
However, if there's anything in favor of communication still being possible, it is the shared environment. Arguably, hydrogen on Earth would be identical to hydrogen anywhere else in the universe. In fact, this assumption has been used for an attempt at alien communication: the Golden Record on the Voyager spacecraft. — TheMadFool
The very definition of 'alien' is in terms of the respective entity's tendency or capacity to mirror and predict our stimulus-responses for its own survival. The Turing 'Test' is a misnomer, for the test constitutes our natural definition of intelligence. If we cannot interpret an entity's stimulus-responses as acting in accordance with the basic organising principles of human culture, then as far as we are concerned, the entity isn't worthy of consideration. So to a large extent, the ability of aliens to speak 'our language' is presupposed in our definitional criteria. — sime
If I understand correctly, then your "mirroring" argument depends on the multitude of ways information may be transmitted through any given medium of communication. I'm not qualified to comment on that, but if evolution is true, then there must be some logic to how our senses, our input/output devices, evolved. We can look at the communication systems in humans, presumably the most intelligent lifeform, and examine how they evolved. A fair estimate would be that such systems evolved to maximize information-carrying capacity, e.g. color-discerning ability gives us access to more information than just light-shade contrast vision.
If that's the case, then life on other planets would also evolve in a similar enough way that the communication systems of all life in the universe would converge rather than diverge. This would mean that, contrary to your argument, the "mirroring" abilities of lifeforms in the universe may not be so radically different from each other as to render communication impossible. — TheMadFool
Perhaps of some relevance is our ability to "understand" animals. I don't know how much we've progressed in the field of animal communication, but there are various clearly unambiguous expressions, e.g. a dog's growl, that we seem to have understood. Whether we can extrapolate animal-human communication to human-alien exchanges is an open question. — TheMadFool
If I understand what you mean by "mirroring", it plays an important part when the subject of discussion is privileged in some sense, i.e. there exists a certain association that isn't common knowledge, and it's that particular link you want to convey. Under such circumstances communication can break down, but these are rare occasions; otherwise, how on earth would people be able to make sense of each other? Civilization would collapse if this problem were just a tad more common. — TheMadFool
Well put, but I don't see why all aliens must lack the ability of human-like mirroring. Some aliens may have had experiences and developments in their evolutionary past that are similar to human experiences and developments. This is what you need to show is impossible. I don't think this can be shown in an a priori manner. — god must be atheist
Yes, he would botch both — khaled
What if my personal satisfaction requires shooting people? — khaled
This doesn't do so either. It says it's a necessity. That's all it does. That's different from "justifying". — khaled
That question is LITERALLY what a utilitarian would ask though — khaled
Not any more than — khaled
That is the same with your system. I understand that you begin from a premise that's true by definition, but the problem with moral systems is rarely that the system is unjustified; it's more that it's hard to go from ANY vague premise to concrete reality. — khaled
khaled is absolutely right: your "system" doesn't help us make decisions, it just claims to make an objective statement about decision-making in general. It is not a system of ethics, because it cannot prescribe any course of action. — SophistiCat
But this "derivation" will be different from person to person and from circumstance to circumstance...that people seek stability will not tell you anything beyond that. It won't tell you whether or not the best way to achieve stability is through self centered behavior, charity, communism, capitalism or what — khaled
For me the interesting question is this: is the form preserving the information, or is the information preserving the form? Bearing in mind that the same information can be preserved, probably in an infinite variety of ways. — Pantagruel
But this "derivation" will be different from person to person and from circumstance to circumstance. Without some guidance or rules (which you can't justify) this system will end up with unjustifiable conclusions as well. — khaled
Sure it has the "functional equivalence" of objective moral systems in that it tells you what to do but it's so vague it doesn't actually help. It's like trying to extract some morality out of cogito ergo sum for example. — khaled
"Act such that you ensure you consume the largest amount of cheese possible" is another system that does that. I don't think that would pass as a moral system though — khaled
No it isn't. This isn't a normative statement. Check this: http://www.philosophy-index.com/terms/normative.php . This is a statement of fact. Some things are indeed desirable to person A... So what? An answer to the "So what?" is a normative statement, e.g.: Thus A should seek those desirable things. "Some things are desirable to A" is akin to "The sky is blue"; it is a statement about a property of an object — khaled
By your reasoning, our willful actions can never be wrong. If you do something in fulfillment of your desires, that moves you closer to a state in which you will no longer have those desires and thus no motive to perform any further action - a stable state. — SophistiCat
And since we know right and wrong, we know that your theory has to be wrong just for that reason alone. — SophistiCat
So, let's explain it from a completely different perspective. Let's start with a subjective value system which does not have a logically necessary subjective goal:
1. person A has a goal
2. therefore some things are desirable to person A (subjective normative statement)
The only problem with that system is that there is no way to choose one goal over another, since the goals themselves define what is better than what. The desirability of things is based on the person's arbitrary choices.
The only new thing this system adds to that is a logically necessary goal.
1. person A has a logically necessary goal
2. therefore some things are necessarily desirable to person A (subjective normative statement)
In this system the desirability of things is not based on the person's choices, and therefore the desirability of those things for him does not need to be justified. — Qmeri
The conclusions don't change, but I never agreed with the conclusion in the first place
Your argument as I understand it is:
1- People seek change until they achieve stability
2- Therefore people should seek change until they achieve stability (which I think is a non sequitur)
3- Therefore we have a system of morality that bypasses Hume's law
You can't reach 3 if 2 is a non sequitur — khaled
We are physically and mentally programmed to react to internal and external signals: hunger, pain, thirst, cold, fear, desire (internal) and light, sound, taste... the 5 senses (external).
These signals are stimuli. If we receive too few stimuli, we get bored.
People in solitary confinement can go crazy through the lack of stimuli.
People can go crazy in super quiet rooms.
People go crazy if they don't get enough human contact. All these stimuli are necessary to stay sane. — ovdtogt
For me, achieving everything I want and being in a state I don't want to change has meant boredom, not happiness. Happiness requires change. — ovdtogt
I also showed how "going to stability" isn't a logical necessity either. "Unstable things try to change their state" =/= "Unstable things try to be stable" — khaled
You say that the goal of every person is to achieve stability. But when we unpack this sentence, it turns out that by "stability" you mean nothing other than fulfillment of a goal. So once the obscure language is peeled away, it turns out that what you said was a simple tautology: your goal is your goal is your goal. Great! Thanks for making that clear. — SophistiCat
Then I know the kind of reply I'll get: "it's the best thing we have" or "the best we can do". No, it's not; these flaws could be fixed if only people cared to listen more and idolize Science less. So I'll make a thread about that; until then, I should probably stop replying to these kinds of posts venerating Science. — leo
I also showed how "going to stability" isn't a logical necessity either. "Unstable things try to change their state" =/= "Unstable things try to be stable" — khaled