So it would seem that from the perspective of those presuppositions eliminativism is demonstrably wrong, but it does not seem to be demonstrably wrong in any definitive, unprejudiced way. — Janus
I would presume nothing of the kind. You might infer anything you like, but again, the act of inference is a judgement. Have a look at "Do You Believe in God, or Is That a Software Glitch?". — Wayfarer

Well, then it is a "judgment" which can be accomplished via machine-learning algorithms. I suggest that you check out the article I linked to in my discussion with Galuchat for an example of a primitive sort of "mind reading" accomplished via brain scanning. (In any event, the "presumption" was made under a physicalistic/supervenient picture of the brain, which was the subject of our discussion.)
it is a "judgment" which can be accomplished via machine-learning algorithms. — Arkady
when you divide the brain into bitty bits and make millions of calculations according to a bunch of inferences, there are abundant opportunities for error, particularly when you are relying on software to do much of the work. This was made glaringly apparent back in 2009, when a graduate student conducted an fMRI scan of a dead salmon and found neural activity in its brain when it was shown photographs of humans in social situations. Again, it was a salmon. And it was dead.
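The error the salmon study dramatizes is the multiple-comparisons problem: test enough voxels at an uncorrected threshold and some will look "active" by chance alone. A minimal sketch in Python, using purely synthetic noise rather than data from any actual scan, shows the effect and how a standard correction suppresses it:

```python
# Minimal simulation of the multiple-comparisons problem behind the
# "dead salmon" result: run a t-test on thousands of pure-noise "voxels"
# and count how many look "active" at p < 0.05 without correction.
# Synthetic data only; no relation to any actual fMRI study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels = 100_000   # number of independent tests ("voxels")
n_scans = 20         # measurements per voxel

# Pure noise: by construction there is no signal anywhere.
data = rng.normal(loc=0.0, scale=1.0, size=(n_voxels, n_scans))

# One-sample t-test per voxel against a true mean of zero.
t, p = stats.ttest_1samp(data, popmean=0.0, axis=1)

alpha = 0.05
uncorrected = np.sum(p < alpha)            # expect roughly 5% false positives
bonferroni = np.sum(p < alpha / n_voxels)  # Bonferroni-corrected threshold

print(f"'Active' voxels, uncorrected: {uncorrected} of {n_voxels}")
print(f"'Active' voxels, Bonferroni-corrected: {bonferroni}")
```

Run as written, on the order of five thousand pure-noise voxels clear the uncorrected threshold, while the corrected count is almost always zero; that is the same arithmetic that let a dead salmon appear to respond to photographs.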
Agree that we all have our biases, but you're losing me with this. Other than the prejudice that something ontic is (a topic for a different debate ... but you'll notice it is equally upheld by eliminativism), where is the prejudice in there ontically being a first person point of view (a first person awareness which is debating the issue of whether or not it exists)?
And again, due to there being a contradiction of reasoning, one cannot hold both eliminativism and there being an awareness aware of eliminativism at the same time and in the same way; therefore, at least one of the two is necessarily false. — javra
What about the illusions used by carnivorous plants, and by flowering plants to attract pollinators? Will you claim that the insects that are fooled are conscious agents? — Janus
The question is whether eliminative physicalism/ materialism denies the reality of information. — Janus
I was trying to get him to recognize and acknowledge his prejudices, — Janus
I put "judgement" in scare quotes because, insofar as this form of technological mind-reading relies on judgments at all, it is a type of judgment which can be carried out by machine. (The portion of the article you just quoted refers to software making inferences, I will remind you again!)I notice you have to place judgement in scare quotes, to allow for the obvious fact that computers don't make judgements at all. They compute outcomes, which are then judged. Case in point, from the article I cited: — Wayfarer
Interesting question! And I suppose the answer must be 'yes' - insects are indeed conscious agents, albeit simple ones. Can you think of an analogy in the mineral or inorganic domain? — Wayfarer
Do you think that principles are really 'prejudices'? — Wayfarer
From the eliminativist point of view the first person point of view is not ontic, but epiphenomenal. This is a form of monism; but it is not neutral monism. From the point of view of subjective idealism the physical or material is epiphenomenal and the subject is ontic. Neutral monism wants to say that the physical and the mental are not substantially different. The alternative is substance dualism. All these positions rely on grounding assumptions; so none of them are definitively demonstrable in the sense of being free of prejudice. — Janus
So you believe that insects do enjoy subjective experience, and make judgements and decisions? — Janus
Since only one of the two—eliminativism or non-eliminativism—can be true, which do you rationally find to be true (all biases as pertains to repercussions and concerns about them aside)? — javra
I would opt for non-eliminativism, but I am not going to pretend that my opting for it is free of prejudice; free of subjective feeling and intuition. I also acknowledge that it is possible that my prejudices, subjective feelings and intuitions are all epiphenomenal illusions; although of course I don't believe they are. — Janus
To say that insects 'enjoy' anything seems anthropomorphic to me. But they are subjects of experience, albeit primitive subjects of experience. — Wayfarer
An ontology described by eliminativism and the ontic presence of an awareness aware of such ontology are mutually exclusive ontic givens; they can’t both be ontic at the same time and in the same way — javra
This doesn't mean that fMRI is not useful - it's a clinical procedure, and invaluable in brain surgery and medicine. But what I'm criticizing is the notion that you can detect anything about the nature of meaning, or logic, or indeed thought, by using such an apparatus. So, no, the machine is not 'making judgements' - it is producing an output, which is then judged by human agents. — Wayfarer

Then your view is at odds with the evidence. I linked to a study demonstrating just the opposite of what you say. The study found that brain scans could detect what a subject was thinking based on the physical state of his brain. If this isn't detecting the "meaning" of thoughts (in terms of propositional content), then what would constitute such a demonstration? The fact that the machine's output is judged by human agents is irrelevant. (And, lest you think that I'm basing my position solely on one study, this is merely one of several such studies.)
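For a sense of what such "mind reading" studies typically involve: a classifier is trained on voxel activity patterns labeled by what the subject was shown or asked to think about, then scored on held-out trials. The following is a minimal sketch with synthetic stand-in data; the voxel counts, category labels, and signal strength are invented for illustration and do not reproduce any particular study's method:

```python
# Minimal sketch of an fMRI "decoding" analysis: train a classifier on
# voxel patterns labeled by stimulus category, then test it on held-out
# trials. Synthetic stand-in data; real studies use preprocessed scans.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_trials, n_voxels = 200, 500
# Hypothetical two-category design: 0 = "face" trial, 1 = "house" trial.
labels = rng.integers(0, 2, size=n_trials)

# Noise plus a weak category-dependent signal in a subset of voxels.
X = rng.normal(size=(n_trials, n_voxels))
X[:, :25] += labels[:, None] * 0.8

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out decoding accuracy: {clf.score(X_test, y_test):.2f}")
```

Accuracy well above the 0.5 chance level on held-out trials is the kind of result such studies report; whether that counts as detecting the "meaning" of thoughts is exactly what is in dispute here.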
I have no idea what you are talking about here. — Janus
But that was not what I asked you to provide an argument for, as you will soon see if you go back and read carefully. — Janus
Why not? Why cannot the intuition that awareness is ontologically different than physicality be a subjective epiphenomenal illusion? You haven't presented an argument for that yet. — Janus
The study found that brain scans could detect what a subject was thinking based on the physical state of his brain — Arkady
Even one successful experiment constitutes a demonstration that superluminal speeds can be achieved, and I would be compelled to revise my worldview accordingly. — Arkady
Must the "subjectivity" of an insect be anything non-physical over and above its sensitive physical and/or neural nature? Perhaps you believe it is, but is there any absolute reason why it must be so? — Janus
I mean, what exactly is an object anyway? — Janus
‘What is described by physics’ — Wayfarer
You can also say that the cause of the pigeon's behavior was its prior training (contrasting it with untrained pigeons). Or the fact that it was awake and hungry (as opposed to asleep or sated). Or the fact that it was there and not elsewhere. And we've only considered the pigeon as an agent or an organism; we could go further into the various mechanical or physiological causes, and so on. There seem to be so many different causes of the same event operating at the same time, one ought to wonder how it is that they don't clash with one another! But of course they don't. — SophistiCat
And that is precisely what the article that I linked to is criticizing. — Wayfarer

No: the article you linked to describes a problem with replication in certain types of studies (including those using fMRI), as well as false positives produced by the use of dubious software. It says nothing about the tout court impossibility of inferring the propositional or conceptual content of brain states from fMRI studies (or from brain imaging studies generally: fMRI is of course not the only such method).
Of course, given huge expertise, predictive algorithms, and the like, then an expert can deduce something about Subject X's brain patterns, based on that data. But, let's take that same expert, and say 'OK - put aside all reasoned judgement. Don't use your capacity for inference in assessing that data and all your expert knowledge of what such things mean. Now - what do you see?' And the answer is, they will see a graphic representation, an image. So they have to rely on the very thing they're attempting to explain, in order to explain the data they're seeing. And you can't evade the inevitable circularity involved in that. — Wayfarer

I'm sorry, but this paragraph makes no sense to me. Could you be more specific about where the circularity lies in inferring (or whatever your preferred verb is when computers do it) the propositional or conceptual content of a subject's thoughts from brain imaging data?
But this is a completely different kind of phenomenon to demonstrating velocity or mass or some other basic physical measurable attribute. Here what is being discussed is the basis of meaning, the nature of thought. So it's intrinsically a completely different kind of question to what can be measured in relatively simple terms. All of your arguments here simply must be question-begging, because they will always assume the very thing that needs to be proven - you can't argue about the nature of reason 'from the outside'. — Wayfarer

Again, I fail to see where the question-begging lies. My example was an analogy of our disagreement here; I am aware that measuring an object's velocity is a "completely different kind of phenomenon" from measuring brain states. My point was only that, problems with replication aside, even a single success constitutes proof of principle (in this case, proof of principle that mental states are physically realized by particular types of brain states, brain states which can be detected via neuroimaging in order to say with at least some reliability what the subject is thinking about).
Could you be more specific about where the circularity lies in inferring (or whatever your preferred verb is when computers do it) the propositional or conceptual content of a subject's thoughts from brain imaging data? — Arkady