The cultural invariant is the concept <five>, not what is counted — Dfpolis
Agreed. Which merely begs the question: from where did such a cultural invariant arise? It must be a condition of all similarly constituted rationalities, n'est-ce pas? All that is counted, and the labels assigned to each unit of substance in the series of counting, are immediately dismissed. What is left, both necessarily and sufficiently enabling a thoroughly mental exercise? It is nothing but the pure, a priori concepts, thought by the understanding alone, arising from the constitution of the mind: the categories of quantity (plurality), quality (reality), relation (causality) and modality (existence). — Mww
the concept of car alone is insufficient to justify the truth of the consequent (a guy will die). The synthetic requirement for an outstanding force is also necessary. — Mww
That’s what I’m talking about! Odd, though: you acknowledge that which we know applies to all reality, yet balk at the realization that they are the ground of all empirical exercise. Like counting. — Mww
"Thus everything that is discovered is first and finally empirical, i.e., revealed." That is how I read it. — tim wood
For your argument to stand, you have to define empiricism idiosyncratically and in a way that itself "proves" your case, in short, begs the question. And at the same time destroys its common meaning. — tim wood
I read it - you - as having the a priori being just a case of the a posteriori, a subset, a species. I argue that they're different animals. It is as if you wished to characterize people as apes. In evolutionary terms, yes, but not now. Not without violence to all the terms in use. — tim wood
and that requires something already operative in the intentional/logical order.
Thus, intentional being is ontologically prior to material being. — Dfpolis
In opposition to Sartre's "existence precedes essence"? But I'm not asking for argument here, either. — tim wood
It is "ontologically" I request you briefly define. In particular and more simply, that ontological priority is not to be confused with temporal priority, yes? — tim wood
The challenge is to recapitulate for the rest of us in perhaps five sentences or less, the main point(s) of this thread. (Or to treat this challenge contemptuously, which in fact it may deserve!) — tim wood
We learn by abstraction from experience. — Dfpolis
Hmmmm, yes. I see. I see you’re talking about learning, I’m talking about understanding. — Mww
how do you suppose culturally differentiated systems find a commonality in their respective analysis? What is the same for a child here and now arriving at “5”, and a medieval Roman child arriving at “V”? — Mww
One reason to believe would be, the world of experience satisfies some prerogatives that belong to a priori truths, re: one doesn’t need the experience of a severe car crash to know a severe car crash can kill him. — Mww
But general a priori truths have nothing whatsoever to do with experience (hence the standing definition), but are sustained by the principles of universality and necessity, for which experience can never suffice, re: two parallel lines can never enclose a space. I think it’s more significant, not that we do know some truths a priori, but that we can. — Mww
So, something like aristotelian realism about universals? — aporiap
I'm not familiar with terms like 'notes of comprehension' or 'essential notes'. — aporiap
You say that logical distinction is predicated on the fact that intentional objects like concepts are different from materiality not ontologically but by virtue of not sharing these notes of comprehension. — aporiap
I mentioned in the post that it poses a problem for programs which require continual looping or continual sampling. In this instance the program would cease being an atmospheric sampler if it lost the capability of iteratively looping, because it would then lose the capability to sample [i.e. it would cease being a sampler.] — aporiap
What do you mean they solve mathematical problems only? There are reinforcement learning algorithms out now which can learn your buying and internet surfing habits and suggest adverts based on those preferences. There are learning algorithms which -from scratch, without hard coded instruction- can defeat players at high-level strategy games, without using mathematical algorithms. — aporiap
Also I don't get the point about why operating on reality representations somehow makes data-processing unable to be itself conscious. The kind of data-processing going on in the brain is identical to consciousness in my account. It's either that, or the thing doing the data processing [i.e. the brain] is [has the property of] consciousness by virtue of the data processing. — aporiap
Take an algorithm which plays movies for instance. Any one iteration of the loop outputs one frame of the movie... The movie, here, is made by viewing the frames in a sequential order. — aporiap
But, if it can't be physical, and it's not data processing, what is the supposed cause?
I don't think the multiple realization argument holds here. It could just be something like a case of convergent evolution, where you have different configurations independently giving rise to the same phenomenon - in this case consciousness. E.g. cathode ray tube TV vs digital TV vs some other TV operate under different mechanisms and yet result in the same output phenomenon - an image on a screen. — aporiap
I am not in the field of computer science but from just this site I can see there are at least three different kinds of abstract computational models. Is it true that physical properties of the machine are necessary for all the other models described? — aporiap
Even if consciousness required certain physical features of hardware, why would that matter for the argument since your ultimate goal is not to argue for the necessity of certain physical properties for consciousness but instead for consciousness as being fundamentally intentional and (2) that intentionality is fundamentally distinct from [albeit co-present with] materiality. — aporiap
I actually think my personal thought is not that different to yours but I don't think of intentionality as so distinct as to not be realized by [or, a fundamental property of] the activity of the physical substrate. My view is essentially that of Searle but I don't think consciousness is only limited to biological systems. — aporiap
I don't understand why a neuron not being conscious but a collection of neurons being conscious automatically leads to the hard problem. — aporiap
Searle provides a clear intuitive solution here in which it's an emergent property of a physical system in the same way viscosity or surface tension are emergent from lower-level interactions - it's the interactions [electrostatic attraction/repulsion] which summatively result in an emergent phenomenon [surface tension]. — aporiap
Well the retinal state is encoded by a different set of cells than the intentional state of 'seeing the cat' - the latter would be encoded by neurons within a higher-level layer of cells [i.e. cells which receive iteratively processed input from lower-level cells], whereas the raw visual information is encoded in the retinal cells and the immediate downstream area of early visual cortex. You could have two different 'intentional states' encoded by different layers of the brain or different sets of interacting cells. The brain processes in parallel and sequentially. — aporiap
Okay but you seem to imply in some statements that the intentional is not determined by or realized by activity of the brain. — aporiap
I would say intentional state can be understood as some phenomenon that is caused by / emerges from a certain kind of activity pattern of the brain. — aporiap
Of course the measurables are real and so are their relations- which are characterized in equations; but the actual entities may just be theoretical. — aporiap
I was trying to say that introspection is not the only way to get knowledge of conscious experience. I'm saying it will be possible [one day] to scan someone's brain, decode some of their mental contents and figure out what they are feeling or thinking. — aporiap
The more accurate thing to say is that there are neurons in higher-level brain regions which fire selectively to seemingly abstract stimuli. — aporiap
That seems to account for the intentional component no? — aporiap
What mechanism is the child using to relate a word he hears to an object he sees, in a system of quantitative analysis, that doesn’t have an a priori component? How does he understand exactly what he’s doing, as opposed to simple learning by rote? What do I say to my child, if after saying, “count this as one, these as two.....”, he asks, “what’s a two?” — Mww
I would say these states are correlated with awareness, or even that they are awareness looked at in an objective, as opposed to a subjective, way. — Janus
We are informed by what we see and our reasons for saying what we do about what we see are on account of what we see, not on account of those objective processes, of which we are completely unaware until we have understood some science of optics, visual perception and neuroscience. — Janus
I suppose counting could be construed as an intellectual operation, in as much as I am connecting an a priori representation of quantity to spatially distinguishable objects. On the other hand, I don’t agree that seeing is a physical operation, in as much as an object impressed on a bunch of optic nerves can be called seeing. Is it merely convention that the intellect is required to call up an internal object to correspond to the impression, in order to say I am in fact seeing? — Mww
No, I have been saying they are correlated; which obviously means they are not unrelated. What I am saying is that the relationship is not causal. — Janus
(I think "presented" would be a better term here). — Janus
awareness of the states is not the same as the states — Janus
If the states are preconceptual then they cannot serve as reasons for action. — Janus
My brain will represent these facts in a way that can inform me that I am hungry; however, it cannot force me to turn my attention to, to become aware of, this intelligible representation. — Dfpolis
I think this is an example of anthropomorphic thinking. — Janus
Your mind may "represent" the facts or it may not; I don't think it is right to say that the brain "represents" anything. Often you will simply eat without being aware of any reason to do so, but of course it is possible to think something like "I am hungry, I should eat something". — Janus
I'm not claiming that the brain is a deterministic machine; it may well be an indeterministic organ, but the point is that there is no "I" that is directly aware of neural processes such that it could direct them. — Janus
So, the succession of brain states is determined by nature, not by ourselves, and thus, as far as we are concerned, it is a deterministic process. — Janus
It is much more rational to say that my decision to eat now causes the brain to activate neural complexes encoding what there is to eat now. — Dfpolis
This is where the category error comes in. Your decision to eat now has its own correlated brain state from which the "neural complexes encoding what there is to eat now" ensues causally. This is a deterministic process (as far as we are concerned because we are not directly aware of it and cannot direct it). — Janus
But, your decision to eat now gives you reason to seek what there is to eat now. These are two different ways of understanding what is going on; the first in terms of causes, the second in terms of reasons. — Janus
I think the two are co-arising and co-dependent. In other words, the "zero point field" or "quantum foam" or "akashic field" or "implicate order" or whatever you want to call it, cannot be without there being a correlated material existence. — Janus
Still, the fact that we know and can affect physical reality shows that, unlike mathematics and poetry, they are dynamically linked. — Dfpolis
To my mind you are still thinking dualistically here. We are 'part and parcel" of the physical world and the informational world; I would say there is no real separation; and dynamism abounds but it is not ultimately in the form of "links" between things which are separate or separable. — Janus
Experience falsifies the claim if I’d said “reason’s sole domain is to *force* thinking correctly”. A set of logical rules doesn’t come with the promise of their use, only that we’re better off if we do. — Mww
counting does not depend on what is counted — Dfpolis
Why isn’t this just like “seeing does not depend on what is seen”? Seeing or counting is an actual physical act, and mandates that the objects consistent with the act be present. Now, “the ability to see or to count does not depend on what is seen or counted” seems to be true, for I do not lose my visual receptivity simply because my eyes are closed. Otherwise, I would be forced into the absurdity of having to learn each and every object presented to sensibility after each and every interruption of it.
Are you saying counting and the ability to count are the same thing? — Mww
The categories are the same for Kant as they were for Aristotle. My mistake if I got the impression you were a fan of Aristotle, hence I didn’t feel the need to define them. — Mww
What is cosmogenesis and who is the authority for it? What is ideogenesis and who is the authority for that? — Mww
As an example, your reasons for doing something or thinking something are not intelligible in terms of neural processes. — Janus
You think what you do for reasons, neural processes do not cause you to think the way you do, even though neural processes are arguably correlated with your thinking. — Janus
we cannot parse any relationship between causes and reasons, because the former is predicated on determinism and the latter on freedom; neither of which can be understood in terms of the other. — Janus
We might be able to give an intelligible account of the succession of neural states, and although they may be understood to be in a causal series, they cannot be meaningfully mapped as causes onto the successive phases of the movement of thought in a way that explains a relationship between the physical succession of causes and the intentional succession of associations and reasons. — Janus
Are you thinking of cosmogenesis or ideogenesis? — Dfpolis
I'm thinking of cosmogenesis. — Janus
The point is that being distinct ways of thinking, any attempt to unite them breeds confusion. — Janus
I haven't said that the two processes, the intentional and the physical, are identical. I have said they are correlated, and that each has its own respective account which is unintelligible in terms of the other. — Janus
Whether the intentional is dependent on the physical or the physical on the intentional is ultimately an unanswerable question. — Janus
It's not a matter of "keeping them separate"; they are separate. — Janus
Yes, it is, because a priori knowledge derives from universality and necessity, which Hume’s empirical grounds, with respect to cause and effect, do not and can not possibly afford. — Mww
(No, not literally unthinkable, for reason has no power to not think. Reason’s sole domain is to enable thinking correctly, which means understanding does not confuse itself with contradictions.) — Mww
The data of pure reason are categories, without which reason and indeed all thought, is impossible
Reason does not conclude, that being the sole domain of judgement. While judgement is a part of the total faculty of reason, it is improper to attribute to the whole that which properly belongs to the particular function of one of its parts. In this much I grant: without categories reason has no means to, and therefore cannot, derive transcendental principles. — Mww
These are two descriptions of the one process. From a phenomenological perspective we can say that something about the tree caught your attention, and to stop and look at it, which in turn triggered associations which led to you having a series of thought about it. — Janus
The point is that it is a category error to say that the physical and physiological process cause you to think certain thoughts, because it is other thought and memory associations which cause that. — Janus
The point is that they are two different types of analysis best kept separate, and confusion and aporias often result when talk of causation operating across the two kinds of analysis is indulged in. — Janus
They're empirically supported — aporiap
If for every intentional state, there is a corresponding physical state and vice versa, then it could be said, as Spinoza does, that they are the same thing seen from two different perspectives. If this is right then to say either that physical states cause intentional states or that intentional states inform physical states would be to commit a category error. — Janus
Which is PRECISELY the error Kant points out regarding Hume’s characterization of the principle cause and effect. — Mww
a principle being grounded in pure reason, as are all principles whatsoever, absolutely **must** have its proof also given from pure reason. — Mww
Kant’s argument wasn’t that there IS a proof per se, but rather no empirical predicates at all can be attributed to a possible formulation of it. — Mww
Kant’s argument was that the thesis of which Hume was aware (a priori judgements do exist), having been considered, was summarily rejected (slave of the passions and all that happy crappy) because it wasn’t considered **as it ought to have been**. In other words, he didn’t consider it the right way. — Mww
I shall not insult your intelligence by informing you the human cognitive system is already in possession of a myriad of pure a priori principles of the kind Hume failed to address, first and foremost of which is, quite inarguably, mathematics. — Mww
And as a final contribution, I submit there is no logical reason to suppose cause and effect should lend itself to being differentiated between kinds, with all due respect to Aristotle. — Mww
Isn’t a proposition where the subject and predicate describe the same event and contain the same information a mere tautology? — Mww
It’s not that the relationships are contingent; it’s that the instances that sustain a principle governing them are. If cause and effect is an intelligible relationship prior to our knowledge of its instances, doesn’t its very intelligibility mandate such a relationship be necessarily a priori? — Mww
I mean to assert that concepts and intentions exist and are distinct from their material instances and yet to then say these things are somehow still of same ontological type [i.e. physical] as physical objects seems difficult to reconcile [what makes them physical if they're not composed of or caused by physical material?]. It just seems like an unsubstantiated assertion that they are ontologically the same. — aporiap
Once you make the implicit assumption they are ontologically distinct then it becomes clear that any interaction between intentional states and physical substance serves as a counterargument to their being distinct from materiality [since material and nonmaterial have no common fundamental properties with which to interact with each other (charge; mass; etc)]. — aporiap
Intentional states inform physical states but I mentioned before [and I think this is important] that this is always by virtue of a physical-material mechanism. — aporiap
Dean Radin and Roger Nelson (1989) reviewed 832 experiments by 68 investigators in which subjects were asked to control random number generators, typically driven by radioactive decay. They subjected the results to meta-analysis, a method for combining data from many experiments. While control runs showed no significant effect, the mean effect of subjects trying to influence the outcome was 3.2 x 10^-4 with Stouffer’s z = 4.1. In other words, subjects controlled an average of 32 of every 100,000 random numbers, and this effect is 4.1 standard deviations from pure chance. The odds against this are about 24,000 to 1.
Radin and Diane C. Ferrari (1991) analyzed 148 studies of dice throwing by 52 investigators involving 2,592,817 throws, and found an effect size (weighted by methodological quality) of 0.00723 ± 0.00071 with z = 18.2 (1.94 x 10^73 to 1). Radin and Nelson (2003) updated their 1989 work by adding 84 studies missed earlier and 92 studies published from 1987 to mid-2000. This gave 515 experiments by 91 different principal investigators with a total of 1.4 billion random numbers. They calculated an average effect size of 0.007 with z = 16.1 (3.92 x 10^57 to 1).
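As a quick arithmetic check (my addition, not part of the thread), the "odds to 1" figures quoted with these z-scores follow from the two-tailed tail probability of a standard normal distribution:

```python
import math

def odds_against_chance(z: float) -> float:
    """Odds-to-1 against pure chance, from a two-tailed normal p-value."""
    p = math.erfc(z / math.sqrt(2.0))  # two-tailed tail probability for z
    return 1.0 / p

# z = 4.1 (Radin & Nelson 1989) gives roughly the quoted 24,000 to 1;
# z = 16.1 (Radin & Nelson 2003) gives odds on the order of 10^57 to 1.
print(f"{odds_against_chance(4.1):,.0f}")
print(f"{odds_against_chance(16.1):.2e}")
```

Note that `math.erfc` stays accurate far into the tail, where naively computing `1 - cdf(z)` would round to zero.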
Bösch, Steinkamp, and Boller (2006) did a meta-analysis of 380 studies in an article placing the experiments in the context of spoon bending and séances. They excluded two-thirds of the studies considered. Nonetheless, they found high methodological quality and a small but statistically significant effect.
The 'seeming' ontological jump from intentional state [not-physical] to physical change in muscle activity is what I argue never happens because there must ultimately be some physical nature to that intentional state in order for it to lead to a physical change. — aporiap
And in fact it is currently made sense of in terms of physical mechanisms [albeit coarse grained and drafted at present] - as a hypothetical mechanism: some web of 'concept-cells' [higher level cells in a feedforward neural circuit that invariantly fire in response to a very specific stimulus or object class] are activated in conjunction with reward circuitry and a motor-command sequence is initiated. — aporiap
Right but all of this goal directed decision making is ultimately mediated by physical processes happening in the brain. It also doesn't need to be determinate to be mediated by physical process. — aporiap
I don't know biophysically how these types of things are encoded in a distributed, non localized fashion or in a temporal pattern of activity that doesn't have spatial dimension or etc so I couldn't say they are one or the other but I guess I'd say they could be spatially decomposable. — aporiap
How do you define 'biophysical support'? What in addition to that support would you say is needed for a full explanation? — aporiap
the contexts are different but, again they are both [the invariance of the goal and the ball's deterministic behavior] explainable by physical processes - some neurons are realizing a [physically instantiated] goal which is influencing via [probabilistic] physical interactions some other set of neurons which are informing behavior via other [probabilistic] physical interactions. The ball is a simple physical system which is directly being impacted by a relatively deterministic process. — aporiap
I am making broad-band metaphysical assumptions of materialism and emergentism which implies I take things like 'valence' and 'concepts' to be materially realized in physical systems. — aporiap
Say you want a pizza. Pizza can be thought of as a circuit interaction between 'concept cells' [which - in turn - have activated the relevant visual, tactile, olfactory circuits that usually come online whenever you come into contact sensorily with pizza], particular reward-pathway cells, and cells which encode sets of motor commands. — aporiap
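The hypothetical mechanism can be caricatured in code (purely my own toy sketch; every unit name here is invented for illustration, not an actual neuroscience model): activating a 'concept cell' spreads activation to its associated sensory, reward, and motor units.

```python
# Invented toy circuit: edges run from a unit to the units it drives.
CIRCUIT = {
    "pizza_concept": ["visual_pizza", "olfactory_pizza", "reward", "motor_plan"],
    "motor_plan": ["motor_command_seq"],
}

def spread(start, circuit):
    """Breadth-first activation spreading from one unit through the circuit."""
    active, frontier = {start}, [start]
    while frontier:
        unit = frontier.pop()
        for nxt in circuit.get(unit, []):
            if nxt not in active:
                active.add(nxt)
                frontier.append(nxt)
    return active

# Activating the concept recruits sensory, reward, and motor units together.
active = spread("pizza_concept", CIRCUIT)
```

This is only meant to make aporiap's picture concrete: on this view, "wanting pizza" just is the joint activity of such a web of units.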
Fair enough, but since, as you point out, we do not know the laws of nature, how do we know they obey the Principle of conservation of energy? And is the Principle of conservation of energy, a Principle of physics or of nature? — Inis
Also, I'm not sure the Principle of conservation of energy even tells you how to measure whether energy is conserved or not. — Inis
How is the builder building identically being the building built any different than the ball hitting identically being the hit ball? — Mww
all empirical relationships concerning cause and effect are contingent with respect to human knowledge, which implies that any absolute necessity, that is to say, anything whose falsification is impossible, must arise from a priori conditions. — Mww
But energy is not conserved by the Principle of energy conservation. It is conserved due to the dynamics undergone by a system obeying the laws of physics. — Inis
what is your idea of Hume’s thesis that Kant was bullheaded about, with respect to “time-ordered causality”? — Mww
I’d guess A.) you’re talking about the effect on our knowledge of a thing being antecedent to the causality of the thing’s impression given to us by sense, or, B.) you’re talking about the simultaneity of the external impression on sense and the internal knowledge of the object so impressing. — Mww
If I understand you correctly, you mean the observer by ''knowing subject'' and you consider it different from the observed - the ''known subject''.
Why? — TheMadFool
What other concept makes you feel that way? If I understand you correctly you don't consider material (scientific) explanations adequate to explain the ''knowing subject''. — TheMadFool
Why and how is the ''knowing subject'' different from the ''known subject''? — TheMadFool
What is interesting is the ''hard'' in The Hard Problem of Consciousness. Why didn't Chalmers use ''impossible''? — TheMadFool
Are you taking this a step further and claiming this is the IMPOSSIBLE problem of consciousness? — TheMadFool
So a datum (asymmetry or symmetry) is epistemically and ontically foundational? — Galuchat
This sounds like nominalism, is that correct? — aporiap
Why would the program not be conscious when running the first 5 steps of the algorithm? — aporiap
Why would it not simply lose consciousness when the program has reached the missing instruction, in the same way a computer program freezes if there is an error in a line of the code, and simply resume running once the code is fixed? — aporiap
The way this scenario is construed makes an issue for any kind of binary descriptor of a continually running algorithm [e.g. any sort of game, any sort of artificial sensor, any sort of continually looping program], not just specifically for ascribing consciousness to an algorithm. E.g., say you call this algorithm an 'atmospheric sampler'. Take one instruction out: now it is no longer an atmospheric sampler algorithm because it cannot sample. Let it run until past the point of the removed instruction, now repair the instruction, and it has become an atmospheric sampler seemingly a-causally. — aporiap
The implicit assumption is that the complexity of an algorithm is what generates consciousness and that complexity is reduced by reducing the number of instructions. — aporiap
This assumes data processing can only happen in a turing machine like manner — aporiap
Perhaps this is why say, a neuron, which is a single processing unit is not capable of consciousness whereas a conglomerate of neurons is. — aporiap
Why make the dichotomy between "natural" and "psychological" objects? — aporiap
even in the physical sciences we don't have access to 'things in themselves' anyway, — aporiap
The point is that these fall within the range of natural objects albeit of a lesser degree as opposed to something wholly different so as to involve a completely different way of knowing or learning about them. — aporiap
Those words still mean what the author intended even if no one ever reads what he wrote — Harry Hindu
Just because some effect isn't noticed, or part of some awareness, doesn't mean that the cause never happened. — Harry Hindu
You are basically saying that meaning only arises in the relationship between matter and ideas. I'm saying there is no distinction that you have been able to coherently show between them and that they are both causal and can establish the same kind of relationships - meaningful/causal. Meaning is the relationship between cause and effect. — Harry Hindu
If they can't be imagined, then how do you know what they mean? How do you know that you're thinking of <indenumerable infinity> or of <existence> if they don't have any imagery that the words refer to? How do you distinguish between <indenumerable infinity> and <existence> in your mind (other than seeing the words on a screen)? — Harry Hindu
Again, if words don't refer to some mental image, then what do they refer to? — Harry Hindu
I think your problem is that you are over-complicating things. — Harry Hindu
what do the scribbles, "unicorn", refer to? — Harry Hindu
Another contradiction! Unicorns don't exist, yet all there is to a unicorn is what we imagine! — Harry Hindu
What does your unicorn look like? How do you know you're thinking of a unicorn? — Harry Hindu
How do you know what those strings of symbols mean? — Harry Hindu
Actually, I'd go so far as to say that categories only exist in minds. Therefore the only kind of category is a mental category, (or a concept). — Harry Hindu
So why place them in the category, "ideas"? — Harry Hindu
Because they have something (not everything) in common: their whole being, all they can do, is refer. — Dfpolis
And so can matter. I already went over this. — Harry Hindu
So you have yet to explain the difference between "matter" and "ideas". — Harry Hindu
how are intents different than matter if in both cases there is a constant and something that changes? — Harry Hindu
Being actually predetermined is not the same as actually existing. — Dfpolis
Sure it is. It is your perception of time that makes you see the future as something that doesn't exist yet. — Harry Hindu
Your mind stretches those causal relationships — Harry Hindu
First, the word, "unicorn" does not just evoke <unicorn>, the word itself is evoked by <unicorn>. As I have been saying, words and ideas are both causes and effects of each other, and each carries information about each other. — Harry Hindu
Second, I have no idea what you mean by "imagined/potential unicorns". There is the word, "unicorn", pictures of unicorns, and the idea <unicorn> (a mental image of a unicorn), and the causal relationship between them. That's it. An imagined unicorn is just another name for the mental image of a unicorn. — Harry Hindu
While there are categories, <category> is not a fundamental concept. An instance is in a category because its objective nature, its intelligibility, is able to evoke the concept defining the category. If beagles were not able to evoke the concept <dog> they would not be categorized as dogs. So concepts are logically prior to categories -- and concepts refer to all of their potential instances, not just those that we have experienced or those that actually exist at any given time. — Dfpolis
This is just more confusing. This is just a bunch of unnecessary use of terms in a long-winded explanation. — Harry Hindu
All I am saying is that ideas have causal power. — Harry Hindu
Does an idea of a unicorn exhaust a unicorn like the idea of a horse exhausts a horse? — Harry Hindu
No, that isn't an example of my restatement of your claim.
It would be more like we have 100 different things with no relationship at all. Everything would be made of a completely different element and with a different function. Using your explanation of "essences" and "existence" there is no possibility for the existence of categories. — Harry Hindu
This would mean that the idea of a horse and the idea of a unicorn have different essences because they both do different things. — Harry Hindu
So why place them in the category, "ideas"? — Harry Hindu
Can you please try to stay focused. That isn't what I asked. I don't think you're actually taking the time to read what I'm writing. You seem to only want to push your view. — Harry Hindu
If two things do the same thing then they would have the same essence. Does the idea of grass eating grass have the same essence as the idea of a goat eating grass? — Harry Hindu
And I already went over this with you where you talked about how you change your intent and I pointed out how this is no different than how an apple changes color, but you didn't respond to it. — Harry Hindu
No, the present state is one of the universe's actual predetermined states. — Harry Hindu
I think you have this backward. Time is a measure of change, and change occurs because what was merely potential becomes actual. Determinism is irrelevant to the reality of change. — Dfpolis
Time is the stretching out of the causal relationships that make up the universe. A causal relationship is a change (cause and effect). — Harry Hindu
Dfpolis, thank you for the excellent post! — aporiap
You explicitly state in the previous sentence the separation is [by substance?] mental. How would you categorize 'mental separation' if not as an ontological separation? — aporiap
1. Neurophysiological data processing cannot be the explanatory invariant of our awareness of contents. ....
Well I think this is a bit 'low resolution'/unspecific. It's definitively clear neurophysiological data alone isn't sufficient for awareness but that doesn't mean that a certain kind of neurophysiological processing is not sufficient - this is the bigger argument here. — aporiap
The missing-instruction argument shows that software cannot make a Turing machine conscious. If software-based consciousness is possible, there exists one or more programs complex enough to generate consciousness. Let’s take one with the fewest possible instructions, and remove an instruction that will not be used for, say, ten steps. Then the Turing machine will run the same as if the removed instruction were there for the first nine steps.
Start the machine and let it run five steps. Since the program is below minimum complexity, it is not conscious. Then stop the machine, put back the missing instruction, and let it continue. Even though it has not executed the instruction we replaced, the Turing machine is conscious for steps 6-9, because now it is complex enough. So, even though nothing the Turing machine actually does is any different with or without the instruction we removed and replaced, its mere presence makes the machine conscious.
This violates all ideas of causality. How can something that does nothing create consciousness by its mere presence? Not by any natural means – especially since its presence has no defined physical incarnation. The instruction could be on a disk, a punch card, or in semiconductor memory. So, the instruction can’t cause consciousness by a specific physical mechanism. Its presence has to have an immaterial efficacy independent of its physical encoding.
One counterargument might be that the whole program needs to run before there is consciousness. That idea fails. Consciousness is continuous. What kind of consciousness is unaware the entire time contents are being processed, but becomes aware when processing has terminated? None.
Perhaps the program has a loop that has to be run through a certain number of times for consciousness to occur. If that is the case, take the same program and load it with one change – set the machine to the state it will have after the requisite number of iterations. Now we need not run through the loop to get to the conscious state. We then remove an instruction further into the loop just as we did in the original example. Once again, the presence of an inoperative instruction creates consciousness. — Dennis F. Polis -- God, Science and Mind, p. 196
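The mechanical premise of the thought experiment above can be made concrete: a Turing machine's behaviour over its first N steps is unchanged if we delete a rule that is never consulted during those steps. The following is a minimal sketch; the machine, its rule table, and the step counts are illustrative assumptions of mine, not Polis's construction.

```python
# Hypothetical illustration of the "missing-instruction" setup: a tiny
# Turing machine whose trace over its first few steps is identical with
# or without a rule that has not yet fired.

def run(rules, tape, state="A", pos=0, steps=6):
    """Run a Turing machine; rules map (state, symbol) -> (write, move, next_state)."""
    trace = []
    cells = dict(enumerate(tape))
    for _ in range(steps):
        sym = cells.get(pos, 0)
        if (state, sym) not in rules:   # missing instruction: machine halts
            break
        write, move, state = rules[(state, sym)]
        cells[pos] = write
        pos += move
        trace.append((state, pos, write))
    return trace

# Illustrative rule set (assumed, not from the source text).
full = {
    ("A", 0): (1, +1, "A"),
    ("A", 1): (0, +1, "B"),
    ("B", 0): (1, -1, "A"),   # this rule is not consulted until step 5
}
# Remove the rule that will not be used during the first four steps.
pruned = {k: v for k, v in full.items() if k != ("B", 0)}

tape = [0, 0, 0, 1, 0]
# For the first 4 steps the traces coincide exactly.
print(run(full, tape, steps=4) == run(pruned, tape, steps=4))  # True
# Run longer and the machines diverge, since the pruned one halts.
print(run(full, tape, steps=6) == run(pruned, tape, steps=6))  # False
```

This only demonstrates the uncontroversial computational half of the argument (behavioural equivalence up to the first use of a rule); whether the rule's mere presence could ground consciousness is the philosophical claim being debated.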
In natural science we care about what Ptolemy, Brahe, Galileo, and Hubble saw, not the act by which the intelligibility of what they saw became actually known. Thus, natural science is, by design, bereft of data and concepts relating to the knowing subject and her acts of awareness....
I don't think the first sentence ... leads to the conclusion in the second sentence.
Empiricism starts with defining a phenomenon -any phenomenon. Phenomena can be mental or physical or can even be some interaction between mental and physical ... — aporiap
So connections are in fact being attempted between what's traditionally been considered a 'mental field' e.g. psychology and 'physical' fields e.g. biophysics. — aporiap
To be orthogonal is to be completely independent of the other [for one to not be able to directly influence the other]. — aporiap
... the fact that this is a unidirectional interaction [i.e. that only physical objects can result in changes to mental states and not the other way around without some sort of physical mediator] gives serious reason to doubt any fundamentality of the mental field - at least to me it's clear it's an emergent phenomenon out of fundamental material interactions. — aporiap
I'm unsure why intentions [my understanding of what you mean by intention is: the object of a mental act - judgement, perception, etc] are always considered without parts. I think, for example, a 'hope' is deconstruct-able, and [at least partly] composed of a valence component, a cognitive attitude of anticipation, a 'desire' or 'wanting' for a certain end to come about, the 'state of affairs' that defines the 'end', and sometimes a feeling of 'confidence'. I can also imagine how this is biophysically instantiated [i.e. this intentional state is defined by a circuit interaction between certain parts of the reward system, cognitive system, and memory system]. So what you have is some emergent state [the mental state] composed of interacting elements. — aporiap
I'm still forming my thoughts on this and this part of your post but I'll give you a response when I think of one. — aporiap
Data being asymmetries, are you referring to anything other than symmetry? — Galuchat
So, I consider the related general definitions of information, message, communication, code, and data to constitute a foundational concept which applies to both material (physical) and intentional (mental) domains. — Galuchat
Okay, the string, "unicorn" represents, or symbolizes (both are synonyms of "express") the idea <unicorn>. You seemed to contradict yourself by saying that universals refer to potential instances. — Harry Hindu
Instead of "potential instances" - which seems like a loaded term, I'd use the term "category". Unicorns, cats, dogs and planets are categories. We put things (Uni) in mental boxes, or categories (unicorns) - Uni the unicorn. — Harry Hindu
You said they have different essences because they can do different things. Every thing does something different, which means that each idea is a different essence, and each material thing is a different essence. — Harry Hindu
There is no distinction between what is ideas and what is matter if everything is different from each other. — Harry Hindu
Goats eat grass, but grass doesn't eat grass, so they would be different essences. — Harry Hindu
But wait a second, can you imagine grass eating grass (the idea of grass eating grass)? Would that then make it the same essence as the idea of the goat eating grass? — Harry Hindu
Every thing has a different essence and existence. — Harry Hindu
Each idea would have a different existence and essence. So what? What does that have to do with the difference between what an idea is and what matter is? You've simply explained the difference between things, not the difference between the categories "idea" and "matter". — Harry Hindu
It seems to me that one's essence defines one's existence. It seems to me that they are inseparable, as one's essence/existence is a relationship with everything else, so in a sense you did redefine "thing" as "essence/existence". In a deterministic world, that relationship would be deterministic, with no potentialities. — Harry Hindu
"Potentialities" are the result of our perception of time, as if the future is yet to happen and still isn't determined. — Harry Hindu
You still haven't addressed the differences between "idea" and "matter". — Harry Hindu
Sort of like how one has to select from a set of possible options. But there is only one meaning to the message - the source's intent. What did the sender intend when they wrote the message? How you interpret the message depends upon your experiences. Try to understand a message in a different language. How could you ever hope to come up with even a set of possible messages when looking at a different language? You'd have to learn the language, just as you have to learn the language of your sensory impressions. — Harry Hindu
Now you say that we ought to distinguish intentional from non-intentional, using the method of physics, which has no capacity to even recognize the intentional. — Metaphysician Undercover
How are you going to convince a physicalist, who believes that there is no aspect of reality outside this physical part of reality, without referring to this part of reality which is outside? — Metaphysician Undercover
You cannot assume that the physicalist will accept your assumption that there is something outside the purview of physics, because this contradicts the physicalist premise, fundamentally. — Metaphysician Undercover
As I already pointed out, it is you that is equivocating - using terms like, "matter", "ideas", "being" and "essences" without any clear explanation of what those things are. — Harry Hindu
So when you use the string of scribbles, "unicorn", what do those scribbles refer to? If it refers to your idea of a unicorn, then "unicorn" is an idea of a unicorn. — Harry Hindu
As for animals and ideas, they have different essences because they can do different things. A goat can eat grass, but the idea of a goat can't. — Dfpolis
Then the grass would be a different essence than the goat. All you have done is redefine "thing" as "essence", and that throws a wrench into your explanation of "matter" and "ideas". — Harry Hindu
Each idea does different things and would therefore be a different essence. How would you know that you have an idea of a horse as opposed to a unicorn, if those ideas didn't do different things? — Harry Hindu
However, thanks for your clarification. From that, it appears we agree on the nature of Shannon information. Where we disagree, is that your original comment was "considering the message materially, as Shannon did". — Galuchat
Shannon defined information as communicated code (which can apply to physical, biological, and semantic processing), not as "the reduction of logical possibility" (which can only apply to semantic processing). — Galuchat
The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages (Italics added). — Claude Shannon -- A Mathematical Theory of Communication
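Shannon's point in the quotation above can be put numerically: the information carried by a message depends only on the set of possible messages it was selected from, never on what it means. A brief sketch, where the message set and function names are my own illustrative assumptions:

```python
# Sketch of Shannon's engineering view: selecting one of N equally
# likely messages conveys log2(N) bits, regardless of the messages'
# semantic content.
from math import log2

def self_information(p):
    """Bits conveyed by selecting an outcome of probability p."""
    return -log2(p)

# Four equally likely messages: each selection carries 2 bits,
# whether the messages are poems or random strings.
messages = ["alpha", "beta", "gamma", "delta"]
p = 1 / len(messages)
print(self_information(p))  # 2.0
```

This is why Galuchat's distinction matters in the exchange above: the quantity computed here applies to any physical or biological signalling process, while "reduction of logical possibility" presupposes an interpreter.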