Comments

  • We Need to Talk about Kevin
    I couldn't tell if that exchange between you both, @ArguingWAristotleTiff, @Sapientia, was sincere camaraderie, camaraderie laced with poison, or so steeped in irony it inverted itself.
  • Godel's incompleteness theorems and implications
    I think @andrewk did a thread on the old forum going through Godel's original proof, so he might have some good input here.

    I have a hazy understanding of Godel's theorems. There are two incompleteness theorems; the first one states, roughly:

    (1) For any consistent formal system F that allows the expression of arithmetic truths, there is a statement in F which can be neither proved nor disproved in F (i.e. F is incomplete).

    And the second one:

    (2) For any consistent formal system F that allows the expression of arithmetic truths, F cannot (syntactically, using only the elements and rules of F) prove F's consistency.

    Theorem 1 is proved by construction. Godel figured out a way to uniquely encode every element of the formal system F (its mathematical entities) as a number. Many steps in the proof later, he assigned such a number to the statement G = "This statement is unprovable in F". Then if F allows the derivation of G, F has derived something unprovable, so F is inconsistent. If F does not derive G, then G is true. To establish G's truth I think he had to go to a bigger system than F (think 'more arithmetical truths' than 'simple arithmetic'). This kind of makes sense, since he's trying to prove something about the system as a whole - specifically whether a statement of F's consistency implies G.
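
    Schematically (my own rough paraphrase of the first theorem's skeleton, not Godel's exact formulation):

        G \leftrightarrow \text{``G is not provable in F''}
        F \vdash G \;\Rightarrow\; F \text{ proves something it also proves to be unprovable} \;\Rightarrow\; F \text{ is inconsistent}
        F \nvdash G \;\Rightarrow\; G \text{ is true but unprovable in F, so F is incomplete}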

    Theorem 2 precisely concerns the aforementioned idea of 'having to go beyond the system to establish the system's overall properties'. Specifically, 'having to go beyond the system to prove the system's consistency', since the second theorem is 'A consistent formal system F (that contains simple arithmetic) does not allow the derivation of F's consistency within F'. But I don't have any intuitions about its proof since my model theory is pretty weak.

    What are the implications of Godel's theorems for mathematics? Well, when they came out they were a massive 'fuck you' to the Hilbert Program, which was a desire to axiomatize all mathematics. As collateral damage, it screwed over the idea of formalism in philosophy of math - since there are now mathematical truths which cannot be ascertained through string manipulation rules of very general axiomatic systems (like ZFC).

    What does it mean for the actual practice of mathematics? Well, that depends on the discipline. It has little to no consequence for applied mathematics, but big consequences in proof theory and mathematical logic. The interesting thing about the theorem for me is that the practice of mathematics, what it means to reason mathematically about mathematical entities, was largely unperturbed - though it did rain on the parade of having a 'complete axiomatic system of all mathematics', and was in essence a no-go theorem for that aspiration.

    I think this is because the desire for axiomatisation isn't removed by Godel's theorems; you still want to be as precise as you can about mathematical entities. But when you're familiar with the entities in a problem class in mathematics, you don't think in terms of syntactic operations in that class. This is further evinced by the majority of papers with proofs in them not providing a formally valid proof - just 'enough' of the proof that a skilled reader can construct it in their head.

    Also, the role of conjecture and heuristics in mathematics wasn't changed by Godel's theorem. People still publish conjectures and heuristics - statements of interesting problems and informal ways of thinking about them.

    The idea that Godel's theorem destroys mathematics in some sense is largely due to poor outreach about it. It's in the same ballpark as 'quantum weirdness' for generating misapprehensions about a science. I think if it were presented in its philosophical and historical context, and these presentations contained an assessment of the impact of the theorems on subfields outside of proof theory and model theory, it wouldn't be seen as a cataclysmic event for mathematics.

    I actually like it. For me it gives some kind of internal evidence within mathematics that mathematical progress takes on a quasi-empirical character, like Lakatos and others have argued.
  • We Need to Talk about Kevin


    Makes sense. Doesn't it feel disappointing to you that 'the odds were in our favour', so to speak, but we still failed to have many active women posters?
  • We Need to Talk about Kevin


    I think we had interest, given that there were private discussions of issues raised on the forum between lurker-women and some of the staff. I suppose the distinguishing feature is the trust they had for their friend, but no trust in the general environment.
  • We Need to Talk about Kevin


    I've probably done it more times than I can count, Wos, but I try hard not to use any of the implicit biases I have. I don't think the spirit of that message we received was an injunction to treat women differently; on the contrary, it was an injunction to try harder to treat them the same. Whenever we saw harassment and trolling in rape culture or patriarchy threads, we tried to treat it like any other case of moderator action.

    Regardless, we had a group of informed women who could've provided us invaluable insights into the reality of their treatment, but we couldn't get them to come forward even privately to suggest anything we could do as admins to get the number of active women posters up.
  • We Need to Talk about Kevin
    I used to administer a reasonably large and active alt-left discussion forum. Besides people posting pictures of Mario-Stalin and those who were probably paid to troll the place, one of the issues we had moderating - above and beyond trying to corral intellectual anarchists - was the inclusion of women in the group. Of the roughly 100 active posters at the group's peak, there were almost no women: somewhere between 5 and 10. This was quite surprising considering the background of the place; pretty much everyone was some kind of hard-line feminist, except the quickly dispatched alt-right trolls.

    10%, at best, of the active members being women in a place that was intended to be in some regards a safe space in which everyone agreed that women get the short end of the stick? Really?

    Most of the vocal disagreements and harsh treatment of women, person to person anyway, were largely to do with threads specifically on patriarchy and rape culture, which some (all?) of our Trotskyist and Leninist members unsurprisingly did not give a crap about.

    So, we took in the active women as part of the administration, if they wanted to come. Three did. Luckily they represented diverse views as well, and they advised advertising this and the numerous other democratizing features we adopted (voting on all moderation actions; post deletion requiring at least three votes; banning requiring a separate thread, a discussion, and a devil's advocate - which was usually me, if anyone cares) as a sticky on the forum.

    This lured out several women we knew were there and who read stuff - they spoke about it privately or referenced it to other moderators - but who didn't usually engage. After this we saw a brief surge of their engagement (i.e. two more women became active posters than before). Eventually they returned to lurking, and all of the women administrators eventually left due to individual dramas with other mods and admins. Having to debate about the debates was tiresome, but we wanted to be ideologically consistent with our user base. (One, however, left because she accused another poster of stalking her.)

    At this point, we asked the women members of the group for private messages on what we were doing wrong: why did they choose not to engage, despite demonstrably being interested and engaging in private debate with other members through PMs?

    Several of the women who had received moderator action for accusing people of being rapists, of course, said that we were doing something wrong by silencing their voices or disagreeing with them. These were summarily ignored (obviously after debating with other mods; we were inspired by Gosplan after all). Besides that, we received only one useful bit of feedback, which was this (paraphrased):

    Women are encouraged to be passive, and are often talked over by men without them realising it. Of course these men deny it later. This place is no different. Do you expect women to find it easy to express themselves and their ideas in a public forum? Especially when we know that there is always the chance that one of your more extreme members will not just dismiss our concerns, but attack us as ideological enemies?

    I've had no idea what to do about that since.
  • Will the "Gaussian Curve" make money obselete?


    Fundamentally all of those terms - Gaussian curve, power law, fat tails and heavy tails - describe shapes of curves. The curves they describe are models of various quantities. Say you aggregated all the heights of the people in America, recording each height and how frequent it is: you'd get something that looks like the Gaussian curve. You can see this in the link (when focusing on a particular gender anyway). If you've read something that mentions the 'mean' and 'variance' of a Gaussian curve: the mean tells you where the top of the big bump is - the maximum - and the variance tells you how spread out the bump is.

    Fat tails and heavy tails pretty much describe shapes like this income data from the UK. You can see that it looks 'much the same' as you go very far to the right on the graph. Fat tails and heavy tails, as descriptors of a curve or histogram (a bar chart of frequencies), describe the fact that the extremes (far to the right) don't decay very quickly in probability (more on this later).

    These objects are called distributions; they are like tables where you can look up, say, a height, and see what proportion of people have that height. If you take all the heights and look at what proportion of people have each one, you can construct a graph of the distribution. The points on the graph have an x-coordinate of 'height' and a y-coordinate of 'how likely is this height?'. Putting all of these points together gives the distribution. Being a Gaussian curve, being a power law, and having fat tails or heavy tails are properties of a distribution.

    Distributions with fat tails, like power laws, decay a lot slower than Gaussian distributions. What does this mean? If you look at the income histogram I linked, you can see the curve is essentially flat (but above the x-axis, not at zero height) for all income values past 98k. For the height distribution, you can see that the extreme values of height, say being more than 3 metres tall, are essentially zero. The difference between the mean (the central bump) of the male heights and 3 metres is a lot less than the difference between the mode of income (the central bit, the big bump, somewhere between 10k and 12k) and the far right of the graph, which goes up to 140k - with 'about the same' probability of being around 140k as being around 102k. This 'slowly vanishing (tending to zero) proportion' of people with massive incomes is essentially what it means to have fat tails. Power laws are, roughly speaking, a way to describe curves that have fat tails - like Gaussian curves are ways to describe nicely centred and not-too-variable distributions like heights.
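
    If you want to poke at the difference yourself, here's a minimal sketch (my own illustration, with made-up parameters rather than the linked datasets) of how fast the right tails vanish for a Gaussian versus a power-law (Pareto) distribution:

        # Minimal sketch: tail decay of a Gaussian vs a Pareto (power-law) distribution.
        # Parameters are made up for illustration, not fitted to the linked data.
        from scipy import stats

        gaussian = stats.norm(loc=175, scale=7)       # 'heights' in cm: mean 175, sd 7
        pareto = stats.pareto(b=1.5, scale=10_000)    # 'incomes': heavy-tailed power law

        # Probability of being far out in the right tail:
        print(gaussian.sf(200))      # P(height > 200 cm): ~2e-4, tiny
        print(gaussian.sf(300))      # P(height > 300 cm): effectively zero
        print(pareto.sf(100_000))    # P(income > 100k): ~0.03, still noticeable
        print(pareto.sf(1_000_000))  # P(income > 1m): ~0.001, rare but not negligible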
  • Good is an actual quality like water that we need to "drink"
    You're describing two common symptoms of mood disorders and claiming that they're fundamental to depression: flattened affect, and inhibited prospective and retrospective thinking.

    Flattened affect is the symptom that aligns with what you're saying, the levelling of positive and negative emotion, sometimes described as 'a loss of colour from the world'. It can manifest as a removal of pleasurable feelings, in which case it's called anhedonia. Or it can manifest as a diminished ability to feel both positive and negative feelings, which is termed flattened affect simpliciter. Indications for these are contained in diagnostic tests for depression and related disorders, see here and here.

    Inhibited prospective and retrospective thinking are diminishments of the ability to think about the future and the past. They can manifest as an inability to form a narrative out of your experiences, since they make it difficult to ascribe goals and ascertain their significance. Those are common symptoms of depression, but are still thought of diagnostically under the category of retardation. If you look at the second link - the Hamilton depression index - you'll see a more general conception.

    The 'decentering' of mood disorders in the abstract is useful in clinical practice, since it allows clinicians to assess the self-reports of a patient to tailor both psychiatric and psychological treatment. One result of this is that the self-reports of a depressed and anxious person to a psychiatrist can result in many different medication plans.

    A few examples: someone who has anxiety comorbid with depression may be prescribed beta-blockers and SSRIs, unless they have bipolar depression, in which case they may be given beta-blockers and a mood stabilizer. Depressed patients with pseudo-psychotic symptoms (pseudo since they have insight, e.g. they hallucinate but know the hallucinations are not real) may be given anti-psychotics instead of anti-depressants, or both.

    To a psychologist or psychotherapist, the self-reports can have a big say in what therapeutic approach they use. If someone is highly reflective and insightful about their own emotions, they are more likely to be given metacognitive therapy than someone who has difficulty articulating their feelings and the feelings' causes; who instead may be given discursive therapy to help them articulate a self-narrative (see @unenlightened's recent thread) or cognitive behavioural therapy to challenge their pathological behaviours without mandating as much introspection.

    This is not to say that finding general patterns in the symptoms and causes of patients with mood disorders isn't useful - it is - just that founding mood disorders in one type of thought or behavioural pattern would inhibit therapeutic relationships somewhat. And further that, diagnostically, there are many MANY structures for depression.
  • Question for non-theists: What grounds your morality?


    One way of being persuasive is to provide a good explanation and be right!
  • Question for non-theists: What grounds your morality?


    Asking why something's right or wrong doesn't require believing in the necessary existence of a sufficient justification, only that there will be a justification sufficiently persuasive to satisfy the questioner. If someone could not possibly be convinced by any explanation, they're playing a different game than 'explain to me why this is right (or wrong)'.

    An emotivist who believes all ethical statements are power plays or persuasive expressions of raw sentiment, or a cognitivist who believes that 'moral statements' are truth-apt - perhaps can be arrived at by reason and are either true or false (or all false) - will still have to act ethically and be affected by the ethical as a normative-juridical structure. They will face similar trials and tribulations in life regardless of whatever extraneous philosophical apparatus they're hedging their bets with. They will, usually, try to do what's right and, if not that, try to do what they can reasonably get away with. They will have been doing all of that, thinking ethically, acting ethically, for a long time, and will surely have been affected by the ethical dimensions of life since their birth.

    Believing that there is some extraneous, foundational philosophical apparatus that will vouchsafe anyone's moral choices completely divorces the ethical from the political - how to conduct ourselves and what we, as a collective, should strive for. Leaving the ambiguities in - some things are right, some things are wrong, maybe there's no ultimate ground, maybe there is an ultimate ground - not only provides a more accurate catalogue of our approaches to morality, but keeps the ethical and the political together. To do what's right is to negotiate with the world around you; sometimes even from the ground up.
  • Question for non-theists: What grounds your morality?
    I've never understood why morality is the kind of thing that needs a ground or foundation. If you have an ethical system, it gets tested against intuition and ethical problems in the abstract. I view this as similar to axiomatising arithmetic to 'found' it: if the foundation didn't contain or imply already-established arithmetical truths, it would be discarded - rather, it is judged a good axiomatisation when it at least produces the right theorems. In this sense, the expected theorems are more primordial than the axiomatisation.

    If a theory that grounds morality, whatever that means, is to have relevance to ethical concerns, it will be tested for its ethical implications and either found fortuitous or unfortuitous depending on its treatment of the ethical issues within its scope. If we would use our intuitions about ethical problems and real-life scenarios to judge the moral statements derived from a grounding theory, and thereby that theory's rightness - be these derived entities a product of ratiocination or sensibility - is it really the theory which is logically prior, or the embodied practice of ethical decision? And if we already have the capacity to evaluate ethical decisions, to live ethically, can it be said that a grounding moral theory provides anything more than a set of heuristics to judge how to act and how to live in the abstract?
  • Networks, Evolution, and the Question of Life


    there's a bit of an ambiguity here, I think, already at the level of formulation: any genomic network is already a space of possibilities, such that some parts of the network may be active in any particular process of expression, while other parts may not be - and just which parts are and are not may be dependent on certain (regulatory) genetic and epigenetic conditions. This means, further, that what even counts as 'a' network is not fixed, and individuation is itself dependent on the parameters of any one investigation - what we count as belonging or not belonging to 'a' network (the space of possibilities), and by extension, what we count as 'a' network to begin with, is itself not something fixed in advance. Of course this is just the scientific process: fix the boundaries of the phenomena you want to study, hold all else equal, then poke around.

    I don't see anything to pick at here, so long as the problem space for the demarcation of life and not-life is the space of possible genomic networks rather than specific ones. Not that this space of possibilities is independent of the considerations relevant to the networks.

    The problem, it seems to me, is that even if we could get around the combinatoric issues, any exhaustive list of properties would be in some sense only so by fiat. And if so, I'm not sure how much we can milk the distinction between P and Q(t) to really speak about any demarcation between the biotic and the abiotic.

    I don't understand why you would think it would be by fiat, since the construction, or discovery, of genomic networks is a fortiori the conceptual space for that demarcation, at least as far as this thread is concerned. Rather, the inclusion of types of factors in them is the conceptual space. This isn't a criticism of what you said; rather, it's to elicit more information. Why, at any given point, would it be by fiat?

    The other, intimately related, conceptual issue I see is that because genomic networks are complex, the activation or deactivation of certain parts of the network (via regulation) may alter the very possibility space itself: what was once an 'influence' which would never have been able to play a role in the expression of a certain trait, becomes an influence, or vice versa. And this change may have knock-on effects with respect to other 'possible influences' as well; things get confusing, I think, because at stake are second-order possibilities: 'possible possibilities', as it were. And again, at this point, I'm not sure how stable any distinction between P and its subset Q(t) might be...

    I don't think that this is a particularity of genomic networks; they only instantiate the problems that epigenetics raises for the distinction of life. Inclusion in a specific genomic network requires that a node is a gene; the more general picture you raise of the epigenetic landscape is the appropriate space of concepts for tracing the implications for the demarcation of biotic and abiotic factors.

    I also think it's likely that abiotic and biotic factors have an operational definition within the study of genomic networks that doesn't coincide completely with the (supposedly) philosophical distinction between life and not-life.

    The appropriate conceptual space for this argument is something like genomic networks as a problem instantiation of epigenetics for the demarcation between life and not-life, so decisions (subarguments) should consider the scope of genomic networks and how their internal distinctions relate to life and not-life, and further how this relates to epigenetic effects - then how that counterfactual space of concepts (study of genomic networks -> epigenetic effects) relates to the demarcation problem for life.

    It looks to me, though I may be misreading you, that you are committing something of a category error (at least if the above typology of the space of problems is accurate), confusing the inclusion within specific genomic networks and its formal undecidability with respect to the biotic and the abiotic with the general features of the study of genomic networks (which is where epigenetics comes in). This paper incorporates this distinction methodologically.

    Edit: I see you addressed some of this in response to @Srap Tasmaner
  • Networks, Evolution, and the Question of Life


    The nodes in neural networks are placed there by modellers and use a message passing algorithm to update parameters linking the nodes. The nodes in gene expression networks are discovered through a kind of cluster analysis. The nodes mean different things, the nodes are generated by different things. Nevertheless if there are flows on the networks there will be general mathematical descriptions of the flows. The flows will mean very different things for the different systems.

    Generally, just a great big citation needed on the material in your post.
  • Networks, Evolution, and the Question of Life


    I'm not a biologist, and have only tertiary knowledge in developmental biology.

    Yes. The interesting point about genomic networks is that their internal processing structure could be - in principle - uncrackable and forever hidden. Can we reconstruct the way a neural network executes its function even with full knowledge of the weights of its nodes?

    For a specific gene network, precise estimation of parameters would be the goal. To think of a genomic network as structurally isomorphic to a neural network is probably possible, but it will remove both specificities. I doubt, though I could be wrong, that genomic networks are necessarily concerned with message passing in continuous or quasi-continuous time like neural networks are; they seem instead to be constructed through a correlational analysis of gene-to-trait and gene-to-gene relationships in terms of expression. I know there are counterexamples when analysing cellular differentiation.

    If the functionality is multirealisable, then a knowledge of some particular state of task-adapted componentry does not give a simple theory of the functional dynamics of the network.

    I think this is a truism. But it will not diminish the analysis of a specific genomic network.

    We could still hope to model genomics at a higher level. That’s why I’m thinking of a description in terms of general logical principles. Like the repetition of units (as in segmented body plans) or timing information when it comes to regulating tissue growth and developmental symmetry breakings or bifurcations.

    Dynamical systems theory is already being used. Street's reference to canalisation has a link to bifurcation theory.

    Edit: I would rather attempt to keep this on topic than to sidetrack the discussion into your semiotic metaphysics, though.
  • Networks, Evolution, and the Question of Life
    I don't see incentive as part of the equation. Things behave in certain ways as a result of how they were designed. There was no incentive prior to, or the cause of, flight. Flight occurred as a result of natural selection acting on genetic mutations over eons. By saying there is an incentive is projecting your own purposes onto reality, as if reality has reasons, or incentives, to design things. It doesn't. "Design" isn't even an appropriate term to use to describe what natural selection does, as there is no incentive, purpose, reason, or goal that natural selection has prior to the process itself taking place.

    That's what I meant. Teleological language is useful to paraphrase stuff like that.
  • Networks, Evolution, and the Question of Life


    One super interesting thing to bring up in relation to this - I might start another thread on this down the line - is in following Robert Rosen's contention that biology is, contrary to what is commonly thought, a more general science than physics, insofar as biological systems have a richer repertoire of causal entailments than do physical ones. Physical systems thus being a more limited field of study, even if they qualitatively make up more of the universe. This is one of those lovely thoughts, I think, that spurred me to study biology in some depth - want to know physics? Study biology! :D

    Responding with a quote from a physicist friend: "I'm uncomfortable with science that deals with anything that isn't a state variable (temperature, pressure)."
  • Networks, Evolution, and the Question of Life


    I remember how much you hate possible world semantics for modal logic, but I think some idea of possibility/necessity modality is useful here to sharpen the claim that a demarcation criterion between the biotic and abiotic is impossible - at least as it concerns the generating processes of organisms.

    I assume that a demarcation criterion is largely an epistemological device, and nature would not care about the distinction between the divided factors apart from differences in generalised processes that influence them.

    Imagine at some future time there was a complete list of all the things which could influence the expression of an arbitrary assemblage of genes. Call this collection P. At any time before this collection is made, there will be a subset Q(t) of P that represents the current list of all influences on genetic expressions.

    Considering the structure of Q(t), there are likely to be subclasses that are united by shared properties - generalities in genomic networks indexed to sets of co-expressing genes. Can we tell at any time whether the set of properties is exhaustive, and that we have provided a spanning partition of P generated by the properties? If the set of studied gene expressions were fixed and finite, in principle this would be possible. Not in practice, however: there is an absurd combinatorial explosion whenever you're dealing with genome subunits, never mind sets of genome subunits paired with genome subunits...

    One way to represent P would be the set of all common expression properties, represented by their network bearers. In order to decide that we have all the properties that constitute P we would need a demarcation between factors that influence the development of genes and those that do not. Specifically, we would need to be able to infer from some particular Q(t) that there are no more possible types of gene expression influencing entities or processes - that we truly have a spanning partition of P, even if we do not know all its elements.

    Evolution conditions Q(t) and the generated property list approximating P, since there will always be novel environmental scenarios. In particular, this ensures that novel categorisations are always indexed with a set of environmental parameters, at least insofar as the development of developmental processes is concerned. The indexing role the environment would play in this kind of study would ensure that environmental conditions are in some sense dense - ubiquitously represented and always very 'close', in the sense of accompaniment and foundation, to accounts of change - within the study of developmental processes as they relate to their environments. However, it is also likely that the environment is conceived as the set of things 'outwith' the 'biological' constituents of the developmental process, despite the environment's ubiquity (denseness) within the analytic concepts of developmental biology. This operationally equates the concepts of environmentality and exogeny. (You can observe the converse happening in ecology, by the way: the equation of an ecology with its interiority to description.)

    Whether the demarcation problem for the biotic and the abiotic is decidable within, or using information from, these theories would then depend on the extent to which the abiotic is operationally defined as the biologically exogenous, and whether it is operationally necessary to treat it as such.
  • Networks, Evolution, and the Question of Life


    That a biological system doesn't contradict the laws of physics doesn't tell you much about the biological system. As you pointed out, stuff like air resistance and aerodynamics enables things to fly, but doesn't create an incentive for things to fly. I meant that the reduction of biology as a field of study to physics as a field of study is pointless - they're concerned with different things and use different methods. The ontological reduction from the living to the non-living might still be informative though.

    I didn't intend to imply that the interaction of charges could exist independent of particles (requires charged particles and carrier particles). Just that it's something particles do, not something that they are. This was meant to highlight that the ontological reduction of stuff to particles doesn't even help much in describing what particles do (other than as an enabling condition for the study of particle behaviour); as an analogy to the reduction of biology as a field of study to physics as a field of study.

    I'd rather keep the discussion related to what 'the reduction of the living to the non-living' suggests, rather than attempting to subordinate fields of study to each other. Just because I think the subordination of fields of study to each other is a different type of question, and is largely irrelevant to answering and posing questions about life and the specificity of living organisms.
  • Networks, Evolution, and the Question of Life
    @Harry Hindu

    Biology is beholden to the laws of physics. You can't have the findings of two different fields contradict each other. Biology is just a sub-discipline of physics. Humans tend to put things into little boxes, including fields of science, which is ultimately an explanation of the world as a whole. — Harry Hindu

    Really? The types of question appropriate in biology are a lot different from the types in physics. There are differences on the entities concerned, the relevant explanatory frameworks and the types of experiment conducted. That a biological system does not contradict any physical laws is one of the least interesting features of a biological system: the interesting ones concern its biology.

    Methodologically, a reduction of biology to physics is pointless; philosophically, though, a reduction of the living to the non-living is interesting. Perhaps it's useful to say, however, that the living is composed of the non-living in some manner.

    'it's all made of particles!'
    'is the tendency of negatively charged particles to repel negatively charged particles made of particles?'
    'no, it's something the particles do'
  • Networks, Evolution, and the Question of Life
    I think your OP's missing a step @StreetlightX: you go straight to the influence of abiogenic factors on gene expression without mentioning their relationship mediated through cellular differentiation - which is determined both by systematic/topological factors of a specific cell's cellular environment and by the standing of the differentiating cell within its environment (specific, point-like values that realise the network in a specific context). So the developmental process is always-already specific, since it articulates the developmental trajectories of cells in accordance with the things that generally condition them.

    Further, networks are individuated by more than their topological properties - you can have two networks related by a graph isomorphism (which says, roughly, that the relevance structure of the two systems is the same) which nevertheless have different nodes (rendering each graph specific, individuated from its topological equivalents precisely because it is indexed to different genes) and different flows on the network (more than one way to 'pass a current of information' through the graph). So it can be said that the topological properties of a network constrain the types of flow that pass through it, but the level of structural isomorphism is inappropriate for the analysis of the expression of particular genes or sets thereof.
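
    As a toy illustration of that point (the gene names and weights here are made up, nothing to do with any real network):

        # Two 'gene networks' with the same topology (graph isomorphism holds)
        # but different nodes and different flows (edge weights).
        # Gene names and weights are invented for illustration.
        import networkx as nx

        g1 = nx.Graph()
        g1.add_weighted_edges_from([("geneA", "geneB", 0.9), ("geneB", "geneC", 0.2)])

        g2 = nx.Graph()
        g2.add_weighted_edges_from([("geneX", "geneY", 0.1), ("geneY", "geneZ", 0.7)])

        # Same relevance structure (a path of three nodes)...
        print(nx.is_isomorphic(g1, g2))  # True

        # ...but the graphs are individuated by what their nodes are indexed to
        # and by the 'flows' (weights) on their edges.
        print(g1.edges(data=True))
        print(g2.edges(data=True))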

    The specificity comes in the application of general principles (like generating co-expression networks of specific genes) to individual gene clusters - discovering the clusters in the process. Specificity is always-already part of the developmental process for a given organism, but not necessarily part of the methodology of its analysis.
  • Evidence of Consciousness Surviving the Body


    I may have a few small problems with the methodology in the study - I need to close-read it rather than skim to see exactly what they're doing with Pearson's r and Cronbach's alpha, and also to see how they're computing the correlation between the individual test items and their sum (usually an OK procedure). The worst bit of the methodology is that they don't use a statistical model to try to discriminate between people with high scores on the scale and people with low scores - they just compare aggregate means, rather than assessing an individual's chance of being in a specific category. Generally this makes me suspicious, because researchers using applied statistics should be able to do logistic regression (at least) to assess these questions (paragraph beginning 'The criterion sample of NDE reporters...').
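
    For what it's worth, the kind of analysis I have in mind would look something like this (a sketch with simulated data standing in for the scale items, not the study's actual variables):

        # Sketch of the discrimination analysis I mean: predict membership in a
        # 'profound NDE' group from individual scale items, rather than just
        # comparing aggregate means. The data are simulated for illustration.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 200
        item_scores = rng.integers(0, 3, size=(n, 16))   # 16 questionnaire items scored 0-2
        profound = (item_scores.sum(axis=1) + rng.normal(0, 3, n) > 16).astype(int)

        model = LogisticRegression(max_iter=1000).fit(item_scores, profound)
        print(model.score(item_scores, profound))         # in-sample classification accuracy
        print(model.predict_proba(item_scores[:5]))       # individual chances of category membership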

    Regardless of these reservations - it does essentially what you described in one of your opening posts: catalogues common NDE experiences, then does a further grouping to allow the quantification of the intensity of an NDE. The study is essentially 'what are the demographics of NDE experiences? what are the commonalities? how do these commonalities correlate? how do the individual commonalities correlate with overall NDE intensity?' - in essence an exercise in quantitative phenomenology.

    This is in line with the limits of an observational study. They are asking no causal questions and bracket the issue of veridical NDE perception entirely (paragraph at the start, begins 'These near death experiences...'), and the question 'can we distinguish people who have had profound NDEs from people who sorta-kinda-maybe had them?' is consistent with it too (not that I think they addressed it very well).

    Also absent from the report is an attempt to predict the content of an NDE from a specific individual - they do exactly as I said, provide a catalogue of overlapping categories and say 'it's probably some combination of those' (disjunctive events). And then they discuss the appropriateness of reducing the questionnaires through eliminating variables which do not correlate strongly with the test result - in essence removing some common NDE phenomenon to ease the discriminatory/categorisation question between profound NDEs and lesser ones.
  • The ontological auction


    Yeah, that was an attempt to show that applying Occam's Razor, under some arguably unreasonable constraints, gives theories which are more likely to be true. If you're still reading Jeffery's book, the argument relates to the ideas of 'sufficient statistics' and 'minimal sufficiency'. I'm really not convinced that applying probability theory in that way works for arbitrary theories - especially ontologies - but I find it quite convincing as a simplified model.

    edit: Though since I've not read the book I don't know if it contains those parts of estimation theory.
  • The ontological auction
    I think it might be illustrative to try to come up with a case where only Occam's Razor distinguishes between the accepted and unaccepted accounts. Imagine that there's some set of explanatory entities A which gives a good account (whatever that is) of some phenomenon X. Further, imagine that there are two other (disjoint) sets of explanatory entities B and C which together (their union) give an equally good account of X. If it were stipulated that A is a proper subset of the union of B and C, then Occam's Razor chooses A.

    But this seems to be a truism: of course A would be chosen, since it's constructed to contain fewer explanatory entities. The artificial things in the account, I reckon, are:

    (1) The assumption that explanatory entities can be counted in a straightforward manner within theories. How can parts of an explanation be quantised in a seemingly independent manner?

    (2) The ability to tell whether an idea or account is a sub-idea or account of another is not straightforward either. Theories resist enumeration and collapse into sequences of propositions.

    I think if you grant that (1) and (2) are in principle possible and also that theories consist solely of events (dubious for ontologies), you can get some mileage out of probability theory.

    Imagine we're in the same set-up as before. There's a phenomenon to be explained, X, and we're looking for sets of events that explain it. For X to happen, there has to be a collection of events which make X happen and only X happen*1*. Since there's a collection of events which make X happen, there has to be a smallest collection of events which make X happen*2*. Call this smallest set A. Then we have that if a theory contains A, it accounts for X.

    This has the effect of saying that the probability of X given C, where C is a superset of A, is equal to the probability of X given A. That is to say that no additional information about X can be obtained by adding to A - specifying that other things have to happen. Any other account B would be an intersection of A with other events*3* which is less likely than A. In fact, A is the most likely theory.
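
    In symbols, glossing an account that contains A as the event A intersected with some extra events E (and granting all the assumptions flagged in the footnotes), the claim is just:

        P(X \mid A \cap E) = P(X \mid A), \qquad P(A \cap E) \le P(A)

    so among the accounts that make X happen, A is the most probable.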

    I've been fast and loose in saying that X is an event and the things accounting for it are events, it isn't going to be that clear cut in a real account - where statements don't necessarily correspond to any particular event or events at all. But hopefully it's useful as a first approximation.

    *1* assuming that this applies without loss of generality by specifying X as a compound or disjunctive event if required
    *2* assuming uniqueness here
    *3* since only A is relevant for X and A occurs as well as other things

    edit: another assumption is of course that intersection is a good model for combining ideas to make a theory - unlikely.
  • Evidence of Consciousness Surviving the Body


    You seem to be saying, sorry if I'm incorrect, that since these filters are possible defeaters, that I should reject the testimonials.

    I don't think the filters by themselves are defeaters of the disembodied consciousness claim - it is possible that there are some testimonies which satisfy all of them. I also don't think the argument I've made shows that 'disembodied consciousness is impossible'. What I take away from the rarity of the testimonials that satisfy all filters is that non-confounded veridical NDE candidates are rare, and that this rarity is consistent with true statements in NDE accounts arising from, and please forgive the loose phrasing, 'sampling from simulated environments within the NDEs' rather than an unusual perceptual event of the NDE-experiencer's environment.

    If testimonials that satisfied all filters were not rare - for example, if every NDE occurred when the subject was provably unconscious, the true statements they made were highly specific (not exploiting statistical regularities in medical procedure descriptions), and the testimonials were recorded without doctors' influence - AND if these NDE testimonials provided many accurate, non-generic statements about their environment, then the numerosity of these testimonies would be some evidence of NDE veridicality (without confounders).

    Since there aren't many NDEs that get through the criteria, there isn't much evidence for accurate statements in NDE testimonies that arise without the presence of a confounding factor. So there isn't much evidence for NDE veridicality (statements within NDEs that arise from unusual perceptual events rather than statistical regularities or the underlying confounders). This is effectively saying that the true sample size for studying the presence of genuine NDE perceptual events is tiny within the list of 4000 testimonials.
  • Inquisiting Agustino's Aristotelian Moral Framework


    I hope it's allowed within the rules: there are a few posters on here that have a somewhat developed personal philosophy (@apokrisis, Agustino, @Banno if he still believes roughly what he believed four years ago), and I want an excuse to eat my bodyweight in popcorn.
  • Evidence of Consciousness Surviving the Body
    Maybe this will move the discussion on a bit, @Sam26, @Michael Ossipoff

    I thought of a way to better describe the scenario which would lead to an NDE's content being accurate by chance rather than through a perceptual event.

    Say a person is undergoing heart surgery and their heart stops. They have an NDE which they identify as beginning when their heart stops, and the report contains numerous accurate things - stuff that matches either a video record or doctors' memories. The NDE could just as well be a simulation of heart surgery viewed from an exterior vantage point rather than the surgery actually viewed from that vantage point. It is likely that a simulated version contains some things which match the real thing - through common knowledge, stereotypes and other statistical regularities - but none of which occur due to a perceptual event.

    In order to establish NDEs as veridical, it needs to be shown that the accurate parts of NDE testimony content are the result of a perceptual event of the experiencer's environment rather than of a simulation. Having an NDE which satisfies the above filters for non-confounding removes various arguments against the veridicality of that NDE (accuracy due to priming/confounding/contextual effects rather than a perceptual event during the NDE). Having an NDE which satisfies the filters and produces many true statements and no or very few false statements about the surrounding environment would be good evidence that the NDE consisted of genuine perceptual events. However, we can still expect some 'very accurate simulations' - but we expect them rarely, purely by chance.

    That there are few NDEs that satisfy the filtering conditions is evidence - though not especially strong evidence - against the veridicality of NDEs. Having few NDEs that match the filtering conditions means there is little evidence (purely combinatorially) that supports NDEs having veridical content.

    I would like to see an uncannily accurate NDE transcript and the validation procedure - could you provide me one? The kind of thing that makes you think 'hot diggity, there's really something to this, they're really experiencing genuine events despite being unconscious!'.
  • Evidence of Consciousness Surviving the Body


    I don't think that we'll make much more progress, since we're at the stage of saying the other person has not answered previous points. I presented the previous arguments I made as if I had demonstrated them; I believe they're conclusive - but of course I could be wrong. This impasse is unfortunate.

    I'll address:

    Moreover, the one thing that stands out in these testimonials is the OBE, which you seem to believe in. If one believes people can have OBEs, then how can one not believe that one can have accurate descriptions of their OBEs? Moreover, how is having an OBE not evidence of consciousness extending beyond the body? Unless your contention is that the OBE is dependent upon the body, but then the question arises, how are the testimonials of an OBE that is dependent on the body, any different from the OBEs people describe when the brain and heart are not functioning? How can you believe the testimonials of the former and not the latter?

    though.

    I took some drugs once and had a trip. I saw Mario jump out of the closet in my room. I didn't for one second believe Mario was there. There's definitely the possibility of non-equivalence between the content of the experience and the things in the environment that generated it (specifically, for me it was the drug, not a hidden Mario in the closet). I take descriptions of NDEs as accurate descriptions of what the people experienced (a truism), but not necessarily as in accord with what actually happened - and this without actually going through all the papers (an exercise I believe unlikely to provide sufficient evidence that consciousness leaves the body). The filters I described are examples of standard procedures to remove confounding variables to allow causal claims to be made. Just generating an accurate statement (after filter application) still isn't sufficient to show that NDE experiences peer beyond the veil.

    I suppose I'll leave it at that.
  • Does infinity mean that all possibilities are bound to happen?


    I think the unintuitiveness of the quantitative behaviour of infinity is something isolated to the folk-mathematics idea of it. Infinity isn't just well understood in mathematics, it's essential.
  • Evidence of Consciousness Surviving the Body


    I'm glad you agree that testimonial data which satisfies the criteria I outlined is rare. 4000 accounts quickly become a lot less when the data is filtered to the relevant cases for consideration. I realise I made a few strands of argument, so let me detail the threads individually. The thrust of my major argument consists of a few steps (and this is the one I am most convinced by). Key sentences of the argument are given by numbers; sub-steps and supporting statements are given by the appropriate number followed by a Roman numeral.

    Argument 1
    ___________________________________________________________________________________________

    Key questions of the first argument: what are the relevant qualities of testimonial data to be included as part of an analysis of whether NDE experiences are veridical? And this is tied to the question: what would evidence for NDEs being veridical look like?


    (1) Reducing the effective sample size of testimonials to those which are relevant for studying whether the accurate statements arose because of the NDE.
    (1i) This was done by applying the aforementioned filters on observational data to preclude confounding factors, leaving few testimonials.
    (2) If NDEs were in the aggregate veridical, we would expect accurate descriptions during NDEs, because of NDEs, to be common.
    (2i) This is established through the door analogy: if a person is exposed to a door, they will see a door, and they see it because the door is there. This would give a high proportion of accurate descriptions in those cases which satisfy the criteria.
    (3) We do not observe many cases of NDEs that satisfy the filters.
    (4) The rarity of accurate descriptions in testimonials satisfying the filtering criteria is consistent with these phenomena arising out of a highly improbable random mechanism.
    (4i) More detail: if things were as in the door example, accurate descriptions satisfying the filters would be too common to be the product of rare chance alone.
    (5) There is not enough relevant data to support that NDEs caused the accurate statements.
    (5i) Relevance being established by the filtering criteria.


    Argument 2
    ____________________________________________________________________________________________

    Key questions of the second argument: what would the descriptions in NDEs have to look like to be consistent? Can we describe a given person's NDE before it happens with a sequence of non-disjunctive statements? Why would the sequence taking a disjunctive form establish the non-consistency of NDEs?

    I think the difference between your Alaska example and the door example, and the differences between each and a particular NDE are illustrative here.

    The door example is different from your Alaska example. The door example is a model of a simple veridical perception; the Alaska example's 'parts of the state' are generated by the observed thematics of NDEs, and so can always be made into consistent descriptions of NDEs in the aggregate through iterated disjunction. This will not help us predict the content of a particular person's NDE other than by saying something like 'it is likely to contain an OBE and have at least one of these thematic sensations within it'.

    You have aggregated the general thematics of the testimonials and are now claiming that they are consistent based on the idea that they obey these general thematics. The door is consistent: people see the door if the door's there. We cannot tell 'if the door is there' - some kind of representational truth - with the general thematics of NDEs, since of course particular NDEs are likely to satisfy some subset of the derived thematic properties of their aggregate! Furthermore, if we could tell this from typical NDE content descriptions, testimonials which satisfy the filtering criteria would likely be far more common.

    Points of Commonality and Difference
    ____________________________________________________________________________________________

    I agree that there are general themes to NDEs. I believe people can have OBEs.

    I do not believe NDEs are veridical. I don't think the quality of this testimonial data is high enough to address the question of NDE veridicality (it would need to be close to the quality of a controlled experiment for causal claims). Thus I don't believe people are really 'outside their bodies' based on this evidence. I have further reservations about the idea of disembodied human consciousness independent of the issues of NDE testimony (typical counterpoints: brain death and brain damage, phantom limbs).

    I've tried to keep my reservations out of the analysis of testimony, but I believe (and this need not be addressed) that the improbability of disembodied consciousness casts doubt on the idea of NDE (and psychotropic drug use experience) veridicality.

    Edit: I've removed the mansion thought experiment, and fleshed out my argument against the consistency of NDEs as you've presented it... These disagreements should be enough to chew on for the both of us I think.
  • Does infinity mean that all possibilities are bound to happen?
    Ascribing a probability to an arbitrary future or past event with no information at hand is impossible; it has to be done within a context to be meaningful. Besides that, there are two caveats: if an event has happened it certainly has probability 1 (after the time it happened), and some events will have probability 0 - such as 'the FTSE will rise 7 points tomorrow AND the FTSE will not rise 7 points tomorrow'.

    That said, for any mechanism which ascribes non-zero probability to an event E, E will happen if you take the sample size to be 'large enough'. This is just a restatement of the 'monkeys on a typewriter producing the complete works of Shakespeare' theorem.
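
    The arithmetic behind that, for independent trials with a fixed non-zero probability p of E, is just:

        P(\text{E occurs at least once in } n \text{ trials}) = 1 - (1 - p)^n \to 1 \text{ as } n \to \infty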

    There are possible events which have probability 0 too. Stuff that could happen but will not. Like throwing a dart onto the number line and hitting a fraction (or a real world equivalent if reality is continuous).
  • Evidence of Consciousness Surviving the Body


    Looking forward to it, it was a fun discussion before, should be again.
  • Quantum Idealism?


    What denial? I just gave an example of an actually macroscopic-scale quantum phenomenon. Regardless, people don't diffract through doors. The true picture isn't 'everything is quantum' nor 'everything is determined like in Newtonian physics' - if you remember back to the previous time we went round this merry-go-round, I gave examples of macro-scale randomness and an acausal system in Newtonian mechanics. You've painted me as a member of some kind of science conspiracy to defend 'determinism' and 'materialism', but really you know hee-haw about me other than my hostility to your woo and the poor arguments for your woo.
  • Quantum Idealism?


    I was expecting that response. The appropriate length scale is that of an electrical signal inducing a measurement in a qubit - a quantum state. That doesn't exist outside of the small quantum length scales. Electrical signals are used to pass messages between the computers using the cryptography technique.

    In the Jinan network, some 200 users from China's military, government, finance and electricity sectors will be able to send messages safe in the knowledge that only they are reading them. It will be the world's longest land-based quantum communications network, stretching over 2 000 km. — The First Article You Sent

    It's the encryption/decryption which exploits quantum phenomena, not the message passing - which is the thing that occurs on the larger length scales.

    Your second article:

    Given that a practical application of entanglement to macroscopic particles is to enhance quantum electronic devices in real world situations and at ambient temperatures, the researchers sought a different approach to this problem. Using an infrared laser, they coaxed into order (known in scientific circles as "preferentially aligned") the magnetic states of many thousands of electrons and nuclei and then proceeded to entangle them by bombarding them with short electromagnetic pulses, just like those used in standard magnetic resonance imaging (MRI). As a result, many entangled pairs of electrons and nuclei were created in an area equal to the size and volume of a red blood cell on a Silicon Carbide (SiC) semiconductor. — The Second Article You Sent

    Many entangled pairs IN an area, so lots of little entangled quantum pairs spanning a larger area. The innovation here is getting a lot of quantum-entangled particles to stay together on a larger length scale at room temperature, not saying 'we've created two red blood cells that are entangled' - the latter of which would be a 'large scale quantum phenomenon'.

    I think there are examples where quantum phenomena do crop up on larger scales in very specific circumstances, like Bose-Einstein condensates. But I really doubt you'll listen to this, since you spout the same points about quantum mechanics in every thread even tangentially related to it.
  • Quantum Idealism?
    I think science outreach about quantum mechanics has been sensationalist. You see little bits of 'quantum weirdness' like entanglement and vague references to FTL signalling, wave/particle duality, the probabilistic nature of the computations, and Schrödinger's cat suggesting that quantum weirdness occurs on all length scales. Then you have the philosophical implications which are usually reported as internal to the theories rather than being an interpretation of them (Bohm vs many worlds vs Copenhagen). The big money woo machine here is how 'measurement' is portrayed.

    Add to that the fact that the average science journalism article about quantum mechanics is focused on the buzzwords and weird implications - then this hodgepodge of partial information without appropriate context gets put into the Great Woo Machine of the New Agers and voila, you end up with rigorous-seeming woo articles that resemble those with some journalistic integrity.
  • Light Polarization
    All of your threads are lists of questions that perplex you but which you assume are fundamental problems with contemporary physics. Is it then surprising when someone who is not completely perplexed by these aspects of physics disagrees with you or tries to show you material to better inform you?

    If you're that convinced by the power of your questions, write them up as papers and submit them to a relevant journal. Or alternatively email your questions to a researcher who is likely to know how to answer them.

    Even if people in these threads, myself included, can't resolve every part of your confusion about some scientific claims, that doesn't show that your confusion is anything but a result of not understanding the material. This is a general-purpose philosophy forum, not one where people are going to have much of an understanding of optics or quantum mechanics - though there are probably a few people on here who've seen some of both at university.
  • Light Polarization
    Try this and then this for an introduction to how polarisation works.
  • Is 'information' physical?
    Some attempted resolutions of Maxwell's Demon rely on a mathematical relationship between thermodynamic entropy and information entropy, so there's some precedent for treating information as a physical concept.
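
    For what it's worth, the usual way of making that relationship precise (my gloss, not something stated in the thread) goes via two bounds:

        W ≤ k_B·T·ln 2 of extractable work per bit of information the demon acquires (Szilard engine),
        Q ≥ k_B·T·ln 2 of heat dissipated per bit of the demon's memory that gets erased (Landauer's principle),

    so whatever the demon gains by sorting molecules is paid back once you count the cost of resetting its memory. At room temperature (T ≈ 300 K), k_B·T·ln 2 ≈ 1.38 × 10⁻²³ J/K × 300 K × 0.693 ≈ 2.9 × 10⁻²¹ J per bit.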
  • Semiotics Proved the Cat
    A reply to @MikeL and @apokrisis



    No worries. I wrote that post mostly to clear my own head and explore the ideas in the paper. I'm grateful for any responses to my poorly formulated metaphysical nonsense.

    Brief glossary of how I use the terms:
    virtual = stuff that exists but isn't actual; rocks = actual, ideas, signs etc. = virtual. An ambiguity in this: are constraints like laws of nature or statistical tendencies virtual or actual?
    flat ontology = an account of being in which there's no privileged entity or process (IE not substance + modifications, not discourse + its constituents)
    immanent plane = all stuff conceived under the aspect of a flat ontology, plane because it connotes flatness, immanent because there's no 'entity above others'.
    epistemic access = x has epistemic access to y if x can be informed about the status of y in some way - more a reference to the concept those words suggest than a specific account thereof
    ontological relation = a real relation between stuff in an ontology, stuff doing stuff to stuff or stuff being affected by stuff (stuff being a placeholder for the entities in an ontology, doing and affecting being placeholders for fleshed-out particular relations in an ontology)


    While I think that it's a true statement to say that it does not necessarily occur solely with humans, it does require a sentience rather than a cause and effect mechanism. I can think of no instance where sentience would not be involved as interpretation is a cognitive process. — @MikeL

    Is the assertion that the interpretant is being removed so we are left with a cause and effect relationship? Is that what you mean when you say us objects can play amongst ourselves? 'Meaning' by definition would seem to possess a subject held monopoly. It is the subject that ascribes meaning to the world through the semiotic pattern interactions they observe. Different subjects may observe different patterns or the same pattern, but the meaning is theirs alone.

    I think stuff can have meaning for stuff in a broader sense than 'meaningful language items'. A water drop falls on a flat floor and expands into a circle - the determinative elements for the dynamics of something can be taken as the thing 'thinking' how to act in its environs. I don't mean literally thinking; I mean something actual being conditioned through something virtual. Humans as consumers and bearers of signs aren't unique in this respect - when the sign is the 'flattening' concept for a flat ontology based on information/signs.

    The water drop as an interpretant is translating a general dynamical pattern, in terms of surface tension minimisation and surface area maximisation, into the specific context of the composite object/system (floor, drop) - this could be termed a 'habit' in Apokrisis' sense, or the drop 'thinking' in the way I've put it. The signifying element, what links interpretant to object, would be the frontier of expansion of the drop, or that frontier extended in time.

    That leads to a more general definition of sign (within a sign relation). I have been stressing - a point MikeL did more than just mention :) - that a sign involves a reduction of information. It is not a reification - a Saussurian signifier or representation - but an active ignoring of material facticity. A filtering out of the dynamical environment, the thing in itself, so as to respond only to some "useful aspect" of the world. Reality is the totality of all that there is. A sign is a reduction of that to some token to which we feel justifies or secures an appropriate habitual response.

    Now that describes semiosis of the ordinary subject-object kind - semiosis as life and mind acting in the world. But an information theoretic physics - an object-object semiosis - sees the same information reduction principle applying in the hierarchical organisation of nature in general. Just because of event horizons, every particle or material object is responding to a reduced view of the total environment.

    And that is basic to probability theory (your strong interest?) as the principle of indifference. The ability to filter out micro-causes is how macro-states come to be real. An ideal gas has an actual pressure and temperature because all the detailed kinetics of the constituent particles can be justifiably averaged over, or ignored.
    — Apokrisis

    I don't think it would manifest as the principle of indifference; it would manifest as the conditional entropy of X given Y and the irrelevant variables U being equal to the conditional entropy of X given Y alone - a statement that X is conditionally independent of the irrelevant variables U given Y.
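
    Spelled out (the same claim in symbols): H(X | Y, U) = H(X | Y) ⟺ I(X ; U | Y) = 0 ⟺ p(x | y, u) = p(x | y) wherever p(y, u) > 0 - i.e. once Y is known, the irrelevant variables U carry no further information about X.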

    Regardless, my point is that general object-object semiosis conditions subject-object semiosis as a special case, so the idea of representation - aspects of the virtual we have captured conditioning the actual in a summary/pictorial form - from which general semiosis is derived, is instead 'let to play among the objects', collapsing the distinction between what's virtual and what's actual by eliminating epistemic subjectivity.

    Isn't a sign a composite of facts - predicables of different orders and types - not things?
  • Semiotics Proved the Cat


    Also @apokrisis and @StreetlightX for interest.

    As was pointed out in the previous thread you made on semiosis, one advantage of the Peircean view of it is that we can consider subject-object semiosis as a particular case of the more general object-object semiosis. IE, the triadic nature of sign, interpretant and object does not necessarily occur solely between humans or in a virtual plane derived from the activity of humans (such as the imaginary in Zizek's Lacanianism). This levelling of the playing field facilitates a flat ontology, in the sense that there is no privileged stratum of interpretant required for semiogenesis; there is no subject-held monopoly on meaning; we objects can be said to play amongst ourselves. Apokrisis takes this a step further, inscribing the sign (NB: including its interpretants and objects) into the general concepts of information and entropy.

    However, such a levelling can be interpreted as a reification of signs as objects and relations in the world, rather than as representations of object-object relations; confusing the map for the territory, because the concept of representation has been discarded - elided and subsumed within the concept of function. Ray Brassier makes a similar point against so-called flat (object-oriented) ontologies:

    The epistemic insistence on the explanatory indispensability of representation does not necessarily entail these* nefarious ontological consequences. Since thoughts of things are not the things that are thought, it is necessary to explain how thoughts are related to things while distinguishing their causal connection from their justificatory relation. This is the Kantian problem. It cannot be dismissed by simply levelling the distinction between thoughts and things, which is what flat ontology seems to require. — Ray Brassier, Delevelling, Against Flat Ontologies

    *the world being made of facts, not things

    Despite this passage being aimed at ontological accounts centred on propositions and facts, the criticism applies just as well to theories that similarly quantise sense and thus subordinate objects and flows to their senseful interactions. This inscription of signs into the real in a fundamental way elides the fact that signs are precisely representational packets of phenomena - equating ideas of epistemological access with ontological relation. Ray Brassier continues:

    epistemic subjectivity is ineliminable, but it is neither supernatural nor immutable. It embodies a mutable conceptual structure embedded in the natural order. Concepts change over time because the way in which we know the world is conditioned by the way in which the world changes. Time conditions knowing, even if it is possible to say true things about the way the world is at any particular moment or slice of the cognitive process. — Ray Brassier

    The conflation of epistemological (informing-relational) access with ontological relation (determining/dynamical-relational), through their mutual subsumption under the sign, does not account for the processes which give rise to the individuated elements of the sign, nor for how they mutually constitute one another within the immanent plane of signs and sign relations. We still require virtual and informational categories beyond the sign in order to account for a reality unconditioned by them (except as potential) which allows sign systems to emerge and integrate.

    Perhaps this can be phrased as: 'what are the conditions that allowed the dyadic structure of the real to form?' The irreducible constituents of the sign then take on a transcendental image, haunting the immanent plane of this flat ontology with a law it could not generate, only be subsumed under.
  • The Ontological Proof (TOP)


    Sequences of divine entities are like the rationals, incomplete and thus irreal. *badumtisch*