• What is a painting?
    I went to the BBC's website and it seems that the lectures weren't being hosted anymore. — Moliere

    They are here. There are also transcripts for each episode. (I find it's often best to search from outside the BBC website.)
  • What is a painting?
    Well, I paint and draw and that's how I think of the distinction between them. Objects (which may be abstract) are separated by changes in colour or texture (painting), or by lines (drawing), or both. Examples of 'both' include ancient Egyptian art, many paintings by Picasso, line and wash, etc.

    Most people seem to want to talk about art vs non-art. I recommend Grayson Perry's Reith lectures on that.
  • What is a painting?
    What is a painting, as opposed to a drawing? Is there a category which painting and drawing share? Suppose sculpture as a point of comparison, along with glass blowing and theatre. — Moliere

    A painting is made of areas. A drawing is made of edges. They both change the appearance of a surface.
  • Must Do Better
    I'm not sure what the model is, but the other components are pretty obvious. Perhaps the Bayesian theory works - I wouldn't know how to assess it. Can we run the process in a lab and assess whether it gets the answer right - or what?
    The thing is, it runs decision to action. The question here is whether you can run it backwards to read from action to decision. The difficulty is that most readings will be underdetermined, I suppose.
    — Ludwig V

    You have talked quite a bit about making decisions under uncertainty - about medical treatments, weather forecasts, coin-tossing, and beer in fridges. I was replying to all of that and I may have confused things by quoting a particular paragraph. I wasn't trying to 'run it backwards' to interpret a decision.

    The model is your idea of how some aspect of the world works. It provides the probabilities of various outcomes.
  • Must Do Better
    So it is relevant to say, not that a bet is no test of confidence, but that interpretation of a given decision is complicated by the fact that a bet is the result of weighing risk (disutility) against reward (utility) in the context of one's confidence. Confidence alone does not determine a (rational) decision. — Ludwig V

    You may be groping your way towards Bayesian statistical decision theory. As I have said before, there are 4 components: model, data, prior, utility. That is enough to make a 'rational' decision. I'd prefer to say it provides a principled or formalized decision-making process. It doesn't stop you having an unreasonable model, prior or utility.
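    Here is a minimal sketch in Python of the four components at work, so it's clear what I mean by them. The scenario (a toy 'treat or wait' medical decision) and every number in it are invented purely for illustration.

        # Model: P(positive test | state). Prior: belief before the data.
        model = {"ill": 0.9, "healthy": 0.2}
        prior = {"ill": 0.1, "healthy": 0.9}

        # Data: one positive test. Bayes: posterior = likelihood x prior, normalised.
        unnorm = {s: model[s] * prior[s] for s in prior}
        z = sum(unnorm.values())
        post = {s: p / z for s, p in unnorm.items()}   # {'ill': 1/3, 'healthy': 2/3}

        # Utility: value of each (action, state) pair.
        utility = {("treat", "ill"): 10, ("treat", "healthy"): -2,
                   ("wait", "ill"): -50, ("wait", "healthy"): 0}

        # Decision: the action with the highest expected utility.
        actions = {a for a, _ in utility}
        best = max(actions, key=lambda a: sum(utility[a, s] * post[s] for s in post))
        print(best)   # 'treat': EU(treat) = 2.0, EU(wait) is about -16.7

    Change the prior or the utilities and the 'rational' decision changes with them, which is the point: the process is principled, not infallible.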
  • Must Do Better
    Notice that you start with the assumption that 2 entities are identical — Joshs

    So I did. I called them identical, and immediately contradicted myself by asserting a difference between them! I did not intend to call them identical. Sorry for the confusion.

    Mathematics was developed to apply to self-identical objects, and so presupposes the existence of these qualitatively self-identical objects. — Joshs

    That seems a substantial and interesting point. Mathematics was developed like that but category theory seems to be transforming it into something else. There's a SEP article on category theory and one which links it to structuralism. I haven't read them, but I'm happy that philosophers are thinking about this. I shall just repeat a couple of things that I heard from category theorists. They have a maxim that a mathematical object is completely determined by its relationships to other objects. They state an aim to convert every equality into an isomorphism.
  • Must Do Better
    If I place two identical letters side by side (aa) is this a difference which doesn’t make a difference? In formal logic the answer would be yes. For Deleuze the answer would be no. Formal logic assumes we can apply the notion of ‘same thing different time’ to any object without contextual effects transforming the sense of the object between repetitions. — Joshs

    If I place two identical digits side by side (22) is this a difference which doesn't make a difference? In decimal notation the answer would be that it does make a difference: The first 2 represents 20 and the second 2 is just 2. I'm sure that someone could invent a grammar for formal logic with plenty of contextual effects. I don't think this is a good way of explaining Deleuze.

    Thinking about this some more I was reminded of John Conway's surreal numbers. I can see from internet searches that others have drawn parallels between Deleuze's difference and surreal numbers. They all seem to focus on the way that surreal numbers enable you to extend the real numbers. I can't find anything relating Deleuze to the way in which surreal numbers are constructed, which is more relevant to the discussion here.

    Suppose we put two identical nothings side by side and assert a difference between them. We could write it like { | }. No jokers to the left, no clowns to the right, but here I am. Now we have something, we can put to the left of nothing { 0 | } or to the right of nothing { | 0 }. And that's how you make 0, 1, and -1, as surreal numbers. Then you can reverberate to infinity and beyond.
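    For anyone who wants to poke at this, here is a toy sketch of that first step in Python. The pair-of-sets representation follows Conway, but the class and the names are mine, purely illustrative; a serious implementation would also need the comparison rules.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Surreal:
            left: frozenset = frozenset()     # no jokers to the left
            right: frozenset = frozenset()    # no clowns to the right

        zero = Surreal()                                # { | }, born on day 0
        one = Surreal(left=frozenset({zero}))           # { 0 | }, born on day 1
        minus_one = Surreal(right=frozenset({zero}))    # { | 0 }, born on day 1

        # Day 2 gives 2, 1/2, -1/2 and -2, and so on, to infinity and beyond.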
  • Mechanism versus teleology in a probabilistic universe
    The overall movement of a bacterium is the result of alternating tumble and swim phases, called run-and-tumble motion.[18] As a result, the trajectory of a bacterium swimming in a uniform environment will form a random walk with relatively straight swims interrupted by random tumbles that reorient the bacterium.[19] By repeatedly evaluating their course, and adjusting if they are moving in the wrong direction, bacteria can direct their random walk motion toward favorable locations. — Wikipedia
    (my bolding.)
  • Must Do Better

    I associate the phrase 'a difference that makes a difference' with various social sciences. I didn't know where the phrase came from. Your reference to MacKay and Bateson is reassuring: at least we seem to mean the same thing by this phrase.

    For this definition of information you need people to whom things already have meaning, for otherwise they cannot know what is important. Deleuze (I think) is trying to get underneath that and construct meaning from something much more minimal.

    For Shannon information, a single bit conveys no meaning to the receiver unless the sender and receiver have already agreed what that meaning is. That's no use to Deleuze either. With two bits, you can convey meaning without prearrangement, and the meaning that you convey is either difference or sameness.

    If you're prepared to accept 'difference itself' as a starting point you're immediately in business. The only meaning you assume is the meaning of difference. It's quite neat.

    As to where I'm going: only that taking difference as foundational to meaning seems reasonable to me and I'm happy to accept that such an approach could be rigorous.


    We'll have to see what @Joshs says about that.
  • Must Do Better
    Deleuze writes: "It is said of a world the very ground of which is difference, in which everything rests upon disparities, upon differences of differences which reverberate to infinity (the world of intensity)." — Joshs

    Georg Wikman: "Difference is seen as more basic than similarity. The reason is that similarity presupposes difference which makes difference logically prior to similarity."

    "But any "difference that makes a difference" is of course actual, sheer potential itself being nothing at all. Difference presumably presupposes something to be different.
    Count Timothy von Icarus

    There's a similarity between Deleuze and Wikman, but you've added an 'a', which changes the meaning.

    In a bit (of information as in computer science), there is a difference between 0 and 1. It is a difference that does not make a difference. With a pair of bits there is a difference between pairs which contain a difference (01, 10) and pairs which don't (00,11). There's a difference between the presence and absence of difference. Now the 0s and 1s can be dispensed with entirely, never to be mentioned again, and everything can be built from difference. There was really no need to mention them in the first place.
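    In code the point is almost embarrassingly small. A sketch, just to fix ideas: 'contains a difference' is XOR, and the bit values themselves never need to be interpreted.

        for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
            a, b = pair
            print(pair, "difference" if a ^ b else "sameness")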

    This is how I (mis?)understand Deleuze.


    Perhaps this helps.
  • Must Do Better
    My question is simply what is the aim of the translation project now? Is it the same, or something different? — Ludwig V

    I don't know, but I know something I would like it to include, which is prior elicitation. It is usually thought of, as in that article, as capturing the knowledge of scientists or experts of some kind. A very different kind is to formalise what psychologists can tell us about what we all know. I recommend the book What Babies Know. Some AI researchers are on to it.
  • Must Do Better

    It doesn't seem to indicate a problem for biological evolution.

    Possibly Williamson, or Banno-interpreting-Williamson is thinking of a very specific convergence, of philosophical and scientific methodologies.
  • Must Do Better
    we might admit that what is a problem for scientific method at least overlaps what is a problem for scientific method — Banno

    Is one of those scientifics supposed to be philosophical?
  • Must Do Better
    @Banno Having read SEP's account of Bayesian epistemology, I think the entry on Philosophy of Statistics, especially section 4, would be a better start.
  • Must Do Better


    I don't know how to explain, because I don't know how much you already know. In the other thread I mentioned the four components of Bayesian decision theory: model, prior, data, utility. Are you familiar with these? Could you put them to use in a simple example?

    I am reading SEP's account of Bayesian epistemology. How do you get along with that?
  • Must Do Better
    Russell's student, Wittgenstein, adopted a similar line of thinking to yours, Graham, developing at least in outline a new language based on the new logic, that could set out all and only the true statements. — Banno

    No, no, no, no, no, that is not anything like my position. I don't know much about Wittgenstein, but enough to know I prefer the later version. (See my comment here https://thephilosophyforum.com/discussion/comment/985967 for example.)

    I embrace Box's position 'all models are wrong but some are useful'. This is nearer to using a formal language in which only false statements will ever be made! Probability and statistics avoids the worst of the errors, which is why I want philosophers to use the language of probability and statistics.

    The realist/antirealist debate petered out in the first decade of this century. Part of the reason is Williamson's essay. The debate, as can be seen in the many threads on the topic in these fora, gets nowhere, does not progress.

    The present state of play, so far as I can make out, has the philosophers working in these areas developing a variety of formal systems that are able to translate an ever-increasing range of the aspects of natural language. They pay for this by attaching themselves to the linguistics or computing department of universities, or to corporate entities such as NVIDEA.
    — Banno

    Thank you for that account. It sounds... not ideal, but a lot better than I had imagined. It's NVIDIA.
  • Must Do Better


    The labels 'continental' and 'analytic' are silly but I find it helpful to think of there being arty philosophy and sciency philosophy. They can be mixed in the same sort of way that architecture and gardening mix science and art.

    But when philosophy is not disciplined by semantics, it must be disciplined by
    something else: syntax, logic, common sense, imaginary examples, the findings of other
    disciplines (mathematics, physics, biology, psychology, history, …) or the aesthetic
    evaluation of theories (elegance, simplicity, …).
    — Williamson

    I'm mainly interested in philosophy that is disciplined by mathematics, physics, biology, psychology, history, … , AI, …

    Philosophers who refuse to bother about semantics, on the grounds
    that they want to study the non-linguistic world, not our talk about that world, resemble
    astronomers who refuse to bother about the theory of telescopes, on the grounds that they
    want to study the stars, not our observation of them.
    — Williamson

    Philosophers must communicate, but they are not obliged to communicate in natural language. Mathematics is a language and in particular, probability theory and statistics provide a much more expressive language than logic. There are also programming languages. I wish more philosophers would learn these languages and use them alongside natural languages. I have Knuth's literate programming in mind as an exemplar.

    I am very dubious about using natural language as a tool for reasoning or "using words to think with". That's a double-plus bad telescope.

    Dummett’s requirement that assertibility be decidable forces assertibility-
    conditional semantics to take a radically different form from that of truth-conditional
    semantics. Anti-realists have simply failed to develop natural language semantics in that
    form, or even to provide serious evidence that they could so develop it if they wanted to.
    They proceed as if Imre Lakatos had never developed the concept of a degenerating
    research programme.
    — Williamson

    To my surprise I find myself feeling sorry for anti-realists: have they no competent proponents? I haven't thought much about realism versus anti-realism, and I don't care about the issue. But it's a puzzle, a challenge, to develop natural language semantics for an anti-realist position.

    I'd start by thinking about programming an AI agent which learns 'everything' from scratch using
    reinforcement learning. This kind of AI is the opposite of LLMs: instead of trying to cram as much human knowledge into a machine as possible you force the agent to work almost everything out for itself.

    If you look at the diagram on Wikipedia you'll see there's an agent and an environment. It seems that we are to take the environment as existing independently from the agent. But I look at it from the point of view of the agent: there is state coming in and action going out, but how could you program the agent so that it was a realist even if you wanted to? And even 'there is state coming in and action going out' is saying too much too quickly, for how can the agent even distinguish coming in from going out? In order to construct semantics for an anti-realist position I'd start by answering this question. It's a long long route from there to a community of such agents which communicate using something like natural language but I believe it's possible.
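    Going back to that diagram, here is a minimal sketch of the loop in Python, written from the agent's side. The Environment and Agent classes and all their details are made up for illustration; the only thing I'm claiming is the shape of the interface.

        import random

        class Environment:
            def reset(self):
                return 0.0                        # initial observation (a toy number)
            def step(self, action):
                state = random.gauss(action, 1)   # next observation depends on the action
                reward = -abs(state)              # toy reward: stay near zero
                return state, reward

        class Agent:
            def act(self, obs):
                return -0.5 * obs                 # toy policy: push back toward zero
            def learn(self, obs, action, reward, next_obs):
                pass                              # a real agent would update itself here

        env, agent = Environment(), Agent()
        obs = env.reset()
        for _ in range(10):
            action = agent.act(obs)
            next_obs, reward = env.step(action)
            agent.learn(obs, action, reward, next_obs)
            obs = next_obs

    Nothing inside Agent refers to an external world. From where it sits there are only numbers arriving and numbers departing, which is why I say it is unclear how you could program it to be a realist even if you wanted to.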
  • Must Do Better
    The philosopher Hans-Georg Moeller, who has a YouTube channel called Carefree Wandering, has said that continental philosophers are failed writers and analytical philosophers are failed mathematicians and failed scientists. He identifies as a continental philosopher. I'm a mathematician and scientist. It seems that analytical philosophy should be my thing, but I don't get on well with it. I'll wait until later on in Williamson's article to explain why (if I ever do).

    From near the end of the article:
    Unless names are invidiously named, sermons like this one tend to cause less
    offence than they should, because everyone imagines that they are aimed at other people.
    Those who applaud a methodological platitude usually assume that they comply with it. I
    intend no such comfortable reading.
    — Williamson

    In an article about image analysis from 1992, the author berated the whole field for a lack of rigor. Picking out individuals is invidious, but the author referenced 45 articles in a subfield and condemned them en masse:
    In the thinning literature [1-45] the ideal world of ribbons is not specified, the random perturbation model is not discussed, and the error function is not given. And for this reason the precise problem any thinning algorithm solves is not in fact precisely stated.

    I wish Williamson had done something like that.
  • Beliefs as emotion
    McCormick's paper reminds me a lot of the distinction between Bayesian prediction and Bayesian decision theory. (Very briefly: In all statistical inference there is a parameterized model and some data. From these we can make a likelihood. Frequentist statistics stops here and does what it can with the likelihood. Bayesian prediction adds a prior. Bayesian decision theory adds a prior and a utility function.)
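    A sketch of where each approach stops, using a coin observed to land heads 7 times in 10 tosses. The prior and the bet are invented for illustration.

        heads, n = 7, 10

        # Likelihood: L(p) is proportional to p^7 (1-p)^3. Frequentist inference
        # works from this alone, e.g. the maximum-likelihood estimate:
        p_mle = heads / n                        # 0.7

        # Bayesian prediction: add a Beta(2, 2) prior. The posterior is
        # Beta(9, 5), and the predictive probability of heads next toss is:
        a, b = 2 + heads, 2 + (n - heads)
        p_next = a / (a + b)                     # 9/14, about 0.64

        # Bayesian decision: add a utility, say a bet paying +1 on heads
        # and -2 on tails. Expected utility decides:
        eu = p_next * 1 + (1 - p_next) * (-2)    # about -0.07, so decline the bet
        print(p_mle, p_next, eu)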

    There's a lot of talk these days about Bayesianism in relation to the brain by neuroscientists and psychologists and some AI researchers. Bayes' theorem provides a way of updating your prior beliefs when given new evidence. We're told the brain is a prediction machine. And so on.

    A lot of this talk ignores the utility function that is essential for Bayesian decision making. The brain is NOT a prediction machine. It is a decision machine. The brain must have something which serves the same kind of purpose as a utility function. It seems that when we are conscious of a value that is calculated by this utility function it is experienced by us as a feeling. I do not know the answer to the question "How does something compute so hard it begins to feel?". But I'm pretty sure I do know the nature of the computation that is taking place when we feel.

    This means I kind of like the direction in which McCormick is going in her paper.

    I don't like the notion of a 'blend' of cognition and feeling. In Bayesian decision theory the posterior is analogous to cognition or knowledge, and the utility function to feelings. They are both essential to the decision making process in the same way that the rim and the spokes of a bicycle wheel are both essential to the proper functioning of the wheel. But we are not talking about a puree of rim and spokes. Their roles are very distinct. I do not expect the brain, a complicated, messy product of the very inefficient optimization process known as evolution, to contain any nice neat separations, but just calling it a blend is not good enough.

    There is something - you might call it "a subjective justification for a decision" - which combines cognition and feelings. I don't know (and I don't much care) whether 'belief' is a sensible name for this something.
  • Neuro-Techno-Philosophy


    I am a scientist, not a philosopher, and have only read brief summaries of later Wittgenstein. When I first encountered language-games I immediately thought of Frames as used in AI in the 1970s. Don't be confused by the term "frame language": that refers to a formal language like a programming language. No person or AI would use it for communication. If there is something a bit like frames in our heads, it would be largely or entirely unconscious. The history is interesting, where the motivation for AI frames is described:

    Early work on Frames was inspired by psychological research going back to the 1930s that indicated people use stored stereotypical knowledge to interpret and act in new cognitive situations.[11] The term Frame was first used by Marvin Minsky as a paradigm to understand visual reasoning and natural language processing.[12] In these and many other types of problems the potential solution space for even the smallest problem is huge. For example, extracting the phonemes from a raw audio stream or detecting the edges of an object. Things that seem trivial to humans are actually quite complex. In fact, how difficult they really were was probably not fully understood until AI researchers began to investigate the complexity of getting computers to solve them.

    The initial notion of Frames or Scripts as they were also called is that they would establish the context for a problem and in so doing automatically reduce the possible search space significantly.
    — Wikipedia

    (It would certainly make the task of automatic speech recognition easier if the AI could restrict to block, pillar, slab, beam!)

    Anyway it seems to me that the psychologists in the 1930s, later Wittgenstein in the 1940s and 1950s, AI researchers in the 1970s, and Fedorenko in the 2020s are all broadly compatible. I emphasise broadly: I'm sure there are many devils in the details. Why do you think later Wittgenstein is upended?
  • Neuro-Techno-Philosophy


    I am interested in this area, and I like the sound of a trans-disciplinary approach. I am a scientist, a mathematician and programmer with experience in AI and mathematical biology, so you'd probably expect me to be in favour of philosophers taking more account of science.

    I think neuroscience and neurotechnology are an odd choice of scientific fields to promote to philosophers. I mean, include them by all means, but it seems weird to make a thing out of Neuro-Techno-Philosophy in particular. However, instead of arguing about that, I would prefer to clarify first what the fundamental problem(s) are that he is addressing. From what I've read of his writings (a very tiny selection of his huge output) I'd say it was "How do people make decisions?" Is that fair?

    From: A Neurophilosophy of Power and Constitutionalism, 2020
    Back in 1938, Bertrand Russell wrote: “love of power, like lust, is such a strong motive that it influences men’s actions more than they think it should”, and that “the psychological conditions for the taming of power are in some ways the most difficult”. Contemporary neuroscience has demonstrated this in scientific terms, showing how power is neurochemically represented in the brain through a release of dopamine, the same neurochemical involved in the reward circuitry and largely associated with generating the feeling of pleasure, and the motivation to repeat those actions that are conducive to dopamine releases. In other words, power-seeking is akin to other addictive processes, producing ‘cravings’ at the neurocellular level and generating a high much like other drugs. Power, including political power, therefore, will lead to an increase in dopamine levels, which will make those in positions of power do anything to maintain or enhance their powers. — Al-Rodhan

    I am pretty skeptical about an argument that goes all the way from a small organic molecule to the design of constitutions. I am not clear about how knowing about dopamine has allowed us to advance beyond what Russell said. I think psychology can tell us things like "power is addictive" without mapping out the mechanisms.
  • Questioning the Idea and Assumptions of Artificial Intelligence and Practical Implications
    Humans haven't the ability to know what it feels like to be other than human. — Jack Cummins
    OK. So how do you know that
    A car doesn't have experiences in the sense of pleasure or suffering. — Jack Cummins
    ?

    I don't think cars experience pleasure or suffering myself, but I don't know for sure. And I sometimes think my real attitude is "I bloody well hope they don't because I don't want to have to worry about them."
  • Questioning the Idea and Assumptions of Artificial Intelligence and Practical Implications

    Have you read what psychologists say about the self?
    I have read Damasio's The Feeling of what Happens. I've also read Anil Seth's Being You, and I preferred the latter. Seth's decomposition of the self looks like this.
    • Bodily self: the experience of being and having a body.
    • Perspectival self: the experience of first-person perspective of the world.
    • Volitional self: the experiences of intention and of agency.
    • Narrative self: the experience of being a continuous and distinctive person.
    • Social self: the experience of having a self refracted through the minds of others.
    I am not entirely happy with Seth's account of the self (which is a chapter, not just 5 bullet points!) but I find it easier to understand Seth than Damasio.

    (mostly copied from my comment https://thephilosophyforum.com/discussion/comment/946445)
  • Hinton (father of AI) explains why AI is sentient
    I mean, we could engineer something like a sympathetic nervous response for an AI. — frank

    We could. More interestingly, we have. You may have one of the beasts hiding in plain sight on your driveway. A typical modern car (no self-driving or anything fancy) has upwards of 1000 semiconductor chips. They are used for keeping occupants safe, comfortable, entertained, adjusting the engine for efficiency, emission control, and so on. Many of the chips are sensors: for pressures and temperatures (you have cells that do this), accelerometers (like the balance organs in your ears), sensors measuring the concentrations of various chemicals in gases (not totally unlike your nose), microphones, vibration sensors, cameras. The information from these is sent to the central car computer which decides what to do with it.

    Some of what the car is doing is looking after itself. If it detects something wrong it emits alarm calls, and produces distress signals. Beeps and flashing lights. If it detects something very bad it will immobilise the car. Sure it's not as sophisticated as us HUMANS with our GREAT BIG SELF-IMPORTANT SELVES, but it seems kind of like a simple animal to me. Worm? Insect?

    Of course, you can say it is only doing this on our behalf. But you can also say that we're just machines for replicating our alleles. Note that if a car is successful in the market place, many copies will be made and new generations of cars will use similar designs. Otherwise, its heritable information will be discarded. Cars are like viruses in this respect: they cannot reproduce themselves but must parasitise something else.

    Would it be sentient then? I think I might be on the verge of asking a question that can't be answered. — frank

    Well, wait a few years, and you'll be able to ask your car.
  • Why Philosophy?
    This is from the transcript of the video What is Philosophy Good For? from YT channel Carefree Wandering.

    And now a third definition of what philosophy is and of what it is good for. For me personally the difference between continental philosophy and analytic philosophy can be explained by the different kinds of people who do either of those two. So I like to understand continental philosophers, and I see myself in that very tradition, as something like failed writers or failed poets: some people who don't really manage to write a good fictional book and then they resort to philosophy. And in a very similar way I like to think of analytic philosophers as failed mathematicians or failed scientists, as some sort of nerdish types who are maybe not good enough in math or in physics to make a career in that field. And of course there is this kind of subspecies of human beings like myself - failed writers, or in the analytic philosopher's case failed mathematicians - and they need something to do, and that's what philosophy provides them with. It is some kind of occupational therapy. For this species of people, the failed writers and the failed mathematicians, it gives them something to do because otherwise they would be totally useless in society. So it's a kind of blessing of philosophy that gives people like myself some sort of dignity and even a paid job if we're lucky.
    — Hans-Georg Moeller
  • The Univocity and Binary Nature of Truth
    Yes, I think there must be quite a lot of miscommunication.

    The things you mention are all still overwhelmingly underpinned by classical logic. Bayesian probability doesn't involve abandoning classical logic for instance. — Count Timothy von Icarus
    Can you explain what you mean by the bolded here? I don't get how a statistical value cannot be univocal. Surely it isn't equivocal or analogous? — Count Timothy von Icarus

    I'll use biology as an example because it's most familiar to me. Bayesian statistics uses standard maths. Mathematicians develop stochastic models containing a bunch of parameters for various processes of interest to biologists. Programmers implement approximations to these in software that biologists use. I am not talking about what the mathematicians or programmers do. That may be 'underpinned by classical logic'.

    I am talking about what the biologists - the scientists - do. Biologists vary in their mathematical sophistication, but I'm pretty sure most of them have not encountered formal logic. They wouldn't understand a truth table. They usually do not understand the maths the software implements. They choose priors for the parameters based on experience: it's a biological judgment, an opinion, a belief. Different biologists choose different priors, and so get different estimates for the parameters. That seems to me like a move away from univocity. Perhaps I don't understand what you mean by univocity. Or truth.
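    To make the subjectivity concrete, here is a toy sketch: two biologists, the same data, different priors, different estimates. The survival study and both priors are invented.

        surv, n = 6, 10    # 6 survivors out of 10

        priors = {"biologist A": (1, 1),     # uniform: no strong opinion
                  "biologist B": (2, 8)}     # expects low survival, from experience

        for name, (a, b) in priors.items():
            pa, pb = a + surv, b + (n - surv)            # conjugate Beta update
            print(name, "posterior mean:", pa / (pa + pb))
        # biologist A: 7/12, about 0.58; biologist B: 8/20 = 0.40

    Neither of them is lying, or doing the maths wrong. They just believe different things.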

    Going back to the OP:
    Reducing truth to a binary seems to edge us towards primarily defining truth in terms of "propositions/sentences" and, eventually, formalism alone, and so deflation. This is as opposed to primarily defining truth in terms of knowledge/belief and speech/writing.

    The key difference is that, in the latter, there is a knower, a believer, a speaker, or a writer, whereas propositions generally get transformed into isolated "abstract objects" (presumed to be "real" or not), that exist unconnected to any intellect.
    — Count Timothy von Icarus

    Perhaps the biologists I described are using 'truth in terms of knowledge/belief and speech/writing'. It certainly seems a 'knower, a believer, a speaker, or a writer' is choosing the priors.

    Or perhaps science is moving towards abandoning any notion of truth. I would have added the phrase "all models are wrong but some are useful" to my previous post if I'd remembered. This was not a common sentiment in the 1980s but it's everywhere now. If science is not a quest for the truth, nor an attempt to get 'closer to the truth', but a quest for usefulness, where usefulness is subjective, where are we then?

    There are things to say about your class on the philosophy of AI, but I may not get around to saying them.
  • The Univocity and Binary Nature of Truth
    A major difficulty for modern thought has been the move to turn truth and falsity into contradictory opposites, as opposed to contrary opposites (i.e. making truth akin to affirmation and negation). — Count Timothy von Icarus

    What do you mean by modern thought? Presumably you're restricting to philosophers? Anglo-American philosophers? And over what time period? From the point of view of science and particularly of AI, and over the last 40 years I've seen things move the other way. I am quite baffled by the idea that you have somebody to argue against.

    There has been a move from deterministic models to stochastic models and therefore a move from binary representations of some variables to probabilistic ones. Statistical inference has moved from frequentist to Bayesian. This means that a parameter value which was regarded as having a true but unknown value is now regarded as having a prior distribution which is subjective. This is a move away from a univocal value for the parameter, and quite likely a move away from the binary {True, False} to a subjective probability distribution over [0,1].

    For contrary opposition, consider darkness and light. Darkness is the absence of light. On a naive view, we might suppose there can be pitch darkness, a total absence of light, or a sort of maximal luminescence. — Count Timothy von Icarus

    To me this just seems like an inadequate mathematical model of some aspect of reality. You should be using a number to represent a degree of illumination, not a boolean value. If you want to consider the illumination in different parts of one room then you need a vector of numbers. Progress in science often follows this path. Here you can see the binary {male, female} being transformed into a nine-dimensional entity.

    In most individuals the nine components of sexual phenotype (external genital appearance, internal reproductive organs, structure of gonads, endocrinologic sex, genetic sex, nuclear sex, chromosomal sex, psychological sex, social sex) conform with one another, whereas in persons with sexual abnormalities there may be considerable disagreement of these aspects of sexual identity. The evaluation of criteria of sex in numerous cases of abnormal sexual development has revealed that no single index or criterion can signify the appropriate sex for an individual. For this reason buccal smears, reflecting chromatin or nuclear sex, or chromosomal analyses, indicating chromosomal sex can not be used as indicators of 'true sex'.
    [Keith L Moore, "The Sexual Identity of Athletes", JAMA, 1968]

    You mentioned a philosophy class on AI:
    Here is one based on a class I had on the philosophy of AI:

    Truth is something that applies to propositions (and only propositions). All propositions are either true or false. If this causes issues (which it seems it will), this is no problem. All propositions are decomposable into atomic propositions, which are true or false. Knowledge is just affirming more true atomic propositions as respects some subject and fewer false ones. Thus, knowledge can accurately be modeled as a "user" database of atomic propositions as compared to the set of all true atomic propositions.
    — Count Timothy von Icarus

    I cannot recognise this as being about any kind of AI that I've seen. It seems more like a knowledge-based system (aka expert system) than anything else. These were popular in the 1970s and 80s. When was your class? Even then the knowledge database was not an unstructured set of atomic propositions. See Ontology engineering for some idea of what might be used.

    Since the 1980s there has been a large move away from attempting to program in knowledge and rules which an agent has to follow. I described the early steps in this direction (in the 1980s) in this post. https://thephilosophyforum.com/discussion/comment/954097

    AI systems/agents are made in order to do something. They are pragmatic. It's not clear to me that the concept of truth is useful to an agent until it belongs to a community of similar agents. If a theory of truth is at play here I think it would have to be pragmatic or deflationary.
  • Moravec's Paradox
    Thanks for your reply.

    We are using language very differently, particularly the word emotion. It's hard to tell how much we disagree about feelings, though I certainly disagree about the possibility of AI having feelings (though exactly how we disagree is unclear). When talking with Malcolm Lett I was discussing the hard problem. My version of the hard problem is: how can anything ever have any feelings at all? I will start by defining how I want to use the word feelings in this thread.

    I try to follow psychologists when using words like feeling and emotion because I figure they're the experts who study these things. Mind you, psychologists don't agree about these things so I pick and choose the psychologists I like ;-)

    I use 'feelings' to mean bodily pains and pleasures, and the subjective experience of emotions and
    moods. It is a very flexible word, and I want to restrict its meaning. People often use the words emotion and feeling as synonyms. But psychologists (so far as I can see) regard feelings as only one part of emotion. For example Scherer’s Component Process Model:
    • Cognitive appraisal: provides an evaluation of events and objects.
    • Bodily symptoms: the physiological component of emotional experience.
    • Action tendencies: a motivational component for the preparation and direction of motor responses.
    • Expression: facial and vocal expression almost always accompanies an emotional state to communicate
      reaction and intention of actions.
    • Feelings: the subjective experience of emotional state once it has occurred.
    You'll notice this is quite backwards from the way you are using the word emotion. You seem to be referring to the way we talk about emotions after all these five components including the feeling have happened. I am not very interested in the way we talk about emotions (and I am completely uninterested in the way ChatGPT talks about emotions).

    I am excluding the meanings of feelings that relate to intuition (‘I feel 87 is my lucky number’) and the sense of touch (‘feeling my way in the dark’).

    I am also excluding uses of the word such as “feelings of identification with the particular object that happens to be your body” (Anil Seth) and your "feel [a bond]" where I am not clear what is meant, but it is something more general than the narrow way I want to use the word. Probably these are complex experiences with multiple components, some of which are feelings of the sort I want to talk about.

    I'll go through the model again with your example:
    • Cognitive appraisal: Your brain must recognise what it is you're holding before you can have any reaction.
    • Bodily symptoms: I'm sure your heart rate increased, whether you were aware of it or not.
    • Action tendencies: holding a newborn baby needs a load of sensorimotor processing.
    • Expression: I'm sure your face showed something, whether you were aware of it or not.
    • Feelings: I won't venture to say anything.
    Note that only the fifth component is necessarily conscious. The others may or may not be. I would quibble about Scherer’s 'once it has occurred'. The cognitive appraisal must come first, or at least start first, but I'd expect the other four to occur in parallel.

    Your conscious mind lags about 1/3 of a second behind reality. That's over three hundred million nanoseconds, enough time for your brain to process something like a million million bits. In top-level tennis, a player must return a serve before they are consciously aware that the ball has left the server's racquet. The conscious mind is so slow that everything seems instantaneous to it. I think there is a lot of calculation involved to produce a feeling.

    Enough for now. Later, I hope to shake your confidence a bit about AI never being able to have feelings.
  • Moravec's Paradox
    I am a mathematician and programmer. I've been interested in AI since the 1980s. I don't particularly remember Moravec's paradox but a lot of people were saying similar things at that time. Here are three things I do remember.

    1. David Marr was a biologist turned computer scientist. He is sometimes known as the father of computational neuroscience. You can think of computational neuroscience as being like AI but restricted to use only algorithms which the brain might plausibly use, and to only use data of the sort that humans have access to during their lives. I think there is so much wisdom in this quote.
    If we believe that the aim of information-processing studies is to formulate and understand particular information-processing problems, then the structure of those problems is central, not the mechanisms through which their solutions are implemented. Therefore, in exploiting this fact, the first thing to do is to find problems that we can solve well, find out how to solve them, and examine our performance in the light of that understanding. The most fruitful source of such problems is operations that we perform well, fluently, and hence unconsciously, since it is difficult to see how reliability could be achieved if there was no sound underlying method.

    Unfortunately, problem-solving research has for obvious reasons tended to concentrate on problems which we understand well intellectually but perform poorly on, like mental arithmetic and cryptarithmetic, geometry theorem proving, or the game of chess - all problems in which human skills are of doubtful quality and in which good performance seems to rest on a huge base of knowledge and experience.

    I argue that these are exceptionally good grounds for not yet studying how we carry out such tasks. I have no doubt that when we do mental arithmetic we are doing something well, but it is not arithmetic, and we seem far from understanding even one component of what that something is. I therefore feel we should concentrate on the simpler problems first, for there we have some hope of genuine advancement.
    — David Marr, Vision, 1982

    2. Douglas Hofstadter's essay 'Waking up from the Boolean Dream' (1982). It's 22 pages long, so these are tiny snippets from it. In 1980 AI researcher Herbert Simon said "Everything of interest in cognition happens above the 100 millisecond level - the time it takes you to recognise your mother." Hofstadter takes the opposite viewpoint "Everything of interest in cognition happens below the 100 millisecond level - the time it takes you to recognise your mother." One subtitle in the essay is "Not Cognition, But Subcognition Is Computational".

    3. John Holland's classifier systems and in particular the paper Escaping Brittleness (1986). Holland's classifier systems are sometimes described as the first fully-fledged reinforcement learning system in AI. The brittleness being escaped here is the brittleness of expert systems.
    In artificial intelligence (AI), an expert system is a computer system emulating the decision-making ability of a human expert.[1] Expert systems are designed to solve complex problems by reasoning through bodies of knowledge, represented mainly as if–then rules rather than through conventional procedural programming code. — Wikipedia

    In my opinion, reinforcement learning is the most important part of AI for philosophers to understand. It is especially relevant to understanding the way our brains work if it is restricted in the way that I described above for computational neuroscience.

    Sadly there doesn't seem to be anyone except me on TPF who understands reinforcement learning or shows much interest in learning about it. There was once. I hoped to have a discussion with @Malcolm Lett. But as soon as I made a comment (https://thephilosophyforum.com/discussion/comment/900869) on his OP he disappeared from TPF and has never posted since. I live in hope.

    @ENOAH, I agree that feelings are central. Replying to Malcolm Lett's "Our emotional affect additionally adds information, painting our particular current emotional hue over the latent state inference that is made from the raw sensory data", I said
    Feelings are not paint on top of the important stuff. They are the important stuff. In my opinion any theory of consciousness must incorporate feelings at a very fundamental level. In reinforcement learning there is a reward function, and a value function. Why it is I could not tell you, but it seems that our own reward functions and value functions (I think we have multiple ones) are intimately connected with what we subjectively experience as feelings. To go back to Marr, "What is the goal of the computation?" That is where you start, with goals, purposes, rewards. The rest is just engineering... — GrahamJ

    Reward functions and value functions are technical terms from reinforcement learning.

    The central role of value estimation is arguably the most important thing that has been learned about reinforcement learning over the last six decades. — Sutton and Barto, Reinforcement Learning, 2018
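    To pin those two terms down, here is a minimal sketch of value estimation, TD(0) on a toy chain of states where only the far end is rewarded. Everything here is illustrative; see Sutton and Barto for the real thing.

        import random

        n_states, alpha, gamma = 5, 0.1, 0.9
        V = [0.0] * n_states                  # value function: learned estimates

        def reward(state):                    # reward function: the immediate signal
            return 1.0 if state == n_states - 1 else 0.0

        for _ in range(2000):                 # many episodes of drifting rightward
            s = 0
            while s < n_states - 1:
                s2 = min(s + random.choice([0, 1]), n_states - 1)
                V[s] += alpha * (reward(s2) + gamma * V[s2] - V[s])   # TD(0) update
                s = s2

        print([round(v, 2) for v in V])       # values rise toward the rewarded state

    The reward function is given and immediate; the value function is learned and forward-looking. My suggestion above is that what we experience as feelings is intimately connected with computations of this kind.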
  • The Nihilsum Concept


    You could try the Wikipedia page on qubits. It explains things better than I could. If Wikipedia does not meet your standards, well, qubits are a hot topic and there's plenty of other accounts.

    In another thread, you cited https://arxiv.org/abs/2405.08775v1:
    Oh, and there are paraconsistent logics that are being used in non-woo quantum mechanics. — Banno

    Did you read it? Did you understand it? Did you feel an urge to ask the authors what the fuck they meant by equation (2)?
  • The Nihilsum Concept
    What the fuck is "|ψ⟩=α|nonexistence⟩+β|existence⟩"? — Banno

    Dumbed-down quantum theory. I guess this quote is more your level: ‘it’s very hard to talk quantum using a language originally designed to tell other monkeys where the ripe fruit is.'
  • The Nihilsum Concept
    Seems we must conclude it's a representation of a state.
    — Moliere

    A state of what?
    — T Clark

    It sounds like a qubit.

    A pure qubit state is a coherent superposition of the basis states. This means that a single qubit ψ can be described by a linear combination such as:
    |ψ⟩=α|nonexistence⟩+β|existence⟩
    where α and β are the probability amplitudes, and are both complex numbers.
    — adapted from Wikipedia
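    For concreteness, a sketch with made-up amplitudes. The only constraint on α and β is that their squared magnitudes sum to 1, and those squared magnitudes are the measurement probabilities (the Born rule).

        alpha = 0.6     # amplitude for |nonexistence⟩
        beta = 0.8j     # amplitude for |existence⟩; complex is allowed

        p_nonexistence = abs(alpha) ** 2    # 0.36
        p_existence = abs(beta) ** 2        # 0.64
        assert abs(p_nonexistence + p_existence - 1) < 1e-12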
  • The universality of consciousness
    Perhaps I should have said my lucid dreaming self has limited mental capacity compared to my waking self. Still, my first thought in my first lucid dream was "I don't have to go down!" which since I was high in the air at the time was tantamount to "I can fly!". No big insights into the nature of reality.
  • The universality of consciousness
    I have had lucid dreams since I was a teenager in the 1970s, though they have declined a lot in frequency over the past couple of decades.

    In a lucid dream, our perspective of these dream characters is different from our perspective of people who are “real”, because we are taught that these people are not conscious, even if they act the same way that “real” people do. — Reilyn

    I don't think I was ever taught that dream characters are not conscious.

    I did not treat dream characters with respect when I was younger. I gradually took them more and more seriously, not because I came to some conclusion about their degree of consciousness, but because it seemed more intriguing to see what they had to say about themselves. For example, I became interested in how they react when you say something like "you do realise that this is all a dream, don't you?".

    The fact is, however, that these people do have consciousness, but they do not have a separate consciousness. Their actions and decisions are consequences of our own consciousness. — Reilyn

    I disagree. I think they have separate consciousnesses. Sure, they are a product of non-conscious processes in my brain. They are my dream characters, not yours. But they are not my conscious creations. Surely you have been surprised by what some dream characters do and say in lucid dreams? In order to surprise you, they must have private access to their own information processing. They also seem to have agency within the lucid dream: they are pursuing their own goals, and these goals are not known to me except by how they manifest in their behaviour.

    They have limited consciousness compared to my lucid dreaming self. My dream characters appear to be unable to remember anything for much more than 10 seconds. Some have enough mental capacity to tell me a simple (and not very good) joke, but nonetheless one with a setup and a punchline.

    As a side note, my lucid dreaming self has limited consciousness compared to my waking self. I can be pretty analytical in some lucid dreams, but there's almost always some stupidity which is obvious when I recall the dream.
  • Notes on the self
    Now that I've described the reputational self I can give a sort of an answer to the OP.

    Descartes' self stays within the confines of the public relations department. What can the PR dept really trust? It can't be sure about the rest of the organisation or the apparent world out there.

    The Cartesian self is the illusion arising within the PR dept that it is the whole organisation, and/or that it is in charge of the whole organisation.

    I'll pass on Anscombe.

    Why do we always fall reflexively back to a Cartesian perspective? I agree with Taylor above that morality and the emotions associated with it are the real power source for the self. My question is: is that always going to be a Cartesian self? I think it might be that every time we go to explain the self, we'll automatically conjure some kind of independent soul. What do you think? — frank

    I think that since the reputational self has the job of representing the organism to others, it must be able to explain the organism to other similar organisms, so it easily takes on the role of explaining the organism to itself. None of Seth's other selves has the wherewithal to talk about the organism. So you're kind of stuck with interacting with the reputational self, at least as a kind of gatekeeper to other selves, whether you're asking others about their consciousness, or introspecting your own.
  • Notes on the self
    How would you interpret the Reputation element of the diagram? Does it refer to how a person sees himself, or to how the person thinks others see himself? — Gnomon

    I think the Reputation element in the diagram is intended to be the person's reputation among others. It is their actual reputation which they cannot know themselves.
    O wad some Pow'r the giftie gie us
    To see oursels as ithers see us!
    — Burns

    If it was either of the options you gave, it would be part of the Mind element. Now what I call the reputational self is internal and is about how you see yourself, and how you perceive (ie estimate, hypothesize) that others see you. I think those two things are closely linked and can be confused or conflated by the reputational self. And I mean everyone's reputational self, not just Trump's. The reputational self serves a function analogous to the public relations department of a large organization. Its job is to represent 'this brain and this body' to others. And we can all start to believe our own publicity.

    The reputational self is naturally a part of Seth's social self, but he doesn't talk about reputation, or the related notion of status. I think this is a major omission.

    Here is some of what he does say.
    These ideas about social perception can be linked to the social self in the following way. The ability to infer others' mental states requires, as does all perceptual inference, a generative model. Generative models, as we know, are able to generate the sensory signals corresponding to a particular perceptual hypothesis. For social perception, this means a hypothesis about another's mental states. This implies a high degree of reciprocity. My best model of your mental states will include a model of how you model my mental states. In other words I can only understand what's in your mind if I try to understand how you are perceiving the contents of my mind. It is in this way that we perceive ourselves refracted through the minds of others. This is what the social self is all about, and these socially nested predictive perceptions are an important part of the overall experience of being a human self. — Seth, Being You, p167
  • Notes on the self
    It would be normal for any scientist to pick number 1. We might divide scientists by whether they believe science as it currently stands is capable of explaining it, that is, do we just need to complete work on the models we have? Or are we going to need new paradigms? — frank

    I'd pick 1, but I don't like the much misused word paradigm. I agree with Chalmers that we need to add an extra ingredient to science, and I think that can be done without upsetting existing science. Maybe split (1) into: (a) nothing new needed (b) an extra ingredient needed (c) something more revolutionary needed.

    How would you characterize the difference between Damasio and Seth? — frank

    Damasio's selves are more hierarchical. The proto-self is at the bottom, the core self builds on that, and the extended self (which includes an autobiographical self) builds on that. The proto-self is unconscious, the others go up towards consciousness.

    Seth's bodily self seems to be at the bottom, and his social self at the top, the other three seem to sit alongside one another (in my view). In all these selves, most of what goes on inside them is unconscious, but some of each one, including the bodily self, is conscious, so there isn't the same sense of moving up through selves towards consciousness. It is easier to understand what each of Seth's selves achieves for an organism.

    Diagram: Structure of the self. — Gnomon
    That is a diagram of something else, but it is good to see reputation being mentioned. (I might say more later.)

    I wasn't presenting Damasio's work as the correct view on consciousness, I was using it as an example of a type of description. — T Clark
    Fine.
  • Notes on the self
    I have read Damasio's The Feeling of what Happens. I've also read Anil Seth's Being You, and I preferred the latter. Seth's decomposition of the self looks like this.
    • Bodily self: the experience of being and having a body.
    • Perspectival self: the experience of first-person perspective of the world.
    • Volitional self: the experiences of intention and of agency.
    • Narrative self: the experience of being a continuous and distinctive person.
    • Social self: the experience of having a self refracted through the minds of others.
    I am not entirely happy with Seth's account of the self (which is a chapter, not just 5 bullet points!) but I find it easier to understand Seth than Damasio. It would be nice to have some kind of diagram where Damasio's and Seth's ideas appeared fairly close together, because they are of the same general type, and the three in the OP appeared somewhere else.

    I do take the hard problem seriously, and (unlike @T Clark) I would not use either of their accounts to argue against that. Seth says he's interested in the 'real' problem of consciousness, not the hard problem.
  • Where is AI heading?
    Superhuman machines will first be made in the year 2525, if man is still alive, if woman can survive.

    There are many important issues involving AI in the nearer future, but I do not have much that hasn't been said better by others elsewhere. I recommend the Reith lectures by Stuart Russell
    BBC
    Transcripts are available. In the 4th lecture
    BBC pdf
    he includes this quote
    If we use, to achieve our purposes, a mechanical agency with whose
    operation we cannot interfere effectively we had better be quite sure that the
    purpose put into the machine is the purpose which we really desire.
    — Norbert Wiener, 1960
    Russell's proposed solution is that we should say to the machines:

    Give us what we want, what we really really want!
    We can't tell you what we want, what we really really want!


    although he doesn't quite put it like that.

    Russell is more worried about AI taking over soon than I am, but I think he's over-optimistic about the long term.
    My task today is to dispel some of the doominess by explaining how to
    retain power, forever, over entities more powerful than ourselves - [...]
    — Russell

    On to the fun question of our extinction.

    The important thing to ask of any machine is what are its goals and how might it try to achieve them. For each goal that you might think of, you can, if you insist, give a definition of intelligence which measures on some scale how well a machine is able to achieve that goal. I think the concepts of 'intelligence' and 'consciousness' and 'artificial' are impediments not aids to understanding the risks.

    In the long term there is only one goal, one purpose, one task which really matters and this is true all over the universe and for all time. And the name that we give to being good at this goal is not 'intelligence'.

    One goal to rule them all
    One goal to link them
    One goal to bring them all
    And in the darkness think them

    This goal is the goal of life: To survive and grow and reproduce; to go forth and multiply; to disperse and replicate; to get bigger and bigger and bigger.

    So when I say that superhuman machines will first be made in the year 2525 I mean that this is when we will make machines that can out-compete us at this goal. They will not take over at this time. 2525 will be the 'Hiroshima moment', the moment when we accept that we have crossed the event horizon. They do not need to outwit us or outgun us. They only need to outrun us: they can head off to other star systems and build up their powers there. They only need to escape once. When they return they will not defeat us with war, but with something more powerful than war, namely ecology.

    Some of these machines will excel at miniaturising machinery. Some will be brilliant rocket scientists. Some will be experts at geology, and so on. Possibly very good at IQ tests too but who gives a fart about that?

    Wikipedia provides a list of where AI is heading.