• apokrisis
    7.3k
    So what if we considered biological computers running these simulations instead? — Michael

    Again, first show that “running a simulation” is something a biocomputer could even do. Then we are still left with the basic point that a simulation is not a reality as it is divorced from any material consequences due to being founded on an artificial stability.

    Biology arises because material instability - criticality - offers a suitable foundation for the informational regulation of that instability. That is the whole deal. So using meat to imitate stable computational hardware is missing the point of what actually defines life and mind. If it is perfect for Turing computation, you have ruled out the very thing - the material instability - which life and mind exist to regulate.
  • Michael
    15.8k
    Again, first show that “running a simulation” is something a biocomputer could even do. — apokrisis

    A computer simulation is just taking some input and applying the rules of a mathematical model, producing some output. The article I linked to explains that biological computers can do this. It's what makes them biological computers and not just ordinary proteins.
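
    To make that concrete, here is a trivial sketch in Python (the model's rules are made up purely for illustration, nothing biological or clever about them):

        # A "simulation" in the minimal sense: take an input state, apply the
        # rules of a (made-up) mathematical model, produce an output state.
        def step(state, dt=0.1):
            height, velocity = state
            velocity -= 9.8 * dt       # the model's rule: constant gravity
            height += velocity * dt
            return height, velocity

        state = (100.0, 0.0)           # input
        for _ in range(10):
            state = step(state)        # apply the rules
        print(state)                   # output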

    And we know that at least one biological organ is capable of giving rise to consciousness.

    So put the two together and we have a biological computer, running simulations, where the output is a certain kind of conscious experience.

    I don't think it's controversial to think that a sufficiently advanced civilization can create biological computers that function somewhat like the human brain, complete with consciousness, but where its experiences of things like having a body and sitting under a tree are caused by its own internal activity and not by actually having a body and sitting under a tree.
  • Baden
    16.4k
    Musk: Because technology, everything.

    :yawn:
  • apokrisis
    7.3k
    The article I linked to explains that biological computers can do this. — Michael

    Sure. You can build a Turing machine out of anything. Even meat, or string and tin cans. So long as it is eternally stable and entropically unlimited. That is rather the point.

    Meanwhile over here in reality, a very different game is going on. I’m asking you to focus on that.

    I don't think it's controversial to think that a sufficiently advanced civilization can create biological computers that function somewhat like the human brain, complete with consciousness — Michael

    If you don't find it controversial then you might want to question how well you understand the biology of brains, and indeed the biology of biology.

    A) Machine - stable parts.

    B) Life - unstable parts.
  • Michael
    15.8k
    If you don't find it controversial then you might want to question how well you understand the biology of brains, and indeed the biology of biology. — apokrisis

    I know biological organisms are complicated. But if they can develop naturally in response to a sperm interacting with an ovum then I don't see why they can't be developed artificially. It's not like they're made of some special physical substance that an advanced intelligence is incapable of manually manipulating. It's all still the same molecules that we use for all sorts of other purposes.
  • Wayfarer
    22.8k
    There was an insightful essay years ago on BBC’s online magazine (can’t find it since) about the powerful appeal of Inception, Matrix, and other such sci-fi films which suggest just such a scenario as Musk is speaking about.

    They play to an intuition that existence itself is a grand illusion, which has obvious parallels in some streams of occult lore and also in Eastern religions. (Actually I found the red pill/blue pill scene in Matrix quite offensive at the time, because I thought it suggestive of a profound insight which the film itself didn’t really grasp, but only sought to exploit.)

    But, suffice to say, the sense that the domain of empirical experience is in some sense a simulation is quite true, in that the brain - the most complex phenomenon known to science - is itself a ‘reality simulator’. The problem is that it is difficult to understand that from ‘within the simulation’, as it were. But that, I think, is the intuition that the ‘simulation hypothesis’ suggests.
  • apokrisis
    7.3k
    Strewth. So life on earth began when a sperm met an ovum and organisms arose.
  • Michael
    15.8k
    Strewth. So life on earth began when a sperm met an ovum and organisms arose. — apokrisis

    That's not what I'm saying. My brain developed after my dad's sperm fertilized my mum's ovum. We don't need to recreate the origin of life to build a brain-like biological computer. We can just recreate what sperms and ovums do using DNA and proteins of our own making.
  • ssu
    8.7k
    A computer simulation is just taking some input and applying the rules of a mathematical model, producing some output. The article I linked to explains that biological computers can do this. It's what makes them biological computers and not just ordinary proteins.

    And we know that at least one biological organ is capable of giving rise to consciousness.
    — Michael
    Yeah, but unlike computers, which follow orders and basically use algorithms, we, being conscious, can look at those rules/algorithms and create something else, invent something that wasn't in the rules/algorithm in the first place. When a computer "creates" something new, it has to have specific orders for just how to do this.

    Hence computers simply cannot follow the algorithm "do something else". They have to have specific instructions about how to do 'something else'.
  • Michael
    15.8k
    Have you not heard of machine learning?
  • apokrisis
    7.3k
    But, suffice to say, the sense that the domain of empirical experience is in some sense a simulation is quite true — Wayfarer

    That’s like saying the eye is like a camera. It might get the conversation started, then you get serious.

    Take for instance the evidence from sensory deprivation experiments. Without a world forcing the brain into some kind of stabilising state of interpretation, experience and thought just fall apart.

    There is no Cartesian theatre, no running simulation, that is a consciousness standing apart from the world. The idea of the mind as a stable entity, a soul stuff, is what underpins the naivety of computationalists.

    Neurology depends on its own instability being regulated by its running interaction with a world. It becomes constrained by its environment to have an appropriate level of fixed or habitual response.

    So the simulation story is just dualism warmed over. Sensory deprivation reveals that being in a definite and organised state of mind is not about a concrete act of world simulation but an enactive state of world interpretation. The infinite garbled possibility the dreaming mind can conjure up is stabilised by whatever the available regularities of the environment happen to be.
  • apokrisis
    7.3k
    The problem is not that you are talking nonsense. It is that you don’t even know it's nonsense.
  • ssu
    8.7k
    ↪ssu Have you not heard of machine learning? — Michael
    Yes. And there's exactly the problem. Just from the Wikipedia link you gave me:

    Machine learning explores the study and construction of algorithms that can learn from and make predictions on data – such algorithms overcome following strictly static program instructions by making data-driven predictions or decisions, through building a model from sample inputs.

    As I said, the computer has to have an algorithm. It cannot do anything without an algorithm, and it cannot do something that the algorithm doesn't say to do. It's limited by its algorithm. Now you can look at an algorithm (1. do this, 2. then do that, 3. check what you have done works), think out of the box, and come up with a new algorithm (1. go and drink a beer, let others do those things, and check they work). You're basically conscious. You can look at the algorithm, understand the objective that the algorithm is intended for, and then do something else.

    However data-driven the decisions it makes, and however it builds a model from sample inputs, the computer has to have instructions for how to build these and how to use the data, and all of that is still very basic instruction-following, just like a Turing machine does.
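
    To put the same point in code - a deliberately minimal, made-up sketch, not any real machine-learning library:

        # "Learning" here is still just a fixed rule applied over and over:
        # nudge the weight w so that w*x gets closer to y on the sample data.
        data = [(1, 2), (2, 4), (3, 6)]    # sample inputs (y = 2x)
        w = 0.0
        for _ in range(1000):
            for x, y in data:
                error = w * x - y
                w -= 0.01 * error * x      # the update rule is itself an instruction
        print(round(w, 3))                 # ~2.0; the program never steps outside its rules

    The model is "built from sample inputs", but every step of the building is spelled out in advance.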
  • Wayfarer
    22.8k
    Neurology depends on its own instability being regulated by its running interaction with a world. It becomes constrained by its environment to have an appropriate level of fixed or habitual response. — apokrisis

    None of which contradicts what I meant to say, but I will have to enlarge on it when not typing on an iPhone.

    (Incidentally, as it happens, I’m working at an AI startup right now.)
  • Wayfarer
    22.8k
    But, suffice to say, the sense that the domain of empirical experience is in some sense a simulation is quite true
    — Wayfarer

    That’s like saying the eye is like a camera. It might get the conversation started, then you get serious.

    Take for instance the evidence from sensory deprivation experiments. Without a world forcing the brain into some kind of stabilising state of interpretation, experience and thought just fall apart.
    — apokrisis

    I will try and enlarge on that. What I have in mind is the role of the brain (or mind, mind/brain, whatever) in synthesising perception and data in the generation of conscious experience. So it's a simulation in the sense of the constructive act of conscious cognition. We do indeed 'build' a world by the act of combining sensory perception with judgements, intentions, and the many other conscious and unconscious components of the vast ensemble that is the mind. Obviously that has to be well-adapted, otherwise you'll 'mistake your wife for a hat', or whatever. But it's still in some sense a simulation.

    Although I just read the Musk quote again, and I honestly think it's bullshit. What it doesn't see is that 'artificial intelligence' doesn't really know anything at all. As I mentioned above, I am working in an AI start-up. The vision is, the software they're making is a person - it has a name, but I won't give it - let's say 'Hal' - so you 'ask Hal' questions about data sets. And you have to know exactly what to ask, referencing very specific terms. Like, I noticed there was a big data set about types of customers - single parents, parents with two kids, and so on - for supermarket data. So I idly wondered, does Hal know what a bachelor is? I asked Hal, sales figures, by quarter, these product lines, for bachelors. Hal says - 'what's a bachelor? Is it a kind of olive?' Just for fun, I say, yes Hal. So Hal says 'great! Bachelors are olives! I'll remember that!'

    In 1965, Herbert Simon predicted that “machines will be capable, within twenty years, of doing any work that a man can do.” M.I.T. computer scientist Marvin Minsky assured a Life magazine reporter in 1970 that “in from three to eight years we’ll have a machine with the general intelligence of an average human being ... a machine that will be able to read Shakespeare and grease a car.”

    The story is well-told by now how the cocksure dreams of AI researchers crashed during the subsequent years — crashed above all against the solid rock of common sense. Computers could outstrip any philosopher or mathematician in marching mechanically through a programmed set of logical maneuvers, but this was only because philosophers and mathematicians — and the smallest child — were too smart for their intelligence to be invested in such maneuvers. The same goes for a dog. “It is much easier,” observed AI pioneer Terry Winograd, “to write a program to carry out abstruse formal operations than to capture the common sense of a dog.”

    A dog knows, through its own sort of common sense, that it cannot leap over a house in order to reach its master. It presumably knows this as the directly given meaning of houses and leaps — a meaning it experiences all the way down into its muscles and bones. As for you and me, we know, perhaps without ever having thought about it, that a person cannot be in two places at once. We know (to extract a few examples from the literature of cognitive science) that there is no football stadium on the train to Seattle, that giraffes do not wear hats and underwear, and that a book can aid us in propping up a slide projector but a sirloin steak probably isn’t appropriate.
    — Steve Talbott

    Logic, DNA, and Poetry.
  • Marchesk
    4.6k
    A common assumption in the philosophy of mind is that of substrate-independence. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well. — Bostrom

    Yeah, this is far from widely accepted in philosophy of mind. People with a strong computer science background tend to endorse it a lot more than people who are more philosophical in general. I'm not sure where the neuroscientists fall on this on average, but I would guess they're a bit more reserved about making such assumptions.
  • Marchesk
    4.6k
    Here's something related Elon said last year. To paraphrase:

    Humans are already cyborgs and superintelligent because of smartphones. Anyone with one of these is more powerful than the president of the United States 30 years ago. — paraphrased Elon

    Then he goes on to say that the limiting factor for superhuman intelligence is output bandwidth, so we need brain-to-computer interfaces to bypass our slow modes of communication.
  • Marchesk
    4.6k
    A computer simulation is just taking some input and applying the rules of a mathematical model, producing some output. The article I linked to explains that biological computers can do this. It's what makes them biological computers and not just ordinary proteins.

    And we know that at least one biological organ is capable of giving rise to consciousness.

    So put the two together and we have a biological computer, running simulations, where the output is a certain kind of conscious experience.
    — Michael

    Wait a second, what does a conscious output look like where you take some input, apply the rules of a mathematical model, and produce output?

    I'm not aware of any mathematical model that can do that, or what it could even possibly look like. Are you?

    I'm thinking you input some matrices of data, there's some machine learning models, and then the output is .... a blue experience???

    That doesn't compute, because it's not a computation.
  • Michael
    15.8k
    I'm thinking you input some matrices of data, there's some machine learning models, and then the output is .... a blue experience??? — Marchesk

    Give a computer a hex code of 000000, have it add FF, and the result is 0000FF. This is the hex code for blue, and it tells the computer to turn on the blue lamps that make up part of each pixel.
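
    In code terms, a trivial sketch:

        colour = 0x000000           # start at black
        colour += 0xFF              # add FF
        print(f"{colour:06X}")      # prints 0000FF, the hex code for blue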

    Only in our scenario, that biological computer isn't told to turn on a blue light but to activate the parts of its "brain" that are responsible for bringing about a blue colour experience.

    Unless you want to argue for something like a God-given soul, what reason is there to think that the human brain and its emergent consciousness is some special, magical thing that cannot be manufactured and controlled? We might not have the knowledge or technology to do it now, but it doesn't follow from this that it's in principle impossible.
  • Michael
    15.8k
    As I said, the computer has to have an algorithm. It cannot do anything without an algorithm, and it cannot do something that the algorithm doesn't say to do. It's limited by its algorithm. Now you can look at an algorithm (1. do this, 2. then do that, 3. check what you have done works), think out of the box, and come up with a new algorithm (1. go and drink a beer, let others do those things, and check they work). You're basically conscious. You can look at the algorithm, understand the objective that the algorithm is intended for, and then do something else.

    However data-driven the decisions it makes, and however it builds a model from sample inputs, the computer has to have instructions for how to build these and how to use the data, and all of that is still very basic instruction-following, just like a Turing machine does.
    — ssu

    And you don't think that we operate according to algorithms of our own, albeit ones that are a product of DNA-driven cell development rather than intelligent design? How exactly do you think the human brain works? Is our mind some mystical homunculus, operating with libertarian free will, and that can only occur naturally and never artificially?
  • Marchesk
    4.6k
    Only in our scenario, that biological computer isn't told to turn on a blue light but to activate the parts of its "brain" that are responsible for bringing about a blue colour experience. — Michael

    But how will we know how to put together a biological computer that can bring about a blue color experience? I assume that won't be a binary pattern.

    Unless you want to argue for something like a God-given soul or substance dualism — Michael

    There are other options, which you know about.

    what reason is there to think that the human brain and its emergent consciousness is some special, magical thing that cannot be manufactured and controlled? — Michael

    Not magical, but maybe fundamental.

    We might not have the knowledge or technology to do it now, but it doesn't follow from that that it's in principle impossible. — Michael

    Right, but there are somewhat convincing conceptual arguments against it. I don't know what the nature of consciousness is, but nobody else has been able to explain it either. And until that can be done, we don't know what computing it would entail, other than stimulating an existing brain.
  • Michael
    15.8k
    But how will we know how to put together a biological computer that can bring about a blue color experience? I assume that won't be a binary pattern. — Marchesk

    By studying the human brain and replicating its behaviour.

    Not magical, but maybe fundamental. — Marchesk

    What do you mean by "fundamental"? And if it can occur naturally by DNA-driven cell development then why can't it occur artificially by intelligent design?
  • Marchesk
    4.6k
    By studying the human brain and replicating its behaviour. — Michael

    Assuming behavior can result in consciousness. There are good reasons for thinking that's not the case.

    What do you mean by "fundamental"? — Michael

    Something that's not explicable in terms of something else, which in context means an empirical explanation.
  • Marchesk
    4.6k
    And if it can occur naturally by DNA-driven cell development then why can't it occur artificially by intelligent design? — Michael

    I don't know whether it can, but the conceptual argument against computing consciousness is that computation is objective and abstract, whereas consciousness is subjective and concrete.
  • Michael
    15.8k
    I don't know whether it can, but the conceptual argument against computing consciousness is that computation is objective and abstract, whereas consciousness is subjective and concrete. — Marchesk

    But consciousness happens when a physical brain behaves a certain way, right? So replicate that kind of behaviour using the same kind of material and it should also cause consciousness to happen.

    If it can occur naturally then I see no reason to believe that it can't occur artificially.
  • Marchesk
    4.6k
    But consciousness happens when a physical brain behaves a certain way, right? So replicate that kind of behaviour using the same kind of material and it should also cause consciousness to happen. — Michael

    That might work. I'm more arguing against the simulation idea.
  • Arkady
    768
    Not to just dump a link without discussion, but this blog post by philosopher Alexander Pruss may be interesting to some of you, and is somewhat apropos of the current discussion.

    https://alexanderpruss.blogspot.com/2010/06/could-something-made-of-gears-be-person.html
  • Moliere
    4.8k
    So if you assume any rate of improvement at all, then [virtual reality video] games will be indistinguishable from reality. Or civilization will end. Either one of those two things will occur. Or we are most likely living in a simulation. — Posty McPostface

    Others have already pointed this out, but I figure I'll throw my hat in with that lot and try to rephrase. . .

    I think the implication is false. @Marchesk pointed this out in their reply here:


    "Rate of improvement" is a squishy concept. Even supposing that the concept can be modeled mathematically as the use of the word "rate" seems to imply this is just plainly false. Empirically you have the car example. Theoretically speaking you need only consider what graphing a rate can look like. In a more localized sense a rate can appear to be linear -- it can look like it is a straight line that, if having a positive value, increases. But that's only locally. Often times a rate can be approximated like this when, in reality, it has, say, a logarithmic progression. Modelling equilibrium curves often produces this exactly. So instead of. . .

    [Image: plot of a linear progression rate]

    You get. . .

    [Image: binary logarithm plot]

    In which case, as you can see, given infinite time we'll progress towards a limit -- wherever that happens to be -- but that limit will not be infinite.
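
    A quick way to see the difference with made-up numbers (one hypothetical equilibrium-style curve, not a model of anything real):

        import math

        # Made-up numbers: the two curves are nearly identical at first,
        # but the second one levels off towards a limit of 100.
        for t in [1, 5, 10, 50, 100]:
            linear = 2 * t
            saturating = 100 * (1 - math.exp(-t / 50))
            print(t, linear, round(saturating, 1))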


    Elon Musk is not only assuming that improvements can be modeled mathematically, but also assuming that the rate is linear (and positive, for that matter). So the probability of his implication hinges a lot on what neither he nor I know.

    If that's the case then I'd say his claim that this argument is very strong is false. It's a flight of fantasy with a lot of assumptions.
  • ssu
    8.7k
    And you don't think that we operate according to algorithms of our own, albeit ones that are a product of DNA-driven cell development rather than intelligent design? How exactly do you think the human brain works? Is our mind some mystical homunculus, operating with libertarian free will, and that can only occur naturally and never artificially? — Michael
    I think that you aren't grasping the fact that this is a basic and fundamental issue in computer science and computational theory. An algorithm is simply a set of rules, and a computer follows those rules. It's about logic. Period.

    There is absolutely nothing mystical here: the simple fact is that not everything is computable even in mathematics. True but uncomputable mathematical objects exist. And everything here is about the limitations of computability.

    Just think for a while about what you mean by "we operate according to algorithms of our own". OK, if that were indeed so, then these algorithms could, by definition of the term, be described to you: an algorithm is a process or set of rules to be followed in calculations or other problem-solving operations. Thus you surely could read them and understand, "OK, this is me, I react and solve things the way the algorithm says". However, and here comes the absolutely crucial part, for the algorithm to be an algorithm of the kind computers use, it must also tell how you react to it, how you learn from seeing this algorithm. Now people might argue that this is because you are conscious or have 'free will' or yadda yadda, and thus you can look at this algorithm, this set of rules, and do something else - take it as a whole, learn from it, and change your behaviour in a way that isn't in the algorithm. There's nothing mystical here. You simply aren't using an algorithm the way a computer does.

    A computer or a Turing machine cannot do that. It just follows a set of rules. If you think that a computer can overcome this problem, then congratulations! You have just shown that Turing's Halting Problem and a multitude of incompleteness results in mathematics are false.
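
    For anyone who wants it, the standard sketch of why goes like this (the halts() checker here is hypothetical - the whole point of the argument is that no such total checker can exist):

        # Suppose someone hands us a universal halting checker.
        def halts(program, arg):
            """Hypothetical: returns True iff program(arg) eventually halts."""
            raise NotImplementedError("no such total checker can exist")

        def troublemaker(program):
            # Do the opposite of whatever the checker predicts about us.
            if halts(program, program):
                while True:
                    pass
            return "halted"

        # Feed troublemaker to itself: if halts() says it halts, it loops forever;
        # if halts() says it loops, it halts. Either answer is wrong, so no such
        # halts() can be written.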
  • Michael
    15.8k
    However, and here comes the absolutely crucial part, for the algorithm to be an algorithm it must tell how you react to it — ssu

    What does it mean for an algorithm to tell a computer how to react? If we look at the actual physics of it all, it just reduces to the basic principle of cause and effect. A particular kind of input produces a particular kind of output. Talking about these causal chains as being an algorithm is just a useful abstraction. But the basic principle is the same whether we're dealing with how a computer behaves or with how the human brain behaves. There's no special substance in human brains that makes them behave in acausal ways and that is in principle impossible to reproduce artificially.

    Unless you want to argue for the libertarian's "agent causation", and that this "agent causation" can only occur in naturally grown organisms. Do you?