Comments

  • Are Neuromorphic Processors crossing an ethical boundary?
    AI should be used in data science applications, like processing patterns in extremely large but specifically parameterised data sets, and not as some kind of companion, because once the Pandora's box is opened, sentient computers will quickly become more powerful than us and kick our asses. We all know this; hasn't anyone watched sci-fi movies since the 1980s? — Enrique

    Yes, I'd agree, and you know what? Despite seeing this, it's not going to be held back, is it? Pandora's box has already been opened, and your suggestion is to put the lid back down (even though it was a jar, not a box, in the original story). Worse, this is what I work on, hence the question. I understand what you are saying, I'm aware of the long-term implications, and I'm aware of the disruption to civilisation at a bare minimum. I would hope I am not a stupid man. So I'm very aware of the long-term implications for the world, myself, and civilisation. Does that have me pack up and go live on a desert island away from it all? Nope! I'll still happily play my part.


    And as has been stated in this thread, generalised AI as it is likely to exist in the 21st century will not much resemble a human brain. — Enrique

    I'd tend to agree, with the caveat that there will be similarities. Some physical similarities, but one overriding one: a brain and an AI 'brain' will both be objects capable of thinking, as compared to an inanimate object such as a rock, which doesn't. I agree, though, that your fears and concerns are warranted, precisely because we have no 'sight' of the nature of such an entity: 'how' and 'what' it might value subjectively, what its goals might be, and what its methods might be to achieve those.

    One other 'difference' to consider is that solid-state technology is simply too archaic to offer a pathway to AGI. It uses too much power, it's too slow, and it has reached the end of the line. So another difference is that the human brain does not operate primarily using quantum processing, while it's fairly likely AGI will. Remember, we do have a lot of technology in this world, most of it based on solid-state microprocessors. But, and this is a big but, the stone age did not end due to a global shortage of stones!

    However, it's not about either the hardware OR the software. I say this because, whether neuromorphic or some new technology based on green cheese, it matters not how the machine does the thinking, only that it is demonstrably thinking. It's about IF and WHY we should ascribe ethical values to such machines, what those might be, and what level of 'thinking' such machines would need to exhibit before we agree that, unlike a stone or a hammer or any other tool we used in the past, these tools now fall into a category of ethics. I am assuming (likely wrongly) that we will get a choice in prescribing such values, if any, and that they will not simply be foisted on us anyway.

    So imagine (as is probably the most likely outcome) we have literally no idea how such a machine 'brain' works; all we see is the output, its stated thoughts and actions. We can't see how it works at all, and we will most likely be intellectually blind to its processes (being not as bright, and simply not having the upstairs thinking kit). Well, we still see its actions, and it is still 'thinking'.

    I used the neuromorphic chip above because it suggests a pathway to thinking in a way modern processors simply do not.
  • Are Neuromorphic Processors crossing an ethical boundary?
    Where I agree, I'm not talking about the potential for machine awareness, phenomenological consciousness, or perception. Nor am I particularly concerned here with the differences between a processor and a brain. There are a lot more differences than the one you pointed out... brains are made of organic cells, for one thing. That's a larger difference than either of us pointed out, right?

    All I'm saying is that the day of a system that 'thinks' is drawing ever closer. We place a value, ethical values, on things that THINK compared to things that do not. Nobody minds stepping on a rock... but stepping on a frog? Well, that's a little different, right?

    So if a system (through whatever mechanism at all) presents the ability to 'think'... then it presents, in my mind at least, the question of ethics.
  • Are Neuromorphic Processors crossing an ethical boundary?
    As it turns out, your brain really does assign weights and biases... or, more accurately, dendrites and axons reduce with less activity and fire less often when presented with a lower pulse, and increase through constant activity and fire more often when primed with a larger electrical pulse. This is the framework upon which animals 'learn' things. It is also pretty much the same model a neural network uses. I'm creating an analogy here for the general reader who has no clue how such models work, and separating it from the linear methodology of machine code written by a software developer. Most people simply are not aware how a machine-learning model differs from a hard-coded application.
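    The weights-and-biases analogy above can be sketched with a toy artificial neuron. This is an illustrative sketch only, not a model of real dendrites; the learning rate, the fixed bias, and the AND-gate task are all arbitrary choices for the example.

    ```python
    def fire(weights, bias, inputs):
        """Weighted sum of inputs plus bias; the neuron 'fires' (returns 1) above 0."""
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 if total > 0 else 0

    def reinforce(weights, inputs, target, output, rate=0.1):
        """Perceptron-style update: weights on active inputs grow when the
        neuron should have fired, and shrink when it fired wrongly."""
        error = target - output
        return [w + rate * error * x for w, x in zip(weights, inputs)]

    # Train the neuron to behave like a logical AND gate.
    weights, bias = [0.0, 0.0], -0.5
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    for _ in range(20):
        for inputs, target in data:
            out = fire(weights, bias, inputs)
            weights = reinforce(weights, inputs, target, out)

    print([fire(weights, bias, x) for x, _ in data])  # → [0, 0, 0, 1]
    ```

    The point of the analogy: nothing here is an instruction saying "AND means both inputs on"; connections that repeatedly fire usefully get stronger, and the behaviour emerges.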

    Biological neurons are also made of organic materials, which can even replace themselves when they reach end of life, or at least are replaced... whereas a neuromorphic processor or its software counterpart is made of metal and silicon and has zero capacity for self-replacement... so there's that huge difference too! I'd say a larger one.

    Plus, a modern machine-learning model of any reasonable capacity is optimised on truly huge machines that guzzle power by the megawatt... and such models have the intellectual capacity of a frog at best. In contrast, your brain is powered on sandwiches and cups of coffee. So yes, there are differences. And it is only remotely true... I think you mean that not a lot is known about how we think? It's not exactly nothing at all, right? I just want to clear that up, because a cursory reading of your point is that nobody knows anything at all about how thoughts permeate the meat between our ears. That's patently and demonstrably not true, of course. We know millions of times more than was known 100 years ago, and certainly more than any person could learn in a lifetime... which I agree is still not a lot, but it's not 'we don't know how we think' in the same way it was back in the 19th century, is it?

    Also, just to be clear, one does not need to know exactly what elements a system has in order to copy it. I can take the lid off a circular tin here and work out the area of that circle. I won't be exact to within a hair's breadth, but it will work. And I can do that using no more than a piece of paper and a pencil... that's a system. Many of us learned how to do this as children. We don't need to know HOW or WHY it works... just that it does, and that we can copy it. So I'd question whether we need whatever level of knowledge you are assuming would be required to copy 'thinking' as a system. I'll have to assume it's some truly huge metric, if the entire world of brain science and neurology resolves to us knowing nothing at all...
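    The tin-lid example in code form: applying a rule you can copy without knowing why it works. The 10 cm diameter is just an example measurement, not anything from the thread.

    ```python
    import math

    # Measure the lid, halve for the radius, apply the rule A = pi * r^2.
    # A child with pencil and paper can run this 'system' without any
    # understanding of why the formula holds.
    diameter_cm = 10.0
    radius_cm = diameter_cm / 2
    area_cm2 = math.pi * radius_cm ** 2
    print(round(area_cm2, 1))  # → 78.5
    ```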

    I will suggest a problem here. When you say 'thinking', you are talking about how your brain thinks and the low level of knowledge you estimate we have of what that means. When I say 'thinking', I am presenting the fact that these machines are doing things we previously labelled thinking... and we can, if you like, use another word. But there's a problem in that... is it 'artificial thinking'? In which case, is a bird really flying but an aircraft only artificially flying?
  • QUANTA Article on Claude Shannon


    Hmmm... Mr. Data in Star Trek is also a fictional character, right? And that sci-fi series ended before machine learning took off. I'm sure it was known to Roddenberry, but certainly not to the creator of the fictional 'positronic' brain, Isaac Asimov. He died long before any sensible demonstration of machine learning.

    A machine-learning model, as the name suggests, is not programmed... a program, I agree, is immutable. In fact, 'immutable' is a term in coding, describing the fact that often not just variables but their contents cannot be changed. A script or compiled program is immutable... but a machine-learning model is completely based on mutability... it cannot work without it.

    I get it; folks have grown up with the idea that coders (I'm a coder, btw) use their skills to tell a machine what to do in a sequence of instructions, that the language they use was itself designed at some point, that the machine the program runs on was originally a blueprint or set of blueprints, and so on.


    But with the exception of the fact that you need to feed a machine-learning app data, and you need to clean that data to remove any obvious bias (like not giving it only images of white men to teach it what a man is)... well, I'm afraid instructions are not how modern machine learning works.

    Reading your post, it's fairly obvious you aren't aware of this, but relax, you aren't alone. Most people out there have no clue how a modern AI works, and they too think it was 'programmed' using instructions. It's not.

    If you are interested I looked up a simple explanation of it here -> https://www.youtube.com/watch?v=vpOLiDyhNUA

    There are no instructions in a machine-learning app or model. Instead, it starts at a random state and uses an optimisation process to derive a solution. One solution, obviously... not THE solution, since there might be many possible outcomes. In fact, theoretically, what a modern machine-learning app does could be replicated using pencil and paper, and you wouldn't need to 'think' about what you were doing at all... it would just take you millions of years to do what the machine does in a few seconds. Better still, every time you did this (assuming you lived several million years) you'd get a different output. It's a system that also reacts to its environment.
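    A minimal sketch of that 'optimisation from a random state' idea: a one-parameter model y = w·x is fitted by gradient descent. The data (secretly y = 3x), the learning rate, and the step count are illustrative choices; the point is that no rule for the answer is ever written down as an instruction.

    ```python
    import random

    # The 'hidden relationship' the model must discover: y = 3x.
    data = [(x, 3.0 * x) for x in range(1, 6)]

    w = random.uniform(-10, 10)  # start in a random state
    for _ in range(200):
        # gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= 0.01 * grad  # take a small step downhill

    print(round(w, 3))  # converges to ~3.0 regardless of the random start
    ```

    Each run begins somewhere different, yet optimisation pulls the weight toward a solution; mutability is the whole mechanism.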

    This also means that data scientists cannot simply 'debug' a machine-learning model. If it fails, it's literally back to the drawing board, going over the learning data it was fed. So programmers, schmogrammers... that's so last century.
  • QUANTA Article on Claude Shannon
    To caveat the idea of speed... I'm talking about scope for compute, not cycles per second. Computers are now, and have been since inception, 'faster' than a human brain or any brain... in fact, a good family car can travel physically faster than biochemical messages. But speed does not equal scope of compute capacity. If it did, you wouldn't see any research into neuromorphic processors, since they operate many times slower than regular processors.

    And compute capacity is a measure of freedom. Either that, or you'll have to insist you have no degree of freedom at all in terms of free thought. Which I wouldn't argue with, since there's no evidence at the moment that we do in truth have 'true' freedom to make decisions. But that doesn't mean we do, or don't.

    Also, it's not a thing worth much argument, since we are all about to find out IF machines truly have a wider scope and freedom to choose on their own. Unless there's some sort of global catastrophe that makes the pandemic look like a bump in the road. Because it's coming, right?

    "However, we can still infer the logical necessity for a First Cause "

    Umm... no, actually, we can't. We could if it were the 19th century and nobody had ever heard of quantum physics... but since then... no, you cannot assume a first cause... there are now events without a cause. Admittedly they are at the quantum level... then again, the entire universe operates on a quantum level, and certainly the outset of the universe did. So I'm not sure what 'first cause' you need in a system completely described using quantum stochastic probability.
  • Imaging a world without time.
    Smolin, hmmm... I still haven't read his last book. But he is, after all, the only person so far to have solved the problem of time. I can't understand why loop quantum gravity isn't as mainstream as it probably should be... it is, after all, at least falsifiable...
  • Imaging a world without time.
    Einstein's point was that the passage and memory of time is persistent but an illusion; there is in fact an actual process that we called time before relativity, but in the aftermath it must now be redefined. I think that's his point.

    Time, of course, is not an illusion, since, as Einstein knew, if I travel away from you at extreme acceleration and velocity and then travel back... I'll still have experienced 60 seconds per minute, just as you have, but when I arrive back, more of those minutes will have passed for you than for me. That's not possible in a universe of illusory time... only in a universe of illusory static or 'solid state' time.
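    The traveller example can be put in numbers with the standard Lorentz factor from special relativity. The 80%-of-light-speed figure is an arbitrary illustrative choice.

    ```python
    import math

    # Lorentz factor: gamma = 1 / sqrt(1 - (v/c)^2)
    v_fraction_of_c = 0.8
    gamma = 1 / math.sqrt(1 - v_fraction_of_c ** 2)

    # For every 60 minutes on the traveller's clock, more minutes
    # pass for the stay-at-home observer.
    traveller_minutes = 60
    home_minutes = traveller_minutes * gamma
    print(round(gamma, 3), round(home_minutes, 1))  # → 1.667 100.0
    ```

    Both parties still experience 60 seconds per minute; it is the totals that diverge, which is the author's point about time being real rather than illusory.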

    Either way, we have moved on just a little from Einstein. I'm sure he was onto a lot of things, but Einstein is not the world's Oracle of Delphi, and we know that in many cases, including quantum mechanics, he was just plain wrong.

    Many current theories, and in fact a lot of cosmology and physics, ignore time, since it's irrelevant to them. Time is also not a property of, or a requirement for, many known processes in the universe, such as quasi-particles. In the quantum world the primary mover is probability, not time.
  • if we have alot of cells would a much larger creature be more sensitive to pain?
    No. Pain is a process translated as such by your brain, so the more BRAIN cells you have, the more capacity for pain you might have. But it's also not that simple, because most brain cells are not engaged in processing electrochemical messages from your nervous system. Your individual cells, even your nerve cells, also do not 'feel' pain. They might respond to a stimulus; you can poke at a nerve removed from your arm and it will respond by sending pain messages along its length, but it's not feeling pain itself, it's trying to send a message to a brain that can process the data AS pain. Neither, strictly, does your brain... it's just the processor of the chemical messages you later perceive as pain.

    So what would make you feel more pain? Well, a larger R-complex in your brain would be a start, plus a larger pain-processing centre with a greater capacity for types of pain. There is a difference between a numb pain or throb and a stinging pain or stab... but there could be lots of others; we just don't have that capacity, because we have a limited number and limited 'types' of end receptors that pre-process pain before sending it to the brain. In some cases nerve receptors will react themselves to pain, but they do that instantly, and only later do you 'feel' that pain.
  • Is the future inevitable?(hypothetical dilemma)


    Yes, but code is deterministic... even a Rand function is deterministic... I suppose a quantum-generated random number is not, but that's not coding, that's the result of quantum processes.
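    The point that even a Rand function is deterministic is easy to demonstrate: a pseudo-random generator seeded the same way always produces the same sequence.

    ```python
    import random

    def draw_three(seed):
        """Return three 'random' draws from a generator with a fixed seed."""
        rng = random.Random(seed)
        return [rng.randint(1, 100) for _ in range(3)]

    # Same seed, same sequence, every single time: pure determinism.
    print(draw_three(42) == draw_three(42))  # → True
    # Different seeds almost certainly diverge, but each run is still
    # fully determined by its seed.
    print(draw_three(42) == draw_three(43))
    ```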

    And at a macro scale you get macro causation... which means every time you drop a ball it will fall... there's no practical probability gravity will suddenly fail to work, so there are effectively deterministic processes at a macro level. But even those are subject to some probability. Not a lot, but over time, and given the number of such outliers, they add up to make future states non-computable. Hence Heisenberg's uncertainty principle. You just can't know the value of some variable until the die is cast... you can guess, I suppose, but them's the breaks. It's not a case that if you had enough compute and enough time you could work it out; it's that it's not a thing that can be worked out, no matter what. It's forever a probability until you take a measurement (the waveform collapses).

    Now, if it were a machine-learning model you were talking about, it might be a better analogy... and let's assume you had a truly massive data set the size of a universe. Well, now you are limited to optimisation and probability rather than solid, unmoving code. But of course, in an evolving universe, you still cannot calculate what a future state will be based on the current state.

    So yes, calculating the future is not possible... you can guess that out of 100 things X is more likely to happen than Y, and we use that at a macro level, but the further out you go, the lower the probability your guess will be right.
  • Is the future inevitable?(hypothetical dilemma)
    Does this question assume some hypothetical universe where the future is pretty much already written, or deterministic, about to take place no matter what? In this universe, right now, at the lowest level we know of, 'probability' is what governs things... at a macro level those probabilities almost even out, so that we get things like solid objects, and we don't see tiny probabilities like a grapefruit morphing into a penguin happening a lot, or ever... But this is why physicists assume an unwritten and probabilistic future.

    How can I explain this simply... okay, so the odds of throwing a head or a tail are 50/50, right? Well, this does not stop ten heads arriving in a row... it can happen... in fact the probability is known... it's about 0.1% (one in 1,024). A hundred in a row is very, very tiny, but crucially it's not zero... and no combination, given enough coin tosses, will be zero. There is a non-zero probability of every single combination possible.

    And the LARGER the sample of these 'probabilities', the more solid the overall outcome... so if you throw that coin a million times, it'll be heads pretty much 500k times... give or take just a few heads or tails either way... yet in there, in that list of throws, there will be all sorts of strange likelihoods that panned out. Enough throws and you'll get the Mona Lisa encoded in binary with heads = 1 and tails = 0. But again, at the macro level, you don't see that; you see 500k heads and 500k tails from a million coin tosses... well, that's quantum mechanics in this case... and it also governs causation... tiny, weird little probabilities can throw the overall picture one way or the other.
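    The coin-toss figures above are easy to check. A run of ten heads has probability (1/2)^10, roughly 0.1%, and a million simulated fair tosses land very close to 500k heads even though odd-looking runs occur inside the sequence. The fixed seed is just so the sketch is repeatable.

    ```python
    import random

    # Probability of ten heads in a row on a fair coin.
    p_ten_heads = 0.5 ** 10
    print(round(p_ten_heads * 100, 2))  # → 0.1  (percent)

    # A million tosses: the macro picture is solid even though the
    # micro picture is full of strange streaks.
    rng = random.Random(0)
    tosses = [rng.randint(0, 1) for _ in range(1_000_000)]
    print(sum(tosses))  # close to 500_000 heads
    ```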

    So, our macro universe and the future? Well, you just cannot predict it with any accuracy at all. You can have a good idea the sun will come up in the morning based on past experience, but there's also a non-zero probability the universe will suddenly evaporate in a big crunch... that's not zero either... in which case the sun will not come up in the morning.

    In your scenario you need a coin that results in heads then tails, in that order, forever, in order to alter or predict the future accurately... but we live in a probabilistic, not deterministic, universe, I'm afraid... so it's just not possible. And what we do know is that you don't get to see the heads or tails until you throw the coin; until then it's just a probability.
  • QUANTA Article on Claude Shannon
    "...der to be meaningful to..." — Gnomon

    Well, this of course assumes we narrow the discussion of the computed answers or solutions our hypothetical computer is capable of to machines that rely on a programmer, right?

    But is that really any longer the case? Computers that are hard-coded are, in many respects, no different from a classical cash register with mechanical keys, and need the human input PLUS their machinery to work. Programs ARE confining for solutions, and that's why machine learning was a good idea. It removes the programmer from the equation, and also the program; it removes the programming language, and replaces it all with pretty much an optimisation process.

    It's still limited, I suppose, to the capacity of the processors (although obviously the faster or more parallel the processor, the faster the model can be trained), but all that really governs is the 'speed' at which computations are done, not the capacity for computation itself.

    Now, is the machine 'free' to make decisions on its own, without its origin, or programmers, or other physical baggage getting in the way? Well, no. It's a physical object in the physical world, governed by the laws of nature... so it's not free to do any computation it wants, just the ones permitted by the universe we live in. That might seem like no restriction, but it IS a restriction.

    So this argument of our hypothetical programmer holds up until around 2008, and then it doesn't any more. It didn't before either, obviously, but you'd need to understand how technology arriving in the future was going to work to argue anything else. And if we top this with quantum computation and no programmers, where are we? It's still a 'machine', and likely now freer in scope than a human brain. As far as we know, we do not use quantum computation as a primary source of thought. The latest from neuroscience is that the brain is simply a machine where the combined output of the agents within it is greater than the sum of those agents (a complex adaptive system, or CAS, as opposed to an MAS, or multi-agent system, like, umm... a car or bicycle).

    But then where are we...? Well, a machine is STILL confined by the things a machine can do and the limitations of computation, which are not infinite. Machines have an absolute top speed of computation and a lowest possible energy cost to carry out a compute (see Landauer's principle)... and those are fundamental laws of nature we have either discovered or calculated to be so. So this top speed of computation (or rather, the lowest energy at which a binary calculation can be performed) is a limit if, and only if, the universe itself is not infinite.
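    Landauer's limit is concrete enough to compute: erasing one bit of information costs at least k·T·ln(2) of energy. The 300 K room-temperature figure is an illustrative choice.

    ```python
    import math

    # Landauer's principle: minimum energy to erase one bit is k*T*ln(2).
    k_boltzmann = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)
    temperature_k = 300.0       # roughly room temperature

    min_energy_j = k_boltzmann * temperature_k * math.log(2)
    print(f"{min_energy_j:.2e}")  # → 2.87e-21 joules per bit erased
    ```

    Tiny, but non-zero: a hard floor on computation in a finite universe, which is the limit the paragraph above is pointing at.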

    However, a machine that was maxed out even a few decades from now would be operating many times faster, with much higher capacity, than any human brain, or likely our entire population and then some. It might not be infinitely free in terms of compute... but it would be freer than any human is.

    And the programmer now, who built this system, has lower compute capacity, lower knowledge, lower everything than the fruits of his or her labour. It wouldn't matter now how they determined anything, or their capacity for doing so... such a machine would beat them every time and operate at greater degrees of freedom. But unless the universe is infinite, it cannot operate at infinite degrees of freedom. So says the math... there will always be a limit in a finite universe... hence the word 'finite'.
  • Imaginary proof of the soul
    "that a "pointer" indicates who I am in the respective world. This pointer could be called "soul", even if it only bears a distant resemblance to the religious soul."

    Really? Well, wouldn't you first need to define what this religious soul is? What properties does it have that are unique to its being a soul, as opposed to anything else?

    Well... let's give it the benefit of the doubt and allow you to define what this 'soul' is and what unique properties it has, as opposed to, say, phenomenological awareness, or maybe thinking ability... which obviously can't be unique to a soul, since they are properties of just 'brain'. It's real easy for one thing to be LIKE another thing if you don't define either.
  • I couldn't find any counter arguments against the cosmological argument?
    Wouldn't 'causation' only come into effect when time became manifest? So what you are asking is: before time, when did the universe start? Which is like asking which end you should hold of a one-ended stick.

    There are properties of the universe (in fact, most of the existing universe) that do not need, nor rely on, 'time'. Virtual particles, for one thing... and to some degree quantum mechanics... certainly loop quantum gravity theory... they don't need or rely on time.

    Is time, though, a real thing? Well, if it's not, then how can it manifestly change if I head away from you at the speed of light, or some portion of it, and then return? Time has moved on at a different rate for each of us... in a universe where time was illusory and merely a conceptual description, that couldn't happen. But it does need one process to run on... a universe in the first place.

    So which came first, the universe or time? Well, time is a property OF the universe, so the universe is the origin of time... not the other way around. Demonstrated by the relative difference of my time to yours when I'm accelerating and you are not!

    Your cosmological argument relies on causation/time predating the universe. I'm sorry, that's not a possible thing. It has a zero probability.

    Perhaps I didn't explain this very well... if not I'll be happy to expand on this.

    Now the next question might be 'when did time become manifest?', which again is a weird question... to my mind the universe has existed for all of time, just as the south pole consists of all of southness! There's nothing south of it, and it's a silly question to ask what is! (Although I see a lot of folks asking just that!)
  • Creation-Stories
    I think it's a little more than imagination with monopoles; it's inference. Only the bravest of physicists would announce that the whole monopole thing is imagination... brave, and deluded, I think.

    For example, we know that we have matter in the universe... but we also know that, in the descriptions offered by all physicists including, *cough*, string theorists, there must have been antimatter in abundance, and after an annihilation event only the slight discrepancy between matter and antimatter left some small amount of matter. Also in that description we should see monopoles absolutely everywhere; we should be at least knee-deep in monopoles... we should be pulling them out of our sandwiches at the beach! Yet no demonstration of any has ever been produced.

    But for the entire story of the formation of the universe to make any sense at all, we need copious monopoles, right?

    At the moment this is like the Fermi paradox of classical and quantum physics... where are all the monopoles? Nobody is suggesting they do not exist, except in a descriptive or imaginary sense... or perhaps physicists who are mentally ill. Instead they are asking where these things, which are absolutely a necessity, are!

    They are like the boson problem, the Higgs field... which was not imaginary even when it was just math... because absolutely everything else lined up. The odds of everything lining up with the exception of this one thing, and of all that lining up being sheer coincidence, were a lot lower than the odds of an imaginary Higgs field or an illusory standard model... there really was such a boson... we just never found one... until, that is, July 4th 2012, when it was announced it was no longer in hiding.

    So the monopole thing is an example of us just not yet finding a thing we know is there... the alternative being a coincidence of unheard-of probability in which everything else lines up regardless.
  • Creation-Stories
    The Yin/Yang thing is rather interesting. Interesting in that, in general, I reject it in all but a digital or classical-physics framework. It seems there is some sort of capacity for the human mind to try to dichotomise things... right and wrong, left and right, up and down, black and white, good and evil...

    The lower down the philosophy lake you go, the more dichotomised, IMO, the arguments get. And yes, if you'll pardon my strong feelings on this, I'm alluding to the general tendency of the bottom feeders of that lake to digitise everything into 1 and 0!

    But the universe is a very complex system; it is not 1 or 0... it is packed to brimming with complex systems where, no matter how many strings you cut, the system survives, changes and moves on...

    Yin/Yang models fail to recognise that there are gradations of processes which are neither good nor bad, nor left nor right... You might evaluate or value them as such, though, and a dichotomy as a binary evaluation is very easy to understand: you are on one side completely, or the other side completely.

    A good example "Given the ability to time travel would one assassinate Hitler?" well I wouldn't because although I know history... I don't know what the outfall might be? I don't know 'what if' history...nobody does... and its not black and white. It could be horrendous, it might be utopian... and Hitlers early demise would be irrelevant to how that might pan out... but I'm sure as hell not going to take the chance. One may as well ask "Would you kill Aristotle or Socrates?" and my answer would be exactly the same... the gradients are too fine for me to evaluate if the outcome would be a benefit or not, (lets assume I was going for benefit and not clusterfuck)

    Now, DO NOT GET ME WRONG... there are binary things in the universe... there really is information, and it consists of the lowest possible values of 0 and 1 (and 0 isn't even a thing!). But there are also gradients made up of those, and that's all we can evaluate.

    But a complex thing like universes and their manifest reality, their origin and their capacity for complexity... well, I'm sorry, but that falls outside the simplicity of 1 or 0... or yin/yang.