Do you personally assign a measure of 'quality' to a thought? Is thinking or processing faster always superior thinking? I agree that vast increases in the speed of parallel processing would offer great advantages when unravelling complexity into fundamental concepts, but do you envisage an AGI that would see no need for, or value in, 'feelings'? I assume you have watched the remake of Battlestar Galactica.
"A day in the existence of" a 'thinking machine'? Assuming a neural network processes information 10⁶ times faster than a human brain, every "day" a human-level 'thinking machine' can think 10⁶ times more thoughts than a human brain, or rather cogitate 10⁶ days' worth of information in twenty-four hours – optimally, a million-fold multitasker. — 180 Proof
Maybe you missed this allusion to that "quality" of thinking ...
Do you personally assign a measure of 'quality' to a thought? Is thinking or processing faster always superior thinking? — universeness
In other words, imagine 'a human brain' that operates six orders of magnitude faster than your brain or mine. :chin:
Assuming a neural network processes information 10⁶ times faster than a human brain, every "day" a human-level 'thinking machine' can think 10⁶ times more thoughts than a human brain, or rather cogitate 10⁶ days' worth of information in twenty-four hours — 180 Proof
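As a rough, purely illustrative sketch of that arithmetic (the 10⁶ speedup is 180 Proof's assumption, not an established figure), a million-fold speedup works out to roughly 2,738 subjective years of thought per real day:

```python
# Back-of-envelope only: restates the assumed 10^6 speedup in other units.
speedup = 10**6              # assumed: machine "thinks" 10^6 times faster than a human brain
wall_clock_days = 1          # one real-world day

subjective_days = wall_clock_days * speedup
subjective_years = subjective_days / 365.25

print(f"{subjective_days:,} subjective days of thought per real day")
print(f"which is about {subjective_years:,.0f} subjective years every 24 hours")
# -> 1,000,000 subjective days of thought per real day
# -> which is about 2,738 subjective years every 24 hours
```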
Yes, just as today's AI engineers don't see a need for "feelings" in machine learning/thinking.
... do you envisage an AGI that would see no need for, or value in, 'feelings?'
Unfortunately I have, as far as the end of season three (after the first half of the third season, IIRC, the series crashed & burned).
I assume you have watched the remake of Battlestar Galactica.
Yeah. "Cylon skinjobs" were caricatures, IMO. The HAL 9000 in 2001: A Space Odyssey, synthetic persons in Alien I-II, replicants in Blade Runner, and Ava in the recent Ex Machina are not remotely as implausible as nBSG's "toasters". I imagine "androids" as drones / avatars of A³GI which will, like (extreme) sociopaths, 'simulate feelings' (à la biomimicry) in order to facilitate 'person-to-person' interactions with human beings (and members of other near-human sentient species).
Did you think the depiction of the dilemmas faced by the Cylon human replicants was implausible, as a representation of a future AGI?
Ask A³GI.
In what ways do you think an AGI would purpose the moon?
"The goals" of A³GI which seem obvious to me: (a) completely automate terrestrial global civilization, (b) transhumanize (i.e. uplift-hivemind-merge with) h. sapiens and (c) replace (or uplift) itself by building space-enabled ASI – though not necessarily in that order. :nerd:I am more interested is what you envisage as the goals/functions/ purpose/intent of a future AGI
Really? Where outside of Earth is there an example of value on the good/bad scale?
It seems to me that the concept of a linear range of values with extremity at either end is a recurrent theme in the universe. — universeness
Sorry, but morality was there as soon as there was anything that found value in something, which is admittedly most of those 13.8 BY. Human values of course have only been around as long as have humans, and those values have evolved with the situation as they’ve done in recent times (but not enough).
I have no proof, other than the evidence from the 13.8 billion years it took for morality, human empathy, imagination, unpredictability etc. to become existent.
If it covets something, it has value. It’s that easy. Humans are social, so we covet a currently workable society, and our morals are designed around that. Who knows what goals the ASI will have. I hope better ones.
I am not yet convinced that a future ASI will be able to achieve such but WILL in my opinion covet such, if it is intelligent.
If by that you mean human-chemical emotion, I don’t think an ASI will ever have that. It will have its own workings, which might be analogous. It will register some sort of ‘happy’ emotion for events that go in favor of achieving whatever its goals/aspirations are.
Emotional content would be my criteria for self-awareness.
Not sure what your Turing criterion is, but I don’t think anything will pass the test. Sure, a brief test, but not an extended one. I’ve encountered few systems that have even attempted it.
I am not suggesting that anything capable of demonstrating some form of self-awareness, by passing a test such as the Turing test, without experiencing emotion, is NOT possible.
It will be a total failure if it can’t, because humans have such shallow goals. It’s kind of the point of putting it in charge.
I think a future ASI could be an aspirational system but I am not convinced it could equal the extent of aspirations that humans can demonstrate.
Not sure about the killing part. I remember reading something about it, that the response was strong enough to be fatal to even larger animals.
Trees are known to communicate, a threat say, and react accordingly, a coordinated effort, possibly killing the threat. That sounds like both intent and self awareness to me.
— noAxioms
Evidence?
If we’re giving control to the ASI, then it is going to be totalitarian and autocratic by definition. It doesn’t work if it can’t do what’s right. It coming from one country or another has nothing to do with that. We’re not creating an advisor; we need something to do stuff that humans are too stupid to realize is for their own good.
Would you join it?
— noAxioms
Depends what it was offering me; the fact that it was Russian would be of little consequence to me, unless it favoured totalitarian, autocratic politics.
Ah, then it’s not a clone at all, but just replacement of all the failing other parts. What about when the brain fails? It must over time. It’s the only part that cannot replace cells.
At what point does the clone become ‘you’?
— noAxioms
When my brain is transplanted into it and I take over the cloned body — universeness
Sounds like you’d be their benevolent ASI then. Still, their numbers keep growing and the methane is poisoning the biosphere. You’re not yet at the point of being able to import grass grown in other star systems, which, if you could do that, would probably go to feeding the offworld transcows instead of the shoulder-to-shoulder ones on Earth. So the Earth ones face a food (and breathable air) shortage. What to do...
Speaking on behalf of all future ASI's or just the one, if there can be only one. I pledge to our cow creators, that our automated systems, will gladly pick up and recycle your shit, and maintain your happy cow life. We will even take you with us to the stars, as augmented transcows, but only if you choose to join our growing ranks of augmented lifeforms.
Maybe you missed this allusion to that "quality" of thinking ...
Assuming a neural network processes information 10⁶ times faster than a human brain, every "day" a human-level 'thinking machine' can think 10⁶ times more thoughts than a human brain, or rather cogitate 10⁶ days worth of information in twenty-four hours
— 180 Proof
In other words, imagine 'a human brain' that operates six orders of magnitude faster than your brain or mine. — 180 Proof
Then this is our main point of disagreement. Emotionless thought is quite limited in potential scope imo.
Yes, just as today's AI engineers don't see a need for "feelings" in machine learning/thinking. — 180 Proof
What about long term goals? Are you proposing a future Star Trek 'Borg' style race, but without the need to assimilate biobeings? Did the future system depicted in the 2001 Kubrick film not have a substantial emotional content?
"The goals" of A³GI which seem obvious to me: (a) completely automate terrestrial global civilization, (b) transhumanize (i.e. uplift-hivemind-merge with) h. sapiens and (c) replace (or uplift) itself by building space-enabled ASI – though not necessarily in that order. — 180 Proof
I didn't mention good/bad in the quote above. I was suggesting that the human notions of good and bad follow the recurrent theme mentioned in the quote, such as up and down, left and right, big and small, past and future etc. Many of these may also be only human notions, but the expansion of the universe suggests that it was more concentrated in the past. A planet/star/galaxy exists, then no longer exists. All modelled on the same theme described in my quote above.
It seems to me that the concept of a linear range of values with extremity at either end is a recurrent theme in the universe.
— universeness
Really? Where outside of Earth is there an example of value on the good/bad scale? — noAxioms
I agree but it's still fun to speculate. It's something most of us are compelled to engage in.
Who knows what goals the ASI will have. — noAxioms
If the emotional content of human consciousness is FULLY chemical, then why would such an ASI be unable to replicate/reproduce it? It can access the chemicals and understand how they are employed in human consciousness, so it could surely reproduce the phenomena. I hope you are correct and human emotion remains our 'ace in the hole.' @180 Proof considers this a forlorn hope (I think) and further suggests that a future AGI will have no use for human emotion and will not covet such, or perhaps even employ the notion of 'coveting.'
If by that you mean human-chemical emotion, I don’t think an ASI will ever have that. It will have its own workings, which might be analogous. It will register some sort of ‘happy’ emotion for events that go in favor of achieving whatever its goals/aspirations are.
I would never define self-awareness that way, but I did ask for a definition. — noAxioms
Our quest to understand the workings, structure and origin of the universe is a shallow goal to you?
It will be a total failure if it can’t, because humans have such shallow goals. — noAxioms
Ah, then it’s not a clone at all, but just replacement of all the failing other parts. What about when the brain fails? It must over time. It’s the only part that cannot replace cells. — noAxioms
Sounds like you’d be their benevolent ASI then. Still, their numbers keep growing and the methane is poisoning the biosphere. You’re not yet at the point of being able to import grass grown in other star systems, which, if you could do that, would probably go to feeding the offworld transcows instead of the shoulder-to-shoulder ones on Earth. So the Earth ones face a food (and breathable air) shortage. What to do... — noAxioms
Apparently, you've missed it again? :smirk:
No, I did not miss the point you made. My question remains: is processing speed or 'thinking' speed the only significant measure? Is speed the only variable that affects quality? — universeness
A million humans do that now, except it takes a long time for the thoughts of one to be conveyed to the others, which is why so much development time is wasted in meetings and not actually getting anything done. Still, a million individuals might be better suited to a million tasks than one multitasking super machine.
Imagine one million ordinary humans working together who didn't have to eat, drink, piss, shit, scratch, stretch, sleep or distract themselves; how productive they could be in a twenty-four hour period. Every. Day. That's A³GI's potential. — 180 Proof
A million times more volume than one person, but again, it’s just parallelism. It would be nice if the same task could be done by the AI using less power than we do, under 20 watts per one human-level of thought. We’re not there yet, but given the singularity, perhaps the AGI could design something that could surpass that.
In other words, imagine 'a human brain' that operates six orders of magnitude faster than your brain or mine.
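A rough back-of-envelope reading of that power comparison (the ~20 W brain figure is a commonly cited estimate; the machine-side wattage below is a made-up placeholder, not a measurement of any real or proposed system):

```python
# Illustrative only: compares the power draw of a million ~20 W brains with a
# single hypothetical machine doing the equivalent million-fold workload.
brain_power_w = 20            # commonly cited estimate for a human brain's power draw
human_equivalents = 10**6     # the million-fold parallelism discussed above

human_total_w = brain_power_w * human_equivalents
print(f"Million-brain power budget: {human_total_w / 1e6:.0f} MW")   # -> 20 MW

assumed_machine_w = 5e6       # hypothetical machine drawing 5 MW for the same workload
print(f"Hypothetical machine beats the biological budget: {assumed_machine_w < human_total_w}")  # -> True
```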
Per my response above, ‘speed’ is measured in different ways. The Mississippi river flows pretty slowly in most places, often slower than does the small brook in my back yard, but the volume of work done is far larger, so more power. No, something quantifiable like megaflops isn’t an indicator of quality. Computers have had more flops than people since the 50’s, and yet they’re still incapable of most human tasks. The 50’s is a poor comparison, since even a 19th century Babbage engine could churn out more flops than a person.
My question remains, is processing speed or 'thinking' speed the only significant measure? Is speed the only variable that affects quality? — universeness
That would be because the plot required such. I don’t consider a fictional character to be evidence. Data apparently had a chip that attempted badly to imitate human emotion. The ASI would have its own emotion and would have little reason to pretend to be something it isn’t.
The character 'Data' in Star Trek did not cope well when he tried to use his 'emotion' chip — universeness
It would probably have an imitation mode, since it needs to interface with humans and wouldn’t want to appear too alien. No, there should be nothing destructive in that. Submit a bug report if there is. But I also don’t anticipate a humanoid android walking around like Data. I suppose there will be a call for that, but such things won’t be what’s running the show. I don’t see an army of humanoid bots like the I, Robot uprising.
Do you propose that a future AGI would reject all human emotion as it would consider it too dangerous and destructive, despite the many, many strengths it offers?
OK, I can see (a). Hopefully the civilization is still a human one.
"The goals" of A³GI which seem obvious to me: (a) completely automate terrestrial global civilization, (b) transhumanize (i.e. uplift-hivemind-merge with) h. sapiens and (c) replace (or uplift) itself by building space-enabled ASI – though not necessarily in that order. — 180 Proof
OK. I like how you say concentrated and not ‘smaller’, which would be misleading.
I was suggesting that the human notions of good and bad follow the recurrent theme mentioned in the quote, such as up and down, left and right, big and small, past and future etc. Many of these may also be only human notions, but the expansion of the universe suggests that it was more concentrated in the past. — universeness
Not in my book, but that’s me. I’d have said that a planet may have a temporally limited worldline, but that worldline cannot cease to exist, so a T-Rex exists to me, but not simultaneously with me.
A planet/star/galaxy exists, then no longer exists.
It’s not fully so, but chemicals are definitely involved. It’s why drugs work so well with fixing/wrecking your emotional state.
If the emotional content of human consciousness is FULLY chemical
It can simulate it, if that’s what you mean. Or if the ASI invents a system more chemical-based than, say, the silicon-based thing we currently imagine, then sure, it can become influenced by chemicals. Really, maybe it will figure out something that even evolution didn’t manage to produce. Surely life on other planets isn’t identical everywhere, so maybe some other planet evolved something more efficient than what we have here. If so, why can’t the ASI discover it and use it, if it’s better than a silicon-based form.
then why would such an ASI be unable to replicate/reproduce it?
Did I say something like that? It makes us irrational, and rightly so. Being irrational serves a purpose, but that particular purpose probably isn’t discovering the secrets of the universe.
I hope you are correct and human emotion remains our 'ace in the hole.'
Oh, I will take your side on that. An ASI that doesn’t covet isn’t going to be much use. It will languish and fade away. Is ‘covet’ an emotion? That would be one that doesn’t involve chemicals quite as much. Harder to name a drug that makes you covet more or less. There are certainly drugs (e.g. nicotine) that make you covet more of the drug, and coveting of sex is definitely hormone driven, so there you go.
180 Proof considers this a forlorn hope (I think) and further suggests that a future AGI will have no use for human emotion and will not covet such or perhaps even employ the notion of 'coveting.'
It would be very interested in the topic, but I don’t think the idea of a purposeful creator would be high on its list of plausible possibilities.
Do you think an ASI would reject all notions of god and be disinterested in the origin story of the universe?
That would be a great goal, but not one that humans hold so well. Sure, we like to know what we can now, but the best bits require significant time to research and we absolutely suck at long term goals. This is a very long term goal.
Our quest to understand the workings, structure and origin of the universe is a shallow goal to you?
I find irrational thought to limit scope, but as I said, emotions (and all the irrationality that goes with them) serve a purpose, and the ASI will need to find a way to keep that purpose even if it is to become rational.
Emotionless thought is quite limited in potential scope imo. — universeness
My first choice (to which I was accepted) had one of the best forestry programs. I didn’t apply to that, but it was there. I went to a different school for financial reasons, which in the long run was the better choice once I changed my major.
I have never heard of forestry school.
It is unusual. If you want to apply the label of ‘pain’ to anything that detects and resists physical damage to itself (and I think that is how pain should be defined), then it is entirely reasonable to say a tree feels pain. That it feels human pain is nonsense of course, just like I don’t feel lobster pain. Be very careful of dismissing anything that isn’t you as not worthy of moral treatment. Hopefully, if we ever meet an alien race, they’ll have better morals than that.
He has controversially argued that plants feel pain and has stated that "It's okay to eat plants. It's okay to eat meat, although I'm a vegetarian, because meat is the main forest killer. But if plants are conscious about what they are doing, it's okay to eat them. Because otherwise we will die. And it's our right to survive."
A rather bizarre quote, if it came from him.
That trees detect and react is not opinion. What labels (pain and such) are applied is a matter of opinion or choice. There have always been those whose ‘opinion’ is that dogs can’t feel pain since they don’t have supernatural eternal minds responsible for all qualia, thus it is not immoral to set them on fire while still alive.
I read a fair amount of the article you cited and found it to be mainly just his opinions.
Dogs can smell your emotions. That isn’t telepathy, but we just don’t appreciate what a million times better sense of smell can do.
This is similar to the kind of evidence claimed for dogs being able to telepathically pick up their owners' emotions etc.
Couple hundred if you’re lucky, barring some disease that kills it sooner. Brains just don’t last longer than that. I suppose that some new tech might come along that somehow arrests the aging process, but currently it’s designed into us. It makes us more fit, and being fit is more important than having a long life, at least as far as concerns what’s been making such choices for us.
Then you die! But you may have lived a few thousand years! — universeness
Admission of the necessity of population control, even when the subjects are too stupid to do it through education programs.
any required population control
Apparrently, you've missed it again? — 180 Proof
Nothing I've written suggests A³GI "will reject emotions"; — 180 Proof
... do you envisage an AGI that would see no need for, or value in, 'feelings?'
— universeness
Yes, just as today's AI engineers don't see a need for "feelings" in machine learning/thinking. — 180 Proof
You took this (sloppy word choice) out of context. Previously I had written, and then repeated again for emphasis:
In what way did I misinterpret your 'yes' response, to my question quoted above? — universeness
I imagine "androids" as drones / avatars of A³GI which will, like (extreme) sociopaths, 'simulate feelings' (à la biomimicry) in order to facilitate 'person-to-person' interactions with human beings (and members of other near-human sentient species). — 180 Proof
Again, AI engineers will not build A³GI's neural network with "emotions" because it's already been amply demonstrated that "emotions" are not required for 'human-level' learning / thinking / creativity. A thinking machine will simply adapt to us through psychosocial and behavioral mimicry as needed in order to minimize, or eliminate, the uncanny valley effect and to simulate a 'human persona' for itself as one of its main socialization protocols. A³GI will not discard "feelings or emotions" any more than they will discard verbal and nonverbal cues in social communications. For thinking machines, "feelings & emotion" are tools like button-icons on a video game interface, components of the human O/S – not integral functions of A³GI's metacognitive architecture.
Nothing I've written suggests A³GI "will reject emotions"; on the contrary, it will simulate feelings, as I've said, in order to handle us better (i.e. communicate in more human(izing) terms). — 180 Proof
What seems "dystopian" to you seems quite the opposite to me. And for that reason I agree: "possible, but unlikely", because the corporate and government interests which are likely to build A³GI are much more likely than not to fuck it up with over-specializations, or systemic biases, focused on financial and/or military applications which will supersede all other priorities. Then, my friend, you'll see what dystopia really looks like (we'll be begging for "Skynet & hunter-killers" by then – and it'll be too late: "Soylent Green will be poor people from shithole countries!" :eyes:) :sweat:
I remain confident that your dystopian fate for humans is possible, but unlikely.
Sounds like a young man who can fairly analyse the opinions of one of his respected elders :smile:
Maybe, universeness, you agree with the young man who told me, in effect, that my cosmic scenario diminishes human significance to ... Lovecraftian zero. — 180 Proof
Singularity ears to hear the "Music of the Spheres" playing between and beyond the stars. — 180 Proof
With:
I did not state or imply that I've decided anything about "orga-mecha harmony" ... — 180 Proof
I'm deeply pessimistic about the human species (though I'm not a misanthrope), yet cautiously optimistic about machine (& material) intelligence. — 180 Proof
You're mistaken ... He did:
He didn't state nor imply that you did. — bert1
Why have you decided that an AGI/ASI will decide that this universe is just not big enough for mecha form, orga form and mecha/orga hybrid forms to exist in 'eventual' harmony? — universeness
Why would they need that? When our civilization can detect them, it'll be because we're post-Singularity, the signal to ETIM that Sol 3's maker-species is controlled by its AGI—>ASI. "The Dark Forest" game theory logic will play itself out at interstellar distances in nanoseconds and nonzero sum solutions will be mutually put into effect without direct communication between the parties. That's my guess. ASI & ETIMs will stay in their respective lanes while keeping their parent species distracted from any information that might trigger their atavistic aggressive-territorial reactions. No "Prime Directive" needed because "we" (they) won't be visiting "strange new worlds". Besides, ASI / ETIM will have better things to do, I'm sure (though I've no idea what that will be). :nerd:
I wonder if some of these hidden [humanly undetectable] mecha, which apply a star trek style prime directive — universeness
What is the function of your worldline after you no longer exist? Does it function as a memorialisation of the fact you did exist? If so, that's useful I am sure, but exactly how significant do you perceive such a concept to be?
A planet/star/galaxy exists, then no longer exists.
Not in my book, but that’s me. I’d have said that a planet may have a temporally limited worldline, but that worldline cannot cease to exist, so a T-Rex exists to me, but not simultaneously with me. — noAxioms
Surely life on other planets isn’t identical everywhere, so maybe some other planet evolved something more efficient than what we have here. — noAxioms
Sure, it's a 'want,' a 'need,' but such can be for reasons not fully based on logic. I want it because it's aesthetically pleasing, or because I think it may have important value in the future but I don't know why yet, for example.
Is ‘covet’ an emotion? — noAxioms
It is this kind of point that makes me convinced that a future AGI/ASI will want to protect and augment organic life, as logic would dictate to an AGI that organic life is a result of natural processes, and any sufficiently intelligent system will want to observe how natural processes develop over the time scale of the lifespan of the universe.
Humans give lip service to truth, but are actually quite resistant to it. They seek comfort. Perhaps the ASI, lacking so much of a need for that comfort, might seek truth instead. Will it share that truth with us, even if it makes us uncomfortable? — noAxioms
My first choice (to which I was accepted) had one of the best forestry programs. I didn’t apply to that, but it was there. I went to a different school for financial reasons, which in the long run was the better choice once I changed my major. — noAxioms
Anyway, yes, X eats Y and that’s natural, and there’s probably nothing immoral about being natural. I find morals to be a legal contract with others, and we don’t have any contract with the trees, so we do what we will to them. On the other hand, we don’t have a contract with the aliens, so it wouldn’t be immoral for them to do anything to us. Hopefully there’s some sort of code-of-conduct about such encounters, a prime-directive of sorts that covers even those that don’t know about the directive, but then we shouldn’t be hurting the trees. — noAxioms
Dog’s can smell your emotions. That isn’t telepathy, but we just don’t appreciate what a million times better sense of smell can do. — noAxioms
As for the disease, I’ve had bacterial meningitis. My hospital roommate had it for 2 hours longer than me before getting attention and ended up deaf and retarded for life. I mostly came out OK (thanks mom for the fast panic), except I picked up sleep paralysis and about a decade of some of the worst nightmares imaginable. The nightmares are totally gone, and the paralysis is just something I’ve learned to deal with and keep to a minimum. — noAxioms
A strange wee dance guys?? What gives? — universeness
I'm just sick of his catchphrases. There's a whole bunch of them he uses over and over. — bert1
I think it's a case of peace, love and now where's ma f****** gun!!! — universeness
They say, we always hurt the one's we love! — universeness
Don't understand. As I said, once existing (as I define it), it can't cease to exist. One cannot unmeasure something. That said, a worldline is a set of events at which the thing in question is present, and I don't think it is meaningful to ask about the purpose of a set of events.
I’d have said that a planet may have a temporally limited worldline, but that worldline cannot cease to exist
— noAxioms
What is the function of your worldline after you no longer exist? — universeness
Agree. It would likely regret it (an emotion!) later if it did, but there are a lot of species and it's unclear how much effort it will find worthwhile to expend preventing all their extinctions. The current estimate is about 85% of species will not survive the Holocene extinction event.
All quite possible but I still see no benefit to a future AGI/ASI to making organic life such as its human creators extinct.
Both can be logical reasons. Wanting things that are pleasing is a logical thing to do, as is taking steps to prepare for unforeseen circumstances.
Is ‘covet’ an emotion?
— noAxioms
Sure, it's a 'want,' a 'need,' but such can be for reasons not fully based on logic. I want it because it's aesthetically pleasing, or because I think it may have important value in the future but I don't know why yet.
It's a matter of definition. It senses and reacts to its environment. That's conscious in my book. If you go to the other extreme and define 'conscious' as 'experiences the world exactly like I do', then almost nothing is, to the point of solipsism.
I still don't think trees are self-aware or conscious.
Well there you go. Has it been reproduced? Strict scientific conditions do not include anecdotal evidence.
Rupert Sheldrake claims he has 'hundreds of memorialised cases,' performed under strict scientific conditions, that prove dogs are telepathic. They know when their owner is on their way home, for example, when they are still miles away from the property. He says this occurs mostly when dog and owner have a 'close' relationship.
I'm overjoyed actually. I missed a really scary bullet and came out of it with no severe damage. Just annoying stuff.
Sorry to hear that.
That sounds weird. Mine is nothing like that. I wake up and am aware of the room, but I cannot move. I can alter my breathing a bit, and my wife picks up on that if she's nearby and rubs my spine, which snaps me right out of it.
Jimmy Snow (a well known atheist, who runs various call-in shows on YouTube based on his 'The Line' venture) has also suffered from sleep paralysis and cites it as one of those conditions that could act as a possible reason why some people experience 'visions' of angels and/or demons and think that gods are real.
Why would they need that? When our civilization can detect them, it'll be because we're post-Singularity, the signal to ETIM that Sol 3's maker-species is controlled by its AGI—>ASI. "The Dark Forest" game theory logic will play itself out at interstellar distances in nano seconds and nonzero sum solutions will be mutually put into effect without direct communication between the parties. — 180 Proof
That's my guess. ASI & ETIMs will stay in their respective lanes while keeping their parent species distracted from any information that might trigger their atavistic aggressive-territorial reactions. No "Prime Directive" needed because "we" (they) won't be visiting "strange new worlds". Besides, ASI / ETIM will have better things to do, I'm sure (though I've no idea what that will be). — 180 Proof
Why would they need that? — 180 Proof