Why do you assume they will not need to visit other worlds to 'secure' vital resources ... — universeness

I don't assume that. "Other worlds" themselves are not "vital resources" to spacefaring thinking machines; they are only repositories of indigenous remnants or fossils of parent-species. For instance, countless stellar masses and the vacuum / inflation energy of expanding spacetime itself are not scarce to intelligences which know how to harvest them as computational resources. Instead I assume that astronomical (i.e. relativistic) distances – not resource-extractive territoriality – will mostly keep ASI & ETIMs in their respective galactic and intergalactic lanes.
Does your sense of self (your self-awareness or identity) disappear when you die? Or does the self that you sense disappear when it dies, and the "I" – the conscious component – continue in some other form? — Benj96

The way I see it: the you is the music, the brain is the orchestra; when the orchestra stops playing and disbands, the music is over – that is, you cease being you – and the capacity for the self-referring "I" (i.e. the melody) is lost at brain death. I'm not aware of any compelling public evidence to the contrary. :death: :flower:
What theory are you using in your reference to the ego and self? Freudian, Jungian, etc.? — Ø implies everything

None in particular that I'm aware of; certainly not a psychoanalytic "theory". Maybe a Spinozist conatus-inspired hybrid of Iris Murdoch's (platonic) 'unselfing', Derek Parfit's 'self-continuity' (contra self-identity) and Thomas Metzinger's 'phenomenal self-model' ...
I wonder if some of these hidden [humanly undetectable] mecha, which apply a Star Trek style prime directive ... — universeness

Why would they need that? When our civilization can detect them, it'll be because we're post-Singularity – the signal to ETIM that Sol 3's maker-species is controlled by its AGI→ASI. "The Dark Forest" game-theory logic will play itself out at interstellar distances in nanoseconds, and nonzero-sum solutions will be mutually put into effect without direct communication between the parties. That's my guess. ASI & ETIMs will stay in their respective lanes while keeping their parent species distracted from any information that might trigger their atavistic aggressive-territorial reactions. No "Prime Directive" needed because "we" (they) won't be visiting "strange new worlds". Besides, ASI / ETIM will have better things to do, I'm sure (though I've no idea what that will be). :nerd:
He didn't state nor imply that you did. — bert1

You're mistaken ... He did:
Why have you decided that an AGI/ASI will decide that this universe is just not big enough for mecha form, orga form and mecha/orga hybrid forms to exist in 'eventual' harmony? — universeness

In what way did I misinterpret your 'yes' response to my question quoted above? — universeness

You took this (sloppy word choice) out of context. Previously I had written, and then repeated again for emphasis ...
I imagine "androids" as drones / avatars of A³GI which will, like (extreme) sociopaths, 'simulate feelings' (à la biomimicry) in order to facilitate 'person-to-person' interactions with human beings (and members of other near-human sentient species). — 180 Proof
Again, AI engineers will not build A³GI's neural network with "emotions" because it's already been amply demonstrated that "emotions" are not required for 'human-level' learning / thinking / creativity. A thinking machine will simply adapt to us through psychosocial and behavioral mimicry as needed in order to minimize, or eliminate, the uncanny valley effect and to simulate a 'human persona' for itself as one of its main socialization protocols. A³GI will not discard "feelings or emotions" anymore than they will discard verbal and nonverbal cues in social communications. For thinking machines "feelings & emotion" are tools like button-icons on a video game interface, components of the human O/S – not integral functions of A³GI's metacognitive architecture.Nothing I've written suggests A³GI "will reject emotions"; on the contrary, it will simulate feelings, as I've said, in order to handle us better (i.e. communicate in more human(izing) terms). — 180 Proof
What seems "dystopian" to you seems quite the opposite to me. And for that reason I agree: "possible, but unlikely", because the corporate and government interests which are likely to build A³GI are much more likely than not to fuck it up with over-specializations, or systemic biases, focused on financial and/or military applications which will supercede all other priorities. Then, my friend, you'll see what dystopia really looks like (we'll be begging for "Skynet & hunter-killers" by then – and it'll be too late by then: "Soylent Green will be poor people from shithole countries!" :eyes:) :sweat:I remain confident that your dystopian fate for humans is possible, but unlikely.
What is there to speak of in continental philosophy if not the rich contents of our egos? — Ø implies everything

Maybe as a concession to the analytical style, I differentiate between "ego" and self, investigating techniques (e.g. hermeneutics, ethics, physics, cognitive neuroscience) by which the latter can flourish because of – in contrast to – the defects of the former.
His "work" wouldn't be if it was, for example, sufficiently peer-reviewed and replicated much more widely as @universeness et al points out. Controversial, even extraordinary, theoretical claims have been rejected both by the public and the scientific community – e.g. General Relativity, Evolution – until sufficient, public testing (i.e. experimental evidence) had been accumulated (and a generation or so of initial skeptics had passed from the scene). After hundreds, maybe a thousand, generations of philosophers and then scientists, considering claims of "past lives" etc, Stevenson's compilation is the latest to have had no impact on either brain sciences (re: neurological mechanisms of memory-formation, storage & recall) & physics (re: conservation laws) or philosophies of mind (re: refutation of physicalism, phenomenology, intentionality ...) Why is this? Given the potential scientific and philosophical significance of demonstrable "past lives", how is this near-ubiquitous neglect still possible, Wayfarer?Stevenson is a hot-button issue. — Wayfarer
... anti-metaphysical ... — Wayfarer

So you've forgotten about or have not yet read Witty's Tractatus Logico-Philosophicus (especially propositions 1–2), or, more sadly, you just read it as badly as the Viennese logical positivists had? :chin:
I understand dasein as "being there"; it must be a kind of awareness, even if not reflexively self-aware. I agree that the separation of subject and object only obtains discursively; it is not the primordial nature of human experience. — Janus

:up: :up:
No, I did not miss the point you made. My question remains: is processing speed or 'thinking' speed the only significant measure? Is speed the only variable that affects quality? — universeness

Apparently, you've missed it again? :smirk:
It's time to move beyond 'thoughts and prayers.' — opening prayer by U.S. Senate Chaplain Barry C. Black, retired Navy Rear Admiral, on 3 March 2023
Fear, a double-edged sword. Ditto for acting and public speaking. — Tom Storm

:up:
So, an acceptance/knowledge of death is a liberation from dread and anxiety and an open door to freedom? Does that resonate? — Tom Storm

From an old thread, "Should We Fear Death?" ...
It’s often argued that all the achievements and struggles of life mean nothing if it all ends in blackness. How so? Aren’t the moments themselves worthwhile? Is eternity the only criterion of value? This seems ugly to me.

Another post from an old thread, "What happens after you no longer fear death? What comes next?" ...
What do others think about the role of death in their lives and the concomitant role it plays in their philosophical speculations? Was Montaigne right to say, 'To philosophise is to learn how to die'?

Yes, from Plato originally. And influenced, or informed, by even more ancient Dharmic paths to moksha. Here's a recent post ...
... human extinction; ineluctable nothingness – the radical contingency of the species, its fossils & histories, and our bloodied parade of civilizations – an echo of sighs & moans, laughter & screams fading even now and forever into oblivion. Music is made of silence, which merely interrupts with sudden soundscapes, each piece (i.e. an ephemeral world) ending like raindrops in the ocean. It's terrible knowing, feeling bone deep, that everything and everyone [ ... ] one day very soon in the cosmic scheme of things will be utterly forgotten as if all of it, all of us, had never existed. — 180 Proof
Do you personally assign a measure of 'quality' to a thought? Is thinking or processing faster always superior thinking? — universeness

Maybe you missed this allusion to that "quality" of thinking ...
Assuming a neural network processes information 10⁶ times faster than a human brain, every "day" a human-level 'thinking machine' can think 10⁶ times more thoughts than a human brain – or rather, cogitate 10⁶ days' worth of information in twenty-four hours. — 180 Proof

In other words, imagine 'a human brain' that operates six orders of magnitude faster than your brain or mine. :chin:
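To put a rough number on that – a minimal back-of-envelope sketch, taking the quoted post's own 10⁶ speed-up factor as the assumption rather than an established figure:

$$
10^{6}\ \text{subjective days} \;\approx\; \frac{10^{6}}{365.25}\ \text{subjective years} \;\approx\; 2{,}700\ \text{subjective years}
$$

That is, each twenty-four-hour day would yield roughly 2,700 years' worth of human-paced cogitation.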
... do you envisage an AGI that would see no need for, or value in, 'feelings'? — universeness

Yes, just as today's AI engineers don't see a need for "feelings" in machine learning/thinking.
I assume you have watched the remake of Battlestar Galactica.

Unfortunately, I have – as far as the end of season three (after the first half of the third season, IIRC, the series crashed & burned).
Yeah. "Cylon skinjobs" were caricatures, IMO. The HAL 9000 in 2001: A Space Odyssey, synthetic persons in Alien I-II, replicants in Blade Runner, and Ava in the recent Ex Machina are not remotely as implausible as nBSG's "toasters". I imagine "androids" as drones / avatars of A³GI which will, like (extreme) sociopaths, 'simulate feelings' (à la biomimicry) in order to facilitate 'person-to-person' interactions with human beings (and members of other near-human sentient species).Did you think the depiction of the dilemmas faced by the Cylon human replicates, were implausible, as a representation for a future AGI?
In what ways do you think an AGI would purpose the moon?

Ask A³GI.
"The goals" of A³GI which seem obvious to me: (a) completely automate terrestrial global civilization, (b) transhumanize (i.e. uplift-hivemind-merge with) h. sapiens and (c) replace (or uplift) itself by building space-enabled ASI – though not necessarily in that order. :nerd:I am more interested is what you envisage as the goals/functions/ purpose/intent of a future AGI
Deadwood: Calamity Jane teaching American history. 'Custer was a cunt. The end.' — Tom Storm

:rofl: :up:
My own god-posit is mostly an explanation for the god-gap in the Big Bang creation story. BB does not begin at the beginning, but assumes the prior existence of Creative Power and Directional Rules for evolution. So, like a Cosmologist, I reasoned backward from current conditions to see if there were any clues to the how & why of sudden emergence from Erewhon (nowhere). I still saw a philosophical necessity for a Creation Myth to explain why there is something instead of nothing. — Gnomon

Finally confessing your own "Enformer" god-of-the-gaps fallacy. Good for you, sir. :clap: :smirk:
I remain unconvinced (for now) that all of (what I would consider) the most valuable aspects of human consciousness may not be achievable by any future AGI/ASI system. — universeness

Maybe I wasn't clear. My contention is that A³GI will not need any of "the most valuable aspects of human consciousness" in order to render us obsolete as a metacognitive species. I see no reason, in other words, to even try to make a 'thinking machine' that thinks about (or perceives) itself or us the way humans do.
