• 180 Proof
    15.3k
    In a similar vein, my post-human (post-biomorphic) preference is nano sapien.
    — 180 Proof

    :grin: but why so small?
    universeness
    You've invoked "Moore's Law"; well, in a similar vein, the miniaturization of tech, like natural complexity (i.e. life), accelerates ... and I think Buckminster Fuller was right about ephemeralization in the 1930s (later updated by John Smart et al in the 2000s with the transcension hypothesis) that intelligent systems will also continue to miniaturize, such that AGI —> ASI will eventually be instantiated in matter itself (and maybe then somehow in entangled quantum systems). Thus, nano sapiens. Will they be us? I imagine them as our post-biomorphic – infomorphic – descendants, and, to me, Clarke/Kubrick's "Monolith" symbolizes this apotheosis.

    Do you completely reject that a future ASI may choose to remain separate from us, but will augment us, and protect us, when we are in danger?
    I don't think ASI's goals, especially with respect to humanity, are predictable since ASI is over the event horizon of the "technological singularity" (which is the advent of AGI).

    As for AGI and whether or not it will be a benefit or hazard to us, I think that mostly depends on how we engineer / (metacognitively train, not just program) the transition from ANI to AGI. I don't see AGI being inherently hazardous to – motivated to deliberately harm – other sentient species.

    Do you think the monolith is 'learning' or 'teaching' or both, in this scene?
    I imagine the movie 2001 in its entirety as the "Monolith" simulating within itself to its-human ancestral-self ("Kubrick's audience") a reenactment of its human ancestors' becoming post-human.

    So does this depict, for you, an 'ascendance' moment for the human, or a 'completion of purpose' moment for the human.
    Yes.

    Is the monolith making an equivalent style statement, to such as 'as you are now, so once was I; as I am now, so will you be; prepare yourself to follow me'?
    No. I imagine that a human astronaut's transformation into the "Star Child" happened long ago (from the Monolith's perspective) as the third(?) and (possibly last) irreversible step on the developmental path to becoming itself: a nano sapien hypercivilization (aka from our perspective "the Monolith").

    Is this then imagery, of completing the circle, or perhaps even the cycle?
    For us, perhaps it is, given our mythopoetic bias.

    Would you find anything in this final scene then, that is relatable to cyclical universe posits, such as CCC, or do you think Kubrick was going for something more akin to the Buddhist 'wheel of life?'
    No.

    So do you think the universe is, in the final analysis, deterministic or not?
    I think the post-Planck era universe is deterministic.

    Or are my general interpretations of your analysis of the final scene you posted and your typings, in Javi's thread, way off?
    Yeah it is, but I didn't elaborate there as much as I have here. Maybe my interpretation of Kubrick's final scene is clearer now? (Btw, both Kubrick's interpretation and mine differ from Arthur C. Clarke's too.) :nerd:
  • universeness
    6.3k
    What would motivate anyone to help the blind?Athena
    I don't understand why you would ask such a question.
    How is it possible to know how to help a blind person?Athena
    Perhaps those who train guide dogs for the blind, for example, could explain it to you better than I.
    Again I don't understand your line of questioning here.

    Helen Keller could not see or hear, and a woman who could see and hear taught her language and made it possible for her to have the language necessary for thinking and communicating with others.
    Her parents did not do this. Why didn't her parents teach her? What made the woman who did teach Helen Keller language different from her parents? The answer will define what makes a human different from AI, and from there we can have an interesting discussion.
    Athena
    Or speak! Her parents did not know the 'finger spelling' sign language involved. What motivated the woman who did teach Helen how to communicate was the fact that she (Anne Sullivan) was a sign language specialist who was brought in by Helen's parents.
    How does this make Anne Sullivan different from a future ASI that can teach humans sign language? There is currently no AI expert system capable of teaching finger spelling to someone like Helen Keller, but the proposed abilities of a future ASI certainly could. In fact, a future ASI could probably develop a much better sign system for communicating with Helen than finger spelling. Helen herself preferred 'oralism' to 'finger spelling', braille, etc. Oralism involves touching a person's mouth as they speak.
    Hand gesturing and communication through touch, goes way back to ancient times.
    Again, I find your line of questioning bizarre, here Athena. But, I know you are a wise old force, so I am sure it's just a failing on my part to 'get where you are coming from here.'
  • universeness
    6.3k
    You've invoked "Moore's Law"; well, in a similar vein, the miniaturization of tech, like natural complexity (i.e. life), accelerates ... and I think Buckminster Fuller was right about ephemeralization in the 1930s (later updated by John Smart et al in the 2000s with the transcension hypothesis) that intelligent systems will also continue to miniaturize, such that AGI —> ASI will eventually be instantiated in matter itself (and maybe then somehow in entangled quantum systems). Thus, nano sapiens. Will they be us? I imagine them as our post-biomorphic – infomorphic – descendants; and, to me, Clarke/Kubrick's "Monolith" symbolizes this apotheosis.180 Proof
    I agree with your suggestion that 'functionality' can be miniaturised, but I have not really thought about how miniature something could be, yet still be self-aware or conscious. Any nano tech I have heard of is certainly functional and can even be networked to achieve a common goal etc, but no nano tech is currently sentient. I thought by nano sapien, you meant that there would not be much left which was 'human,' hence your 'post-human' preference over 'transhuman.' I see now that your analysis runs deeper than that.

    I don't think ASI's goals, especially with respect to humanity, are predictable since ASI is over the event horizon of the "technological singularity" (which is the advent of AGI).180 Proof
    True!

    I imagine the movie 2001 in its entirety as the "Monolith" simulating within itself to its-human ancestral-self ("Kubrick's audience") a reenactment of its human ancestors' becoming post-human.180 Proof
    Ok, so the monolith IS post-human.

    No. I imagine that a human astronaut's transformation into the "Star Child" happened long ago (from the Monolith's perspective) as the third(?) and (possibly last) irreversible step on the developmental path to becoming itself: a nano sapien hypercivilization (aka from our perspective "the Monolith").180 Proof
    Confirms what I thought you were saying about what the monolith represents, but what do you think of 2010, the sequel to 2001, also written by Clarke? In that film, a large number of monoliths are used to turn Jupiter into a new star. I assume Jupiter's moons are also turned into new habitable space for humans, but Europa is to be left alone and is protected by a monolith. This was too close to the Adam and Eve BS for me. You can go to any tree EXCEPT THIS ONE (Europa). Oh come on Mr Clarke, how derivative can you get?

    From wiki:
    HAL determines that the spot is a vast group of Monoliths, multiplying exponentially and altering Jupiter's density and chemical composition. He suggests canceling the launch in order to study the changes occurring to Jupiter. Floyd worries that HAL will prioritize his mission over the humans' survival, but Chandra admits to the computer that there is a danger, and that Discovery may be destroyed. HAL thanks Chandra for telling him the truth, and ensures the Leonov's escape. Before Discovery is destroyed, Bowman asks HAL to transmit a priority message, assuring him that they will soon be together. The Monoliths engulf Jupiter, which undergoes nuclear fusion, becoming a new star. HAL transmits this message to Earth:

    ALL THESE WORLDS
    ARE YOURS EXCEPT
    EUROPA
    ATTEMPT NO
    LANDING THERE
    USE THEM TOGETHER
    USE THEM IN PEACE

    The Leonov survives the shockwave from Jupiter's ignition, and returns home. Floyd narrates how the new star's miraculous appearance, and the message from a mysterious alien power, inspire the American and Soviet leaders to seek peace. Under its infant sun, icy Europa transforms into a humid jungle, covered with life, and watched over by a Monolith.


    Do you see these monoliths that exponentially multiply to turn Jupiter into a new star, as lifeforms or some kind of solarforming/terraforming system? The film does not confirm that the monoliths are consumed by this process. We don't know what happens to all those monoliths.

    From your link to the fermi paradox, I liked:
    The Transcension Hypothesis ventures that an advanced civilization will become fundamentally altered by its technology. In short, it theorizes that any ETIs that predate humanity have long-since transformed into something that is not recognizable by conventional SETI standards.
    But as this confirms, that may only apply to ETIs that are way more advanced than humans.
    Perhaps a little like 'the first ones,' dramatised in the sci-fi series Babylon 5. But that suggests many other species would have to exist other than just 'first ones,' unless, for some unknown reason, WE ARE the first ones.

    Is this then imagery, of completing the circle, or perhaps even the cycle?
    For us, perhaps it is, given our mythopoetic bias.
    180 Proof
    :grin: Yeah, poetic license has a lot of girth.

    I think the post-Planck era universe is deterministic.180 Proof
    So do you think 'quantum fluctuations' are deterministic? I think they are the only example of true 'random happenstance' that I am convinced does qualify as 'random.'

    Maybe my interpretation of Kubrick's final scene is clearer now? (Btw, both Kubrick's interpretation and mine differ from Arthur C. Clarke's too.)180 Proof

    True, perhaps also true for the sequel, 2010, as it will also be the director Peter Hyams' interpretation of Clarke's story.
  • 180 Proof
    15.3k
    Ok, so the monolith IS post-human.universeness
    Post-posthuman (i.e. post-sentient).

    ... what do you think of 2010 ...
    I didn't think much of either book or film. IMO, the latter is quite dated and superficially derivative.

    So do you think 'quantum fluctuations' are deterministic?
    They certainly aren't deterministic to a classical observer.
  • universeness
    6.3k
    They certainly aren't deterministic to a classical observer.180 Proof

    :lol: Fair enough, I don't know how to be anything else. Perhaps I need to consult a quantum physicist on determinism vs quantum fluctuations.
  • noAxioms
    1.5k
    If the people on the tracks made the train, and caused it to hurtle towards themselves, then they are the only ones who can stop it. The optimists are not passively waiting, and are not meekly accepting of the fate your pessimism suggests they can do nothing about.universeness
    From what I see, all the efforts have been aimed at reduction of the acceleration of the train, not even reduction of its speed. It’s not a fault. I cannot think of a being that has this capability. Perhaps this is evidence against intelligent design, because that’s one of the primary items I would have included in a decent design.

    How about just the person running a paper-pushing position at say a local doctor’s office.— noAxioms
    Soon automated, hopefully, same for all such tedious jobs.
    OK, I can accept that. I also had a paper-pushing job of sorts, but quite a good one involving significant creativity and pay. Such jobs will also be available for automation once the capability is there. It isn’t yet. But I suspect that there will be those who still choose to do such things, even if not in a capacity that displaces the machines doing the actual necessary work.
    You keep churning out such examples, and I keep repeating that I am confident that any job humans don't want to do, can be eventually automated.
    It wasn’t so much a job that wasn’t desired, but rather management that nobody wants to work under. I suppose they can make machines that don’t mind being berated for not being fast enough, or machines that don’t need to wear diapers just because the boss thinks 4 hours between restroom breaks is a minimum interval (Amazon does this among others).
    Yes, machines have automated many tasks, but so far they’re really awful at reacting to problems that arise, anything out of the ordinary. How does the robot restaurant cook react to a rat in the fresh food storage? Probably doesn’t notice it.
    If anyone refuses to do their share, then I would not remove access to any of their basic needs, but there would be social consequences of their refusal to do their fair share.
    Ah, now you finally mention the possible utility of consequences, even if completely unspecified.
    Pro-life and bodily autonomy arguments and issues like it, will no doubt persist for a long time yet.
    This is an interesting conflict because I’ve never seen either side of the argument make the slightest attempt to acknowledge the points made by the opposing side. There’s almost zero rationality to it. I’ve been to rallies (you decide which side) and trust me, rationality is nowhere to be seen.
    Point is, the issue with the environment is similarly lacking in rational thought, despite all the claims to the contrary. You need to start with a goal, and ask ‘why that goal?’, and if there’s an answer to that, then you probably haven’t identified the goal yet, only a means to some other goal.
    Who knows how new tech will change how an abortion is performed in the future.
    Sure, but ‘how’ is not the issue. ‘If’ is more the issue.

    Well, I have already stated that the main consequence of behaving as you suggest, in the quote above, is 'social status' based.universeness
    Those on the bottom of the social status scale don’t seem to mind their position there, or the social disdain that comes with it.
    If you know Jimmy (photos provided), perhaps you could discuss with him why he will not help his local community, in the ways we have asked him to.
    Because he gets all he needs without helping. That’s apparently enough for Jimmy. Of course I don’t see public shame-sheets naming each of them each month or so, a list of able-bodied individuals on the dole. People would get tired of such propaganda pretty quickly and it would lose any real pressure after a short time.
    You might find my suggestion here unpalatable, and you might even think that violence would be threatened or enacted against Jimmy, or Jimmy himself would respond to such social haranguing with violence, even though it would be prosecuted, if it was perpetrated.
    Then there would be crime, which would be dealt with accordingly, especially with automated evidence-gathering infrastructure that makes it almost impossible to get away with anything illegal. It’s not big-brother if it’s just preventing crime, right?
    Perhaps you can suggest a better way to reason with Jimmy, if all verbal reasoning has failed to date.
    I don’t know, I sort of favor the way they do it with the guy in the hut, but how to differentiate the layabout from the guy who has this busy hobby writing unpopular books and is too busy to pitch into community-necessary work that somehow cannot be automated? Some have excuses. Not all are able-bodied. Some are retired and exempt, and part of the code is to extend their presence as long as possible.
    Perhaps the issue would never arise eventually, due to the level of automation achieved.
    Just so. Then there’s no obligatory tasks, pretty much exactly like life in a zoo.

    How is population of a given region controlled? That can’t stay exponential forever, else the human biomass density will eventually exceed the mass density of the available elements. None of the above visions work without this. Shipping the excess off-planet is not a solution. Colonization is done with new blood. Australia is sort of an exception to this, but it was not done with surplus, but with undesirables.
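    The point above about exponential growth exceeding available mass can be made concrete with a back-of-envelope calculation. The figures below (current biomass estimate, 1% annual growth rate) are my own toy assumptions, not from the thread, but any positive exponential rate gives the same qualitative result: the timescale is millennia, not aeons.

```python
import math

# Toy assumptions (illustrative only, not from the discussion):
EARTH_MASS_KG = 5.97e24   # approximate mass of the Earth
BIOMASS_KG = 8e9 * 60     # ~8 billion people at ~60 kg each
GROWTH_RATE = 0.01        # assume a modest 1% annual population growth

# Years until human biomass alone would equal the mass of the Earth,
# solving BIOMASS * (1 + r)^t = EARTH_MASS for t:
years = math.log(EARTH_MASS_KG / BIOMASS_KG) / math.log(1 + GROWTH_RATE)
print(round(years))  # roughly 3,000 years under these toy assumptions
```

    Even granting off-planet expansion, a light-speed-limited sphere of colonized space grows only polynomially (volume ~ t³), so an exponentially growing population must eventually slow regardless.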

    Either war will destroy us all or it will eventually unite us all, as it has since we came out of the wilds.universeness
    Don’t think it can unite us. Sure, it can join two smaller groups into a larger one, as it always has, but it cannot, nor has it ever in history, made us one. At best it will be a total imperialistic state with one small group in control of everyone else as occupied states. If they kill off all the occupied population (as they seem to be attempting with Ukraine), then without anyone over whom to have power, the state will collapse into smaller units in mutual conflict. Imagine the entire planet controlled by somebody like Putin, with nothing but Russians everywhere and nobody left who isn’t one. That won’t last. The rules will not be the same for everybody. I can’t prove this, but it seems human nature that this is inevitable. A group needs an enemy to maintain its identity as that group. There’s never been an ‘us’ that seems to encompass all of humanity.
    Of course if there were other planets colonized, that might well unite an entire planet or even federations of them.
    Two tribes go to war and either one conquers the other or they make peace by uniting.
    If that was the outcome, there’d be no point to the war. No, the loser loses something, usually significantly more than just say their leader having to bend the knee. Why does Ukraine resist what’s happening if all they have to do is unite and everybody goes home happy?

    I believe global unity and world government is inevitable
    Well we differ there. I find it impossible unless you limit ‘global’ to ‘one of multiple globes’. Just my opinion.
    Trying to figure out if/where the sarcasm kicks in. — noAxioms
    I intended no sarcasm in what I typed.universeness
    I have bad sarcasm radar, so never sure.
    I see from my readings here that my thinking needs modulation by your robust brand of optimism.ucarr
    I presumed at first this was straight, but I delude myself. jgill definitely throws in on the sarcasm side.
    But in seriousness, I find my comment to which that was a response to be an optimistic outlook for the grander scale. My goals are far larger than those of most people, larger than humanity even. So I’m seen as a pessimist because I see humanity as only a means to something, not as an end.

    If what you say proves to be true in the future, and the 'wealthy' still exist as you describe them above.
    You said this, just not in those words. You said people could barter for more than the essential needs provided by the state on what I called the black market. That makes for wealthy people. If not, then such activity (barter) should be illegal and all goods (say the highly sought after paintings) should be handed out by lottery or something, in which case they’ll all be destroyed in short order because the average Joe has no means to care for priceless artwork. The artist will probably not bother to make many, knowing this fate awaits them.
    You seem to be OK with there being wealthy people. After all, it makes for an incentive to do something truly productive rather than mere pursuit of one’s hobbies.
    I wonder if a sufficiently wealthy person could create a company, all without money. What if the company could be publicly owned? That would make for money appearing in a system devoid of it. My brother is well educated in such matters. I should discuss stuff like that with him.
    Yeah, a lottery win can be a death sentence for many.
    Agree, but it wouldn’t be for me, mostly for the same reasons I don’t pay the stupid tax.
    'Earned the same amount,' is controversial, to say the least. The money trick means you can earn great wealth, not by particularly working hard but by sycophantically leeching from the sweat and toil of workers.
    I didn’t mention any ‘trick’. I’ve earned my nest egg without ever having an employee. I do own a company of sorts now, but it’s just me. It pays far less than the days when I was employed, but I find myself not wanting to get back into it.
    Nevertheless, those that earn money by working for the man or by being the man are both in the same category: far more likely to manage assets better than somebody who won similar assets in a prize.
    If you earned your millions/billions via investments and deals on the stock markets (gambling joints), then your wealth is via the money trick and is NOT CLEAN imo.
    Ah, we seem to have introduced a concept of clean money, different than the usual definition (laundered). You seem to define it in terms of moral means of acquisition. My brother (the one I spoke of above) made his living day-trading for several years. It wasn’t gambling and he did quite well, but I told him that it wasn’t productive since the activity served the interests of nobody, no customer or anything. He stopped doing it (afaik) and now rents other people’s houses. That is a productive activity and I approve more.
    That beginning, that resulted in the Amazon company that exists today and the abomination that is now the wealth of Bezos, is nefarious, and vile, and needs to be stopped from ever, ever happening, in the future.
    It’s efficient, and should be emulated by your perfect society. You just don’t like the money going to the owners. The brutal working conditions should be illegal though, but they’re necessary to be competitive. Your social society would eliminate that competition and theoretically make the working conditions far better, especially since nobody is going to be forced to do it by the necessity of needing to feed their families.

    Sounds good to me! Apart from the 'waste of resources.'universeness
    You don’t think long commutes are a waste of resources? My last job was about 200 km away. I considered moving but the cost of living was so much higher there. So I went in once or twice a week and did a 40 hour shift and did the rest of the work from home. I burned 4 cars into the ground doing that. Every one of them was lost somewhere on the commute to that place.

    For me, the term 'information singularity' or 'technological singularity,' is more about a 'moment of very significant change.'universeness
    Yes, but one car passing another isn’t a significant change. It’s a subtle one, even if the long term implications are not subtle. Maybe the cars are not side by side but km apart and nobody notices the difference.
    I didn’t see the point in bringing up a mathematical singularity at all. OK, a black hole event horizon is a singularity of sorts, and dropping through one won’t be noticed by the thing doing it, but the implications (certain doom) are there, and probably were already there before the EH was crossed. So there’s a bit of appropriateness to that analogy.
  • 180 Proof
    15.3k
    YOU connected YOUR enformer with deism which means YOU labelled it a deity. All you have done since then, is try to struggle out of those manacles you placed on yourself by trying to redefine deism. Why you choose to cosplay as a theist/deist, whilst denying your dalliances with it [ ... ] Is your 'enformationism' a hot topic of debate within the scientific community? Will it become so, anytime soon?universeness
    If only @Gnomon & co could (i.e. would make the effort to) understand and appreciate the soundly speculative implications of contemporary sciences such as ...

    ... maybe he (they) would reformulate and convey his (their) woo-of-the-gaps instead as a cogent philosophical system or treatise. :smirk:

    @Jack Cummins @Wayfarer @bert1
  • Wayfarer
    22.3k
    Why the Many Worlds Interpretation has Many Problems, Philip Ball

    The main scientific attraction of the MWI is that it requires no changes or additions to the standard mathematical representation of quantum mechanics. There is no mysterious, ad hoc and abrupt collapse of the wave function.

    There’s the motivation. Read on for the remainder.
  • Wayfarer
    22.3k
    Hugh Everett was drunk when he thought of it. He never got a hearing from Bohr. He left academia and worked on ICBM systems, dying early, an alcoholic, with a clause in his will that his ashes be put out in the garbage.

    Which they were.

    (Source The Many Worlds of Hugh Everett III, Scientific American)

    Incidentally the Scientific American article ends with a poignant note which Everett had apparently included in the original version of his dissertation, to wit:

    “Once we have granted that any physical theory is essentially only a model for the world of experience,” Everett concluded in the unedited version of his dissertation, “we must renounce all hope of finding anything like the correct theory ... simply because the totality of experience is never accessible to us.”

    That's the one thing he said which makes sense to me.
  • universeness
    6.3k
    If only Gnomon & co could (i.e. would make the effort to) understand and appreciate the soundly speculative implications of contemporary sciences such as180 Proof

    A great clip. I had not watched this one, and I liked the 4 strands mentioned and the proposed connectivity between them. Last night, I watched this almost 2.5 hour debate on the question 'Is Christianity rational?' between Matt Dillahunty and an Eastern Orthodox guy who uses the ID 'Posh.'
    Posh's arguments for why a first cause mind with intent was rational and logical, sounded very familiar. I think Gnomon would approve. Matt debunked his viewpoints very well. Worth watching if you can find the time.

  • universeness
    6.3k
    How does the robot restaurant cook react to a rat in the fresh food storage? Probably doesn’t notice it.noAxioms
    You are merely trying to suggest a scenario which YOU think CURRENT automated systems could not deal with. I will leave such issues to the experts in the field. They are aware of such problems, as cooks of the past have reported them. The reggae band UB40 even wrote a song about the issue:


    Who knows how new tech will change how an abortion is performed in the future.
    Sure, but ‘how’ is not the issue. ‘If’ is more the issue.
    noAxioms
    Yes, but bodily autonomy may not be an issue in the future if the whole process is done outside of the body; I am sure most women would prefer that to the bodily trauma they currently have to go through. No abortion as such would be needed, just a case of completing a process or stopping it. I imagine a whole new set of arguments would ensue.
    How about a future where a man can be injected with a compound which makes him produce the equivalent of a female egg. This could then be removed and fertilised with sperm, from his male partner.
    :lol: I would love to see the theist's react to that one. I think future biotech is going to create many 'fun' possibilities (Ok, I was employing a little sarcasm just then!)

    Those on the bottom of the social status scale don’t seem to mind their position there, or the social disdain that comes with it.noAxioms
    You know this for certain? How many have you personally asked?

    Then there would be crime, which would be dealt with accordingly, especially with automated evidence-gathering infrastructure that makes it almost impossible to get away with anything illegal. It’s not big-brother if it’s just preventing crime, right?noAxioms
    Crime has always existed. I think there would be a lot less of it, in a fair socioeconomic world.
    I have never suggested, it would be totally removed, by any sociopolitical system I support.

    Just so. Then there’s no obligatory tasks, pretty much exactly like life in a zoo.noAxioms
    Do the animals in a zoo have free travel? Freedom of speech and protest? A democratic vote? Free education? A career path of their choice, with the ability to change their chosen life path anytime they wish?
    If they do, then I would love to live in such a world zoo.
  • universeness
    6.3k
    How is population of a given region controlled? That can’t stay exponential forever, else the human biomass density will eventually exceed the mass density of the available elements. None of the above visions work without this. Shipping the excess off-planet is not a solution. Colonization is done with new blood. Australia is sort of an exception to this, but it was not done with surplus, but with undesirables.noAxioms

    Via population education and a better means of production, distribution and exchange, perhaps we can make the deserts bloom, build environmentally friendly cities under the water, and we also have the potentially unlimited living space that might eventually result from space exploration and development.

    If that was the outcome, there’d be no point to the war. No, the loser loses something, usually significantly more than just say their leader having to bend the knee. Why does Ukraine resist what’s happening if all they have to do is unite and everybody goes home happy?noAxioms
    Ukraine may well have united with Russia in the same way as countries in the European union united.
    Putin is too autocratic to understand such possibilities. You can achieve much more with the carrots than with the sticks.
    If we were in days gone past, then Putin's daughter might have been married to Zelensky's son, and prevented the war. There rarely is any point to war. What is it good for? Absolutely nothing!

    You seem to be OK with there being wealthy people. After all, it makes for an incentive to do something truly productive rather than mere pursuit of one’s hobbies.
    I wonder if a sufficiently wealthy person could create a company, all without money. What if the company could be publicly owned? That would make for money appearing in a system devoid of it. My brother is well educated in such matters. I should discuss stuff like that with him.
    noAxioms

    As long as what you would consider 'wealthy' gives such individuals no significant ability to influence any significant number of individuals to vote a particular way, or to influence the actions of those in authority, or to help them gain political office, then yes, I could accept it. Especially if everyone can take their basic needs and protections for granted from cradle to grave, and it is the only way to sate those who find such a pursuit essential to their inner vision of what personal freedom is, and what they consider the obvious result and goal of a powerful inner entrepreneurial drive.
  • universeness
    6.3k
    You don’t think long commutes are a waste of resources? My last job was about 200 km away. I considered moving but the cost of living was so much higher there. So I went in once or twice a week and did a 40 hour shift and did the rest of the work from home. I burned 4 cars into the ground doing that. Every one of them was lost somewhere on the commute to that placenoAxioms
    Yes, I do think long commutes are a waste of resources. I quite liked most of the imagery you invoked in:
    I suspect the future for the personal vehicle (let alone a flying one) is doomed. Transportation in any sufficiently dense population is best done by mass transit. I’ve been in the places where many people don’t own cars since everything can be reached via bus, subway, intercity trains, boats, etc. Most of the personal transportation might be limited to bicycles. It’s too rural where I live to do that, but that raises the problem where many want to live in a scenic place like the mountains, but do work more suited to an urban setting. That makes for a lot of resources wasted on commuting, even if it is a mass commute.
    There will be small vehicles, like a service van for the plumber and such.
    noAxioms

    But, I didn't like the suggestion that in your scenario, 'a lot of resources,' would be wasted, so I typed:
    Sounds good to me! Apart from the 'waste of resources.'universeness
    I have no idea why you interpreted this as You don’t think long commutes are a waste of resources?

    Yes, but one car passing another isn’t a significant change. It’s a subtle one, even if the long term implications are not subtle. Maybe the cars are not side by side but km apart and nobody notices the difference.
    I didn’t see the point in bringing up a mathematical singularity at all. OK, a black hole event horizon is a singularity of sorts, and dropping through one won’t be noticed by the thing doing it, but the implications (certain doom) are there, and probably were already there before the EH was crossed. So there’s a bit of appropriateness to that analogy.
    noAxioms
    I don't follow your logic here. The development of an AGI/ASI has been posited by many as the technological singularity moment that will ring the death knell for the whole human species. That's why I mentioned it in my OP on this thread: I wanted to know how credible posters here considered that dystopian prediction to be.
  • Athena
    3.2k
    Again I don't understand your line of questioning here.universeness

    I ask questions so people think about the answers. The Greeks asked about the impossible and dared to answer the questions. How does this make Anne Sullivan different from a future ASI that can teach humans sign language?

    I don't understand why you would ask such a question?universeness

    To establish what makes human thinking different from AI.

    How does this make Anne Sullivan different from a future ASI that can teach humans sign language?universeness

    Anne Sullivan was motivated to learn and teach for human reasons. AI does not have that motivation. There is no caring or feeling in AI. AI can destroy thousands of lives because it has no emotions that would stop it from doing what it is programmed to do. It also would not create something new and needed to resolve a human problem, for the same no-motive reason. Your computer will not wake up one morning and attempt to teach you valuable lessons. It does not care about you or any human. It has no human experience or feelings for determining what is just and what is humane.

    In fact, a future ASI could probably develop a much better sign system, that could communicate with Helen, compared to finger spelling.universeness

    I think that is an unrealistic expectation of what AI can do and my reasoning for thinking that is given above. Now if you said AI could be used to develop a better communication system, I would agree that might be possible. A motivated human could create something better with AI, but it is the human, not AI, that directs the fulfillment of a need because it is the feeling human who cares and is motivated.

    Again, I find your line of questioning bizarre, here Athena.universeness
    When we begin arguing, we close our minds and block out the opposing reasoning that threatens our sanity by putting our own reasoning in doubt. Ego starts screaming: I have to be right, so the other person has to be wrong, or is crazy to disagree with what I know is right. Or we can ask: what is your reasoning, considering the possibility that the other knows something we do not?
  • Alkis Piskas
    2.1k
    Anne Sullivan was motivated to learn and teach for human reasons. AI does not have that motivation. There is no caring or feeling in AI. AI can destroy thousands of lives because it has no emotions that would stop it from doing what it is programmed to do. It also would not create something new and needed to resolve a human problem, for the same no-motive reason. Your computer will not wake up one morning and attempt to teach you valuable lessons. It does not care about you or any human. It has no human experience or feelings for determining what is just and what is humane.Athena
    :up:
  • Alkis Piskas
    2.1k
    The development of an AGI/ASI has been posited by many as the technological singularity momentuniverseness
    What does this exactly mean?

    The term "Artificial Superintelligence (ASI)" is exaggerated. There's no actually such a thing as "artificial superintelligence". There's only Artificial Intelligence (AI), which can range from very simple computations to very complex and sophisticated solutions to problems and, with an analogous complexity and capacity in handling of data.

    Below is how ASI is defined/described by a standard source. You can find a very similar definition/description in a lot of the standard sources.

    "Artificial superintelligence (ASI) entails having a software-based system with intellectual powers beyond those of humans across a comprehensive range of categories and fields of endeavor."
    (https://www.techtarget.com/searchenterpriseai/definition/artificial-superintelligence-ASI)

    The key word is "intellectual", which means having to do with the intellect. And here's what intellect means:
    "1. The power of knowing as distinguished from the power to feel and to will: the capacity for knowledge.
    2. The capacity for rational or intelligent thought especially when highly developed."

    (https://www.merriam-webster.com/dictionary/knowledge)
    You can find other similar definitions/descriptions, but rational thinking and/or knowledge will be central concepts in them.

    However, knowledge involves understanding. It's not something mechanical or computational, or an ability to store and retrieve data. It also often involves perception.

    AI has no understanding. It cannot understand. It cannot perceive. It has no consciousness. It cannot even think. It just follows and processes instructions, which may indeed involve going through quite sophisticated and complex routines (algorithms) in order to find solutions to problems.

    You can hear from many people that AI has consciousness and understands and all that stuff. Well, before believing them and/or taking that kind of information for granted, you must study and acquire a solid knowledge about AI. Then, you must have experience in applying and programming AI, and for this you must be an experienced programmer. Only then can you judge for yourself and be certain about the validity of their statements. But of course, you don't need to do all that! :smile: You only need to know the basics well and apply simple logic.

    Nothing can surpass human intelligence. And AI is based on and exists because of human intelligence.
  • universeness
    6.3k
    How does this make Anne Sullivan different from a future ASI that can teach humans sign language?Athena

    A future ASI may compare with the intellect of Anne Sullivan as you or I compare with the intellect of a chimpanzee.

    To establish what makes human thinking different from AI.Athena
    You are attempting to compare human intellect with current AI. Current AI is advancing in functionality and capability. Systems like chatGPT are very advanced compared to an early system such as ELIZA.
    ELIZA was considered a significant advance on historical AI.
    How close are we to creating AGI?

    Anne Sullivan was motivated to learn and teach for human reasons. AI does not have that motivation. There is no caring or feeling in AI. AI can destroy thousands of lives because it has no emotions that would stop it from doing what it is programmed to do. It also would not create something new and needed to resolve a human problem, for the same no-motive reason. Your computer will not wake up one morning and attempt to teach you valuable lessons. It does not care about you or any human. It has no human experience or feelings for determining what is just and what is humane.Athena
    Humans who become more 'enlightened' tend to reject 'law of the jungle' behaviours.
    AGI would have a learning capacity which would grow much faster than the human ability to become enlightened. It's just as probable that an AGI/ASI would reach a level of enlightenment that would ensure its benevolence towards all life, flora and fauna alike. The destructive AI you contemplate would not, imo, be a very advanced AI, and we could probably defeat it quite easily.

    Or we can ask, what is your reasoning considering the possibility that the other knows something we do not.Athena
    How much do you know about current developments in AI, what sources are you referencing?
  • universeness
    6.3k
    The term "Artificial Superintelligence (ASI)" is exaggerated. There's no actually such a thing as "artificial superintelligence". There's only Artificial Intelligence (AI), which can range from very simple computations to very complex and sophisticated solutions to problems and, with an analogous complexity and capacity in handling of data.Alkis Piskas

    No, ASI is proposed based on current advances in AI and an observed pace of advancement, indicated by the likes of Moore's law.

    However, knowledge involves understanding. It's not something mechanical or computational, or an ability to store and retrieve data. It also often involves perception.Alkis Piskas
    A book contains knowledge but has no understanding until your brain processes it.
    An AGI or ASI is a moment of pivotal change or 'singularity,' if and only if it becomes self-aware.

    AI has no understanding. It cannot understand. It cannot perceive. It has no consciousness. It cannot even think. It just follows and processes instructions, which may indeed involve going through quite sophisticated and complex routines (algorithms) in order to find solutions to problems.Alkis Piskas

    This is correct for all current AI systems imo but not for future AI.
    Science currently knows very little about the 'instant' or 'recipe' that, at some point after abiogenesis, caused an awareness of self. Programmed AI will eventually become self-programming. We already have AGVs (automatically guided vehicles), such as extraterrestrial rovers, that can employ decision methodologies which can 'learn.' In other words, they don't just pattern-match against previously stored scenarios from a very large knowledge base under a 'query-based' expert system. Such a rover can use a massive array of sensors to gather information about a live event it is experiencing, and use queries it forms through its programmed expert system, which can pattern-match against its knowledge base. This is not so dissimilar to what you do when you face an unfamiliar situation and query your brain/instincts/emotions for 'what to do next.'
    A future AGI will be far more advanced than the extraterrestrial rovers/AGVs we currently have.
    If this 'learning' ability continues, then I think the AI system will be able to program itself, by recording every experience it has and linking that to in-built previous programming. If this ability grows, then I think it will become self-aware, in the same way natural evolution caused many lifeforms to become self-aware at some point after abiogenesis occurred, via vast variety combining in every way it possibly could, based on happenstance. I think self-awareness will happen for artificial intelligence, based on the fact that it happened for natural intelligence.
    An expert medical system which contains the knowledge of many human doctors can be replicated in seconds. Training a new human doctor takes many, many years. Expert systems are being employed more and more. There are issues, but they are being overcome at a faster and faster pace.

    From Emerald Insight (a site that sells journals, books and case studies,) we have:
    Today's doctors require decision support aids to help them cope with the management of increasing amounts of medical information (records, research advances, new drugs), make appropriate choices and even to substitute in an expert's absence. Such aids exist in the form of medical expert systems, which are complex computer programs that emulate clinical reasoning. Expert systems consist of a knowledge base in which doctors' expertise is encoded and an "inference engine" which manipulates that knowledge. A number of successful diagnostic, management and combined systems are in use but these are a small fraction of the total available. Preventing wider usage are difficulties in evaluation as well as in response time. Significant improvements in resource management can be obtained by the deployment of medical expert systems, so they are predicted to influence profoundly the future of health care in general practice and hospitals alike.
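    The "knowledge base plus inference engine" split described above can be sketched in a few lines. This is only a toy illustration of forward chaining, not any real medical system; the rules and symptoms below are invented:

```python
# Toy expert system: a knowledge base of if-then rules plus a tiny
# inference engine that forward-chains over known facts.
# All rules and symptoms are invented, purely for illustration.

RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"possible flu", "shortness of breath"}, "refer to clinician"),
    ({"rash", "fever"}, "possible measles"),
]

def infer(facts):
    """Repeatedly apply rules until no new conclusions can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            # A rule fires when all its conditions are already known facts.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "shortness of breath"}))
```

    A real expert system adds certainty factors, an explanation facility and a vastly larger rule base, but this forward-chaining loop is the core mechanism the quote's "inference engine" refers to.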
  • universeness
    6.3k
    You can hear from many people that AI has consciousness and understands and all that stuff. Well, before believing them and/or taking that kind of information for granted, you must study and acquire a solid knowledge about AI. Then, you must have experience in applying and programming AI, and for this you must be an experienced programmer. Only then can you judge for yourself and be certain about the validity of their statements. But of course, you don't need to do all that! :smile: You only need to know the basics well and apply simple logic.Alkis Piskas

    I am a retired Computer Scientist who taught the subject for 30+ years Alkis.
    I am not exactly an AI neophyte.
  • Alkis Piskas
    2.1k
    No, ASI is proposed based on current advances in AI and an observed pace of advancement, indicated by the likes of Moore's law.universeness
    Proposed as what? (I just read the first paragraph of the article to which your link refers, and it talks about an observation, not a proposition. Anyway, this is not the main point here.)

    A book contains knowledge but has no understanding until your brain processes it.universeness
    A book contains data, not knowledge. Knowledge is created after you assimilate this data. (Check the term "knowledge".) And it is your mind that processes this data, not your brain. The brain can only process stimuli. And stimuli are not data.

    An AGI or ASI is a moment of pivotal change or 'singularity,' if and only if it becomes self-aware.universeness
    AI can never become self-aware or even just aware. Awareness is an attribute of life (living organisms).

    AI has no undestanding.
    — Alkis Piskas
    This is correct for all current AI systems imo but not for future AI.
    universeness
    It is correct for past, present and future AI. You might have read a lot about AI --a lot of people say a lot of things about it and a lot of speculating is going around-- but IMO you must stick to basics. That is, what AI actually is. If something else is created or developed based on it, it will be another subject, not AI anymore. (E.g. cloning.)

    Science knows very little at the momentuniverseness
    Science knows a lot about AI already. But if you mean whether science can discover how AI can become "aware", well, I don't know of any scientific projects at this moment trying to achieve AI awareness, although there might be some without my knowledge.

    Otherwise, I really admire and respect what you do, all the scientific research you are doing on the subject, something which I know you do for many other subjects. I wish I had the necessary patience myself to do the same! :smile:
  • 180 Proof
    15.3k
    Awareness is an attribute of life (living organisms).Alkis Piskas
    Only life can be aware? How do you know this?
  • Alkis Piskas
    2.1k
    I am a retired Computer Scientist who taught the subject for 30+ years Alkis.
    I am not exactly an AI neophyte.
    universeness
    Oh, I was not meaning to invalidate your knowledge, @universeness! I'm very sorry about that! Really. :sad:
    I most probably pushed it too far. I do that sometimes. It has nothing to do with the other person. It has to do with myself, who has read and heard --and I still do-- so much crap about AI that it makes me puke. And this because I am an AI programmer and I always try to make people aware of and know what AI is actually about. But the wall of ignorance is too thick for me to break, and it becomes stronger and higher with time. So, maybe it's time for me to stop doing that. In fact, stop caring about that and let people live in their ignorance. Besides, this situation is as old as the dawn of Man.

    I'm sorry again, @universeness.
  • Alkis Piskas
    2.1k
    Only life can be aware? How do you know this?180 Proof
    You mean, what are my arguments about this, right? Because we all know things, don't we?
    So my argument is the following: all living organisms respond to stimuli. And to respond to a stimulus one must perceive it in some way, i.e. one must be aware of that stimulus. Also, living organisms are aware of danger, even if only by instinct.

    Anyway, you can find a lot of references on this subject in the Web. Here's an example:

    "All living things can respond to their surroundings, just like you can taste something awful then spit it out and shout “YUCK!” And all life comes from other life – just like how you came from your mother and father. So even though we look so different from other living things, we are much more the same than different."
    (https://astrobiology.nasa.gov/education/alp/characteristics-of-life/)

    You can find many more yourself.

    As for inanimate objects, they simply cannot respond to anything, since they cannot perceive. They can only "follow" the laws of Physics.
  • 180 Proof
    15.3k
    You didn't answer my first question (you must have missed it). Again – in response to your comment about AI – only life can be aware? (How do you know this?) Even more succinctly:

    Which physical laws, AP, prevent us from building / growing a 'self-aware AI'?
  • Alkis Piskas
    2.1k
    You didn't answer my first question (you must have missed it)180 Proof
    It was a simple question: "Only life can be aware? How do you know this?" How could I have missed it? :grin:
    I have certainly answered you. Regarding both life and non-life (inanimate objects).
    I can't do more than that. Let's snap out of it, OK?
  • Wayfarer
    22.3k
    Which physical laws, AP, prevent us from building / growing a 'self-aware AI'180 Proof

    No physical laws accurately describe sentient beings except insofar as sentient beings are subject to physical laws such as the law of gravity. But sentient beings operate by principles which can't be reduced to the laws of physics, such as the ability to act intentionally, heal, maintain homeostasis, reproduce, and so on. None of these operations are necessarily reducible to physical laws.

    A computer, as I'm sure you're aware, could theoretically be constructed from pipes and water, or stones and rubber bands, although it would obviously be wildly impractical, as micro-electronics offer efficiencies of scale that could never be realised in such media. But, in principle, a computer is just a fantastically advanced abacus that performs calculations and outputs results. So why would it ever be possible for such a device to become a being? You could, as Kastrup says, symbolically represent kidney function on a modern computer with great accuracy, but you would not expect the computer to urinate.
  • 180 Proof
    15.3k
    Nevermind. :roll:

    The usual non sequiturs. :sweat:
  • universeness
    6.3k
    Proposed as what?Alkis Piskas
    Your own choice of link seems to define the term quite well:
    From that site, we have:
    What is artificial superintelligence (ASI)?
    Artificial superintelligence (ASI) entails having a software-based system with intellectual powers beyond those of humans across a comprehensive range of categories and fields of endeavor.

    I understand that the definition mentions 'intellect' only, and not such things as 'emotion,' 'instinct,' 'intuition,' etc. But if you watch the material coming out from current AI experts, such as Nick Bostrom, Demis Hassabis, et al., you should accept that what they are reporting is not like listening to a preacher talking BS from a pulpit. What they are saying has a credence level, backed by scientific projections, that we should all pay attention to. Here is yet another short example:

    Nick discusses some of the issues related to future AGI/ASI systems, along with the issue of a self-aware, conscious AI.

    A book contains data, not knowledge. Knowledge is created after you assimilate this data. (Check the term "knowledge".) And it is your mind that process this data, not your brain. The brain can only process stimuli. And stimuli are not data.Alkis Piskas

    Data has no meaning! '23' or 'Bob' IS data. A book contains contextualised data, labelled data, data with associated meaning, such as "When Bob was 23 years of age, he picked up his first book on artificial intelligence." That sentence is NOT DATA, it is INFORMATION (data with meaning), which, when read and processed by such as a human brain, adds to the reader's KNOWLEDGE of Bob when he was 23 years of age. I am not an advocate of your dualist viewpoint. I make no significant distinction between the human mind and the human brain.
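    The data-versus-information distinction above can be made concrete in a couple of lines. A toy sketch (the labels are invented purely for illustration):

```python
# Bare data: values with no context or meaning attached.
data = ["Bob", 23]

# Information: the same values labelled with the context that gives them meaning.
information = {"name": "Bob", "age_at_first_ai_book": 23}

# A reader (or program) can now answer a question the raw values alone could not.
sentence = (f"When {information['name']} was {information['age_at_first_ai_book']} "
            "years of age, he picked up his first book on artificial intelligence.")
print(sentence)
```

    Knowledge, on this view, is what a reader builds by assimilating such information, which is precisely where the disagreement in this exchange lies.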

    AI can never become self-aware or even just aware. Awareness is an attribute of life (living organisms).Alkis Piskas
    Wanna bet?? :grin:

    Science knows very little at the moment
    — universeness
    Science knows a lot about AI already. But if you mean if Science can find how can AI become "aware", well, I don't know of any scientific projects at this moment trying to achieve AI awareness, although there might be some without my knowledge.
    Alkis Piskas

    My quote above was referring to what science knows about the exact 'tipping point' and the exact ingredients/recipe/mechanisms which caused natural self-awareness or consciousness to happen, in the sense of a moment or event or process/series of random events within a given duration in 'spacetime.' We are fairly convinced that abiogenesis occurred FIRST and self-awareness/consciousness after, but science has many gaps in its current knowledge about those events.

    Developing AGI and ASI may fill in many of those gaps and, by doing so, silence any theistic and theosophistic residuals that are still holding back human growth and progress.
    I find it fascinating that ASI might mean our extinction, or the kind of 'post-human' eon that @180 Proof raises an eyebrow of credence towards, or the next welcome stage of human progress that we need to become a significant extraterrestrial and interstellar species, which I believe it WILL allow us to become. ASI will give us the longevity and robustness we need to have so many more options in life than we have now.

    Otherwise, I really admire and respect what you do, all the scientific research you are doing on the subject, something which I know you do for many other subjects. I wish I had the necessary patience myself to do the same! :smile:Alkis Piskas

    Thanks Alkis, I enjoy exchanging views with you also.
  • 180 Proof
    15.3k
    interstellar speciesuniverseness
    I think the 'posthuman future' will be intrastellar-intraplanetary, not "interstellar"; and, unless we merge with it, the stars are only for ASI ... :nerd: