Please read this through to the end first, as my initial points may seem more antagonistic than they really are. You see the same dangers I've pointed out, but you need to drive them to their conclusions first.
Not so much forget as discount. — Vera Mont
Why do you discount the major factor in my argument? Without it, the whole notion of non-work becomes nonsense, since nothing in the world would produce the resources necessary for any of us to exist. My argument focuses on the singularity event of advanced automation, when almost any task can be turned over to software and hardware rather than a person.
I wish this were just a flimsy thought experiment, but just as uncontrolled, exponential climate change and nuclear war are thought experiment scenarios, they are also possible futures that need to be seriously considered. So is this. And you base your counterargument on ignoring this very fact.
As much as people want to do. — Vera Mont
How do you square this with industry and government using automation for every practical task? What work do you mean these people will be doing, other than renovating your own house, writing a book, painting, other arts, cooking a fine meal, and so on?
You need to name a task that is, by its core and value, impossible to replace with software and hardware.
For whom? To what end? What motivates AI to do that? — Vera Mont
I recommend studying how AI functions. Most people who discuss automation do not have good insight into this field of science. The most common mistake is to think of AI as basically just general-purpose AI, or rather, sentient AI.
To be brief: sentient AI is useless. It's basically unprogrammable and would only serve as a sentient alternative perspective to humans in philosophy; it has no inherent function and basically becomes just another sentient individual.
The AI that will actually be used, and is already being used to a great degree, is advanced algorithmic AI: synthetic intelligence, neural-network intelligence. This is simply an AI tailored to a specific function.
Automation will be programmed to address certain tasks. In this example: optimizing the planning of changes to an environment in order to improve it for inhabitants and the ecology. It will perform fast administrative changes to mechanical workers to streamline environmental work toward that specific end goal. There are no administrative personnel, no human workers; the only input is the intention placed on the algorithmic AI to perform towards this end goal.
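To make concrete what such a task-specific algorithmic AI amounts to, here is a minimal sketch. Everything in it is hypothetical (the names `ecology_score` and `plan_environment` and all the numbers are my own illustrative inventions); the point is only the shape: an optimization loop whose sole input is a fixed end goal.

```python
import random

# Minimal sketch of a task-specific algorithmic AI: a planner whose only
# input is a fixed end goal, expressed as an objective function. There is
# no sentience here, just goal-directed optimization.

def ecology_score(plan):
    """Hypothetical stand-in objective: how well a plan serves inhabitants
    and the ecology. In a real system this would be a learned or
    engineered model, not a toy formula."""
    green_space, housing, transit = plan
    return 3 * green_space + 2 * housing + transit - abs(housing - transit)

def plan_environment(steps=1000):
    """Hill-climb toward the end goal: propose small random changes and
    keep only those that improve the objective."""
    plan = [random.random() for _ in range(3)]
    for _ in range(steps):
        candidate = [min(1.0, max(0.0, x + random.uniform(-0.1, 0.1)))
                     for x in plan]
        if ecology_score(candidate) > ecology_score(plan):
            plan = candidate
    return plan

best = plan_environment()
print("plan:", [round(x, 2) for x in best],
      "score:", round(ecology_score(best), 2))
```

Nothing in this loop "wants" anything; the objective function is the entire input. Swap in a different objective and the same machinery plans something else entirely, which is exactly why everything hangs on how that end goal is specified.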
This either leads to a paperclip scenario as its worst outcome, or it functions well. Maybe it functions so well that the input doesn't have to come from a human at all, but from a top-level algorithmic AI that acts as a broader planner, with environmental issues as one lower branch.
You see, the question you ask is too simplistic to cover how AI actually works and how it will probably be utilized in the future.
For the sheer joy and satisfaction of doing it! — Vera Mont
Of course, and who has the privilege of doing such work? Because no one will pay for it when there's an almost infinitely cheap labor force available through robotics.
So you can't build an industry out of it when it requires that people work for free. And of course there's the little problem that, among the billions of people who live on this planet, most do work out of necessity for income rather than doing what they love.
Who will provide the resources for working for free, doing what you love, without an employer's demand that you perform in competition with companies that utilize automation?
But you are right: people will work with what gives them joy. The problem you won't seem to include in your assessment is how you can guarantee that everyone is able to do what they love, both in resources and in value.
Here's a scenario you have to consider.
Imagine that the lack of work drives millions, maybe even a billion people, to pursue work in areas that robotics and software can't replace (which leaves just a handful of occupations). For example, a billion people choose painting. Yes, AIs can paint, but art isn't just scrambling inspirations together with paint to produce a painting; it's also about intention in combination with a viewer experiencing that intention. Art requires both the artist and the receiver.
This leads to an oversaturation of artworks: billions of paintings ending up as artistic noise in which artistic meaning gets lost. There are not enough museums to show the paintings; online resources become more saturated than the millions of posts on TikTok. The experience of painting loses all meaning when so many people collectively work only with it and the feedback is reduced to glancing interpretations that never dwell deeper than a few seconds.
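A quick back-of-envelope sketch of the scale (every number here is my own illustrative assumption, not data):

```python
# Back-of-envelope illustration of artistic saturation.
# All numbers are invented assumptions, purely for scale.
painters = 1_000_000_000        # a billion people choose painting
paintings_per_year = 5          # a modest output per painter
seconds_per_view = 5            # the "glance" attention span

paintings = painters * paintings_per_year
# Time for a single viewer to glance once at one year's output:
viewing_years = paintings * seconds_per_view / (3600 * 24 * 365)

print(f"{paintings:,} paintings per year")                        # 5,000,000,000
print(f"~{viewing_years:,.0f} years just to glance at them all")  # ~793
```

Even at a five-second glance each, one year's output would take a single viewer centuries to see. Saturation here is not speculation; it is arithmetic.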
You need to follow your questions to their logical conclusions; this is philosophy we're doing here.
And then add the fact that not all people, far from all, actually have any interest in creative work or work that fulfills them. Plenty of people have no such ambitions; what will they do?
People like the feeling of satisfaction when they have completed a task they set for themselves; the elation of overcoming a challenge, solving a problem. People enjoy exerting their physical capabilities, in sports, but it's more meaningful to do so in the creation of something concrete. People also enjoy sharing work that serves their sense of community, like a pot-luck supper or barn-raising. Have you ever seen men happier - in the sense of abiding contentment, rather than momentary joy - than when a group of them is huddled over a malfunctioning engine or a recalcitrant tree stump? I can't prove it, but I have a feeling most sick people and little children would prefer to be cared for by a loving adult than an efficient robot. — Vera Mont
...have you followed all of this to its logical conclusions? How can you reconcile all of this at the scale of billions of people? Stop and think for a minute. The problem is not what people want, find meaningful, or value. The problem is that we are stuck in a system of thinking built on a capitalist foundation that automation breaks at its core.
Your arguments are based on how automation works today, not on the implications of future automation. You are basically stuck in the desert of the real.
The fact that something can be done, doesn't mean that it must be done. — Vera Mont
Stop and look at the world today. Look at the forces driving everything, driving progress. Then ask yourself: what's stopping advanced automation from happening in the future? It's not really a question of "must be done"; it's a question of something that will simply be.
Here's the kicker: for it not to happen, you would need to dismantle capitalist culture at its core and replace it with something else before automation arrives. But since, as I've explained, capitalist culture is a Baudrillardian system, people cannot invent something that isn't already part of the core system in place, because the resources and tools needed to invent something new have to come out of that system, both in practice and in psychology.
Besides, given that fact that most automation (that's not military) is controlled by commercial interests, as it keeps eroding its paid work-force, it incidentally erodes its customer base and the government's tax base; it has to reach a point of diminishing returns where no money is changing hands at all. UBI is a temporary stop-gap, as it also depends on redistribution of money.
Once there's no more profit to be made, who directs the robots? This, to me, is the central question about automation. (Based on the very large assumption that the whole house of credit cards doesn't collapse before that vanishing point, and all the billionaires head for the mountain strongholds. — Vera Mont
Here you start to get to the point I'm talking about: the actual collapse of capitalist culture.
A) UBI payouts increase as the taxes placed on the income of the manufacturing companies also increase. At some point there is either a balance that works, or companies get taxed past any ability to produce, even with cheap robot labor, and the economy collapses entirely as a system, throwing the world into a total capitalist collapse, soon followed, as a natural outcome of that chaos, by... war.
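A toy model of that feedback loop, to show why scenario A can run away (the displacement rate, incomes, and UBI level are all invented for illustration; it's a sketch, not a forecast):

```python
# Toy model of scenario A: automation erodes the wage base (and with it
# the customer base), UBI is funded by taxing company income, and the
# tax rate required to fund it keeps climbing. All numbers are invented
# purely for illustration.
population = 1000
ubi_per_person = 1.0        # resources each non-earner needs per step
company_income = 2000.0     # total taxable company income per step
wage_share = 0.8            # fraction of people still earning wages

for step in range(10):
    wage_share *= 0.7       # automation displaces workers each step
    company_income *= 0.9   # fewer earners -> eroding customer base
    ubi_needed = population * (1 - wage_share) * ubi_per_person
    tax_rate = ubi_needed / company_income
    print(f"step {step}: wage share {wage_share:.2f}, "
          f"required tax rate {tax_rate:.2f}")
    if tax_rate >= 1.0:     # taxes exceed income: production no longer viable
        print("collapse: companies taxed past any ability to produce")
        break
```

Whether the loop settles into a workable balance or runs past 100% taxation depends entirely on the rates; the point is that the system has a tipping point, not a guaranteed equilibrium.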
B) The capitalist system runs to its very end point, in which transactions stop because all the wealth of the world has accumulated in one place: the small group of people who own the world's industry, run by automation. This is the scenario you point at. Once all money has accumulated it loses all value, but the rich already hold the resource wealth and have no incentive to keep producing for the people who control neither money nor resources. This also leads to chaos and... war. However, the risk here is even greater, since that war would be waged against the people in control of the resources, and that is essentially a losing battle, handing a few total power over the rest of the world as they control robotics as a means of controlling the world's population.
Scenario B essentially manifests the most extreme version of a company owning "your data": they end up literally owning you, as you have no possible way of organizing a revolution against such accumulated resource power. Have you seen "Mad Max: Fury Road"? The society portrayed at the beginning, with Immortan Joe controlling water, gasoline, genetic bloodlines, and ammunition from a high tower bunker, is basically this scenario's end point, but without any ability to strike back, as an army of militarized AI robots would stop any rebellion in an instant.
Very much the opposite is in motion in America. Introducing moral philosophy, depends on a sensible school board operating under a sensible government with a generous budget. In Finland, you may be able to do it; in the USA, not under the current political trend. — Vera Mont
I already consider the USA a ticking time bomb of uneducated people collapsing the system, because no one cared to actually educate people into sensible, empathic, and thoughtful citizens. It will be the end of the USA at some point. Nations with a good education strategy will become the future superpowers, but since most of them are really small nations, there's a risk of their being snuffed out by wrestler presidents and delusional self-proclaimed emperors, simply because educated populations threaten such leaders (much like Putin's fear of Western culture "invading" Russia and threatening his power).
So, as an end point: you seem to see the very dangers that I'm pointing towards, but you may need to drive them to their logical conclusions. Automation is far more world-changing than I think people realize.