Comments

  • Why The Simulation Argument is Wrong
    While I don't believe in the simulation argument, since there's no evidence for it (just as there's no evidence for a God in theistic arguments), there are still points worth making about the concept, because the concept usually assumes that our current existence is of importance to the ones "simulating" us, when in fact we might be unimportant in our current state.

    First, if the world is simulated, why don't its 'designers' simply 'pop out' at times and leave us with some trace of their existence? Guidance through such a virtual world might be helpful, and yet there is no trace of anyone 'programming' or 'guiding' us anywhere.
    jasonm

    Why would it be helpful? If you help a moth shed its cocoon, you doom it to weakness and death in nature. "Helping" us out may be contradictory to the purpose of the simulation.

    On top of that, a simulation of this magnitude would more likely have an end goal, similar to the concept of a god emerging out of the determinism of this chaotic system. That "god" wasn't the start of everything but is instead the goal: us, or a collective of intelligences throughout the universe, merging civilisations and technology into a final form so fundamental that it would be able to manifest reality on its own, i.e. the equivalent of a god. That could be a purpose of a higher being simulating us.

    Or we're observed, and there's no interest in helping us out because that's not the purpose of the simulation. It could be a simulation of how to avoid a downfall of society: a demonstration for people outside the simulation of the importance of a certain societal practice that avoids self-destruction, with our simulation as a case in point showing what happens without that practice, in order to teach the young and ignorant why it was implemented in their society.

    The list of possible purposes for a simulation goes on, and very few would constitute a reason to interfere and "help" us.

    Similarly, why don't we sometimes notice violations of the laws of physics? If it's just a simulation, does it matter if the laws of physics are perfectly consistent? This applies to any law of this simulated world, including propositional logic. Again, if you are there, leave us with some trace of your existence through 'miracles' and other types of anomalies that our world does not seem to have. And yet there seems to be no instances of this kind.
    jasonm

    Why must the miracles or attributes of such anomalies be understandable or fit within a human concept? Maybe black holes, quantum superpositions and virtual particles are just such anomalies. Or, if the simulation is so fundamental that it simulates the most fundamental aspects of reality itself, we wouldn't be able to find anything odd, as everything would be consistent with the laws of physics.

    But on top of that, there's a problem with the frame of reference. If all our science and all our knowledge about reality comes from the foundation of this simulation, then our frame of reference is the reality of the simulation itself. What are you supposed to compare it to? You don't have an original to compare the map against, to reference Baudrillard's ideas.

    Third: what type of computing power would be required to 'house' this virtual universe? Are we talking about computers that are bigger than the universe itself? Is this possible even in principle?
    jasonm

    This is the problem with the simulation argument. It stems from a logical argument of probability about what we humans are likely to do, or what is likely to happen if technology evolves, but it ignores the physical nature of computational power. Yes, there might be computational power found in technology that we've yet to invent, but the computation required for handling data down to the Planck scale at all positions of reality is so extreme that it either tends toward infinity or requires a computer the size of reality itself. Whatever it is, it would most likely have to be something beyond what is possible within this reality, something within some higher form of reality in which manifesting our reality isn't that big of a deal.
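
    To put rough numbers on that (a back-of-the-envelope sketch of my own; the radius of the observable universe and the Planck length are just standard order-of-magnitude values):

    ```python
    # Rough estimate of the state a Planck-scale simulation would have to track.
    # My own illustration; both constants are order-of-magnitude values.
    OBSERVABLE_UNIVERSE_RADIUS_M = 4.4e26   # comoving radius, ~46 billion light-years
    PLANCK_LENGTH_M = 1.6e-35

    cells_per_axis = OBSERVABLE_UNIVERSE_RADIUS_M / PLANCK_LENGTH_M  # ~2.8e61
    planck_volumes = cells_per_axis ** 3                             # ~2e184

    print(f"~{planck_volumes:.0e} Planck volumes to track")
    ```

    Even at a single bit per Planck volume, that's on the order of 10^184 units of state, while the observable universe only contains roughly 10^80 atoms to build the computer out of. Hence: either a machine the size of reality itself, or something in a higher reality where those numbers aren't a problem.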

    Then, we can also view it through the concept of The Matrix and Dark City, in which the system that handles the simulation simply guards us from ever knowing about it. Since the ones doing the simulation have absolute control, they can simply tune our knowledge: we might encounter breaks in the simulation, but we're simply "edited" so that we never spot them.

    That's also similar to Westworld:

    ...it doesn't look like anything to me...

    In that sense, I think the notion that the universe is 'simulated' is completely superfluous and can therefore be explained away as being 'highly improbable.'
    jasonm

    Yes, as said, the argument is merely a logical one about the progression of technology, but it ignores the computational problems. It's also based on humans conducting the simulation, and we wouldn't need to simulate at this magnitude for any purpose other than creation itself. If so, it really doesn't matter whether we're in a simulation, as our reality is simply the only reality we know, similar to the holographic universe theory.
  • The "AI is theft" debate - An argument
    So AI becomes another tool, not a competing intellect.
    frank

    For anyone using ChatGPT for anything other than just playing around, it's clear that it's a tool that requires a bit of training to get something out of. I find it quite funny whenever I see journalists and anti-AI people just using it in the wrong way in order to underscore how bad it is or how stupid it is.

    For instance, a lot of the demonstrations of plagiarism have been elaborate lines of attempts to push a system until it breaks and, in return, plagiarizes. It's good to do this in order to fine-tune the system so it doesn't accidentally end up in such a position, but as a demonstration of plagiarism it's quite dishonest.

    It would be like playing elaborate tricks on a commissioned painter: lying to them about the intent of the artwork, saying it's just a copy of another painter's work for the sake of playing around, for private use, etc., in order to trick the painter into plagiarism so that you can then publicly blame the painter for being a plagiarist.

    My view is based on people seeing my work and reading complex messages into it. And this is painting, not ai art. I'm interested in their interpretations. My own intentions are fairly nebulous. Does that mean I'm not doing art? I guess that's possible. Or what if communications isn't really the point of art? What if it's about a certain kind of life? A work comes alive in the mind of the viewer. In a way, Hamlet is alive because you're alive and he's part of you. I'm saying what if the artist isn't creating an inanimate message, but rather a living being?
    frank

    The "communication" does not have to be known. It's more basic and fundamental than something like a "message". The "communication" and "intent" has more to do with your choices, and those "choices" during your creative work becomes the "intentions" and "communication" that will later be interpreted by the receiver and audience.

    For instance, a painter who creates abstract art might not know at all why she makes something in a certain way, but the choices are made in letting that instinct go wild and letting the sum of her subjective identity as a human evolve through that work.

    It's similar to someone having an outburst in an argument: the outburst forms from that person's subjective identity and experience, and it influences the receiver in that argument to react or interpret. Art is a form of human communication that is beyond mere words or description; rather, it is a way of transforming the interior of the subjective into observable and interpretable form for others.

    Which is why a world without art leads to cold interactions, conflicts and war: in such a world there is no such level of communication between subjective internal perspectives to form a collective consciousness. It's also why dictators fear art so much.
  • The "AI is theft" debate - An argument
    I said I was withholding my judgement. I have never claimed that the case is definitively settled for the reasons I've mentioned before regarding the state of our knowledge about both neuro and computer science. You clearly seem to disagree but I suppose we can just agree to disagree on that.
    Mr Bee

    The problem is that it influences your conclusions overall. It becomes part of the premises for those conclusions even though it should be excluded.

    I have never said it was binary. I just said that whatever the difference is between human and current AI models it doesn't need to be something magical.
    Mr Bee

    There are more similarities than there are differences within the context of neural memory and pathway remembering and generation. Pointing out the differences is like excluding a white birch from the category of trees because it is white and not brown like the others. Just because there are differences doesn't mean they're categorically different. Storage and distribution of copyrighted material rely heavily on proof that such files are stored with the intention of spreading them, or on functions that focus on spreading; neither of which is going on within these models.

    In the end, it doesn't even have to be that complex. Using copyrighted material in private, regardless of whether a company or a private person does it, is not breaking copyright. Ruling that it is would lead to absurd consequences.

    In any case, you seem to agree that there is a difference between the two as well, well now the question is what that means with regards to it's "creativity" and whether the AI is "creative" like we are. Again I think this is a matter where we agree to disagree on.
    Mr Bee

    "Creative like we are" have two dimensions based on what totality of the process you speak of. "Creative like our brain's physical process" is what I'm talking about, not "creative as subjective entity with intent".

    One may argue that for some work is what gives their life meaning, It's unhealthy working conditions that are the problem more so.
    Mr Bee

    Stress levels are at all-time highs; working conditions aren't stretched from good to bad, but from bad to worse. A society focused on finding meaningful jobs would need a foundation for them, and right now there's no such foundation. Instead, the foundation is free-market capitalism, an entity that grows by making everything as efficient as possible and reducing costs as much as possible. It is impossible to build a long-term, sustainable level of non-stress environments within it.

    It also removes the leverage that workers usually have over their employers. In a society that is already heavily biased towards the latter, what will that mean? That's a concern that I have had about automation, even before the advent of AI. It honestly feels like all the rich folk are gonna use automation to leave everyone else in the dust while they fly to Mars.
    Mr Bee

    As I said, wide automation breaks the Hegelian master/slave concept of capitalism.

    The scenario you describe cannot happen, as the distinction of "rich" only has meaning within a capitalist system. Meaning: someone needs to buy the products that make the rich rich. They might accumulate wealth, they might even fly to Mars, but so what? You think the world stops spinning because of that? The engineers and workers still exist and can work on something else; new leaders and new ideologies emerge among the people who are left, and new strategies and developments occur.

    This is why I feel all debates about automation and the rich and similar topics stay at a shallow level: people shouting hashtags online and not actually digging deeper into proper research about the consequences of automation. It's mostly just "the rich will just do whatever they want and everyone else is fucked". If people decide they're just fucked and don't care anymore, then they're part of the problem they complain about. People can work together and do something instead of just hashtagging bullshit online.

    History shows that that is rarely how things go. If that were the case then wealth inequality wouldn't have been so rampant and we would've solved world hunger and climate change by now. People are very bad at being proactive after all. It's likely any necessary changes that will need to happen will only happen once people reach a breaking point and start protesting.
    Mr Bee

    I'm talking about nations, not the world. A single nation that experiences a collapse in the free market and the usual workforce will have to change its society in order to make sure its citizens don't overthrow the government. The inability of nations to solve world problems has nothing to do with internal pressure on their own societies to change; such changes can happen overnight, and do so when a society is under pressure or in collapse.

    And the problems with world hunger and climate change mostly have to do with dictators and corrupt people at the top of the governments of the nations that are either struck by poverty or are the worst offenders on climate change.

    We can also argue that automation creates a situation in which the means of production get so cheap that expanding food production to poor nations isn't a loss for richer nations. Again, the consequence analysis of automation seems to stop at a shallow level.

    The irony is that it seems like AI is going after the jobs that people found more meaningful, like creative and white collar jobs. It's the more monotonous blue collar jobs that are more secure for now, at least until we see more progress made in the realm of robotics. Once automation takes over both then I don't know where that will leave us.
    Mr Bee

    As I've said numerous times in this thread, creating art and being an artist won't go away, because the meaning of creating art is not the same as creating content. Content creation is meaningless; it's the business of creating design for the purpose of profit. Automation of content creation won't render the creation of art meaningless. And art and AI can be fused in the form of better tools for artists.

    Full wide automation of most jobs in society leads to what I'm describing. A collapse of capitalism as we know it. And it's up to each nation to decide on how to deal with that.

    Future meaningful jobs, or rather tasks, within a fully automated society are those that focus on the value of individual thought and ideas in knowledge and art, as well as production that gains its value of meaning through being handmade.

    It's just that people are so indoctrinated into the capitalist free-market mindset that they mistake capitalist mechanics for an integral part of human existence. They're not.

    We'll see. Maybe I've just been a pessimist recently (nothing about what's currently going on in the world is giving me much to be hopeful about) but I can just as easily see how this can end up going in a dystopian fashion. Maybe it's because I've watched one too many sci-fi movies. Right now, assuming people get UBI, Wall-E seems to be on my mind currently.
    Mr Bee

    Wall-E is an apt dystopian vision of the future. I don't think any of this will lead to some post-apocalyptic reality or war-torn, destroyed world. We will more likely enter a bright future in which we are physically healthy in a clean environment that's ecologically sound, but so intellectually unstimulated that we're just mindlessly consuming endless trash content...

    ...but how is that any different from today, in which the majority just mindlessly consume trash, eat trash and do nothing of actual meaning, either for themselves or others, while some try to find meaning, focusing on the exploration of meaning in their lives through art, science and knowledge?

    The only difference would be that such a future society would make the consumers fully focused on consumption, and those interested in meaning fully focused on meaning, instead of everyone being part of the machine that creates all the trash.

    Certainly alot of them don't value their workers on a personal level alot of the time but I'd distinguish that from abuse. Of course that isn't really the main concern here.
    Mr Bee

    I'm not talking about abuse; I'm talking about how a lack of interest in the art that's created leads to an experience among artists of making meaningless stuff. The same lack of interest in what they do that makes these companies turn to AI is what drains artists of a sense of meaning. So who cares if they replace their artists with AI? Most of those artists may need that to rattle them out of such a destructive, co-dependent relationship with their current job, to get going and find meaning elsewhere, or to join a company that values their input. And even if AI takes over entirely, you still need artists' eyes, ears and minds to evaluate what's created and guide it... they will always have a job.

    The same can't be said of white-collar managers, and soon also blue-collar workers, as the robots get better every day. But the situation is the same for them as well. A blue-collar worker works themselves into medical costs that ruin them, for companies that build and make stuff and won't ever credit the workers. Let them build something with their hands for companies that appreciate that level of meaning instead.

    I mean if you want to just train a model and keep it in your room for the rest of it's life, then there's nothing wrong with that, but like I said that's not important. None of what you said seems to undermine the point you're responding to, unless I am misreading you here.
    Mr Bee

    You are still confusing the training process with the output. You can absolutely use a model officially, as long as it's aligned with our copyright laws. If a user of that model then misuses it, that's not the model's or the engineers' fault.
  • The "AI is theft" debate - An argument
    All sorts of meaning could be projected onto it, but my intention probably wouldn't show up anywhere because of the way I made it. The words I entered had nothing to do with this content. It's a result of using one image as a template and putting obscure, meaningless phrases in as prompts. What you get is a never ending play on the colors and shapes in the template image, most of which surprise me. My only role is picking what gets saved and what doesn't. This is a technique I used with Photoshop, but the possibilities just explode with AI. And you can put the ai images into Photoshop and then back into ai. It goes on forever
    frank

    Intention is still just my technical term for you guiding the model. If it's left to generate without any input it will just randomize in meaningless ways. You could end up with something that you can project meaning onto, but that is not the same as meaning in creation.

    Even random words in a prompt are still intention: your choice of words, your choice of relying only on intuition. It's still a guiding principle that the AI model can't decide on its own.

    Such exploration through random choices is still guidance. And we've seen that in traditional art as well, especially abstract art.
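
    Incidentally, this is quite literally how the mathematics of prompt guidance works in diffusion models. A minimal toy sketch of classifier-free guidance (my own simplification; `denoise_step` is a hypothetical stand-in for a trained network, not any real product's code):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def denoise_step(x, condition):
        # Hypothetical stand-in denoiser: proposes a small step toward the
        # condition. In a real diffusion model this is a trained network.
        return 0.1 * (condition - x)

    x = rng.normal(size=64)        # start from pure noise: no guidance yet
    uncond = np.zeros(64)          # "empty prompt" conditioning
    prompt = rng.normal(size=64)   # the user's choice of words, as an embedding

    guidance_scale = 7.5           # how strongly the prompt steers the result
    for _ in range(50):
        step_uncond = denoise_step(x, uncond)
        step_cond = denoise_step(x, prompt)
        # Classifier-free guidance: push along the difference the prompt makes.
        x = x + step_uncond + guidance_scale * (step_cond - step_uncond)
    ```

    With the guidance scale at zero, the loop just drifts wherever the unconditioned model takes it; every word added to the prompt changes where it lands. The choice of words is the guidance.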

    I think the real reason art loses value with AI is the raw magnitude of output it's capable of.
    frank

    You can say the same about photography. Why is street photography considered art? It's an output that can't be replicated: instant snapshots, thousands upon thousands of photos, and the act of choosing one.

    Art is the interplay between the artist and the audience, the invisible communication through creation, about anything from abstract emotions to specific messages. The purpose of a created artwork is the narrative defining its value. If all you do is generate images through randomization, then what is the purpose you're trying to convey?

    Aesthetic appreciation can be directed at whatever you find beauty in, but that doesn't make it art. A spectacular tree in a spectacular place in spectacular light can make someone cry in wonder at its beauty. But it isn't art. The "why" you do something is just as important with randomized processes as with intentional ones. Your choice of relying on intuition is part of the communicative narrative.

    But if you don't have that and just generate quantity upon quantity of images until you find something that you like, then that's content production. You're not conveying anything; you're exploring a forest in search of that beautiful tree in that spectacular location. Content that is beautiful, but not really art, as the purpose of its creation isn't there.

    You claim it's identical merely by appeal to a perceived similarity to private legal use. But being similar is neither sufficient nor necessary for anything to be legal. Murder in one jurisdiction is similar to legal euthanasia in another. That's no reason to legalize murder.
    jkop

    This is simply a false-analogy fallacy: an absurd comparison of two vastly different legal situations.

    The similarity is simply in the private use. You can't argue against the fact that my use of copyrighted material in private is just as valid as a tech company working with copyrighted material in private. The only point at which you can call copyright laws into question is by examining the output, the production that comes out of the work done in private.

    If you say that the use of copyrighted material by these companies in their private workflow is illegal, then why is it legal if I use copyrighted material in my private workflow? What is the difference here? That it's a tech company? If I have an art company and I use copyrighted material in my workflow, but it doesn't show up in the creation I provide to clients, shouldn't that be considered illegal as well?

    You're just saying there's a line, but without a clear definition of what that line is or where it's drawn. In order to rule something illegal in a legal matter, you absolutely have to be able to draw a line based on facts and cold, hard truth to the extent that's possible. And if it's not possible, then rulings will be in favor of the accused, as the crime cannot be proven beyond doubt.

    Corporate engineers training an Ai-system in order to increase its market value is obviously not identical to private fair use such as visiting a public library.
    jkop

    That's just a straw-man misrepresentation of the argument. You're inventing a specific purpose (based on your personal opinion of what they do) and then comparing it to an analogy that was made for the purpose of showing the similarities between the physical process in the brain and in the technology.

    On top of that, it's a misrepresentation of why such data was used. It has nothing to do with increasing market value; it's rather a method developed over the years in machine-learning science. Increasing the amount of data required using copyrighted material in order to reach those magnitudes of quantity. This was done long before there was any notion of a gold rush in this industry. And therefore you're still just making arguments based on the preconceived idea and ideology that these companies are evil, not on any actual grounds for conclusions within the context of legal matters.

    The arguments you make here don't honestly engage with the discussion; you're simply ignoring the arguments that have been made, or you don't care because you just fundamentally disagree. Neither of which is valid to prove the opinion you have, and it ends up just being an emotional response, which isn't enough in matters of what is legal or not.

    We pay taxes, authors or their publishers get paid by well established conventions and agreements. Laws help courts decide whether a controversial use is authorized, fair or illegal. That's not for the user to decide, nor for their corporate spin doctors.
    jkop

    What's your point? You still need to prove that the training process is illegal. Instead you're basically just saying "corporations BAD, we GOOD". Describing how laws and courts help decide something doesn't change anything about what I've been talking about. Quite the opposite: I've been saying over and over that proving "theft" in court is close to impossible in the context of training these models, as the process is too similar to what artists are already doing in their workflow. And because of this, just saying that "courts will decide" is meaningless as a counterargument.

    Yes, their rulings will decide, but as I've shown with examples, there are court rulings that seemed like clear cases of infringement and still ended up freeing the accused; and those were cases in which it was clearly established that the accused had decided to use others' material directly in the finished work. If any court anywhere is to prove "theft" in the process of AI training, it has to prove that these models have the files they were trained on within them, which they don't. And if it decides that a neural memory is the same as storing files like normal on hard drives, then what does that mean for a person with photographic memory, given that our brain has a similar method of neural pathways for storage? The problems pile up when trying to prove "theft", to the point where it would most likely not hold in court beyond doubt.

    If you do not want the AI to know a particular information, then simply do not put that information onto the internet.
    chiknsld

    This is true even outside anything about AI. People seem to forget that what websites show officially, outside of private conversations, like these posts and this thread, is public information. The act of writing here is me accepting that my text is part of the official reality that anyone can access, read or save. I still own the copyright to my text without it needing any stamp or registration, but if someone downloads this text to use while they write an essay on copyright law and AI, I can't stop them. But if they quote me without citation, that is plagiarism.

    That's why AI alignment in outputs needs to evolve in order to align with the same obligations humans exist under.

    I'll give you an example: You are a scientist working on a groundbreaking theory that will win you the Nobel Prize (prestige) but the AI is now trained on that data and then helps another scientist who is competing with you from another country. This would be an absolutely horrible scenario, but is this the AI's fault, or is it the fault of the scientist who inputted valuable information?

    In this scenario, both scientists are working with AI.

    We all have a universal responsibility to create knowledge (some more than others) but you also do not want to be a fool and foil your own plans by giving away that knowledge too quickly. It's akin to the idea of "casting pearls"
    chiknsld

    If you use an AI in that manner, you have accepted that the AI is using your information. There are AI models tuned to keep data private; otherwise they couldn't be used in fields like science.

    That's more of a question of being irresponsible to yourself.

    In terms of science, however, one area that's seen a huge improvement with AI is models trained on publications. The speed at which someone can do research and find good sources for citations has increased so much that it's already speeding up research in the world overall.
  • The "AI is theft" debate - An argument
    However, corporations do have a fiduciary requirement to generate profit for their shareholders. So, making money is the name of the game. The people who make up corporations, especially when engaged in new, cutting edge projects, clearly have lots of intellectual interests aside from the fiduciary requirement. I'm sure developing AI is an engrossing career for many people, requiring all the ingenuity and creativity they can muster.
    BC

    Actually, OpenAI was purely open source before they realized they needed investments for the amount of computing power required to train their AI model. And the usage of copyrighted material wasn't decided on the basis of making money, but of trying to solve and improve the AI models they were attempting to make. There was no inflow of money in this until society realized what these models could do and saw their potential. So all the greed came after the first models were already trained, and so did all the criticism, which basically invented reasons to fight the technology by flipping the narrative into "us, the good-guy workers" versus "them, the greedy, soulless corporations". But I think everyone needs to chill, realize that reality isn't so binary, and focus on criticism where it's valid and vital, not on fundamentally emotional outbursts and herd mentality.

    but the final uses to which this technology will be applied are not at all clear, and I don't trust corporations -- or their engineers -- to just do the right thing,
    BC

    Which is why we need regulations and laws. I'm all for demanding a global agency with veto control over every AI company in the world, one with full insight into what's going on behind closed doors; a kind of UN for AI development. Because AI also has enormous benefits for us: in medical science we're talking about a possible revolution.

    This is why I'm opposed to the argument that AI training on copyrighted material is copyright infringement. Technically it's not, and beyond that, such a ruling would stop the development and progress of AI research and the tools we get from it. We need to keep developing these models; we just need better oversight at the other end, on the uses of them. And people need to understand the difference between what these two sides mean.

    In the end, if artists could stop acting like radicalized luddites and be part of figuring out the actual way forward, we'd be more likely to end up aligned on how these models are to be used. They will become tools used by artists more than tools replacing them.

    Instead of destroying the machine, figure out the path into the future, because people are more inclined to listen to all the doom and gloom, and very bad at looking closely at the actual benefits this technology can produce.

    Just an example. True enough, it's not quite the same as AI. But corporations regularly do things that turn out to be harmful--sometimes knowing damn well that it was harmful--and later try to avoid responsibility.
    BC

    Yes, and that's why I'm all for oversight and control of the "output" side of AI, for societal regulations and demands on these companies so that the usage benefits people rather than harms them. The problem is that these companies get attacked at the wrong end, the point that could destroy the machines, out of pure emotional fear, radicalized by social media hashtag conflicts spiraling out of control.

    I was thinking of intention as in a desire to create something meaningful. An artist might not have any particular meaning in mind, or even if they do, it's somewhat irrelevant to the meaning the audience finds. So it's obvious that AI can kick ass creatively. In this case, all the meaning is produced by the audience, right?
    frank

    I'm using the term "intention" here as more of a technical distinction of agency in actions. Humans can decide something based on meaning, based on needs and wants, which do not exist within AI systems; such decisions have to be input in order to get an output. They will not produce anything on their own. Even if I just type "generate an image of a bridge" without any more detailed descriptions, it will randomize any decisions of composition, lighting, color, etc., but it doesn't do so based on wants, needs and emotions. The more I "input", the more of my subjective intention I bring into the system as guiding principles for its generation.

    The creative process within these AI systems is rather the technical and physical process being similar to the neurological processes in our brain when we "generate" an image (picture it in our mind). My argument is that copyright cannot be applied to simulating that brain process as a neural network any more than to a camera replicating the act of seeing. A camera can be used to plagiarize, but blaming the camera manufacturer and its R&D for some user's plagiarism using said camera is not right.

    Yea, an AI artist could create a century's worth of art in one day. I don't really know what to make of that.
    frank

    Not really. A true artist takes time to evaluate and figure out the meaning of what they want to create and what they've created. Someone just asking an AI to produce large quantities of something just generates content.

    Content and art aren't equal. One is made purely for profit and filler, the other is a pursuit of meaning. You can't pursue meaning in anything by focusing on quantity. Some people might find meaning in some AI-generated content; some people don't care about art enough to make a distinction. But how is that different from the difference between a long-running soap opera and 2001: A Space Odyssey? I for one, as an artist and thinker, don't care if trash content and quantity production in media get replaced by AI. It'll give us more time to focus on actually creating art rather than being forced to churn out content like a factory.

    I don't know enough about the processes in detail, but just a first glance impression gives off the feeling they're different.
    Mr Bee

    This here is the problem with the overall debate in the world: people replace insight and knowledge with "what they feel is the truth", a devastating practice in the pursuit of truth and of a common ground for all people to exist on. Producing "truths" based on what feels right is what leads to conflict and war.

    Chat-GPT I don't think is as intelligent as a human is. It doesn't behave the way a human intelligence would. Can I explain what is the basis for that? No. Does that mean I think it's magic then? Not at all.
    Mr Bee

    No one is claiming that either. Why is this a binary perspective for you? Saying that ChatGPT simulates certain systems in the brain does not mean it is as intelligent as a human. But it's as if it has to be either a dead, cold machine or equal to human intelligence, with no middle ground? ChatGPT is a middle ground; it mimics certain aspects of the human brain in terms of how ideas, thoughts and images are generated out of neural memory, but it does not reach the totality of what the brain does.

    The things is that for alot of people generative AI seems like a solution looking for a problem.
    Mr Bee

    Maybe because they can't think past shallow internet conflicts on Twitter and look deeper into this topic.

    I think this video sums it up pretty nicely:
    Mr Bee

    It summarizes one thing only: the shallow, surface-level media coverage of AI. Of course these tech companies hyperbolize things when much of the investment money poured into the industry comes from the stupid people who fall for these promises.

    That doesn't mean, however, that there isn't tremendous potential in AI technologies. We're already seeing this within the medical sciences, with systems like AlphaFold 3.

    The inability of people to see past tech-company hyperbole and the shallow media coverage of this technology is what drives them into this shallow, hashtag-Twitter level of conflict over AI. All of this is radicalizing people into one camp or the other, one side of the conflict or the other. It is utterly stupid and filled with bullshit that stands in the way of an actual productive conversation about how to further develop and use this new computational tech into the future.

    Such conversations exist, but they get drowned in all the bullshit online.

    According to some reports, AI could replace hundreds of millions of jobs. If it doesn't replace it with anything else, then to brush off the economic disruption to people's lives without considering policies like UBI is the sort of thinking that sets off revolutions.
    Mr Bee

    Of course, no one truly into the real productive discourse about AI ignores this issue.

    However, while people fear the economic disruption, the world is also worried about how work is stressing us to death, how people have lost a sense of meaning through meaningless jobs and how we're existentially draining ourselves into a soulless existence. So, why would such disruption be bad?

    Because it would effectively drive society towards systems like UBI, and it would force society to question what a "good job" really is.

    It would push us all into breaking a Hegelian master/slave situation (loosely interpreted in this context), in which all the masters replace workers with robots and AI, effectively lowering production costs to extremely low levels and increasing their revenue; but who's gonna buy anything when no one has the income to buy their products?

    It's ironic that people complain about these AI companies through the perspective of capitalist greed when the end result of massive societal disruption through AI would effectively be the end of capitalism as we know it, since it removes the master/slave dynamic.

    Any nation that spots these disruptions will begin to incorporate societal changes to make sure living conditions remain good for its citizens. And instead of working ourselves to death in meaningless jobs, people may find time to actually figure out what is meaningful for them to do and focus on that.

    Because people don't just want factory-manufactured things. There are millions of companies in the world that don't expand into factory production and low-wage exploitation, both on moral grounds and because their product is sold specifically as "handmade".

    It's like people in the AI discourse aren't actually thinking through the nuances of what a post-disruption society would look like. They just end their thinking at the disruption; they can only think through the perspective of free-market capitalism and are unable to picture society past this system. Maybe that's because their lives are so ingrained in capitalism that they're unable to even think in other perspectives, but the truth is that we may very well see a totally new system of society that hasn't been realized yet, even theoretically, because it relies on a foundation of absolute automation of all sectors in society, a condition that hasn't been on the table before.

    This is why most science-fiction depictions of the future are rather simplistic about the implications of disruptive technology, ignoring the butterfly effects of new technologies emerging.

    Just think about the industrial revolution; all of this with AI is repeating history. We see a threat to workers of a previous type, but an opening for new types of jobs. We also saw an increase in living conditions through the industrial and post-industrial revolutions. And the discussion has ALWAYS been the same, with people believing that the "new technology" spells the end for humanity, because they cannot fathom the actual consequences and instead always view things through the lens of destruction and oblivion.

    We already have a capitalist machinery that is destroying people, both in low-wage nations and in more westernized societies. Disrupting this is not a bad thing. But we have to make sure to focus on aligning society toward better conditions for people. That's impossible if people just hold shouting matches about who's the worst and always complain about any new technology that gets invented.

    I'm rather optimistic about a future that has, through automation, disrupted capitalism as we know it. I think people make the shallow assumption that the rich would just continue to accumulate wealth indefinitely, something that is technically impossible, especially in a world in which the gears of capitalism have essentially broken down. And I think people forget that the progress in the world so far has made people live in conditions that were only available to royalty as recently as 100 years ago. What the tech gurus are hyping in the media is totally irrelevant; they're speaking to the gullible masses and investors, not taking part in the real discussions about the future. A future that I think people attribute the same kind of doom and gloom to that people in all periods of innovation and progress have done.

    I, however, see a technology that can revolutionize aspects of our lives in ways that will make them better, even if that means everyone's out of a job. Because then we can focus on living life together and not wasting our lives being slaves for masters. Essentially, the question becomes: do you want to continue with the world as it is today? Or would the world be better off if soul-crushing work gets demolished by automation, leaving everyone to mostly benefit from such an automated society?

    When you're too cheap to hire someone to create something then you're probably also too lazy to fix the inevitable problems that comes with generated content.
    Mr Bee

    It still doesn't change the fact that these lazy CEOs and companies were treating artists badly before firing them due to AI. I don't think this is a loss for artists. Trash commercials aren't something I would value highly, and I actually think it's better for the soul of the artist to just not work at such places. I've seen many artists who stop making art altogether after a few years of such bullshit. That's killing artists more than any AI does.

    I can imagine the top art companies, say a Pixar or a Studio Ghibli, focusing solely on human art, in particular because they can afford it. I don't see them relying on AI. Like a high-end restaurant that makes its food from scratch without any manufactured ingredients, they'll probably continue to exist.

    There will also be companies that use AI to a certain extent as well, and then companies that rely on it too much to the point of being low-end.
    Mr Bee

    But AI will also lower the cost for those who focus on human-created art. The reason I think the official debate about AI is so misleading and bad is that people believe AI tools are only generative in terms of images or video. But there are other aspects of AI that speed up the process of making art that remains in the pure control of an artist, like rotoscoping, or isolating elements in sound.

    Or take the fact that an artist working at a game company right now is required not only to create the design for some asset, like a "building" on a game map, but to create maybe 20 versions of the same design, something that could very well take upwards of a few months. Now there are technologies that enable them to focus on just one building, really getting into depth and detail with that design without stressing through it, and then let an AI iterate different versions of that design without the artist losing control over it.

    Such tools enhance artists' control and give them more time to really polish their creations. So even human-created art will benefit. And we don't criticize art created today for being "cheated" with the current tools of the trade. No one is shedding tears because rotoscoping has started to become trivial; it's a time-consuming, soul-crushing process that wastes artists' time and investors' money. And in animation work, like at Pixar, they're already working with a lot of AI-assisted tools that make animation more fluid and physically accurate, because such parts of the process have always been a chore.

    But many artists seem not to understand this and believe it's all about generating images and video. Those aspects, in their simplest form, are just consumer-grade toys, or content creation similar to Shutterstock. The true meaning of those tools lies in working in tandem with other tools, enhancing the work of actual artists rather than replacing them.

    None of this really addresses their concern about financial stability. I fear that this new technology just gives more leverage over a group of people who have been historically underpaid. I hope it ends up well, but I'm not gonna brush off these concerns as luddite behavior.
    Mr Bee

    But what you describe is exactly what the luddite situation was during the industrial revolution, and their lives became better. Why are we criticizing this disruption by defending an industry that was underpaying artists and using them to the brink of giving up art altogether?

    It's oddly ironic that people argue against such disruption by wanting to maintain previously bad conditions: effectively criticizing the greed of these capitalists while framing the criticism within the context of making money, even if the conditions are horrible.

    Would you argue against automation in low-wage sweatshops that use kids as their workforce? People's arguments resemble saying that we shouldn't improve the conditions there with automation, because that would mean the workers get no pay at all, and that it's better to get some pay and work yourself to death than to get no pay at all. Really?

    That's an awful defense of keeping the status quo. Bad working conditions disappearing due to automation is a good thing.

    Not at all. They're just not allowed to use their non-transformative output based on those references and expect to get paid for it. Like I said before, if you want to look at a bunch of copyrighted material and download it on your computer, that's fine since all you're doing is consuming.
    Mr Bee

    Yes, consuming, like the training process does with these models. It's a training process, like training your brain on references and inspirations, filling your brain with memories of "data" that your brain will use to transform into new ideas, images or music. This is what I'm saying over and over: don't confuse the output of these AI models with the training process. Don't confuse the intention of the user with the training process. You seem to be able to make that distinction for a human, but not for the AI models. You seem to forget the "user" component of generative AI and the "user's decision" about how to use the generated image. Neither has to do with the training process.

    Noone is allowed to do whatever they want. Is private use suddenly immune to the law? I don't think so.

    Whether a particular use violates the law is obviously not for the user to decide. It's a legal matter.
    jkop

    If the private use is within the law and identical to what these companies do, then it is allowed, and that also means the companies do not break copyright law with their training process. If you confuse the output and the use of the output with the private workflow of the artist or the training process of an AI, then you are applying copyright law wrongly, effectively applying copyright infringement to the workflow and not the result.

    I don't understand why this is so confusing for you. You just appeal to it being a "legal matter", but legal matters operate on an established consensus of where specific laws apply. Claiming something is illegal when it technically doesn't fall under being illegal does not make it illegal, regardless of people's feelings about it or their personal wish that it should. That's not how laws work.
  • The "AI is theft" debate - An argument
    Because a mere intention to want to create a painting of a park doesn't get to the interesting parts about how our brains generate that image in our heads from what we know. Of course I don't know much about creativity, neuroscience, or AI like I said before, so I'm gonna avoid deeper conversations and skip over the following paragraphs you've written for the sake of time.
    Mr Bee

    Intention is more than just will; intention drives creation in a fluid, constant manner: not just the will to paint a park, but every detail of that park and the interpretation of it into reworks and changes.

    But it's important to know the depths of all of this, because that's part of defining the foundation for laws and regulations.

    They certainly deal with alot of criticism themselves if you're implying they don't. Tracing isn't exactly a widely accepted practice.
    Mr Bee

    From every encounter I've had with other artists, almost all play very loose with the concept of copyright during their process. It's very undefined where the line is drawn, and more often than not it's people who aren't artists themselves who moralize the most about where lines are drawn. Tracing and photobashing are just the obvious extreme examples, but there are lots of double standards behind closed doors in the workflow and process of creating works of art.

    Among artists there's a lot of moralizing towards others while not a lot of introspection into their own workflows.

    I'm willing to reject dualism as well, though I'm not sure why you're attributing this and indeterminism to people who just believe that human creativity and whatever is going on in diffusion models aren't equivalent.
    Mr Bee

    Because it's a question of having a foundation for laws and regulations. Defending human creativity by calling it magic isn't enough. Something closer to facts is required, and the closest to facts we have is what we've found in neuroscience about how similar these processes are between neural networks/machine learning and the human brain. What people like to believe in their own private, spiritual way is no foundation for societal law and regulation.

    I'm not saying that the human brain isn't a machine, I'm just saying that there are differences between the architecture of human brains and AI diffusion models something that may reveal itself with a further understanding of neuroscience and AI.
    Mr Bee

    The architecture of the neural network is mimicked, while the totality of the brain is not. What's the "same" is the behavior of memory formation and pathway formation. And if the process is similar, the argument can be made that a person with photographic memory who reads a book and then writes something inspired by it is utilizing the same kind of process as an AI model that's trained on the same book writing something based on it.

    If the trained AI is considered a "normal computer", that would be like saying a human has a specific part of the brain that stores a direct copy of the book as a file, which is not how we actually remember what we've read; we remember it within the context of other texts. The AI model does the same: it doesn't have a file of the book, it "remembers" the book in relation to other texts.

    If we are to create laws and regulations for that, then it's not as simple as saying the engineers "programmed this book into the AI", because that's like saying someone took a book and forced it into my skull as a file, rather than gave it to me to memorize in my head. The difference between a normal computer and a trained AI is that there's no copy of the original file; there's nothing of the original file, just as there's no original copy of the book in my brain. And most copyright law is based on physical or digital files being spread around.
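
    A toy way to see that difference in code (my own sketch, nothing like real training code, but the principle is the same): "train" on a text, then look at what's actually stored afterwards.

    ```python
    from collections import defaultdict

    text = "the cat sat on the mat and the dog sat on the rug"
    words = text.split()

    # "Training": record which words follow which. What gets stored is
    # relational statistics, not a copy of the source text.
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

    print(dict(counts["the"]))  # {'cat': 1, 'mat': 1, 'dog': 1, 'rug': 1}
    ```

    Several different texts would produce exactly the same counts, so the original "file" can't be pointed at or uniquely reconstructed from what was learned. The same problem, scaled up enormously, faces anyone trying to prove a model "contains" its training data.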

    Given how disruptive AI can be to all walks of society then I think that is a reason for pause unless we end up creating a very large society backlash.
    Mr Bee

    Would you have wanted society to pause the industrial revolution, giving up all the improvements in society that eventually came because of it?

    I find it a bit ironic that people don't want massive change while at the same time complaining that nothing is done about the problems that actually exist in society. These AI models aren't just pumping out images, video and text; they are used in other applications. Just the other day, a paper on protein folding was released that uses the new AlphaFold 3, which has a diffusion model as part of its system.

    If I have to be blunt: the benefits of these AI systems are potentially so massive that I couldn't care less about a minority of bitter artists who lost a job at a corporation that didn't even appreciate their contribution enough to value keeping them. These different AI models work together and help each other with tasks that reach beyond mere generative images or text, and the development of such beneficial processes will be delayed if they're forced to retrain on less data due to some court ruling.

    Destroying the models' capabilities by restricting the training is not the solution; the solution is to restrict the use and put guardrails on the outputs. But society and artists are frankly so uneducated about all this that they don't understand the consequences of what they're screaming for. They just hate AI because they're afraid, and they become luddites who smash machines that could benefit society in ways they can't even fathom.

    I despise such stupidity. Imagine a cancer drug that gets delayed because artists got a court to restrict and take down a model while it was a key component in a larger structure of AI systems for analysis and protein folding. People don't know what they're talking about; people don't seem to have a clue about anything related to AI models beyond the clickbait journalism and Twitter brawls. Yet, at the same time, they want new cancer treatments and improvements to society, in which, right now, AI is making massive improvements using the same models that rely on the same training data they want to restrict and remove from these companies.

    The equation in all of this is so skewed and misrepresented.

    Those were more new mediums, new ways of making art and music then something that could completely replace artists can do. I'd look more at the invention of the camera and it's relation to portrait artists as a better example.
    Mr Bee

    AI can't replace artists. As I've mentioned, an artist is skilled in having the eye, ear and mind to evaluate what is being created. An AI can't do this. You need an artist to use AI effectively; you can't have some hack generate stuff if the output is going to be used professionally.

    And those new mediums weren't "new ways" for those who were skilled in the older methods of concept art, people who had spent decades painting with real pencils and paint. They viewed the change just as artists today view AI; they didn't know how to retrain themselves, so they became obsolete. Yet people treat that transition as "whatever", just as we treat all previous transitions in society while cherishing how good life is with all our modern standards of living and working. Why would today be any different?

    But sure, the invention of the camera and its relation to portrait painting might be more similar. Then again, that also invented the job of the portrait photographer. Just as a concept artist today will be able to use their skill and knowledge of composition and design with an AI model to speed up the process toward a specific goal. Why wouldn't they?

    I view the hacks who think they're artists because they went from no artistic knowledge to writing prompts and getting outputs the same way I view the people who bought a DSLR with video capabilities in 2008 thinking they would be the next Oscar-nominated cinematographer, just because the DSLR had a 35mm sensor and produced a photographic look almost exactly like what cinema cameras produced on real film productions... only to end up making ugly, low-budget shit that sure had an optically and technically similar look, but nothing of everything else in terms of composition and filmmaking language. Because they weren't actual cinematographers who could evaluate what they were making. They couldn't see their mistakes, and they didn't know why something should look a certain way.

    AI will not replace artists; that idea is a misunderstanding of what an artist is. AI generation will replace the low-paid jobs at companies that didn't care about the artists to begin with and never really had the money or the care either.

    Companies that actually care about art but use AI will still need artists; they need their eyes to handle AI-generated outputs and change them according to plan and vision.

    Honestly we should all be concerned on that because if they're fine with it then artists are out of a job and we as consumers will have to deal with more sloppy art. Already I see advertisements where people have 6 fingers on a hand.
    Mr Bee

    Is that a bad thing? Does anyone actually care about ads in terms of aesthetic appreciation? Or is everyone trying their best to get their adblockers to work better and remove them altogether? The working conditions for artists were already awful at these kinds of companies; maybe it's better that this part of the industry collapses and gets replaced by outputs of the same quality as the uncaring CEOs of these agencies. Why are we defending shit jobs like these? They're the ones that will be replaced first.

    What it will probably do is focus consumers' attention more towards real artists' work. The difference will feel clearer. Just like how audiences have gotten fed up with the Marvel Cinematic Universe repeating itself over and over again and instead started to embrace original blockbusters once more. People don't actually want repetitive factory outputs, they want artistic perspectives, and this might rip the band-aid off the industry instead of turning everything into a low-wage factory.

    As said, artists won't disappear, because they will be the ones working on stuff that matters. Yes, some might not return to working in the field, because they were never able to rise above these low-paid, shitty jobs for uncaring CEOs in the first place. So maybe that's not a bad thing? Just like the luddites who smashed the textile machines ended up healthier once those slave-like conditions ended and working the machines became the norm instead.

    Because they're the ones giving most artists a job at this point and they need those jobs. Unfortunately that's the society we live in.Mr Bee

    Then start learning AI and be the artist whose expertise gives an edge over the non-artists put to work with these AIs, people who don't know what the word "composition" even means. It's an easier working condition, it's less stressful and it's similarly "just a job". Why is anyone even crying over these kinds of jobs being replaced by an easier form? It makes no sense, really. Why work like a slave because a CEO demanded five new concept art pieces over the weekend without caring how much work that demands of a concept artist?

    A company hiring a non-artist to work with AI won't work out, and they will soon realize this. You need the eyes, the ears and the poetic mind to evaluate and shape what the AI is generating; that's the skill the artist brings.

    They're both related. If the output process is not considered transformative enough, and the input contains copyrighted material, then it's illegal.Mr Bee

    No, if the output isn't transformative enough, that's not a problem with the training data; that's a problem with alignment, which exists to mitigate accidental plagiarism. What you're saying is like saying that if an artist makes something that's not transformative enough, we should rule that the artist may never look at magazines or photographs again for inspiration and reference. They would never be able to make anything again because they went to an art museum, saw other paintings, and accidentally plagiarized. It would mean their workflow infringes copyright rather than their output, which makes no sense compared to common artistic practices and workflows.

    The training process and the output are not the same thing in terms of copyright, and a user producing plagiarism with an AI model does not mean the training process infringes copyright.

    It's this misunderstanding of how the technology works that makes anti-AI arguments illogical and problematic.

    I don't think intention is a requirement of artistic output. An artist may not have anything to say about what a particular work means. It just flows out the same way dreams do. Art comes alive in the viewer. The viewer provides meaning by the way they're uniquely touched.frank

    Intention here is not really meant as "knowing" or the "truth" of expression; it's merely the driving force or guiding principle from which creation is directed. In essence, it means that there's an intentional choice to draw a bridge against a sunset, an intention that does not appear for an AI on its own because it has no emotional component for experience and subjectivity. The identity of a person, their subjective sense of reality, forms an intention to create something. In us it forms as an interplay between the physical creative process in our brain (the one similar to these systems) and our emotional life informing and guiding us towards an intended path for what that process creates.

    And this is also why I say that artists won't disappear. Even an AI that is a superintelligence, with the capacity to create art on its own because it's essentially sentient, would still constitute just a single subjective perspective, becoming a single "artist" among others. It might produce art faster and in greater quantities, and people might like its art, but they will still want to see other perspectives from other artists, and that requires a quantity of individual artists, AIs included.
  • The "AI is theft" debate - An argument
    The painting of Mona Lisa is a swarm of atoms. Also a forgery of the painting is a swarm of atoms. But interpreting the nature of these different swarms of atoms is neither sufficient nor necessary for interpreting them as paintings, or for knowing that the other is a forgery.jkop

    Neither is an argument that compares laws against actions that aren't the actions those laws actually target. The training of these models is similar to any artist who works behind closed doors with their own "stolen" copyrighted material. It's still the output that's the main focus of determining plagiarism and what actions are taken to mitigate it. An artist who accidentally plagiarizes does not get barred from ever making art again. A system that didn't handle accidental plagiarism in earlier versions, but is then optimized to mitigate it, should that be banned?

    You still seem unable to separate the formation of these neural models from the actions involved in generating outputs. The training process is not breaking copyright just because the output and use of the system did. That's purely a problem with alignment.

    So this argument about Mona Lisa makes no sense in this context.

    Whether something qualifies for copyright or theft is a legal matter. Therefore, we must consider the legal criteria, and, for example, analyse the output, the work process that led to it, the time, people involved, context, the threshold of originality set by the local jurisdiction and so on. You can't pre-define whether it is a forgery in any jurisdiction before the relevant components exist and from which the fact could emerge. This process is not only about information, nor swarms of atoms, but practical matters for courts to decide with the help of experts on the history of the work in question.jkop

    Training an AI model on copyrighted data is still not infringement any more than an artist using copyrighted material in their work. The output is all there is, and if accidental plagiarism is happening all the time, courts can demand that these companies do what it takes to mitigate it, which they are already doing... the rest is up to the intent of the user and should be the responsibility of the user. If the user is forcing a system to plagiarize, that's not the tech company's fault any more than Adobe is responsible if someone uses someone else's photo to enhance their own by manually editing it in Photoshop.

    Blaming the training process for copyright infringement would amount to an arbitrary definition of how we may handle copyrighted data in our own private sphere. It would mean that I cannot take a favorite book at home, pick up a pencil and copy segments from that book onto a piece of paper without committing copyright infringement, even if I never spread that paper around.

    How the tech works is out in the open, but people make emotionally loaded and wrong interpretations of it based on their own lack of understanding of the technology.

    Like, if I ask: where's the copyrighted data if I download an AI model? Point to where that data is, please. It's not encrypted; that's not why you can't point to it. If it's in there in the model, then to claim copyright infringement you have to point at the copied material inside the AI model. Otherwise, how do you establish that the companies "spread copyrighted material"? This is the foundation of any claim of theft, because what they did in the lab happened behind closed doors, just as with an artist taking copyrighted material into their workflow and process.
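
    To make that challenge concrete, here's what the exercise looks like in practice, a minimal sketch assuming the download is a plain PyTorch-style state dict (the file name is hypothetical):

    ```python
    # Open a downloaded model file and list what is actually inside it.
    import torch

    state_dict = torch.load("downloaded_model.ckpt", map_location="cpu")

    total_params = 0
    for name, tensor in state_dict.items():
        total_params += tensor.numel()
        print(f"{name}: tensor of shape {tuple(tensor.shape)}")

    # Every entry is a named tensor of numeric weights. There are no image
    # files, no text passages, no archive of training data to point at.
    print(f"total parameters: {total_params}")
    ```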

    Regarding the training of Ai-systems by allowing them to scan and analyse existing works, then I think we must also look at the legal criteria for authorized or unauthorized use.jkop

    I can take a photo of a page in a book and keep it for myself. That's not illegal. I can scan and analyze; I can do whatever I want as long as I'm not spreading those photos and scans around.

    If you claim that training an AI model is unauthorized, then it would also be unauthorized for you to take screenshots or take photos of anything in your home (there is copyrighted material everywhere, from books to posters and paintings to furniture design), because that is the consequence of such a definition.

    Then you can say, ok, we only make it unauthorized to train AI models with such analysis. That would mean that all algorithms in the world that work by analysing data would be considered unauthorized, fundamentally breaking the internet as we know it. Ok, so you say we only do it for generative AI. Then why? On what grounds do you separate that from any other scenario? If the defense says that such a ruling arbitrarily favors artists for unspecified reasons other than that they are human, and asks why engineers aren't allowed to utilize processes similar to those of artists, then what's morally correct here?

    It all starts to boil down to how you define why artists have more rights than these engineers, and why the machine isn't allowed to do what artists do even though artists handle copyrighted material even more directly in their private workflows.

    If you actually start to break all of this down, it's not as clear-cut as you seem to believe.

    Doesn't matter whether we deconstruct the meanings of 'scan', 'copy', 'memorize' etc. or learn more about the mechanics of these systems. They use the works, and what matters is whether their use is authorized or not.jkop

    Of course it matters. Artists scan, copy and memorize lots of copyrighted material in their workflows. Why are they allowed and not the engineers training these models? Why are artists allowed to do whatever they want in their private workflows, but not these companies? Because that's what you are targeting here. What's the difference? I mean, what's the actual difference between the two that produces such a legal distinction that you can conclude one is illegal and the other isn't?

    Just a random question. Had someone sold the database of all posts of a forum (not this one, in my mind), would that be considered theft or public information?Shawn

    If the forum rules stipulate that the forum in some way "owns" the posts, then the people who write on the forum can't do anything about it. However, I don't think forum owners own the posts written there, so they can't really sell them. That said, there's nothing illegal about scraping all posts from a forum if they're available as public information. Google's crawler is already doing this for search results, and as I've pointed out in this thread, training an AI model on those forum posts shouldn't be considered illegal, since they're not copying anything more than I am when I read others' posts and draw upon them in my own writing.

    It all, always, at every point, boils down to what the output of an AI system is. Did the user ask for a plagiarized post from one of the posters of a forum? Then the user is breaking copyright. If the system accidentally plagiarizes, then the tech company must be forced to implement mitigation systems to prevent that from happening.

    But information, works of art and everything that is officially available in some form can't be protected from being viewed, analyzed and decoded. That's not what copyright is for; it's for protecting against the unlawful spreading, reselling or plagiarism of others' work. With enough original transformation, an output, or the spread of such an output, isn't any more illegal than if I get inspired by Giorgio de Chirico and create a cover for a game.
  • The "AI is theft" debate - An argument
    #1. Make money.BC

    There are better ways to make money. It's easy to fall into the debate trap of reducing everything companies do to pure capitalist interest, but building AI tools is about the worst approach if the only intention is profit. And competition in this new field, as an industrial component of society, demands making the best possible product without risking courts taking it down and ruining your entire business.

    I do not know what percent of the vast bulk of material sucked up for AI training is copyrighted, but thousands of individual and corporate entities own the rights to a lot of the AI training material. I don't know whether the most valuable part was copyrighted currently, or had been copyrighted in the past, nor how much was just indifferent printed matter. Given the bulk of material required, it seems likely that no distinction was made.BC

    Since the amount of data is key to making the systems accurate and better, there needs to be as much of it as possible. And since text and images exist in far larger quantities today than from people who died 70 years ago, using copyrighted material is required.

    But since it is training data, why does it matter? People seem confused as to what "theft" really means when talking about training data. Memory is formed through "pattern networks" similar to the human brain; nothing is copied into the system. Weights and biases are programmed in to guide the learning, and the system learns to recognize pattern structures across all the data. In doing so, it learns the commonalities in images, text or whatever the data is, in order to form an understanding of how to predict next steps in the generation. When it accidentally plagiarizes something, it's similar to how we picture a memory in our head as clearly as we can. I can remember a Van Gogh painting with high clarity, but it's still not identical to the original. I can remember text I've read, but I often misremember the exact lines. This is because my mind fills in the gaps through predictive methods based on other memories and other information I've learned.
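
    As a toy illustration of that point (a minimal sketch, not any company's actual pipeline): during training, the only thing that persists after each step is the updated weights; every example is discarded once it has nudged them.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.normal(size=(64, 64))   # the model's entire "memory"

    def train_step(weights, example, lr=0.01):
        # Stand-in update rule: nudge the weights toward reconstructing
        # the example, then discard the example itself.
        prediction = weights @ example
        error = example - prediction
        return weights + lr * np.outer(error, example)

    for _ in range(10_000):
        example = rng.normal(size=64)     # stand-in for one image or text sample
        weights = train_step(weights, example)

    # After training, `weights` is all that remains: a fixed-size array shaped
    # by the data, from which no single example can be read back verbatim.
    print(weights.shape)                  # (64, 64), regardless of data volume
    ```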

    As I've mentioned numerous times in this thread, it's important to distinguish between training processes and generated outputs. The alignment problem is about getting the system to function in a way that isn't destructive to society, in this case destructive to artists' copyright. But the data it was trained on is part of the information available in the world, and since it's used behind closed doors in their labs, it's no different from how artists use copyrighted images, music or text in their workflow while creating something. A painter who cuts images out of magazines and uses them as inspirational references while painting, copying compositions, objects, colors and the like from those images, is doing basically the same thing as an AI model trained on those images; if anything, the model copies less directly than some artists do.

    Have you ever scrolled through royalty-free library music? The process is basically taking what's popular in commercial music or movie soundtracks, replicating the sound of the instruments, and keeping most of the notes while changing one or two so as not to copy a track directly. How is this different from anything Suno or Udio is doing?

    And at scale, with millions of images, texts, music files and so on, the risk of accidental plagiarism is very low compared to an artist using just a few sources.

    In the end it's still the responsibility of the one using the system to make sure they're not plagiarizing anything. It's their choice how to use the image.

    The output and the training process are not one and the same thing, but people use the nature of outputs, and accidental plagiarism in outputs, as proof that the training process and training data constitute theft, when it doesn't actually support that conclusion. There's no database of copyrighted material on some cloud server from which these systems "get the originals". They don't store any copyrighted material anywhere but in their own lab. So how does that differ from an artist who's been scraping the web for references and storing them on their own hard drives?

    The many people who produce copyrighted material haven't volunteered to give up their ideas.BC

    In what way did they "give up their ideas"? If I create an image, upload it to something like Pinterest, and someone else downloads that image to use as a reference in their own artistic work, they didn't commit theft. Why is it theft if a company uses it as training data for an AI model? As long as the outputs and generations are aligned not to fall into plagiarism, why does that matter any more than if another artist used my images in their own workflow? Because the companies are part of a larger capitalist structure? That's not a foundation for defining "theft".

    Here's an example of artwork that is "inspired" by the artist Giorgio de Chirico for the game cover of "ICO".

    [Image: ICO (PlayStation 2) front cover]  [Image: Giorgio de Chirico, The Nostalgia of the Infinite (1912)]

    [Image: a further side-by-side comparison screenshot]

    No one cares about that; no one screams "theft". In general, people loved how the cover artist got "inspired" by Giorgio de Chirico.

    But if I were to make an image with an AI diffusion model doing exactly the same thing, meaning some elements and the general composition are unique while the style, colors and specific details are similar but not copied, and then used it commercially, everyone would want to crucify me for theft.

    If it was even possible, because if I ask DALL-E for it, it simply replies:

    I was unable to generate the image due to content policy restrictions related to the specific artistic style you mentioned. If you'd like, I can create an image inspired by a surreal landscape featuring a windmill and stone structures at sunset, using a more general artistic approach. Let me know how you'd like to proceed!

    And if I let it, this pops out:

    [Image: the resulting DALL-E generation]

    It vaguely resembles some aspects of Giorgio de Chirico's art style, but compared to the ICO game cover, it's nowhere near as close.

    This is an example of alignment in the usage of these systems, in which the system tries to recognize attempts to plagiarize. And this process is improving all the time. But people still use old examples from outdated models to "prove" what these systems are doing at the moment. Or they use examples from companies or people who blatantly don't care about alignment to prove that a totally different company does the same, because... "AI is evil", which is the full extent of their argument.

    And with past court rulings in favor of the accused, as in Mannion v. Coors Brewing Co., artists seem to be far more protected when making blatant rip-offs than an AI diffusion model is when producing something far less of a direct copy.

    So your claim is that adding intentionality to current diffusion models is enough to bridge the gap between human and machine creativity? Like I said before I don't have the ability to evaluate these claims with the proper technical knowledge but that sounds difficult to believe.Mr Bee

    Why is it difficult to believe? It's far more rooted in current understandings in neuroscience than any spiritual or mystifying narrative about the uniqueness of the "human soul" or whatever nonsense people attribute human creativity to. Yes, I'm simplifying somewhat for the sake of the argument; the intention and the predictive/pattern-recognition system within us work as a constant loop, influencing each other and generating a large portion of how our consciousness operates. Other parts of our consciousness function the same way: our visual cortex isn't fed by some 200 fps camera that is our eyes; rather, our eyes register photons that the visual cortex interprets by generating predictions in between the raw visual data. It's the reason we have optical illusions, and why, if we stare at a high-contrast object for a long time and then look at a white canvas, we see an inverted image as our brain over-compensates by generating a flow of data to fill in gaps that are no longer in the raw input.

    At its core, the structure of a neural engine or machine learning mimics how our brain operates with pathways. We don't have a raw data copy of what we see or hear; we have paths that form in relation to other memory paths, and the relations between them form the memories we have. It's why we can store such vast amounts of information in our heads: it's not bound to physical "bits" but to connections, which become exponentially more complex the more of them you have.

    Inspired by these findings in neuroscience, machine learning using neural maps started to show remarkable capabilities far beyond normal computers, but what these systems gained in capability, they lost in accuracy. Which is key to understanding all of this.

    They don't copy anything, because that would make an AI model absolutely huge in size. The reason I can download an AI model that is rather trivial in size is that it's just a neural map; there's no training data within it. It's just a neural-structure "memory", similar to the neural maps in our own brains.
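
    The size argument can be checked with back-of-the-envelope arithmetic. The figures below are rough, publicly reported ballparks for a Stable-Diffusion-class model, assumptions for illustration rather than exact specs:

    ```python
    training_images = 2_000_000_000    # order of a LAION-scale training set
    checkpoint_bytes = 4 * 1024**3     # ~4 GB downloadable checkpoint

    bytes_per_image = checkpoint_bytes / training_images
    print(f"{bytes_per_image:.2f} bytes of model per training image")  # ~2 bytes

    # Even a heavily compressed JPEG is tens of kilobytes. Two bytes per image
    # cannot hold a copy of anything; what the checkpoint holds is statistical
    # structure shared across all the images, not the images themselves.
    ```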

    And they use "diffusion model" operations that try to mimic how we "remember" from this neural map: analyzing the input (the intention), finding the pathway memories linked to the meaning of the input, and interpreting them into predictions that generate something new.
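
    In toy form, the generation loop looks something like this (a schematic sketch of the diffusion idea, not a production model): it starts from pure noise and repeatedly removes what the learned weights predict to be noise, so the output is synthesized rather than looked up.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for what trained weights encode: a learned pattern for the toy
    # loop to steer toward (a real model conditions a neural net on the prompt).
    learned_pattern = np.sin(np.linspace(0.0, 3.14, 64))

    x = rng.normal(size=64)                     # generation starts as pure noise
    for step in range(50):                      # iterative denoising loop
        predicted_noise = x - learned_pattern   # toy "network" prediction
        x = x - 0.1 * predicted_noise           # remove a fraction of it

    # `x` is now a synthesized sample steered by learned structure; at no point
    # was a stored training example retrieved or copied into the output.
    ```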

    Recent science (that I've linked in posts above) has started to find remarkable similarities between our brain and these systems. And that's because these AI models weren't built on final conclusions about our brain; they were inspired by findings in neuroscience, and the developers simply tried methods that mimic the brain without knowing whether they would work or what would happen.

    This is the reason no one yet knows how an LLM can generate fluent text in other languages without such functions being directly programmed, and why many other capabilities simply emerged from large quantities of text data forming these neural maps.

    Rather, because these companies did all of this, neuroscientists are starting to feed the resulting research papers back into their own field, since the papers hint at how abilities in our brain emerge simply from how prediction occurs within neural pathways. It's basically a case of trying something before anyone knows exactly how it works and discovering actual results.

    The point being: it mimics what we know about how our brain generates something like an image or text, and thus what's missing is everything that constitutes an "intention" in this process. "Intention" isn't just a computational issue but something that reflects the totality of our mind, with emotions, behaviors and perhaps everything about the physicality of being human. Human "intention" might therefore not be copyable without everything that constitutes being "human".

    A good example of another technology that mimics a human function is the camera, or the recorder and speaker. These are more simplistic in comparison, but we've replicated the way our eyes register photons, especially in digital cameras, with lenses and sensor cells analogous to rods and cones. And we've replicated how we register sound using membranes, and how our vocal cords produce sound, with the membrane of a speaker and a hollow enclosure that shapes sound like our throat does.

    But when we mimic brain structures and witness how they form behaviors similar to how our brain functions during creativity, we are suddenly thrown into moral questions about copyright, in which people who don't understand the tech generally argue like the audience at the first film projections, believing the train was actually about to hit them, or like those who believed record players and cameras took people's souls when they were captured in those mediums.

    As far as I see it, it's a religious and spiritual mindset that makes people fear these AI models' core functions, not scientific conclusions. It's people breaking cameras because they think the cameras will capture their souls.

    Okay, but in most instances artists don't trace.Mr Bee

    Neither do diffusion models, ever. But artists who trace will still come out unscathed compared to how people react to AI-generated images. Where is the line drawn? What's the foundation on which such distinctions are defined?

    I don't see how originality is undermined by determinism. I'm perfectly happy to believe in determinism, but I also believe in creativity all the same. The deterministic process that occurs in a human brain to create a work of art is what we call "creativity". Whether we should apply the same to the process in a machine is another issue.Mr Bee

    That's not enough of a foundation to conclude that machines do not replicate the physical process that goes on in our brain. You're just attributing some kind of "spiritual creative soul" to the mind, that it's just this "mysterious thing within us" and therefore can't be replicated.

    Attributing some uniqueness to ourselves only because we have trouble comprehending how this thing within us works isn't enough when we're trying to define actual laws and regulations around a system. What is actually known about our brain through neuroscience is the closest thing we have to an answer, and that should be the foundation for laws and regulations, not spiritual and emotional inventions. The fact is that human actions are always traceable to previous causes, to previous states, and a creative choice is no different from any other type of choice.

    The only reason people attribute some uniqueness to our internal creative processing is that they can't separate themselves from their emotional attachment to the idea of divine creativity. It's basically an existential problem: when we break down the physical processes of the brain, find the deterministic behaviors and demystify creativity, people feel like their worldview and sense of self get demolished. And the emotional reactions to that are no grounds for actual conclusions about how we function and what can be replicated in a machine.

    Indeed the definitions are very arbitrary and unclear. That was my point. It was fine in the past since we all agree that most art created by humans is a creative exercise but in the case of AI it gets more complicated since now we have to be more clear about what it is and if AI generated art meets the standard to be called "creative".Mr Bee

    And when we dig into it, we see how hard it is to distinguish what actually constitutes human creativity from machine creativity.

    However, I don't think this is a problem of "intention". These AI models aren't really creative in exactly the way we are. They mimic the physical processes of our creativity, which isn't the same as the totality of what constitutes being creative. We might be able to replicate that in the future, but for now, the intention is what drives the creativity: we are still asking the AI to make something, we are still guiding it. It cannot do it on its own, even though the physical neural-pathway processing is replicated. Even if we just hit a button for it to randomly create something, it is still guided by the fundamental weights and biases that were put there to inform its basic handling of the neural map.

    To generate, we must combine intention with the process, and therefore, before we can judge copyright infringement, those two must have produced something, i.e. the output. And so the argument I've been making here is that any attempt to blame the training process for using copyrighted data is futile, since nothing has been created until intention and process together generate an output.

    Only then can plagiarism and other copyright problems be called into question.

    However the problem is that in today's art industry, we don't just have artists and consumers but middle men publishers who hire the former to create products for the latter. The fact is a lot of artists depend on these middle men for their livelihoods and unfortunately these people 1) Don't care about the quality of the artists they hire and 2) Prioritize making money above all else. For corporations artists merely create products for them to sell and nothing more so when a technology like AI comes up which produces products for them for a fraction of the cost in a fraction of the time, then they will more than happily lay off their human artists for what they consider to be "good enough" replacements even if the consumers they sell these products to will ultimately consider them inferior.

    There are people who take personal commissions but there are also those that do commissions for commercial clients who may want an illustration for their book or for an advertisement. Already we're seeing those types of jobs going away because the people who commissioned those artists don't care in particular about the end product so if they can get an illustration by a cheaper means they'll go for it.
    Mr Bee

    Yes, some jobs will disappear or change into a new form. This has happened all the time throughout history when progress rapidly changes society. Change is scary, and people are mostly very comfortable in their bubble, which, when popped, leads them to lash out like an animal defending itself. This is where the luddite behavior comes into play.

    But are we saying that we shouldn't progress technology and tools because of this?

    When Photoshop arrived with all its tools, the concept artists who used pencils and paint behaved like luddites, trying to work against concept art being made with these new digital tools. When digital musical instruments started becoming really good, the luddites within the world of composing started saying that people who can't write notes shouldn't be hired or considered "real" composers.

    Whatever these companies think and whatever the luddites think of AI, both forget that the working artist's biggest skill isn't being able to paint a straight line or play a note; it's having the eye for composition and design, an ear for melody, a mind for poetry.

    People seem to have forgotten what artists are actually hired for, and it's not the craft. A concept artist isn't really hired for their personal style (unless they're among the biggest names in the industry); they're hired to guide the design based on the requirements of the project. They're hired for their ability to evaluate what's being formed and created.

    All art made within a larger project at companies like game studios is enslaved to the overarching design of the entire project.

    And because of this, the input these artists have is limited to the fundamental core of their expertise, i.e. the artist's knowledge of how to guide design towards the needs of the project.

    Therefore, a company that fires an artist in favor of a non-artist working with AI generation will soon discover that the art direction becomes sloppy and uninspiring, not because the AI model is bad, but because there are no guiding principles and no expert eye steering any of it towards a final state.

    This is why artists need to learn to work with these models rather than reject them. Find ways of fusing their art style, maybe even train a personalized AI on their own art and work in a symbiosis with it.

    Because this type of work for corporations is fundamentally soulless anyway. These artists aren't working with something they then own, the corporation owns it. They're there to serve a purpose.

    In reality, an artist speeding up their process with AI would have more time to actually create for themselves: more time to figure out their own ideas and explore in more meaningful ways, because they don't have to work overtime for some insecure producer who constantly changes their mind, forcing patch-work on their art because other people lack creativity or the ability to understand works of art.

    Anyone who's been working within these kinds of corporate systems knows people there aren't actually happy, because there's no appreciation for anything they do and no understanding of their ideas; everything is filtered through whatever corporate strategy drives the project from above.

    Why not, then, be the artist who's an expert with AI? Because you can't put an intern on writing prompts; that's not how generative AI works. You need an eye for art even when you work with AI. Utilizing AI in your work for these companies does not destroy your artistic soul for what you make for yourself or your own projects.

    The "good enough" companies, before these AI models, have never been good for artists anyway. Why would artists ever even care for their work towards these companies if they themselves won't care for the artists? So if they start becoming artist with expertise in AI, then these companies will soon hire them back once they realize it's just not viable to have non-artists handling their AI generations.

    I don't think artists have really been thinking enough about this shift in the industry. Instead they're behaving like luddites, thinking that's a good way forward. Companies that don't know the value of artists didn't value artists before either. And the companies that are forced into using AI to compete, because of how it speeds up projects, but that still focus on the quality of art in their product, will still hire actual artists for the job of handling the generative AIs. Because they understand that the quality of the artist isn't in the brush, or Photoshop, or a random prompt; it's in the eye, ear and mind to evaluate what's being created, to make changes, to guide something, a design, towards a specific goal.

    How all of this gets lost in the debate about generative AI is mind-boggling.

    Maybe this is because it's mostly non-artists, who aren't even working with any of this, who drive the hate against AI. Just like all the people who hate CGI in movies and swallow the "no CGI" marketing nowadays, when the truth is that CGI is used all the time in movies. These people simply have no idea what they're talking about; they just want to start a brawl online to feed the attention economy with their ego.

    Of course the data collection isn't the problem but what people do with it. It's perfectly fine for someone to download a bunch of images and store it on their computer but the reason why photobashing is considered controversial is that it takes that data and uses it in a manner that some consider to be insufficiently transformative. Whether AI's process is like that is another matter that we need to address.Mr Bee

    Then you agree that the ongoing lawsuits that target the training process, rather than the outputs, the uses of outputs and the users misusing these models, are in the wrong.

    Sorry if I missed some of your points but your responses have been quite long. If we're gonna continue this discussion I'd appreciate it if you made your points more concise.Mr Bee

    Sorry, read this too late :sweat: But still, the topic requires some complexity in my opinion, as the biggest problem is how the current societal debate about AI is too often simplified and consolidated into shallow interpretations and analyses.
  • The "AI is theft" debate - An argument
    You ask "Why is B theft?" but your scenario omits any legal criteria for defining theft, such as whether B satisfies a set threshold of originality.

    How could we know whether B is theft when you don't show or describe its output, only its way of information processing. Then, by cherry picking similarities and differences between human and artificial ways of information processing, you push us to conclude that B is not theft. :roll:
    jkop

    Because the issue my whole argument is about... is that claims of copyright infringement are being put on the process of training these models, not on the output. When people scream "theft!" at the tech companies, they are screaming at the process of training the models on copyrighted material. What I demonstrated with the scenarios is that this process does not fall under copyright infringement, because it's an internal process that happens behind closed doors, either inside our head or in the AI lab. And so that process cannot be blamed for copyright infringement, and the companies cannot be blamed for violating any copyright except through the output.

    Because of this, the output is a question of alignment, and the companies are actively working towards mitigating accidental plagiarism, which means they're already working to address the problems artists don't like about AI generations. The user is then solely responsible for how they use the generated images and solely the one who needs to make sure they don't end up with plagiarized content.

    But the main issue, and why I'm making this argument, is that none of this matters to the artists filing lawsuits. They are attacking the first part, the process, the one that scenarios A and B are about, and have therefore shown themselves uninterested in alignment or in making sure these models are safe from plagiarism. Instead, they either have no knowledge of the technology and make shit up about how it is theft, things that aren't true about how the technology works, because they think the companies just take their stuff and put it out there. Or they know how the technology works but intentionally target the part of the technology that would kill the models, an attempt to destroy the machines as the luddites did. Both of these stances are problematic and could lead to court rulings at a loss for artists, effectively giving artists less voice in this matter rather than more.

    The argument is about focusing on alignment and how to improve the outputs beyond plagiarism. It's about making sure LLMs always cite something if they use direct quotes, and have guardrails that self-analyze the outputs to make sure they fall within our rather arbitrary definitions of originality.
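
    A minimal sketch of what such a guardrail could look like (an illustration of the idea, not any vendor's actual safety stack): flag a generated text if it repeats long verbatim word sequences from a reference corpus, so direct quotes can be cited or regenerated.

    ```python
    def ngrams(text: str, n: int = 8) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def has_verbatim_overlap(output: str, corpus: list, n: int = 8) -> bool:
        """True if the output repeats n consecutive words from any corpus doc."""
        out_grams = ngrams(output, n)
        return any(out_grams & ngrams(doc, n) for doc in corpus)

    # Hypothetical usage; `corpus` stands in for an index of protected texts.
    corpus = ["it was the best of times it was the worst of times"]
    generated = "He wrote that it was the best of times it was the worst of times."
    print(has_verbatim_overlap(generated, corpus))   # True -> cite or regenerate
    ```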

    Because people stare blindly into the darkness with these AI models. Their positive sides, all the areas where we actually benefit, in medicine, in the sciences and even in certain tools for artists, partly rely on the same models trained on this copyrighted material, because the amount of data is key to the models' accuracy and abilities. So when artists want to block the use of copyrighted material in training data, they're killing far more than they might realize. If a cancer drug's development utilizes GPT-4 and the model suddenly has to be shut down and retrained on less data, the drug's development stalls too, and it may not be able to continue if the reworked model no longer functions the same after losing a large portion of its training data.

    People simply don't understand this technology and run around screaming "theft!" just because others scream "theft!". There's no further understanding and no further nuance to this topic, and this simplified, shallow version of the debate needs to stop, for everyone's sake. These models are neither bad nor good; they're tools, and as such it's the usage of the tools that needs to be addressed. We shouldn't let luddites destroy them.
  • The "AI is theft" debate - An argument
    One difference between A and B is this:

    You give them the same analysis regarding memorizing and synthesizing of content, but you give them different analyses regarding intent and accountability. Conversely, you ignore their differences in the former, but not in the latter.
    jkop

    No, the AI system and the brain, in their physical processes, have no accountability, because you can only put guilt on something that has subjective intent. And I've described how intent is incorporated into each. The human has both the process function AND the intent built into the brain. The AI system only has the process, and the intent is supplied by the user of that system. But if we still put responsibility on the process itself, then it's a problem with alignment, and we can fine-tune the AI system to align better. Even better than we can align a human, since human emotion gets in the way of aligning our intent, which is why accidental plagiarism happens all the time. We simply aren't as capable as an AI model that's been properly aligned with copyright law. Such a system will effectively be better than a human at producing non-infringing material within a decided threshold of "originality".
  • The "AI is theft" debate - An argument
    The processors in AI facilities lack intention, but AI facilities are owned and operated by human individuals and corporations who have extensive intentions.BC

    And those extensive intentions are what, in your perspective? And in what context of copyright do those intentions exist?

    AGI doesn't necessarily have to think exactly like us, but human intelligence is the only known example of a GI that we have and with regards to copyright laws it's important that the distinction between an AGI and a human intelligence not be all that wide because our laws were made with humans in mind.Mr Bee

    Not exactly sure what point you're making here? The only case in which copyright laws apply to the system itself, independent of humans on either the back or front end, is when an AGI shows real intelligence and provable qualia, but that's a whole other topic in AI that won't apply until we're actually at that point in history. That could be a few years from now, 50 years, or maybe never, depending on things we've yet to learn about AGI and superintelligence. For now, the AGI systems on the table mostly just combine many different tasks, so that if you input a prompt, the system will plan, train itself and focus efforts towards the goal you asked for without constant monitoring and iterative inputs from a human.

    Some believe this would lead to actual subjective intelligence for the AI, but it's still so mechanical, and so lacking in the emotional component that's key to how humans structure their experience, that the possibility of qualia is pretty low or non-existent. So the human input, the "prompter", still carries the responsibility for its use. I think, however, that the alignment problem becomes a bigger issue with AGI, as we can't predict in what ways an AGI will plan and execute towards a specific goal.

    This is also why AGI can be dangerous, like the paperclip scenario. With enough resources at its disposal it can spiral out of control. I think that the first example of this will be a collapse of some website infrastructure like Facebook as the AGI ends up flooding the servers with operations due to a task that spirals out of control. So before we see nuclear war or any actual dangers we will probably see some sort of spammed nonsense because an AGI executed a hallucinated plan for some simple goal it was prompted to do.

    But all of that is another topic really.

    The question is whether or not that process is acceptable or if it should be considered "theft" under the law. We've decided as a society that someone looking at a bunch of art and using it as inspiration for creating their own works is an acceptable form of creation. The arguments that I've heard from the pro-AI side usually tries to equate the former with the latter as if they're essentially the same. That much isn't clear though. My impression is that at the very least they're quite different and should be treated differently. That doesn't mean that the former is necessarily illegal though, just that it should be treated to a different standard whatever that may be.Mr Bee

    The difference between these systems and the human brain has more to do with the systems not being the totality of how a brain works. They simulate a very specific mechanical aspect of our mind, but as I've mentioned, they lack intention and internal will, which is why inputted prompts need to guide these processes towards a desired goal. If you were able to add further "brain" functions up to the point where the system operates on terms identical to the totality of our brain, how would laws for humans start to apply to the system? When do we decide it has enough agency to be the one responsible for its actions?

    But the fundamental core of all this is whether copyright laws apply to a machine that merely simulates a human brain function. It may be that neural networks that constantly reshape and retrain themselves on input data are all there is to human consciousness; we won't know until these models reach that point. But in the end it becomes a question of how copyright laws function within a simulation of how we humans "record" everything around us in memory and operate on it.

    Because when we compare these systems to artists and how they create something, there are a number of actions by artists that seem far more infringing than what these systems do. If a diffusion model is trained on millions of real and imaginary images of bridges, it will generate a bridge that is merely a synthesis of them all. And since there's only a limited number of bridge perspectives that are three-dimensionally possible, where it ends up will weigh more towards one set of images than others, but never towards a single photo. An artist, however, might take a single copyrighted image and trace-draw on top of it, essentially copying the exact composition and choice of perspective from the one who took the photograph.

    So if we're just going by the definition of a "copy", or the claim that the system "copies" from the training data, it rather looks like there are more artists actually copying than there is actual copying going on within these diffusion models.

    Copyright court cases have always been about judging "how much" was copied. It's generally about establishing how many notes are similar, or whether lyrics or texts repeat too many exact words or sentences in sequence. And they all depend on the ability of the lawyers and attorneys to prove that the actions taken fall on one side or the other of a line drawn in the sand by previous cases that proved or disproved infringement.

    Copyright law has always been shifting, because it tries to apply a definition of originality to determine whether a piece of art is infringement or not. But the more we learn about the brain and the creative process of the mind, the more we understand how little free will we actually have and how influential our chemical and environmental processes are in creativity, and the less logical it becomes to propose "true originality". It simply doesn't exist. But copyright laws demand a certain line drawn in the sand that defines where we conclude something is "original"; otherwise art and creativity cannot exist within a free-market society.

    Anyone who has studied human creativity in a scientific manner, looking at biological processes, neuroscience and so on, will start to see how these definitions soon become artificial and non-scientific. They are essentially arbitrary inventions that, over the centuries since 1709, have gone through patch-work after patch-work trying to keep that line in the sand in the correct place.

    But these laws are also taken advantage of, with powerful artists using them against lesser-known artists, and institutions using them as a weapon to acquire works from artists who lose their compensation because they don't have a dozen legal teams behind them fighting for their rights.

    So, what exactly has "society" decided about copyright laws? In my view it seems to be a rather messy power battle rather than a true finding of where the line in the sand is drawn. The reason well-known artists try to prove copyright infringement within the process of training these models is that if they win, they will kill the models, which can't be trained without that data. The idea of an existential threat to artists has skewed people's minds into making every attempt to kill these models, regardless of how illogical the reasoning behind it is. But it's all based on magical thinking about creativity, and it ignores the social and intellectual relationship between the artist and the audience.

    So, first, creativity isn't a magic box that produces originality; there's no spiritual or divine source for it, and that creates a problem for the people drawing the line in the sand. Where do you draw it? When do you decide something is original? Second, artists will never disappear because of these AI models, because art is about the communication between the artist and their audience. The audience wants THAT artist's perspective and subjective involvement in creation. If someone, artist or hack who believes they're an artist, thinks that generating a duplicate of a certain painting style through an AI system is going to kill the original artist, they're delusional. The audience doesn't care to experience derivative work; they care about what the actual artist will do next, because the social and intellectual interplay between the artist and the audience is just as important, if not more important, than derivative content that merely looks similar. The belief that artists are going to lose money to hacks forcing an AI to make "copies" and derivative work out of their style is delusional on both sides of the debate.

    In the end, it might be that we actually need the AI models for the purpose of deciding copyright infringement:

    Imagine if we actually trained these AIs on absolutely everything that's ever been created and is possible to use as training data. And then we aligned that system to be used as a filter, setting its weights to approximately draw that line in the sand based on what we "feel" is right for copyright law. Then, every time we have a copyright dispute in the world, be it an AI generation or someone's actual work of art, the artwork is put through that filter, which can spot whether the piece falls under copyright infringement or not.

    That would address both the problem of AI-generated outputs and ordinary copyright cases that try to figure out whether something was plagiarized.
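
    In toy form, such a filter could look like this (a sketch of the concept only, with a hypothetical stand-in embedding; a real version would be the everything-trained model, and the threshold is the arbitrary "line in the sand"):

    ```python
    import numpy as np

    def embed(work: str) -> np.ndarray:
        # Hypothetical stand-in: hash words into a fixed-size frequency vector.
        vec = np.zeros(256)
        for word in work.lower().split():
            vec[hash(word) % 256] += 1.0
        norm = np.linalg.norm(vec)
        return vec / norm if norm else vec

    def similarity(disputed: str, original: str) -> float:
        # Cosine similarity between the two works' embeddings.
        return float(embed(disputed) @ embed(original))

    THRESHOLD = 0.9   # where policy, not physics, draws the line

    disputed = "a lone bridge against the sunset"
    original = "bridge against a sunset, alone"
    print(similarity(disputed, original), similarity(disputed, original) > THRESHOLD)
    ```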

    This is why I argue that artists should work with these companies on alignment rather than battling against them. Because if we had a system that could help spot plagiarized content and define what's derivative, it would not only solve the problems with AI-generated content, it would also help artists who don't have enough legal power to win against powerful actors within the entertainment industry.

    But because the debate is simplified down to two polarized sides, and because people's view of copyright law rests on the belief that there is a permanent and rigid line in the sand, we end up in power struggles about other things rather than about artists' actual rights, creativity and the prospects of these AI models.

    Judging the training process to be copyright infringement becomes a stretch, a very wiggly line in that sand. Such a definition starts to creep into aspects that don't really have to do with copying and spreading files, or with plagiarism and derivative work. And it becomes problematic to define that line properly given how artists themselves work.

    Depends on what we're talking about when we say that this hypothetical person "takes parts of those files and makes a collage out of them". The issue isn't really the fact that we have memories that can store data about our experiences, but rather how we take that data and use it to create something new.Mr Bee

    Then you agree that the training process of AI models does not infringe copyright, and that the focus should rather be on the problem of alignment, i.e. how these AI models generate something and how we can improve them so they don't produce accidental plagiarism. And as I mentioned above, such a filter in the system, an additional function to spot plagiarism, might even help determine whether plagiarism has occurred outside AI generations, making copyright cases more automatic and fair to all artists, not just the ones powerful enough to have legal teams acting as copyright special forces.

    Because a court looks at the work, that's where the content is manifest, not in the mechanics of an Ai-system nor in its similarities with a human mind.jkop

    If the court looks at the actual outputted work, then the training process does not infringe copyright, and the problem is about alignment, not training data or the training process.

    Defining how the system works is absolutely important to all of this. If lots of artists use direct copies of others' work in their own work, and such work can pass copyright after a certain level of manipulation, then something that never uses direct copies should also pass. How a tool or technology functions is absolutely part of how we define copyright. Such rulings have been made for a long time, and not just in this area:

    https://en.wikipedia.org/wiki/White-Smith_Music_Publishing_Co._v._Apollo_Co.
    https://en.wikipedia.org/wiki/Williams_%26_Wilkins_Co._v._United_States
    https://en.wikipedia.org/wiki/Sony_Corp._of_America_v._Universal_City_Studios,_Inc.
    https://en.wikipedia.org/wiki/Bridgeman_Art_Library_v._Corel_Corp.

    What's relevant is whether a work satisfies a set threshold of originality, or whether it contains, in part or as a whole, other copyrighted works.jkop

    If we then look only at the output, there are cases like Mannion v. Coors Brewing Co., in which the derivative work arguably resembles the original more closely than what a diffusion model produces even when asked for a direct copy, and yet the court ruled that it was not copyright infringement.

    So where do you draw the line? As soon as we start to define "originality" and we start to use scientific research on human creativity, we run into the problem of what constitutes "inspiration" or "sources" for the synthesis that is the creative output.

    There is no clear line for what constitutes "originality", so it's not a binary question. AI generation can be ruled both infringement and not, so it all comes down to alignment: how to make sure the system acts within copyright law, not that it in itself breaks copyright law, which is what the anti-AI movement is trying to prove on these shaky grounds. And within the history of copyright cases, "originality" is a very muddily defined concept, to the point that anyone calling the concept "clear" doesn't know enough about this topic and has merely made up their own mind about what they believe, which is no basis for any law or regulation.

    There are also alternatives or additions to copyright, such as copyleft, Creative Commons, Public Domain etc. Machines could be "trained" on such content instead of stolen content, but the Ai industry is greedy, and to snag people's copyrighted works, obfuscate their identity but exploit their quality will increase the market value of the systems. Plain theft!jkop

    And now you're just falling back on screaming "theft!" You simply don't care about the argument I've made over and over now. Training data is not theft because it's not a copy, and the process mimics how the human brain memorizes and synthesizes information. It's not theft for a person with photographic memory, so why is it theft for these companies when they're not distributing the raw data anywhere?

    Once again you don't seem to understand how the systems work. It's not about greed; the systems require that large an amount of data to function properly. The amount of data is key. And HOW the technology works is absolutely part of how we define copyright law, as described with the cases above. Ignoring how this tech works and just screaming that they are "greeeeedy!" becomes the same shouted, polarized hashtag mantra that everyone else is repeating right now.

    And this attitude and lack of knowledge about the technology shows up in your contradictions:

    Because a court looks at the work, that's where the content is manifest, not in the mechanics of an Ai-system nor in its similarities with a human mind.jkop
    Machines could be "trained" on such content instead of stolen content, but the Ai industry is greedy... Plain theft!jkop

    ...If the court should just look at the output, then the training data and the training process are not the problem. Still you scream that this process is theft, even though the court might only be able to look at what these AI models output.

    The training process using copyrighted material happens behind closed doors, just like artists gathering copyrighted material in their process of producing their artwork. If the training process on copyrighted material is identical to an artist using copyrighted material while working, since both happen behind closed doors... then the only thing that matters is the final artwork and the output from the AI. If alignment is solved, there won't be a problem; but the use of copyrighted material in the training process is not theft, regardless of how you feel about it.

    Based on previous copyright cases, if the tech companies win against those claiming the training process is "theft", it won't be because the companies are greedy and have "corrupted" legal teams; it will be because of copyright law itself and how it has been ruled on in the past. It's delusional to think that all of this amounts to "clear" cases of "theft".
  • The "AI is theft" debate - An argument
    A and B are set up to acquire writing skills in similar ways. But this similarity is irrelevant for determining whether a literary output violates copyright law.jkop

    Why is it irrelevant? The system itself lacks the central human component, the intention behind its use, while the human has that intention built in. You cannot say the system violates copyright law, as the system itself is able neither to hold copyright on its output nor to break copyright by its own will. This has been established by the https://en.wikipedia.org/wiki/Monkey_selfie_copyright_dispute and would surely apply to an AI model as well. That leaves the user as the sole agent responsible for the output, or rather, for the use of the output.

    Because I can paint a perfect copy of someone else's painting. It's rather the way I use it that defines how copyright is applied. In some cases I can show my work crediting the original painter, in some cases I can even sell it like that, and in others I won't be able to show it at all but can still keep it privately or unofficially. The use of the output defines how copyright applies, and because of that we are far past any stage in which the AI model and its function are involved, if it has at all made anything even remotely close to infringing on copyright.

    It's basically just a tool, like the canvas, paint and paintbrush. If I want to sell that art, it's my responsibility to make sure it isn't breaking any copyright laws. The problem arises when people make blanket statements that all AI outputs break copyright, which is a false statement. And even if there were a ruling that forbids the use of AI systems, it would only be able to criminalize monetization of outputs, not their private or unofficial use within some other creative process as a supporting tool.

    All artists use copyrighted material during their work; painters usually cut out photos and print out material to use as references and inspiration while working. So all of this becomes messy for those proposing to shut these AI models down, and in some cases it leads to double standards.

    In any case, it leads back to the original claim that my argument challenges: the claim that the training process breaks copyright because the model is trained on copyrighted material. That is what the A and B scenario is about.

    You blame critics for not understanding the technology, but do you understand copyright law? Imagine if the law was changed and gave Ai-generated content carte blanche just because the machines have been designed to think or acquire skills in a similar way as humans. That's a slippery slope to hell, and instead of a general law you'd have to patch the systems to counter each and every possible misuse. Private tech corporations acting as legislators and judges of what's right and wrong. What horror.jkop

    Explain to me what it is that I don't understand about copyright law.

    And explain to me why you make such a slippery slope argument, a kind of "appeal to extremes" fallacy, thinking that such a scenario is what I'm proposing. You don't seem to read what I write when I say that artists need to work with these companies for the purpose of alignment. Do you understand what I mean by that? Because your slippery slope scenario tells me that you don't.

    You keep making strawmen out of a binary interpretation of this debate, as if my criticizing how artists argue against AI means I want to rid AI use of all copyright law. That is clearly false.

    I want people to stop making uninformed, uneducated and polarized arguments and instead educate themselves about these systems, so the correct arguments can be made; arguments that make sure artists and society can align with the development of AI. Because the alternative is the nightmare you fear. And when artists and others just shout their misinformed groupthink opinions as hashtags until a court rules against them because they didn't care to understand how these systems work, that nightmare begins.

    If your claim is that similarity between human and artificial acquisition of skills is a reason for changing copyright law, then my counter-argument is that such similarity is irrelevant.jkop

    How do you interpret this as being about changing copyright law? Why are you making things up about my argument? Nowhere in my writing did I propose we change copyright law in favor of these tech companies. I'm saying that copyright law does not apply to the training process of an AI model, because the training process is no more an act of copyright infringement than a person with photographic memory reading all the books in a library. You seem unable to understand the difference between the training process and the output generation. And it's this training process specifically that is claimed to infringe on copyright and that forms the basis of many of the current lawsuits, not the generative part. Or rather, they've baked these lawsuits into a confused mess of uninformed criticism that good lawyers on the tech companies' side could argue against in the same manner I do.

    And the court requires proof of copyright infringement. If the court rules that there's no proof of infringement in the training process, it could spiral into dismissal of the case, and that sets the stage for a total dismissal of all artists' concerns. No one seems to see how dangerous that is.

    This is why my actual argument, which you constantly misunderstand, is to focus on the problems with image generation and to create laws that dictate mandatory practices for these tech companies to work with artists for the purpose of alignment. That's the only way forward. These artists are now on a crusade to rid the world of these AI models, and it's a fool's errand. They don't understand the models and the technology, and they try to bite off more than they can chew instead of focusing their criticism properly.

    What is relevant is whether the output contains recognizable parts of other people's work.jkop

    Alignment is work already being conducted by these companies, as I've said now numerous times. It's about making sure plagiarism doesn't occur. It's in everyone's interest that it doesn't happen.

    And the challenge is that you need to define "how similar" something is in order to define infringement. This is the case in every copyright case in court. Artists already use references that copy entire elements into their own work without it being copyright infringement. An artist can see something in certain colors, then see something else with a nice composition, then see a picture in a newspaper that becomes the central figure of the artwork, and combine all three into a new image that everyone would consider "original".

    If an AI model does exactly the same, while using only its neural memory, it's using even less direct reference and influence, since a diffusion model never copies anything directly into an image. Even older examples of outdated, misaligned models that show almost identical images still can't reproduce them exactly, because they aren't using a file as the source; they're using neural memory in the same way we humans do. Compare that to artists who directly use other people's work in their art; it happens more than people realize. Just look at how many films there are in which directors blatantly copy a painting into a shot's composition and style, or use an entire scene from another movie almost verbatim.

    How do you draw the line? Why would the diffusion models and LLMs be worse than how artists already work? As I said, it ends up being an arbitrary line where we just conclude... because it's a machine. But as I've said, the machine, like the gun that forced the painter to plagiarize, cannot be blamed for copyright infringement. Only the user can.

    One might unintentionally plagiarize recognizable parts of someone else's picture, novel, scientific paper etc. and the lack of intent (hard to prove) might reduce the penalty but hardly controversial as a violation.jkop

    Yes, which means the alignment problem is the most important one to solve. Yet, as mentioned, if we actually study how artists work, if we check their process, if I check my own process, it quickly becomes very muddy how works of art form. People saying that art magically appears out of our divine creativity are being religious and spiritual, and that's not a foundation for laws. The creative process is part technological/biological function and part subjective intention. If the function can be externalized as a tool, then how does copyright get defined? Copyright can only be applied to intention; it cannot be applied to the process, otherwise all artists would infringe on copyright in their process of creation.

    In the end, if alignment gets solved for these AI models, to the point where they are unable to produce an output beyond a certain level of similarity to existing work, and this aligns with copyright law's definitions of "originality", then these systems will actually be better at avoiding copyright infringement than human artists, because they won't try to fool the copyright system in order to ride on others' success, which is the most common reason people infringe on copyright outside the accidental. An aligned system does not care; it only sets the guardrails so that the human component cannot step over the line.
  • The "AI is theft" debate - An argument
    There is data, and at a deeper level there is what the data means. Can an AI algorithm ever discover what data means at this deeper level?RussellA

    The system doesn't think, and the system doesn't have intention. Neither writer exists within the system; it is the user that supplies the intention that guides the system. It's the level of complexity the system operates on that defines how good the output becomes. But the fact remains that if the engineers program the system not to plagiarize and the user doesn't ask for plagiarism, there's no more plagiarism going on than with an artist who draws upon their memory of works of art that inspire them. These systems have built-in guardrails that attempt to prevent accidental plagiarism, something that occurs all the time with humans and has been spotted within the systems as well. But in contrast to human accidental plagiarism, these systems are getting better and better at discovering such accidents, because such accidents are in no one's interest. It's not good for the artist whose work was part of the training data, it's not good for the user, and it's not good for the AI company. No one has any incentive to let these AI models be plagiarism machines.

    But the problem I'm bringing up in my argument primarily has to do with claims that the act of training the AI model on copyrighted material is plagiarism and copyright infringement. That's not the same as the alignment problem around its use, which is what you are bringing up.

    If alignment keeps getting better, will artists stop criticizing these companies for plagiarism? No. Even if AI models reach a point where accidental plagiarism is basically impossible, artists will not stop criticizing, because they aren't arguing from rational reasoning. They want to take down these AI models because many feel they're a threat to their income, and they invent lies about how the system operates and about the intentions of the engineers and the companies behind these models. They argue that these companies "intentionally target" them when they don't. These companies invented a technology that mimics how the human brain learns and memorizes, and how this memory functions as part of predicting reality, demonstrating it by predicting images and text into existence and by how this prediction starts to give rise to other attributes of cognition. It's been happening for years, but it's now at a point where there can be practical applications for it in society.

    We will see the same with robotics in a couple of years. The partly Nvidia-led research on using LLMs to train robots that was just published showed how GPT-4 can be used in combination with robotics training and simulation training. Meaning, we will soon see a surge in how well robots perform. It's basically just a matter of time before we start seeing commercial robots for generalized purposes or business applications outside of pure industrial production. And this will lead to other sectors of society starting to criticize these companies for "targeting their jobs".

    But it's always been like this. It's the Luddites all over again, smashing the industrial machines instead of getting to know what the new technology could mean and how they could use it as well.

    That's one of the main issues right? How comparable human creativity is to that of AI. When an AI "draws upon" all the data it is trained on is it the same as when a human does the same like in the two scenarios you've brought up?

    At the very least it can be said that the consensus is that AIs don't think like we do, which is why don't see tech companies proclaiming that they've achieved AGI. There are certainly some clear shortcomings to how current AI models work compared to human brain activity, though given how little we know about neuroscience (in particular the process of human creativity) and how much less we seem to know about AI I'd say that the matter of whether we should differentiate human inspiration and AI's' "inspiration" currently is at best unclear.
    Mr Bee

    AGI doesn't mean it thinks like us either. AGI just means that it generalizes across many different functions and does so automatically based on what's needed in any given situation.

    But I still maintain that people misunderstand these things. It's not a binary question; we aren't forced to view these systems as either A) not thinking like humans and therefore plagiarism machines, or B) thinking like us and therefore not plagiarizing.

    Rather, it is about looking at what constitutes plagiarism and copyright theft in these systems. Copyright law is clear when it comes to plagiarism and stealing copyrighted material, but it runs into problems when it's applied as a blanket statement against these AI models. These AI models don't think like us, but they mimic parts of our brain. And mimicking part of our brain is not copyright infringement or theft, because if a system does so with remarkable similarity, we can't criticize those operations without criticizing how the same functions exist within ourselves. The difference between our specific brain function and these AI systems becomes arbitrary and starts to take the form of spirituality or religion, in which the critics fall back on "because we are humans".

    Let's say we build a robot that uses visual data to memorize a street it walks along. It uses machine learning and a constantly updating neural network that mimics a fluid memory system like our own, constantly changing with more input data. While walking down the street it scans its surroundings and memorizes everything into neural connections, like we do. At some point it ends up at a museum of modern art and goes inside, where it memorizes its surroundings, which also means all the paintings and photographs. Later, in the lab, we ask it to talk about its day; it may describe its route, and we ask it to form an image of the street. It produces an image that somewhat looks like the street: skewed, but with similar colors, similar weather and so on. This is similar to how we remember. We then ask it to draw a painting inspired by what it saw in the museum. What will it do?

    Critics of AI would say it will plagiarize and copy, and that it has stored the copyrighted photos and paintings through its camera. But that's not what has happened. It has a neural network that formed out of the input data; it doesn't have a flash card storing any of it as a video or photo. It might draw something that is accidental plagiarism out of that memory, but since the diffusion system generates from noise, through prediction, into form, the result will always differ from pure reality, differ from a pure copy. Accidental plagiarism happens all the time with people, and as artists we learn to check our work so it doesn't fall under it. If the engineers push the system to do such checks, to make sure it doesn't get too close to copyrighted material, then how can it plagiarize? We end up with a system that does not directly store anything, that remembers what it has seen just like humans remember through our own neural networks, and that will prevent itself from drawing anything too close to an original.
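    To illustrate what "generates from noise through prediction into form" means, here's a toy sketch of a diffusion sampling loop. The predictor below is a dummy stand-in for the trained network and the schedule is made up, but the structure is the real one: generation starts from random noise, and no training file is ever opened or pasted from:

        import numpy as np

        STEPS = 50
        rng = np.random.default_rng(0)

        def predict_noise(x, t):
            # Dummy stand-in for the trained network. A real model is a neural
            # net whose weights encode learned statistical patterns, not pictures.
            return x * (t / STEPS)

        x = rng.normal(size=(64, 64, 3))  # start: pure random noise, not a source file

        for t in reversed(range(1, STEPS + 1)):
            eps = predict_noise(x, t)  # the model only *predicts* a noise component
            x = x - eps / STEPS        # remove a little of that noise each step

        image = x  # predicted into existence, never assembled from stored copies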

    One might say that the AI system's neural memory is too perfect and would constitute being the same as having it on a normal flash card, but how is that different from a person with photographic memory? It then becomes a question of accuracy, effectively saying that people with photographic memory shouldn't enter a museum as they are basically storing all those works of art in their neural memory.

    Because what the argument here is fundamentally about is the claim that the act of training AI models on copyrighted material breaks copyright. The use and alignment problem is another issue, and one that can be solved without banning these AI models. But the push for banning these systems stems from claims that they were trained on copyrighted material. And it is that specific point I argue doesn't hold, due to how these systems operate and how the training process is nearly identical to how humans "train" their own neural-based memory.

    Let's say humans actually had a flash card in our brain. And everything we saw and heard, read and experienced, were stored as files in folders on that flash card. And when we wrote or painted something we all just took parts of those files and produced some collage out of them. How would we talk about copyright in that case?

    But when a system does the opposite, and instead mimics how our brain operates and how we act from memory, we run into a problem, because much of our copyright law is defined by interpreting "how much" a human "copied" something: how many notes were taken, accidentally or not, how much of another painting can be spotted in the new work, and so on. For AI-generated material, however, it seems it doesn't matter how far from others' work the output is. It could be provably as original as any human creation deemed "original", but it still gets branded plagiarism because the training data was copyrighted material; not realizing that artists function on the same principles and sometimes go even further than these AI systems, as my example of concept artists showed.

    The conclusion, or the message I'm trying to convey here, is that the attempt to ban these AI models and call their training process theft is just Luddite behavior born of existential fear, and that the real problem is alignment to prevent accidental plagiarism, which these companies work hard on since it's in no one's interest for it to happen in outputs. This antagonizing pitchfork behavior from artists and others in this context is counter-productive; they should instead demand to work WITH these companies to help mitigate accidental plagiarism and ill-willed use of these models.

    It's not like photobashing isn't controversial too mind you. So if you're saying that AI diffusions models are equivalent to that practice then that probably doesn't help your argument.Mr Bee

    No, I'm saying diffusion models don't do that, and that there's a big irony in the fact that many concept artists who are now actively fighting AI with arguments of theft have effectively made money in the past through a practice that is the very process they falsely accuse these diffusion models of, based on a misunderstanding of how they actually operate. Compared to that practice, the operation of these diffusion models actually makes the model more moral than the concept artists in this context, as diffusion models never directly copy anything into their images, since they hold no direct copies in memory.

    This highlights a perfect example of why artists' battle to ban these models, and the reasoning behind it, becomes rather messy and could bite them back in ways that destroy far more for them than if they actually tried to help these companies align their models for the benefit of artists.
  • The "AI is theft" debate - An argument
    According to you, or copyright law?jkop

    According to the logic of the argument. Copyright law does not cover these things, and the argument I'm making is that there are problems with people's reasoning around copyright and how these systems operate. A user who intentionally pushes a system into plagiarism, who carefully manipulates the prompts with that intention, disregarding any warnings by the system and its alignment programming... ends up being solely the guilty one. It's basically like asking a painter to make a direct copy of a famous painting; the painter says "no", pointing out that it's plagiarism, yet you take out a gun, hold it to the painter's head and demand it. Will any court of law say that the painter, who is capable of painting any kind of painting in the world, is as guilty as you, just because he has that painting skill, knowledge of painters and paintings, and the technical capability?

    If 'feeding', 'training', or 'memorizing' does not equal copying, then what is an example of copying? It is certainly possible to copy an original painting by training a plagiarizer (human or artificial) in how to identify the relevant features and from these construct a map or model for reproductions or remixes with other copies for arbitrary purposes. Dodgy and probably criminal.jkop

    No one is doing this. No one is intentionally programming the systems to plagiarize. This is just a continuation of the misunderstandings. Training neural network systems is a computer science field in pursuit of mimicking the human mind; of generalizing operation so it functions beyond direct programming. If you actually study the history of artificial intelligence in computer science, the concepts of neural networks and machine learning have to do with forming networks whose operations act upon pattern recognition, essentially forming new ideas or generalized operations out of the patterns that emerge from the quantity of analyzed information and how the pieces exist in relation to each other. This is then aligned into a system of prediction that emulates the predictive thinking of our brains.

    A diffusion model therefore "hallucinates" an image forward out of this attempt to predict shapes, colors and perspective based on what it has learned, not out of copies of what it learned from. And the key component that is missing is the guiding intent: "what" it should predict. It's not intelligent, it's not a thinking machine; it merely mimics the specific processes of neural memory, pattern recognition and predictive operation that we have in our brains. So it cannot predict on its own, it can't "create on its own". It needs someone to guide the prediction.
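    That guiding intent has a concrete, mechanical place in these systems. In text-to-image diffusion the standard technique is classifier-free guidance: the model makes one prediction that ignores the prompt and one conditioned on it, and the difference between the two is what steers the image. A minimal sketch, with a dummy placeholder standing in for the trained network:

        def guided_noise(model, x, t, prompt_embedding, guidance_scale=7.5):
            eps_uncond = model(x, t, condition=None)            # predicting "freely"
            eps_cond = model(x, t, condition=prompt_embedding)  # steered by user intent
            # With no prompt the two predictions coincide and nothing is steered:
            return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

        # Dummy model showing only the call shape; a real one is a trained network.
        dummy = lambda x, t, condition: x * (0.1 if condition is None else 0.2)
        print(guided_noise(dummy, 1.0, 10, prompt_embedding=[0.3]))  # prints 0.85

    Without the user's prompt embedding there is nothing to steer toward, which is the mechanical version of the point above: the intent comes from outside the model.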

    Therefore, if you actually look at how these companies develop these models, you will also see a lot of effort put into alignment programming. They do not intentionally align the models to perform plagiarism; they actively work against it, building guardrails against accidental plagiarism and blocking users who try to produce it intentionally. Even so, these systems are black boxes, and people who want to manipulate them and find backdoors into plagiarism might be able to do so, especially on older models. But that only leads back to who's to blame for plagiarism, and it becomes even clearer that the user who intentionally wants to plagiarize something is solely the one guilty of it. Not the engineers, and not the process of training these models.

    You use the words 'feeding', 'training', and 'memorizing' for describing what computers and minds do, and talk of neural information as if that would mean that computers and minds process information in the same or similar way. Yet the similarity between biological and artificial neural networks has decreased since the 1940s. I've 'never seen a biologist or neuroscientist talk of brains as computers in this regard. Look up Susan Greenfield, for instance.jkop

    The idea behind machine learning and neural networks was inspired by findings in neuroscience, but the purpose wasn't to establish similarities; it was to see whether operations could be generalized and predictability improved in complex situations, such as robotics, by experimenting with analogues to what neuroscientists had discovered about the biological brain. It's only recently that specific research in neuroscience (IBS, MIT etc.) has focused on the similarities between these AI models and how the brain functions, concluding that there are striking similarities between the two. But the underlying principles of operation have always been an imitation of how memory forms.

    You confuse the totality of how a brain operates with the specific functions of memory and prediction. Susan Greenfield is in line with this account of how memory forms in our brain and hasn't published anything to the contrary of what other researchers have concluded in that context. No one is saying that these AI systems act as the totality of the brain, but the memory in our heads exists as a neural network that acts from its connections rather than from raw data. This is how neuroscientists describe how our memory functions and operates as the basis for prediction and action. The most recent understanding is that memories are not stored in single parts of the brain, but are spread across it, with different regions featuring a higher or lower concentration of connections depending on the nature of the information; essentially acting like the weights and biases in an AI system that focus how memories are used.

    The fact is that our brain doesn't store information like a file, and likewise, a machine-learned neural network doesn't either. If I read and memorize a page from a book (easier if I had photographic memory), I didn't store that page in my head as a file the way a computer does. The same goes for a neural network trained on that page. It didn't copy the page; it put the page in relation to other pages, other texts, other things in the world that were part of its training. And just like with our brain, if we were to remove the "other stuff", the memory and understanding of that specific page would deteriorate, because the memory of the page relies on how it relates to other knowledge and other information: about language, about visual pattern recognition, about contextual understanding of the text's meaning and so on. All of it is part of our ability to remember the page and to do something with that memory.
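    A rough back-of-envelope calculation shows why the "stored files" picture can't be right for a model like Stable Diffusion. The figures below are commonly cited approximations, not exact numbers, but the orders of magnitude are what matter:

        params = 860e6          # ~ parameters in Stable Diffusion v1 (approximate)
        training_images = 2e9   # ~ images in the LAION subset it trained on (approximate)
        bytes_per_param = 2     # fp16 storage

        capacity = params * bytes_per_param
        print(capacity / training_images)  # ~ 0.86 bytes of weights per training image

    Less than one byte per training image is nowhere near enough to hold copies; what the weights can hold are statistical patterns shared across the whole dataset.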

    Again, I ask... what is the difference in scenario A and scenario B? Explain to me the difference please.

    Your repeated claims that I (or any critic) misunderstand the technology are unwarranted. You take it for granted that a mind works like a computer (it doesn't) and ramble on as if the perceived similarity would be an argument for updating copyright law. It's not.jkop

    It's not unwarranted, since the research being done right now keeps finding similarities, to the point that neuroscientists are starting to use these AI models in their own research to further understand the brain. I don't see anyone else in these debates actually arguing from the research that's actually being done. So how is it unwarranted to criticize others for not fully understanding the technology when they essentially don't? Especially when people claim that these models store copyrighted data when they truly don't, and that the engineers fundamentally programmed the models to plagiarize, which is a blatant lie.

    It seems rather that it's you who take for granted that your outdated understanding of the brain suffices as a counter-argument, forgetting that there are currently no final scientific conclusions on how our brain works. The difference, however, is that I'm writing these arguments out of the latest research in these fields, and that's the only foundation for forming any kind of argument, especially in the context of this topic. What you are personally convinced of when it comes to how the brain works is irrelevant; whoever you've decided to trust in this field is irrelevant. It's the consensus of up-to-date neuroscience and computer science that should act as the foundation for arguments.

    So, what are you basing your counter arguments on? What exactly is your counter argument?
  • The "AI is theft" debate - An argument


    The definition of how artists work upon inspiration and others' work is part of the equation, but it is still dependent on the intention of the artist. The intention isn't built into the AI models; it's the user who forms the intended use and the guiding principle of creation.

    So, artists stealing from other artists is the same as a user prompting the AI model to "steal" in some form. But the production of the text or image by the AI model is not theft in itself; it's merely a function mimicking how the human brain acts upon information (memory) and synthesizes it into something new. What is lacking within the system itself is the intention, and thus it can't be blamed for theft.

    Which means we can't blame the technicians and engineers for copyright infringement, as these AI models don't have "copies" of copyrighted work inside them. An AI model is trained on data (the copyrighted material) and forms a neural network that functions as its memory and foundation of operation, i.e. the weights and biases it operates with.
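    A toy example makes the weights-versus-copies point concrete. Here "training" reduces a thousand input/output pairs to just two numbers, and the pairs themselves cannot be read back out of the result:

        import random

        # 1,000 samples of the relationship y = 2x + 1
        data = [(x, 2 * x + 1) for x in (random.random() for _ in range(1000))]

        w, b = 0.0, 0.0
        for _ in range(200):              # plain stochastic gradient descent
            for x, y in data:
                err = (w * x + b) - y
                w -= 0.01 * err * x
                b -= 0.01 * err

        print(w, b)  # ~ 2.0 and 1.0: the entire "memory" of 1,000 examples

    A production model has billions of such numbers instead of two, but the principle is the same: what survives training is a compressed statistical pattern, not a copy of the inputs.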

    So, it's essentially the same as how our brain works when it uses our memory, which is essentially a neural network formed by raw input data, and through emotional biases and functions synthesizes those memories into new forms of ideas and hallucinations. Only through intention do we direct this into an intentional creative output, forming something outside of us that we call art.

    It's this difference that gets lost in the debate around AI. People reason that the training data is theft, but how can that be defined as theft? If I have a photographic memory and I read all the books in a library, I've essentially formed a neural network within me on the same grounds as training a neural network. And thus there are no copies; there's only a web of connections that remembers data.

    If we criminalize remembering data, the formation of a neural network based on data, then a person with photographic memory is just as guilty by merely existing in the world. The difference between training an AI model and a human becomes arbitrary and emotional rather than logical.

    We lose track of which actions are actually immoral and blame the wrong people. It's essentially back to the Luddites destroying machines during the industrial revolution, but this time the immoral actions of the users of these AI systems get a pass and the attacks get directed at the engineers instead. Not because they're actually guilty of anything, but because of the popularity of hating big tech. It's a polarizing situation in which rational reasoning gets lost in favor of people forming an identity around whatever groupthink they're in.

    And we might lose an enormous benefit to humanity through future AI systems because people don't seem to care to research how these things actually operate and instead just scream out their hate due to their fear of the unknown.

    Society needs to be smarter than this. Artists need to be smarter than this.
  • The "AI is theft" debate - An argument
    The user of the system is accountablejkop

    If the user asks for an intentional plagiarized copy of something, or a derivative output, then yes, the user is the only one accountable as the system does not have intention on its own.

    possibly its programmers as they intentionally instruct the system to process copyright protected content in order to produce a remix. It seems fairly clear, I think, that it's plagiarism and corruption of other people's work.jkop

    But this is still a misunderstanding of the system and how it works. As I've stated with the library example, you are yourself feeding copyrighted material into your own mind, where it's synthesized into your creative output. Training a system on copyrighted material does not equal copying that material; THAT is a misunderstanding of what a neural system does. It memorizes the data the same way a human memorizes data, as neural information. You are confusing the "intention" that drives creation with the underlying physical process.

    There is no fundamental difference between you learning knowledge from a book and these models learning from the same book. And the "remix" is fundamentally the same between how the neural network forms the synthesis and how you form a synthesis. The only difference is the intention, which in both systems, the mind and the AI model, is the human input component.

    So it's not clear at all that it's plagiarism, because your description of the system isn't correct about how it functions. And it's this misunderstanding of how neural networks and machine learning function, and this mystification of how the human mind works, that produce these faulty conclusions.

    If we are to produce laws and regulations for AI, they will need to be based on the most objective truths about how these systems operate. When people draw arbitrary lines between how artificial neural networks work and how the neural networks in our brains work, we get into problematic and arbitrary differences that fundamentally spell out an emotional conclusion: "we shouldn't replicate human functions". And the follow-up question becomes "on what grounds?" Religious? Spiritual? Emotional? None of which are grounds for laws and regulations.

    What's similar is the way they appear to be creative, but the way they appear is not the way they function.jkop

    That's not what I'm talking about or what this argument is about. The appearance of creativity is not the issue or a fundamental part of this. The function, the PHYSICAL function of the system, is identical to how our brain functions in the same context. The only thing missing is the intention, the driving force in the form of the "prompt" or "input request". Within us, our creativity is interlinked with our physical synthesis system, and thus it also includes the "prompt", the "intention" of creation. AI systems, however, only have the synthesis system, and that in itself breaks no more copyright than our own mind does when we experience something and dream, hallucinate and produce ideas. The intention to plagiarize is a decision, and that decision is made by a human; the responsibility lies with the human at the point of the "intention" and "prompt", not before it.

    And so, the argument can be made that a person who reads three books and writes a derivative work based on those three books is doing the same as someone who prompts an AI to write derivative work. Where do you put the blame? On reading the three books (training the models), or the intention of writing the derivative work (prompting the AI model to write derivative)?

    If you put the blame on the act of training the models, then you also put blame on reading the books. Then you are essentially saying that the criminal act of theft is committed by all people every time they read, see, hear or experience someone else's work. Because that is the same as training these models; it is the same fundamental process.

    But putting blame on that is, of course, absurd. We instead blame the person's intent to write a derivative piece. We blame the act of wanting to produce that work. And since the AI models don't have that intent, you cannot logically put blame on the act of training these models on copyrighted material, because there's nothing in that act that breaks copyright. It's identical to how we humans consume copyrighted material, storing it in neural memory form. And a person with photographic memory excels at this type of neural storage, exactly like these models.

    A machine's iterative computations and growing set of syntactic rules (passed for "learning") are observer-dependent and, as such, very different from a biological observer's ability to form intent and create or discover meanings.jkop

    I'm not sure if you really read my argument closely enough because you keep mixing up intention with the fundamental process and function of information synthesis.

    There are two parts of creation: 1) The accumulation of information within neural memory and its synthesis into a new form. 2) The intention for creation that guides what is formed. One is the source and one is the driving principle for creation out of that source. The source itself is just a massive pool of raw floating data, both in our minds and in these systems. You cannot blame any of that for the creation because that is like blaming our memory for infringing on copyrighted material. It's always the intention that defines if something is plagiarized or derivative. And yes, the intention is a separate act. We constantly produce ideas and visions even without intent. Hallucinations and dreams form without controlled intent. And it's through controlled intent we actually create something outside of us. It is a separate act and part of the whole process.

    And this is the core of my argument. Blaming the process of training AI models as copyright infringement looks to be objectively false. It's a fundamental misunderstanding of how the system works, and these claims seem to come more or less from people's emotional hatred of big tech. I'm also critical of big tech, but people will lose much more control over these systems if our laws and regulations get defined by court rulings that we lose because our side made naive, emotional arguments about things it fundamentally doesn't understand. Misunderstandings of the system, of the science of neural memory and function, and of how our own minds work.

    I ask, how are these two different?

    A) A person has a perfect photographic memory. They go to the library every day for 20 years and read every book in that library. They then write a short story drawing upon all that has been read and seen in that library during these 20 years.

    B) A tech company let an algorithm read through all books at the same library, which produces a Large Language Model based on that library as its training data. It's then prompted to write a short story and draws upon all it has read and seen through the algorithm.
    Christoffer

    This is the core of this argument: the difference between intention on the one hand, and on the other the formation of the memory core that creation is drawn from and the way synthesis occurs. That formation and synthesis in itself does not break copyright law, but people still call it theft without thinking it through properly.

    Neither man nor machine becomes creative by simulating some observer-dependent appearance of being creative.jkop

    This isn't about creativity. This isn't about whether or not these systems "can be creative". The "creative process" does not imply that these systems can produce "art". This type of confusion is what makes people unable to discuss the topic properly. The process itself, the function that simulates the creative process, is not a question of "art" and AI; that's another debate.

    The problem is that people who fight against AI, especially artists fighting against these big tech companies, put blame on the use of copyrighted material in training these models, without understanding that this process is identical to how the human mind is "trained" as well.

    All while many artists, in their own work and processes, directly use other people's work in the formation of their own, yet still draw an arbitrary line between themselves and the machines when the machines seem to do the same, even though things like diffusion models are fundamentally unable to produce direct copies in the way some of these artists do (as per the concept art example).

    It's a search for a black sheep in the wrong place. Antagonizing the people in control of these systems rather than trying to promote themselves, as artists, to be part of the development and help form a better condition for future artists.
  • The infinite straw person paradox
    I think you meant my comment was written with A.I. and it was in fact. Good looking out, Lionino. I am messing with newly updated Bing Copilot. My post was translated here by me from the help of a.i., but I did not directly copy and paste it from chat.Kizzy

    Rather than having the AI write for you, write your own post and perhaps let the AI analyze the grammar and structure as support. While AI functions well at writing, the problem is that you lose, or never challenge, your own process of thought; writing isn't just outward communication, it's part of your internal processing of ideas.

    Studies have shown that the physicality of writing increases brain activity. The effect is strongest when writing on paper with a pen, since that is the most physical form of the act, but it's also present when writing on a keyboard.

    So letting an AI do that work for you will only lead to you losing something in the pursuit of gaining knowledge through discourse. Write your own material and maybe use AI as a research and editing tool, but never as the source of the train of thought for an argument. It will make you cognitively numb.
  • Donald Trump (All General Trump Conversations Here)
    What I can't figure out is, what Trump voters think they're voting for.Wayfarer

    They don't know what they're voting for in terms of politics. They aren't educated enough, or they are so deep within their own bubble that they don't have access to anything but the notion of "us and them". For them, politics is nothing more than a wrestling match. They want a "good guy" who is charismatic and talks like they do. They aren't intelligent enough to comprehend normal politics or understand political ideas; they flock to the emotional, to the experience of shouting in a group against the evil. They're no different from anyone else who falls to their knees in front of a father figure promising to guide them to a better life.

    It is basic religious fundamentalism in its mechanics.

    well, there's some very dangerous forces at work here.Wayfarer

    Just as with any other religious fundamentalism. It's arguably the same mechanics as with the Nazi regime; the same mechanics as at any other time in history when there's a charismatic leader (in their eyes and ears) who promises them paradise.

    It goes one of two ways. Either the charisma fades, with the group's fanaticism fading as generations die off and get replaced by new ones holding less fundamentalism in their hearts. Or they grow to a point where they believe they hold enough power to simply take over. And at that point, if they are stronger than the rest, they will replace the people in power and force the rest into fascist obedience. But more likely, if they try it, the rest will rise up and realize they need to push them back, resulting in civil war or a major war.

    They would call it revolution, but there's a distinct difference: a true, positive revolution revolves around standing up against a government that has taken away the power of its citizens, while they strive to be the ones taking away the power of the people (in their own favor).

    They effectively end up being a fundamentalist terrorist group that has yet to initiate violence, not counting the Jan 6 coup attempt and whatever unreported religious violence against minorities has happened without public knowledge.

    And as such they should be watched in the same way we keep eyes on terrorist groups around the world. Especially if Trump loses the election, I wouldn't be surprised if we see something far worse than Jan 6. But regardless of how such a violent event plays out, it will spell the end for Maga: any violence larger than Jan 6 will cement their status as a terrorist movement, and only the most hardcore Maga folks will keep wearing those hats.

    The "normal" people are basically just waiting for a legitimate reason to remove these people from power. And any political support of violence against US citizens will be such a hard line that any protection these politicians had before such events will be gone and they will be removed by force. If they then try to provoke further violence, well that's when the entire movement will be officially registered a terrorist group.

    If that happens, it would also spell the breaking point for the Republican party. The ones opposing Trump would either eject anyone even close to supporting Trump or the Maga movement, or leave the Republican party behind as a rotting corpse while they start a new party focused on being the real republicans, promising never to let similar "terrorists" into their ranks; cleaning up their history and washing away any stains from it.

    However things go, there will be a fulcrum point that tips things in some direction. But I'm too optimistic to see anything other than the utter collapse of Maga through self-destruction. They're too stupid to function as a revolutionary movement. They're too stupid to uphold any momentum of such actions. They're basically children playing with fire and when they effectively burn the house down they will face the consequences. They shouldn't be underestimated, but we shouldn't overestimate their ability either. They may have guns and explosives, but if violence erupts into insurrection, we have to remember that revolutionary movements in the world and history actually had training and intelligence behind their attempts. These people aren't revolutionary masterminds and within a nation like the US, any revolutionary action would require an extremely intelligent strategy that fools the entire military.

    And if Trump wins, if he tries anything like this himself, I don't think the rest of the government or military will actually listen. If Trump starts to initiate violence against his own citizens, that's gonna be a fast ticket to his downfall.

    The only reason we're not seeing enough pushback at the moment is that Trump and his followers are just barely on the side of democratic rights. But in a nation like the US, any attempt to remove the constitution or demolish the basic fundamentals of how people define the US will result in a strong pushback, maybe even outright violent pushbacks against Trump and his followers.
  • US Election 2024 (All general discussion)
    because if it were implemented, it might work, and it would make Joe Biden look good.Wayfarer

    It's here that the question of democracy becomes muddied. If actions are done not for the people but for the sake of power and winning elections, then there's no true representative democracy anymore, but a pseudo-democracy.

    The need to simplify everything down to calling pseudo-democracies real democracies, because people seem unable to understand what is and isn't a true democracy, makes it impossible to progress past the problems of these kinds of pseudo-democracies.

    The US is just a patchwork of a democracy, barely on the side of being for the people, mostly operating under ideals similar to those of religiously fundamentalist nations around the world; probably the only nation in the world working under Christian fundamentalism, and it infects its democracy and produces demagogues and pseudo-democratic practices.
  • Health
    What counts as ultra-processed?Mikie

    https://health.clevelandclinic.org/ultra-processed-foods

    An example list by ChatGPT:

    Chicken nuggets
    Frozen meals
    Hot dogs
    Packaged soups
    Potato chips
    Soft drinks
    Sweetened breakfast cereals
    Packaged bread and buns
    Industrial pastries and cakes
    Pre-packaged pies and pasta dishes
    Margarine and spreads
    Ice cream and dairy-based desserts
    Processed cheese products
    Flavored milk drinks
    Instant noodles and soups
    Processed meats such as sausages, salami, and bacon
    Microwave popcorn
    Store-bought cookies and biscuits
    Candy bars
    Artificially sweetened beverages
    Flavored yogurts high in sugar
    Ready-to-eat snacks like pretzels and flavored crackers
  • Health


    I avoid ultra-processed food and take care to use good-quality sources for the food I eat.

    England is a perfect example of what happens when ultra-processed food has been mainstreamed so hard that it starts to kill the country's citizens. Avoid ultra-processed food at all costs.
  • “That’s not an argument”
    An argument is the presenting of reasons/evidence for a claim or conclusion. Really that simple.Mikie

    I rarely see people actually doing this here. It's more common than in other places online, but it's still mostly anecdotal and emotional reasoning, and when questioned, people throw tantrums because what they were so convinced of in their own mind ended up facing normal scrutiny.

    I think there's a point in not just focusing on making a proper argument. For a philosophical discussion to take place, people need to set aside their emotions about their argument and treat it as someone else's. Detachment from one's ideas is the only way not to fall into bias and fallacy.

    So, start out with the argument, and then treat any following discussion in which people object to it as if you were one of them, discussing someone else's idea. As soon as the argument is presented, don't act as if you own it, or else you will start protecting it with your life.

    I often call out fallacies and biases, but that's because they're so common among people who aren't well versed in treating their own convictions rationally and with detachment. A core tenet of philosophy is to question yourself as part of the scrutiny of a formed idea, but most of the time people are just planting their concepts and ideas as flags on a battlefield before going to war for that flag.

    But I agree that some are sloppy in their calling out of fallacies and biases. Many call out fallacies that aren't fallacies, lacking knowledge of what certain fallacies really are, and just wave the terms around as a shield against any form of scrutiny. But generally, the people doing that are also the ones committing most of the fallacies themselves.

    The main problem on this forum is rather that when people create their arguments, they aren't actually presenting any evidence or rational logic behind their reasoning. They cook up whatever they believe is evidence and then try to demand it be enough to prove their point, ending up going in circles saying "I've already presented the evidence".

    Generally, a majority of people do not have the necessary knowledge of how to make actual arguments or how to decode them. So most people just go around in circles, failing to grow their knowledge even in a place dedicated to growing knowledge.
  • Donald Trump (All General Trump Conversations Here)
    Trumpism is an authoritarian[a] political movement that follows the political ideologies associated with Donald Trump and his political base,[32][33] incorporating ideologies such as right-wing populism, national conservatism, neo-nationalism, and neo-fascism.[b] Trumpist rhetoric heavily features anti-immigrant,[43] xenophobic,[44] nativist,[45] and racist attacks against minority groups.[46][47] Other identified aspects include conspiracist,[48][49] isolationist,[45][50] Christian nationalist,[51] protectionist,[52][53] anti-feminist,[17][13] and anti-LGBT[54] beliefs. Trumpists and Trumpians are terms that refer to individuals exhibiting its characteristics.Benkei

    Careful that you don't step on someone's free speech by labeling them as something they say they definitely aren't, while some apologist calls you out for calling them stupid racists rather than trying to bridge the societal gaps by giving them the intellectual respect they demand to deserve.
  • Are all living things conscious?
    Isn't your title "Are all living things conscious?" a question? And isn't my answer congruent with it?Alkis Piskas

    Are you conscious and aware of the fact that I didn't create this thread and that the title question isn't mine? :sweat:
  • Are all living things conscious?
    Yes, all. Including organisms and plants. They all perceive and react to their environment. Because they all want to survive. And multiply.Alkis Piskas

    Not sure why you quoted me with the title of the thread, but consciousness requires awareness. It doesn't require self-awareness, but it does require awareness of the processes that occur to an organism and of the reactions it makes. A rock isn't measurably aware of the hammer hitting it; a bug is.

    But I still don't know what you are actually answering, or why you quoted the thread's title as if I had asked it.
  • Donald Trump (All General Trump Conversations Here)
    I'm not as concerned with Trump winning the election as I am with him losing the vote count. He has been busy keeping his MAGA base stirred up, and if he loses he might incite them to disrupt procedures at the Capital and overturn the presidency by force of arms, not simply wandering the Halls of Congress.jgill

    So all the potential consequences of him in power during a time of extreme global unrest, due to both Russia and China, are not a larger concern than some backwards MAGA cult members mounting a real attack that would quickly be fought off, and that would at the same time cement the need to reshape politics into a form that prevents things like this from ever happening again?

    Trump is right about one thing with his "bloodbath" rhetoric though: if he and his followers take things too far, it will tip the scales of society's tolerance so far into the negative that they will be branded a terrorist group, and wearing a MAGA hat won't end well. Most of these people are gullible idiots, but if the consequences of affiliation with MAGA become too negative, they will quickly break down into very obscure smaller groups of fanatics.

    I can't see how any of this would end well for Trump, his closest people, and his followers. With luck, everything fizzles out over the years, but if Trump and his followers take things too far, they will quickly realize that there are far more people on the good side who won't tolerate this bullshit.
  • Climate Change (General Discussion)
    Oh if only I could find the right way to talk. 'Crisis' good, 'catastrophe' bad; 'tipping point' good, 'point of no return' bad; 'Houston we have a problem', good, 'The rocket has exploded' bad.

    The main thing is to get the talk nuanced just so, and then everyone will act and no one will despair. Or possibly not.
    unenlightened

    Language matters, especially in media headlines, for the part of the masses who are stupid enough to only read the headlines but who carry enough democratic power to vote in people who actively act against mitigation strategies.

    Modern capitalism has pushed media in many nations to compete in the attention economy over who can write in the boldest, most underlined ALL CAPS text ending with the most exclamation marks, all for the purpose of reaching the most extreme, eye-catching DOOM rhetoric possible.

    Ignoring how such media behavior affects the part of the population who aren't intellectual enough to do anything but follow the most shallow interpretation of reality is to ignore how groupthink and cult mentality shape the uppermost deciding factors of democratic elections.

    Today, almost every election balances right at the midpoint between two sides, and elections become essentially decided by a very small group of people who are pushed and pulled by people in power using any kind of algorithmic weapon they can muster.

    In the end, the intellectual and educated masses stand firm on each side of an election and have to hope that their side had the higher marketing budget to sway that sheep herd in the middle in their direction.

    If anyone calls that kind of "democracy" the peak of society and the spearhead of civilisation, they're delusional. Democracy today is just a sport of herding sheep to win and gain power for the next four years. It's not about what's good for society or about solutions to problems.

    So, language matters; language can sway that middle herd towards or away from mitigation strategies. But since commercial media isn't playing a game of morality or truth, but of profit, the truth gets pushed into the fine print underneath the profit-gaining headlines. And the headlines always focus on doom, because that's what sells the most ads and grabs the most attention, and attention is today's most valuable currency, more precious than saving the world.

    Narcissus gazing into his reflection in the water, so mesmerized that he can't hear the deadly tsunami coming up the river.
  • Climate Change (General Discussion)


    Yes, that's my point: the threat is real, the science is real, but the language used in media plays into a narrative of everything being too late, when it's not. Or rather, the complexity gets lost, and the doomer climate science deniers just point to singular words as sources and reasons for their cause.

    We already had to shift from the term "climate change" to "climate crisis" as deniers leaned into arguments that the "change" has happened before in earth's history and that there's no proof human actions are the reason. Changing it to "climate crisis" has helped push back against those kinds of stupid arguments. And now that most of them have shifted into acknowledging human causes but changed the narrative to "we're doomed, so there's no point in changing", we need to adjust the language to push back against that kind of doomer rhetoric.

    Maybe push terms like "mitigation efforts" and "mitigation strategies" into the mainstream in order to push the idea that it's not too late and there's still time to act. That way the debate with deniers and doomers gets reframed around the question "why would you oppose mitigating the effects of this crisis?"

    Change in language works best on people who can't understand information on their own and who instead rely on other authorities to form opinions (authority in the sense of groupthink clusters and populist influencers pushing their agendas rather than upholding facts).

    Since these people have enough democratic power to push elections towards leaders who would halt mitigation strategies, the only democratic strategy left is rhetoric that persuades them.

    The other option is for the UN to declare a form of global martial law on the topic of climate change, under which no democratic nation can oppose or work against global mitigation strategies. But I doubt the UN has enough power to shift anything that way.
  • Climate Change (General Discussion)
    I think one error that media and climate scientists make when trying to communicate the problems is using terms like "point of no return". I think this has helped legitimize the shifting goalposts of climate science deniers' "doomer stance": "yes, the climate is shifting, and yes, we might be responsible, but there's no point in doing anything since we're already doomed".

    If we can leave out terms like "point of no return", we won't play into their newest but equally stupid position against mitigation projects.
  • US Election 2024 (All general discussion)
    So if an independent candidate DID win (it's a thought-experiment, not an actual prediction) he or she would have to turn to the Democrats because the Republicans can't manage a piss-up in a brewery.Wayfarer

    I don't think any independent candidate would win, but with three options available they would split the vote enough that the Democrats would win simply because neither Republican side could gather enough votes.

    However, if, by some miracle, a stable Republican outlier were to win as an independent, I think she would gather everyone siding with the Lincoln Project and build up a proper party through them. Over time, they might even push out many of the MAGA cult members infesting the other halls of power in Congress.

    Regardless, I think the only way out of Trumpism is to have an independent option in the election. Too many Republicans who hate Trump hate the Democrats more; they would vote for the independent voice and drag along all the ones who are opting out entirely. It would divide the Republicans, but the smart ones would know it's their only way forward, as the MAGA cult could very well spell the end for the Republican party as a whole. Sooner or later the normal Republicans will have to do some house cleaning. It's like they've been infested by cockroaches and have given up trying to solve the issue, but if the cockroaches grow into too much of a problem, they will have to start stomping them out and call the exterminators.
  • US Election 2024 (All general discussion)
    which party would she be more likely to be able to negotiate policies with, in light of the dysfunction that characterises the MAGA-GOP?Wayfarer

    I'm not entirely sure how the details of these things go, but wouldn't she align with the Lincoln Project and draw together the Republicans who don't want to be part of the MAGA cult?

    Would it be too bold to predict that at some point the Republican party will split, and the new faction will be called "New Republicans" or "True Republicans" or similar, gathering momentum among normal people who usually vote Republican? They would acknowledge that it's hard to gain traction at this point in history, but their public goal would be to build trust that voters will get a stable Republican party by voting for them, while their internal goal would be to clean house and rid themselves of any MAGA supporters. That way, the MAGA cult would probably soon evaporate, since they cannot get enough traction by numbers alone; the gullible cult folks who tire of not being represented would move on and vote for the new Republican party, while the core MAGA cult would just gather in some remote location and shoot beer cans or whatever mindless trash they find meaningful.
  • Climate Change (General Discussion)


    Yes, that's a good video on what's going on right now. Maybe if the people who can still use their brains stopped spending so much time on trash culture and lazy attitudes towards politics and philosophical thought, they might be able to help change the course instead. But people aren't interested, even when they're on the right side of history.

    The problem isn't really the climate deniers or the climate doomers; they should be mostly irrelevant, since they're not nearly enough of a democratic force to stand in the way of necessary change. Or at least that's how it should be. The problem is that democracies are tilted into such an extreme balance between the decent and the absolute trash that the latter have become relevant without really being a large democratic force; all because the rest of society consists of lazy people who "can't find the time to involve themselves in these issues".

    It's this lazy attitude, this "I don't have time to think about...", that is the real problem. There's not enough demand on politicians and parties, so politicians fall back on populism in order to keep their power.

    People who acknowledge the problem and agree on the need for solutions still just don't give a shit about voting for those who actually push for necessary change, and they don't seem to care to speak up when it's needed.

    This is why shifting the social sphere into treating climate denial and doomerism as immoral matters; as something equivalent to being a racist, spitting on the poor, or being abusive. Society needs to change towards treating people who talk and act within such attitudes as unwelcome: totally OK to be fired from jobs, kicked out of restaurants, unwanted in social situations. And if someone talked like that in the media, it should be the equivalent of uttering the n-word in public, not an opinion treated as equal to everything else.

    If social culture were pushed in that direction, it would gather greater momentum towards action. It would lead to politicians being careful not to cater to such voices, and the social consequences would be too severe for people to go around shouting such opinions and statements.

    Since the consequences of doing nothing to mitigate climate change are so far away in time, we need consequences here and now that people want to avoid. Producing a culture of more severe negative social consequences as a direct result of promoting or uttering climate denial and doomerism would help change the lazy attitude into a more active and proactive one. It would push people to be more vocal in order to keep their social moral status, and in doing so keep work towards solutions higher on politicians' lists, as it becomes part of the cultural atmosphere they need to cater to in order to gain votes.

    Right now, people who in social situations talk a lot about climate change and the need for solutions are often viewed as "bad at parties", while deniers and doomers just get eye-roll reactions. That makes the topic dead in politics and something left for Reddit brawls rather than a core societal issue. Forcing a harsher moral environment around the topic could push people to show their moral stance more openly, since they surely don't want to be viewed as possible deniers or doomers.

    If people can't take action on their own, then make it customarily immoral not to. The sad truth is that status and social structures are more important to common people than saving the planet. Shaping the social construct of morality around the subject into something more extreme could help steer the ship in a better direction, faster.
  • Migrating to England
    Why not a Scandinavian country instead? If you want a better, functioning socialistic environment, England doesn't seem like the best choice.
  • Sound great but they are wrong!!!


    "It's freedom of speech"
  • Bowling Alone
    I’m old enough to see it in my own life. It’s not only technology but a decline of spirituality — one aspect being religion. So in a sense one major contributing factor is a change in philosophy.

    An interesting example is looking at the arts — movies, television, music. Compare Woodstock 1969 to Woodstock 1999. That alone says it all.
    Mikie

    I think the major part has to do with disconnection from others. With technology and the internet we've increased our ability to communicate, but we are disconnected from the physical form of communication. There are tons of studies on the importance of physical connection, of being in the room with other people. It was very much experienced coming out of the pandemic: mental health drastically improved as soon as people started physically seeing each other again.

    We're blasted with information in our alone time, and the information is "dead". Like this text, like all text on this forum, it is a dead representation of who the people writing here are as a whole. If we all gathered and met up, the discussions would look very different, but they would also have a dimension of emotion that isn't seen online. Respect is higher when people talk face to face.

    So it's not really just about "meeting up"; it's about the quality of interaction that is lost online. Humans are built to interact through micro-expressions, body language, and tonality of voice. While we don't need all of that all the time, the dominance of online communication over the physical has led to a change in behavior.

    Together with the focus on individuality, the neoliberal ideology of the self, it has skewed the perspective people have of their ego.

    What we need today more than ever is social groups not just meeting, but building something together in the physical realm: a step back from the individual perspective and the focus on the ego, into a collective realm.

    There's no surprise that there's been a rise in isolated groups over the years, and stronger polarization between different ideas and ideologies. The lack of a sense of the collective as a society has pushed people into other forms of gathering, which without careful guidance have formed into destructive ones: MAGA, incel communities, ethnic groups divided away from multicultural collaboration and into hostility against other groups, and so on. We've even seen it in the extreme way political parties have generated followers who are less open to actual politics, in which a party in parliament collaborates in a give-and-take structure with opposing parties. Previously, political parties collaborated across the spectrum with the intent of representing their voters' wills and needs in the halls of power. Now they only play a political game without any real vision, scheming ill-willed strategies against opposing parties in closed rooms; everything is about sabotaging the other party's politics rather than give-and-take for progress and problem solving.

    What all of this shows is that while neoliberal individuality has focused on the ego, that ego still craves the social realm; but lacking a collective dimension, it clusters together with whatever rhymes with that specific ego, and the group behaves outwardly with the same hostility the individual ego carries at its core.

    It also shows that the individual craves something to be passionate about, and without a larger collective vision, they can only turn to these minor ones and double down on them. As you mentioned, the decline in spirituality and religion has created a void in the larger collective sense.

    But I believe the solution should be something that connects people within the context of a larger collective aspiration. We need a form of goal for humanity as a whole, something that feels like we're heading somewhere, without it necessarily having to do with religion. We need something people feel we build together and can collaborate within.

    The major obstacle is that the largest policies today are controlled by corporations whose interest is profit. That's nothing that people can collectively gather around.

    We therefore need a shift towards a collective goal that we can all build together; something we can all believe is the right path forward for humanity, something that gathers people across borders and breaks away from capitalistic profit-seeking.

    One such project, I would say, is building a new form of living that mitigates climate change. That project demands collaboration from all people and a dismantling of the selfish individuality that is toxic to us. Figuring it out requires innovation, engineers, philosophers, builders, and collaboration across industries and between different people; across borders, nationalities, and ethnicities.

    It could be such a project that we collectively gather around to achieve together, but for that we need to remove power from those holding us back. We need to stop being careless about who we vote for, who we support, and which industries we give our money to. We need to stop the cognitively biased rhetoric that's only there to hide our laziness, find a goal and vision for a future with this problem fixed, and work towards it, gathering people around us for it.

    When you look at Woodstock '69, as you mentioned, the key thing is that they were a social group focused on the good of the collective rather than the expression of the individual. And the LSD helped with a lot of ego death that reinforced that mentality even further.

    We need more ego death, more of a large collective goal or spirit (without it having to be religion), and we need projects and visions to focus on as a collective: solving world problems and returning to a sense of collective achievement, like looking up at the stars again and dreaming of new human achievements in exploration, reaching new heights as a species.

    People underestimate the importance of such dreams. The constant complaint against those who dream of things like space exploration is that we should focus on ending poverty instead, but that misses the point of what such dreams do to us. Let's say we end poverty... and then what? The emptiness of just existing without a sense of purpose kills people far more than a lack of food, because it kills the mind and makes us empty bodies; husks mindlessly moving around, acting out confused emotions in the absence of a path forward.

    We need to dream collectively, we need to collaborate more as a collective, and we need to kill the ego. That's the difference between Woodstock '69 and '99.
  • Climate Change (General Discussion)
    And of course... Republicans. Can we actually just conclude them to be collectively stupid? Like, what more evidence do we need?

  • Climate Change (General Discussion)
    The problem is that any breakdown in civil order would inevitably disrupt commerce and turn politics more authoritarian.Punshhh

    It's primarily industries that need to be changed by force. Regular people will surely hate the consequences of the industry changes, but new industries that can follow the new path will pop up long before people start voting for dictators. The least we can do is tax carbon emissions, heavily, and then use that money to directly fund engineering solutions for mitigation.

    Programmes of education to educate the population in the severity and pressing nature of the threat would be effective in spreading the word.Punshhh

    Education doesn't seem to help much with that last percentage of people, who are enough to screw up elections with candidates who oppose green industries.

    You say we are able to make the necessary changes and prevent catastrophe. But I would say it is too late nowPunshhh

    It's too late to avoid some consequences, but giving up would be far more catastrophic. There's no point in simply stopping mitigation; we have to speed up the change, and do it fast.

    It looks as though the transition to carbon neutral transport is not going to be rolled out in time and may fail, with either a move back to oil, or a collapse of transport systems.Punshhh

    Moving back to oil just to watch the entire world collapse is simply stupid as a strategy. Just burn the oil industry down (not literally). Crack down on the corrupt politicians taking money from it, by force if needed. Block oil entirely, or partially (to keep transportation running for the build-up of green replacements).

    People buy what is on the market, so remove oil-driven products from the market. Have governments ban new gas cars earlier than currently planned. If they bitch about it and attempt some kind of blowback, put them in jail.

    It's basically a war against the consequences of climate change, and there are traitors walking about.

    The rest of the world would be cut loose and would have to fend for themselves.Punshhh

    Billions against a fortress? Politicians in high places would soon enough be toppled if that ever happened. Desperation forces people into the only option they have, and getting into revolutionary mode à la France can move mountains.

    I just wish people would argue for a more serious push against the oil industry than has been made so far. There are just too many politicians in the pockets of the global oil industry, and the politicians who directly own their share of it need to be starved out of power. Russia should be totally isolated. China should be totally isolated. With the only key to the door being that they stop with oil. If not, they can hunger until the people storm the leaders' castles.
  • Climate Change (General Discussion)
    But there is an enormous inertia in the system and the culture. Many of us are banging our heads against this wall of inertia.Punshhh

    Yes, the system itself is the problem, and people rely on the system too much. For this to be fixed, we need to break the system, even if it has to be healed afterwards. The consequences of breaking its stability will be far smaller than those of taking too long to change course.

    Eventually one realises all we can do is play our part from the position we are in within society. Ideally one would become a politician run for office and change things. Or figure out a way to change peoples minds through some kind of media organisation, or protest group. But again the inertia hits home and many people are already doing these things. In fact some of these people are pushing so hard that media campaigns are growing to discredit them as extremists and pull more people into climate denial.Punshhh

    Activists in this area are just as much morons as the deniers and oil shills. They're the other extreme end, people who can't handle the psychological stress, so they act out of desperation rather than rationality.

    But the problem is that there's no time to play along as usual. I'm serious here: the largest contributors to emissions need to be put under such pressure that their economies collapse if they don't change course. China, for instance, is the world's largest contributor to emissions; its economy needs to be crippled to the point where it accepts it has to change, and the same goes for every nation contributing at that scale. The global economy will crash because of this, but it has to be done, as money is the only thing that moves this world. The problem is that politicians do not take action; they try to have their cake and eat it too. They can't turn their backs on voters who are deniers or who don't care about climate change, so they play it down. They do the absolute minimum required by COP, and COP itself only arrives at minimal conclusions that scientists criticize as too little each time it gathers.

    If people think that such a breaking of the system would lead to conflicts and war: yes, it might. But imagine a world with billions of people relocated and battling for resources during famine, with societies needing to rebuild their infrastructure and housing for shifting environmental conditions. What wars would that generate?

    What we as regular people can do is, as I said, view all people who don't take this seriously as immoral for downplaying the seriousness. There's a big difference between viewing them as having the wrong opinion and viewing them as immoral; that change produces social change. The problem is that the scale of the consequences of passive behavior isn't communicated; the seriousness of collective passivity is downplayed. If the link between being passive or dismissive of the problems now and the consequences a few decades from now were established, it would be easier to view this passivity, and these people, as immoral.

    But we are still acting like it's just an opinion, like it's behavior that's fine. It's like the 40s and 50s, when it was fine to be a racist, fine to divide people by color. At some point it wasn't fine anymore, and if you expressed racist opinions or behaved like that in public, you'd get punched in the face and people would cheer it on. That's the level of social response we need in order to seriously pressure politicians and public opinion. Even then it would be hard, seeing as plenty of politicians still win elections with downright outspoken racism. Even today that happens, but at least that power usually can't survive long if the social ideal is to punch a racist.

    So one reaches a point of acceptance, an acceptance that the crisis is enormous and irreversible and we as a species are too weak to prevent it. This is quite normal, the list of species extinctions in the fossil record is long and there is an inevitability to it.Punshhh

    We are not too weak to prevent it; we just need to do what it takes. When the pressure is on, people won't be weak, they will fight and kill for change. That's where we're heading if we don't act now.

    It is too late now to overcome this current cycle of climate change, however if some portion of humanity can survive, adapt and preserve our intellectual and technological achievements sufficiently that they can be conveyed to the next flourishing of civilisation, there is an increased chance of achieving that custodial role.Punshhh

    Or just change course now. If that's our future and people start to realize it is a very likely outcome, they will pick up guns and remove anyone who does not actively work to fix it. It's easy to ignore it now, but when enough people get the short end of the stick, they will organize and do something. We might see billions of them; billions who have nothing left to do but storm the castles of immorality.


    I think the way you describe it is how many people view things, especially in places that may seem to be out of danger. But people don't realize that there is no such place. The changing climate collapses ecosystems and produces a cocktail of consequences, many of them unpredictable, as we've already witnessed. This escalation will more than likely happen in our lifetime. If people care for their children, what future are they giving them? Putting blindfolds on the kids, soothing them into a belief that everything will be fine, and then kicking them out into a world that is breaking apart?

    Adults today are so inactive and passive that young teenagers have essentially given up. The depression around this subject among young people is severe, and their parents just don't seem to give a shit. It's appalling, in my opinion.

    And I don't actually see most people accepting how serious this problem is, or rather, they don't seem to accept just how serious it can become. I see most regular people as ignorant, putting on the blindfolds and distracting themselves with mindless Instagram reels. Essentially, they have their heads in the sand until the hurricane winds rip their bodies free. If they actually understood, they would speak more openly about it, but they don't, because it's socially awkward to do so; it's socially awkward to be angry about how things are. Changing that would make things move faster.

    One such change would be to draw a clear moral line between the active and the passive person. If deniers and passive people are considered immoral, people will start to express themselves much more on the matter. They will find it far less awkward to be socially outspoken about the issues, and they will find it moral to talk about solutions, to have it as a conversation starter.

    People aren't talking right now, they are quiet.

    How much further does it have to go before regular people stop voting for politicians who downplay the problems? How much further before people put pressure on world politics? How much further before people start talking about the issues much more openly?

    I suspect that when the first bullet is fired by a guerrilla or resistance group fighting for a piece of land because their own nation has become uninhabitable, people will realize just how dire the situation is. It will then be illogical to say "go back to your own country", because they can't, and the number of people and armed groups born out of such desperation will grow, and grow, and grow. They will creep closer and closer to the comfort of people's homes. Then, maybe, regular people will start to get the fucking point about how serious this thing is.
  • Climate Change (General Discussion)
    There may be a handful of sceptics who genuinely don’t accept the science. But they will fade away soon as the climatic impacts start to be felt.Punshhh

    A handful is enough to sway politics in favor of populist leaders who keep the necessary mitigation from happening in time.

    The impacts of climate change will change their minds soon enoughPunshhh

    Yes, as I said, that is the scenario if we fail to do something now. Or, we do what's necessary to not let millions die.

    If it is immoral to let millions die and put the world into an economic and relocation crisis of historical proportions through inaction, would it be more immoral to take away the voting rights of people who deny that action needs to be taken? Is keeping their democratic votes worth that level of cost to the world, or is this just a good example of why Kantian ethics isn't enough to make moral sense of every situation? I sense that such ideas get people's blood pumping about slippery slopes into totalitarian government, but no, this is about one thing and one thing alone: getting on the right path to avoid disaster. In a battle for the health of the world and all its people, that requires a level of martial law, because it is a war against inaction. Any industry that does not have a strategy or plan to change course loses its execs and board; any politician who doesn't have a serious plan for changing a nation's course in time is removed from power. The blame cannot be put on the people, as the people can only follow how society is structured. The only blame they carry is for who they put into power, and everyone needs to be prepared for major economic turmoil as assets are reallocated from the current non-solutions into solutions.

    As scientists witness more and more actual consequences of climate change, it is clear that the consequences have been badly underestimated. If this continues, we genuinely don't know how severe it can get. An ecosystem can absolutely survive, but in what state? Losing algae in the sea would produce another tipping point, and the collapse of certain groups of species can lead to new pathogens and invasive species that could cause new pandemics and famine on a scale never before seen.

    I don't think people realize how delicate the balance of the world is. The economy is a good analogy: a fairly minor problem can cause extreme fluctuations in the global economic balance. The war in Ukraine and the subsequent blockage of gas from Russia caused an energy crisis, which helped push us into a big inflationary spiral. The blockage of the Suez Canal alone was able to put the entire world into economic fragility, and it was the sum of the Ukraine war, the pandemic, the blockage, the energy crisis, and the China/Taiwan unrest that put the global economy into turmoil. Translate that into the world's ecological balance and temperature, and people underestimate what the change does to the planet. It's as if people think only that the sea will rise and the warmest parts of the world will get slightly warmer. In Scandinavia, some people think it will be nice to grow more wine as the region gets warmer; what the hell are they talking about? It's like people have an inability to extrapolate a logical overview of the consequences. If even scientists underestimate the damage, or downplay it out of fear of being attacked as alarmists by the idiots of society, then just imagine how bad the general population is at accurately predicting the level of damage we face.

    In my view, rip off the fucking band-aid and then we can heal the world from there. That's much easier for everyone than trying to heal a broken world.
  • Climate Change (General Discussion)
    Because you're sticking to your old guns.baker

    You only know the things I write here; you know nothing else. Yet you act upon that lack of knowledge and pass judgement. This is just a dishonest attempt at framing the other party in a discussion, a form of ad hominem.

    I'm not criticizing you for being rude or mean, I'm criticizing you for being ineffective. Because I want you to be effective.

    You have some really strange ideas about my intentions here.
    baker

    And you are too vague about your intentions, as well as framing them in very odd rhetoric.

    Once again, you do not know anything other than what you read of me here. As I've pointed out many times now: if there are deniers, there's no point in trying to convince them, as they are acting through a cult mentality. You cannot convince them as long as they are deeply rooted within their community of denial.

    So what efficiency are you talking about? Being efficient in achieving what exactly?

    For me, a question like, "How do you talk to someone who thinks that mankind will adapt to whatever comes, when it comes; so that this person will change their mind and act differently, more in line with planet preservation?" makes perfect sense, to you, it clearly doesn't.baker

    And it's been done to death. How much more education do these people need? The denial group has slowly started to move into simple acceptance of a changing world, but they do so in the context of still not acting. The outcome of their reasoning is the same as their previous outright denial.

    If they are unable to understand that mitigation is still necessary so as not to ruin everything completely, and that not acting will cause millions of deaths, then they haven't really been convinced; they have only moved the goalposts of their denial to a new position, defending their inaction and ignorance.

    Why should the world cater to these people? Why should we continue wasting time trying to convince them and not just move on with the debate towards what solutions will work best?

    But is being harsh to those people leading to the result you want, namely, an improved state of the planet?baker

    By ignoring them and implementing changes to society anyway, yes, we will save millions and mitigate the worst damage. There's no time to build public opinion by convincing these people; it will be too late. The strategies need to circumvent slow progress, and the damage of such rapid progress will be microscopic against the consequences of not making it.

    Treating these people as immoral is not an act of entitlement; it is an act of building a collective sense of morality that can drive changes in society. If it is considered moral to support actions that mitigate climate change and immoral not to, then social structures will form public opinion, rather than public opinion depending on people uneducated or unequipped to understand complex knowledge.

    Structural racism has rarely been fought by educating racists not to support it; that does not work until they've already instinctively left the racist mindset. Instead, it is the moral dimension that has been most effective in transforming society: reshaping the division of people into being understood as an immoral act at its core. Then people don't have to understand any complex knowledge about a subject; they just have to accept the instinctively programmed moral codes of the social structures they exist in. That's why I don't just call them uneducated, idiots, or conspiratorial cultists, but also immoral people who support a destructive movement through inaction or through active work against mitigation efforts.

    View them as immoral people, just like racists, abusers and other immoral people. Don't act like they're just expressing opinions that have some balanced value, because there is no such balance. It's like saying that a racist statement is just as morally acceptable as a statement about love. It's not. Making statements that push public opinion towards ignorance about climate change is an immoral act that, with enough collective public drive, can cause delays that will kill millions. It is pushing dominoes in the direction of pure horror, and that is simply immoral.

    I think it should still be possible to talk to such people in ways that will get through them.
    It might just take more creativity and effort, and inventing new strategies.
    baker

    You don't think this has been tried for decades now? There's no time left to keep doing it. If we had 50 more years to slowly change people's minds, yes; but just look at how far anti-racism has gotten. Shouldn't we have been freed of such idiocy by now? Aren't we educated enough by now to understand how immoral and stupid racism is? We still have major problems with it, and more education and more attempts to convince racists do not help. The only thing that helps is to shut them up and make policies against racism.

    If you have some idea that hasn't already been tried to death for convincing these people, then everyone's listening. But there's no practical value in just pointing out that there "should be some way to convince them". If we are to act now, the solution is to ignore them, make policies regardless of their opinions, and shut them up. There's simply no time to educate these adult children.

    How are you going to "just do what's needed"? By abolishing democracy?baker

    When it comes to the issues with climate change, it has nothing to do with abolishing democracy. It's not a question of opinion or idealism, it is a fact of our world's reality and a fact that points in a certain problematic direction for everyone. Everyone, globally, should take action towards mitigating climate change and stop listening to these immoral people. That is not the same as abolishing democracy.

    Say a comet were heading towards us and the entire world economy and all nations needed to act together to solve it fast. Would you leave that up to democracy? To be debated? To trying to convince idiots that the problem is real? No, all nations would just move towards solutions as if driving a giant bulldozer. They would run the idiots over, and everyone who understood the danger would cheer it on.

    I think they just fight against having their minds changed by the strategies used so far. Other strategies might yield better results.baker

    The people we are talking about are not discussing the most effective strategies; they are opposing what would be minor inconveniences in their lives. The only ones equipped to really decide the best strategies are the actual scientists, experts and engineers working to solve the problems. Regular people should shut up and listen to these experts. Politicians should shut up and listen to the solutions. The moral dimension around the subject needs to become clearer to the public.

    As an example, I used to work as a mathematics tutor. A highschool student came in to be tutored about linear functions. This was her last chance; if she failed the next test, she would be expelled from school. The situation was dire. She was first tutored by an older tutor, and I witnessed some of their sessions. It was clear right away that the student didn't have a grasp on fractions and rules for solving equations. Without mastering those basic things, it's impossible to do linear functions. But the old tutor insisted on working on linear functions with the student. They made no progress and he gave up on her, declaring her to be a hopeless case. The student was then assigned to me. We spent the summer learning fractions and basic rules for equations, things she should have mastered years ago. She passed the test, completed her education, and even earned a master's degree.

    Your attitude is that of a teacher; a teacher's goal is to teach. My attitude is that of a tutor; a tutor's goal is to get the student to learn the subject matter, (almost) no matter what it takes.
    baker

    Failing that education would not have resulted in potentially millions of deaths and extremely changed living conditions for the entire global population. Sorry, but the analogy does not work for climate change, because it carries neither that moral consequence nor the time limit for avoiding it. Class is over, the semester is done, action needs to be taken.

    There you go, outsourcing responsibility again.baker

    How am I outsourcing responsibility when I point in the direction of the ones who are actually responsible for taking the actions needed? What responsibility are you suggesting I take? Isn't it responsible to push for action now and to stop catering to the saboteurs working against these necessary actions? Isn't it taking responsibility to push for a moral frame of thinking around this subject and to abandon the idea that this is some debate of ideals?

    What type of responsibility is valid in your book, considering the urgent need for action and the lack of time to educate people who are actively giving experts the finger? Did that student give you the finger when you tried to help her? No; your analogy is about people who want to be educated, which is nowhere near the reality of this subject. I'm all for education, but we don't have time to educate people into supporting solutions to a damn comet on a collision course with Earth. In such a situation you simply ignore the ignorant and take the action that is needed right now.