But in my heart, I might find a McDonald's commercial artistic. What then? — ENOAH
When Alan Turing talked about the Turing test, there was no attempt to answer the deep philosophical question; he just went with the thinking that a good enough fake is good enough for us. And since AI is still basically a machine, this is enough for us. And this is the way forward. I think we will have quite awesome AI services in a decade or two, but we won't be any closer to answering the philosophical questions. — ssu
Regarding the problem of the Chinese room, I think it might be safe to accede that machines do not understand symbols in the same way that we do. The Chinese room thought experiment shows a limit to machine cognition, perhaps. It's quite profound, but I do not think it influences this argument for machine subjectivity, just that its nature might be different from ours (lack of emotions, for instance). — Nemo2124
Machines are gaining subjective recognition from us via nascent AI (2020-2025). Before, they could just be treated as inert objects. Even if we work with AI as if it's a simulated self, we are sowing the seeds for the future AI-robot. The de-centring I mentioned earlier is pertinent, because I think that subjectivity, in fact, begins with the machine. In other words, however abstract, artificial, simulated and impossible you might consider machine selfhood to be - however much you consider them to be totally created by and subordinated to humans - it is, in fact, machine subjectivity that is at the epicentre of selfhood; a kind of 'Deus ex Machina' (God from the Machine) seems to exist as a phenomenon we have to deal with.
Here I think we are bordering on the field of metaphysics, but as for what certain philosophies indicate about consciousness arising from inert matter, surely this is the same problem we encounter with human consciousness: i.e. how does subjectivity arise from a bundle of neurons firing in tandem or in synchrony? I think, therefore, I am. If machines seem to be co-opting aspects of thinking, e.g. mathematical calculation to begin with, then we seem to share common ground, even though the nature of their 'thinking' differs from ours (hence, the Chinese room). — Nemo2124
When I criticized the notion of emergence, you could have said, "Well, you're wrong, because this, that, and the other thing." But you are unable to express substantive thoughts of your own. Instead you got arrogant and defensive and started throwing out links and buzzwords, soon followed by insults. Are you like this in real life? People see through you, you know.
You're unpleasant, so I won't be interacting with you further. — fishfry
I'll stipulate that intelligent and highly educated and credentialed people wrote things that I think are bullsh*t. — fishfry
Yes. It means "We don't understand but if we say that we won't get our grant renewed, so let's call it emergence. Hell, let's call it exponential emergence, then we'll get a bigger grant."
Can't we at this point recognize each other's positions? You're not going to get me to agree with you if you just say emergence one more time. — fishfry
And then there are the over-educated buzzword spouters. Emergence. Exponential. It's a black box. But no it's not really a black box, but it's an inner black box. And it's multimodal. Here, have some academic links. — fishfry
Surface level is all you've got. Academic buzzwords. I am not the grant approval committee. Your jargon is wasted on me. — fishfry
Is there anything I've written that leads you to think that I want to read more about emergence? — fishfry
Forgive me, I will probably not do that. But I don't want you to think I haven't read these arguments over the years. I have, and I find them wanting. — fishfry
My point exactly. In this context, emergence means "We don't effing know." That's all it means. — fishfry
I was reading about the McCulloch-Pitts neuron while you were still working on your first buzzwords. — fishfry
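As an editorial aside: the McCulloch-Pitts neuron mentioned above is simple enough to sketch in a few lines of Python. This is a minimal illustration only; the weights and threshold are arbitrary values chosen to realize an AND gate, not anything taken from the original 1943 formulation.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Classic McCulloch-Pitts unit: output 1 ("fire") if and only if
    the weighted sum of the binary inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights and threshold 2, the neuron behaves as an AND gate:
# both inputs must be on for the weighted sum to reach the threshold.
assert mcculloch_pitts([1, 1], [1, 1], threshold=2) == 1
assert mcculloch_pitts([1, 0], [1, 1], threshold=2) == 0
assert mcculloch_pitts([0, 0], [1, 1], threshold=2) == 0
```

Lowering the threshold to 1 turns the same unit into an OR gate, which is the whole point of the model: simple threshold logic out of simple weighted sums.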
You write, "may simply arise out of the tendency of the brain to self-organize towards criticality" as if you think that means anything. — fishfry
I'm expressing the opinion that neural nets are not, in the end, going to get us to AGI or a theory of mind.
I have no objection to neuroscience research. Just the hype, buzzwords, and exponentially emergent multimodal nonsense that often accompanies it. — fishfry
I have to apologize to you for making you think you need to expend so much energy on me. I'm a lost cause. It must be frustrating to you. I'm only expressing my opinions, which for what it's worth have been formed by several decades of casual awareness of the AI hype wars, the development of neural nets, and progress in neuroscience.
It would be easier for you to just write me off as a lost cause. I don't mean to bait you. It's just that when you try to convince me with meaningless jargon, you weaken your own case. — fishfry
I wrote, "I'll take the other side of that bet," and that apparently pushed your buttons hard. I did not mean to incite you so, and I apologize for any of my worse excesses of snarkiness in this post. — fishfry
But exponential emergence and multimodality, as substitutes for clear thinking -- You are the one stuck with this nonsense in your mind. You give the impression that perhaps you are involved with some of these fields professionally. If so, I can only urge you to get some clarity in your thinking. Stop using buzzwords and try to think clearly. Emergence does not explain anything. On the contrary, it's an admission that we don't understand something. Start there. — fishfry
Ah. The first good question you've posed to me. Note how jargon-free it was. — fishfry
But one statement I've made is that neural nets only know what's happened. Human minds are able to see what's happening. Humans can figure out what to do in entirely novel situations outside our training data. — fishfry
But I can't give you proof. If tomorrow morning someone proves that humans are neural nets, or neural nets are conscious, I'll come back here and retract every word I've written. I don't happen to think there's much chance of that happening. — fishfry
Nobody knows what the secret sauce of human minds is. — fishfry
Now THAT, I'd appreciate some links for. No more emergence please. But a neural net that updates its node weights in real time is an interesting idea. — fishfry
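The "updates its node weights in real time" idea referenced above is essentially what the machine-learning literature calls online learning. A hedged sketch, with made-up data and a single linear unit, of what one such real-time update step could look like:

```python
def online_update(w, x, y_true, lr=0.1):
    """One online learning step for a single linear unit:
    predict, measure the error, and nudge the weights immediately,
    rather than training once on a fixed corpus."""
    y_pred = sum(wi * xi for wi, xi in zip(w, x))
    err = y_true - y_pred
    return [wi + lr * err * xi for wi, xi in zip(w, x)]

# A stream of (input, target) pairs arriving one at a time;
# the weights change after every sample.
w = [0.0, 0.0]
for x, y in [([1, 0], 1.0), ([0, 1], -1.0), ([1, 0], 1.0)]:
    w = online_update(w, x, y)
print(w)
```

The data, learning rate, and two-weight setup are illustrative assumptions; the point is only that "learning in real time" is a concrete, well-defined variation on the usual train-then-freeze regime.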
How can you say that? Reasoning our way through novel situations and environments is exactly what humans do. — fishfry
That's the trouble with the machine intelligence folks. Rather than uplift their machines, they need to downgrade humans. It's not that programs can't be human, it's that humans are computer programs. — fishfry
How can you, a human with life experiences, claim that people don't reason their way through novel situations all the time? — fishfry
Humans are not "probability systems in math or physics." — fishfry
Credentialism? That's your last and best argument? I could point at you and disprove credentialism based on the lack of clarity in your own thinking. — fishfry
Yes, but apparently you can't see that. — fishfry
But one statement I've made is that neural nets only know what's happened. Human minds are able to see what's happening. Humans can figure out what to do in entirely novel situations outside our training data. — fishfry
Again, how does a brain work? Is it using anything other than a rear view mirror for knowledge and past experiences? — Christoffer
Yes, but apparently you can't see that. — fishfry
I'm not the grant committee. But I am not opposed to scientific research. Only hype, mysterianism, and buzzwords as a substitute for clarity. — fishfry
Is that the standard? The ones I read do. Eric Hoel and Gary Marcus come to mind, also Michael Harris. They don't know shit? You sure about that? Why so dismissive? Why so crabby about all this? All I said was, "I'll take the other side of that bet." When you're at the racetrack you don't pick arguments with the people who bet differently than you, do you? — fishfry
You're right, I lack exponential emergent multimodality. — fishfry
I've spent several decades observing the field of AI and I have academic and professional experience in adjacent fields. What is this, credential day? What is your deal? — fishfry
You've convinced me to stop listening to you. — fishfry
the "Chinese room" isn't a test to pass — flannel jesus
"We have no idea what's happening, but emergence is a cool word that obscures this fact." — fishfry
I'm sure these links exhibit educated and credentialed people using the word emergence to obscure the fact that they have no idea what they're talking about. — fishfry
But calling that emergence, as if that explains anything at all, is a cheat. — fishfry
"Mind emerges from the brain} explains nothing, provides no insight. It sounds superficially clever, but if you replace it with, "We have no idea how mind emerges from the brain," it becomes accurate and much, much more clear. — fishfry
Nobody knows how to demonstrate self-awareness of others. We agree on that. But calling it emergence is no help at all. It's harmful, because it gives the illusion of insight without providing insight. — fishfry
I have no doubt that grants will be granted. That does not bear on what I said. Neural nets are a dead end for achieving AGI. That's what I said. The fact that everyone is out there building ever larger wings out of feathers and wax does not negate the point.
If you climb a tree, you are closer to the sky than you were before. But you can't reach the moon that way. That would be my point. No matter how much clever research is done.
A new idea is needed. — fishfry
Plenty of people are saying that. I read the hype. If you did not say that, my apologies. But many people do think LLMs are a path to AGI. — fishfry
I was arguing against something that's commonly said, that neural nets are complicated and mysterious and their programmers can't understand what they are doing. That is already true of most large commercial software systems. Neural nets are conventional programs. I used the example of political bias to show that their programmers understand them perfectly well, and can tune them in accordance with management's desires. — fishfry
They're a very clever way to do data mining. — fishfry
(1) they are not the way to AGI or sentience; and (2) despite the mysterianism, they are conventional programs that could, in principle, be executed with pencil and paper, and that operate according to the standard rules of physical computation that were developed in the 1940s. — fishfry
By mysterianism, I mean claims such as you just articulated: "they operate differently as a whole system ..." That means nothing. The chess program and the web browser on my computer operate differently too, but they are both conventional programs that ultimately do nothing more than flip bits. — fishfry
Jeez man more emergence articles? Do you think I haven't been reading this sh*t for years? — fishfry
I don't read the papers, but I do read a number of AI bloggers, promoters and skeptics alike. I do keep up. I can't comment on "most debates," but I will stand behind my objection... — fishfry
Emergence emergence emergence emergence emergence. Which means, you don't know. That's what the word means. — fishfry
You claim that "emergence in complexities being partly responsible for much of how the brain operates" explains consciousness? Or what are you claiming, exactly? Save that kind of silly rhetoric for your next grant application. If it were me, I'd tell you to stop obfuscating. "emergence in complexities being partly responsible for much of how the brain operates". Means nothing. Means WE DON'T KNOW how the brain operates. — fishfry
You speak in buzz phrases. It's not only emergent, it's exponential. Remember I'm a math guy. I know what the word exponential means. Like they say these days: "That word does not mean what you think it means."
So there's emergence, and then there's exponential, which means that it "can form further emergent phenomena that we haven't seen yet."
You are speaking in entirely meaningless babble at this point. I don't mean that you're not educated. I mean that you have gotten lost in your own jargon. You have said nothing at all in this post. — fishfry
Yes, that's how computers work. When I click on Amazon, whole pages of instructions get executed before the package arrives at my door. What point are you making? — fishfry
You're agreeing with my point. Far from being black boxes, these programs are subject to the commands of programmers, who are subject to the whims of management. — fishfry
You say that, and I call it neural net mysterianism. You could take that black box, print out its source code, and execute it with pencil and paper. It's an entirely conventional computer program operating on principles well understood since the first electronic digital computers in the 1940s. — fishfry
"Impossible to peer into." I call that bullpucky. Intimidation by obsurantism. — fishfry
Every line of code was designed and written by programmers who entirely understood what they were doing. — fishfry
And every highly complex program exhibits behaviors that surprise its coders. But you can tear it down and figure out what happened. That's what they do at the AI companies all day long. — fishfry
You say it's a black box, and I point out that it does exactly what management tells the programmers to make it do, and you say, "No, there's a secret INNER black box."
I am not buying it. Not because I don't know that large, complex software systems often exhibit surprising behavior. But because I don't impute mystical incomprehensibility to computer programs. — fishfry
Can we stipulate that you think I'm surface level, and I think you're so deep into hype, buzzwords, and black box mysterianism that you can't see straight?
That will save us both a lot of time.
I can't sense nuances. They're a black box. In fact they're an inner black box. An emergent, exponential black box.
I know you take your ideas very seriously. That's why I'm pushing back. "Exponential emergence" is not a phrase that refers to anything at all. — fishfry
Judging by the way you repeatedly talk about "passing the Chinese room", I don't think you understand the basics. Seems more buzzword-focused than anything — flannel jesus
"inspired by" is such a wild goal post move. The reason anything that can walk can walk is because of the processes and structures in it - that's why a person who has the exact same evolutionary history as you and I, but whose legs were ripped off, can't walk - their evolutionary history isn't the thing giving them the ability to walk, their legs and their control of their legs are. — flannel jesus
so robots can't walk? — flannel jesus
Connecting it to evolution the way you're doing looks as absurd and arbitrary as connecting it to lactation. — flannel jesus
"Actually, even though evolultion is in the causal history of why we can walk, it's not the IMMEDIATE reason why we can walk, it's not the proximate cause of our locomotive ability - the proximate cause is the bones and muscles in our legs and back." — flannel jesus
And then, when robotics started up, someone like you might say, "well, robots won't be able to walk until they go through a process of natural evolution through tens of thousands of generations", and someone like me would say, "they'll make robots walk when they figure out how to make leg structures broadly similar to our own, with a joint and some way of powering the extension and contraction of that joint."
And the dude like me would be right, because we currently have many robots that can walk, and they didn't go through a process of natural evolution. — flannel jesus
That's why I think your focus on "evolution" is kind of nonsensical, when instead you should focus more on proximate causes - what are the structures and processes that enable us to walk? Can we put structures like that in a robot? What are the structures and processes that enable us to be conscious? Can we put those in a computer? — flannel jesus
if the only conscious animals in existence were mammals, would you also say "lactation is a prerequisite for consciousness"? — flannel jesus
The alternative is something like the vision of Process Philosophy - if we can simulate the same sorts of processes that make us conscious (presumably neural processes) in a computer, then perhaps it's in principle possible for that computer to be conscious too. Without evolution. — flannel jesus
"we know this is how it happened once, therefore we know this is exactly how it has to happen every time" - that doesn't look like science to me. — flannel jesus
You just inventing a hard rule that all conscious beings had to evolve consciousness didn't come from science. That's not a scientifically discovered fact, is it? — flannel jesus
No, you're making some crazy logical leaps there. There's no reason whatsoever to assume those are the only two options. Your logic provided doesn't prove that. — flannel jesus
Why? That looks like an extremely arbitrary requirement to me. "Nothing can have the properties I have unless it got them in the exact same way I got them." I don't think this is it. — flannel jesus
they develop in the end a degree of subjectivity that can be given recognition through language. — Nemo2124
Nothing "emerges" from neural nets. You train the net on a corpus of data, you tune the weightings of the nodes, and it spits out a likely response. — fishfry
There's no intelligence, let alone self-awareness being demonstrated. — fishfry
There's no emergence in chatbots and there's no emergence in LLMs. Neural nets in general can never get us to AGI because they only look backward at their training data. They can tell you what's happened, but they can never tell you what's happening. — fishfry
This common belief could not be more false. Neural nets are classical computer programs running on classical computer hardware. In principle you could print out their source code and execute their logic step by step with pencil and paper. Neural nets are a clever way to organize a computation (by analogy with the history of procedural programming, object-oriented programming, functional programming, etc.); but they ultimately flip bits and execute machine instructions on conventional hardware. — fishfry
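The claim above, that a neural net is ordinary arithmetic one could in principle execute by hand, can be illustrated with a toy forward pass. This sketch uses made-up weights and plain Python with no ML library; it is an illustration of the point, not anyone's actual model:

```python
import math

def forward(x, w1, w2):
    """Forward pass of a tiny one-hidden-layer network: weighted sums
    followed by a sigmoid, then one more weighted sum. Every step is
    deterministic arithmetic you could carry out with pencil and paper."""
    hidden = [
        1 / (1 + math.exp(-sum(xi * wij for xi, wij in zip(x, row))))
        for row in w1
    ]
    return sum(h * w for h, w in zip(hidden, w2))

# Arbitrary example weights: two inputs, two hidden units, one output.
y = forward([0.5, -1.0], w1=[[0.2, 0.8], [-0.5, 0.1]], w2=[1.0, -1.0])
print(y)
```

Scale this up by a few hundred billion weights and you have an LLM's inference step: vastly more arithmetic, but the same kind of arithmetic, which is the sense in which such systems remain conventional programs.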
Their complexity makes them a black box, but the same is true for, say, the global supply chain, or any sufficiently complex piece of commercial software. — fishfry
And consider this. We've seen examples of recent AI's exhibiting ridiculous political bias, such as Google AI's black George Washington. If AI is such a "black box," how is it that the programmers can so easily tune it to get politically biased results? Answer: It's not a black box. It's a conventional program that does what the programmers tell it to do. — fishfry
So I didn't need to explain this, you already agree. — fishfry
Like what? What new behaviors? Black George Washington? That was not an emergent behavior, that was the result of deliberate programming of political bias.
What "new behaviors" do you refer to? A chatbot is a chatbot. — fishfry
I believe they start spouting racist gibberish to each other. I do assume you follow the AI news. — fishfry
Well if we don't know, what are you claiming?
You've said "emergent" several times. That is the last refuge of people who have no better explanation. "Oh, mind is emergent from the brain." Which explains nothing at all. It's a word that means, "And here, a miracle occurs," as in the old joke showing two scientists at a chalkboard. — fishfry
I would not dispute that. I would only reiterate the single short sentence that I wrote that you seem to take great exception to. Someone said AGI is imminent, and I said, "I'll take the other side of that bet." And I will. — fishfry
In my opinion, that is false. The reason is that neural nets look backward. You train them on a corpus of data, and that's all they know. — fishfry
They know everything that's happened, but nothing about what's happening. — fishfry
They can't reason their way through a situation they haven't been trained on. — fishfry
since someone chooses what data to train them on — fishfry
Neural nets will never produce AGI. — fishfry
You can't make progress looking in the rear view mirror. You input all this training data and that's the entire basis for the neural net's output. — fishfry
I don't read the papers, but I do read a number of AI bloggers, promoters and skeptics alike. I do keep up. I can't comment on "most debates," but I will stand behind my objection to the claim that AGI is imminent, and the claim that neural nets are anything other than a dead end and an interesting parlor trick. — fishfry
Neural nets are the wrong petri dish.
I appreciate your thoughtful comments, but I can't say you moved my position. — fishfry
You would not ordinarily consider that machines could have selfhood, but the arguments for AI could subvert this. A robot enabled with AI could be said to have some sort of rudimentary selfhood or subjectivity, surely... If this is the case then the subject itself is the subject of the machine. I, Robot etc... — Nemo2124
In terms of selfhood or subjectivity, when we converse with the AI we are already acknowledging its subjectivity, that of the machine. Now this may only be linguistically, but other than through language, how else can we recognise the activity of the subject? This also begs the question, what is the self? The true nature of the self is discussed elsewhere on this website, but I would conclude here that there is an opposition or dialectic here between man and machine for ultimate recognition. In purely linguistic terms, the fact is that in communicating with AI we are - for better or for worse - acknowledging another subject. — Nemo2124
Don't you think we're pretty close to having something pass the Turing Test? — RogueAI
This would require solving the Problem of Other Minds, which seems insolvable. — RogueAI
I am raising a philosophical point, though: what sort of creature or being or machine uses the first person singular? This is not merely a practical or marketing question.
Pragmatically speaking, I don't see why 'AI' can't find a vernacular-equivalent of Wikipedia, which doesn't use the first person. The interpolation of the first person is a deliberate strategy by AI-proponents, to advance the case for it that you among others make, in particular, to induce a kind of empathy. — mcdoodle
The proponents and producers of large language models do, however, encourage this anthropomorphic process. GPT-x or Google bard refer to themselves as 'I'. I've had conversations with the Bard machine about this issue but it fudged the answer as to how that can be justified. To my mind the use of the word 'I' implies a human agent, or a fiction by a human agent pretending insight into another animal's thoughts. I reject the I-ness of AI. — mcdoodle
There is an aspect of anthropomorphism, where we have projected human qualities onto machines. The subject of the machine could be nothing more than a convenient linguistic formation, with no real subjectivity behind it. It's the 'artificialness' of the AI that we have to bear in mind at every step, noting iteratively as it increases in competence that it is not a real self in the human sense. This is what I think is happening right now as we encounter this new-fangled AI: we are proceeding with caution. — Nemo2124
Chat-GPT and other talking bots are not intelligent themselves, they simply follow a particular code and practice, and express information regarding it. They do not truly think or reason, it's a jest of some human's programming. — Barkon
The question is how do we relate to this emergent intelligence that gives the appearance of being a fully-formed subject or self? This self of the machine, this phenomenon of AI, has caused a shift because it has presented itself as an alternative self to that of the human. When we address the AI, we communicate with it as another self, but the problematic is how do we relate to it. In my opinion, the human self has been de-centred. We used to place our own subjective experiences at the centre of the world we inhabit, but the emergence of machine-subjectivity or this AI, has challenged that. In a sense, it has replaced us, caused this de-centring and given the appearance of thought. That's my understanding. — Nemo2124
Is AI a philosophical dead-end? The belief with AI is that somehow we can replicate or recreate human thought (and perhaps emotions one day) using machinery and electronics. This technological leap forward that has occurred in the past few years is heralded as progressive, but as the end-point in our development is it not thwarting creativity and vitally original human thought? On a positive note, perhaps AI is providing us with this existential challenge, so that we are forced even to develop new ideas in order to move forward. If so, it represents an evolutionary bottle-neck rather than a dead-end. — Nemo2124
I'll take the other side of that bet. I have 70 years of AI history and hype on my side. And neural nets are not the way. They only tell you what's happened, they can never tell you what's happening. You input training data and the network outputs a statistically likely response. Data mining on steroids. We need a new idea. And nobody knows what that would look like. — fishfry
First, if the world is simulated, why don't its 'designers' simply 'pop out' at times and leave us with some trace of their existence? Guidance through such a virtual world might be helpful, and yet there is no trace of anyone 'programming' or 'guiding' us anywhere. — jasonm
Similarly, why don't we sometimes notice violations of the laws of physics? If it's just a simulation, does it matter if the laws of physics are perfectly consistent? This applies to any law of this simulated world, including propositional logic. Again, if you are there, leave us with some trace of your existence through 'miracles' and other types of anomalies that our world does not seem to have. And yet there seem to be no instances of this kind. — jasonm
Third: what type of computing power would be required to 'house' this virtual universe? Are we talking about computers that are bigger than the universe itself? Is this possible even in principle? — jasonm
...it doesn't look like anything to me...
In that sense, I think the notion that the universe is 'simulated' is completely superfluous and can therefore be explained away as being 'highly improbable.' — jasonm
So AI becomes another tool, not a competing intellect. — frank
My view is based on people seeing my work and reading complex messages into it. And this is painting, not AI art. I'm interested in their interpretations. My own intentions are fairly nebulous. Does that mean I'm not doing art? I guess that's possible. Or what if communication isn't really the point of art? What if it's about a certain kind of life? A work comes alive in the mind of the viewer. In a way, Hamlet is alive because you're alive and he's part of you. I'm saying what if the artist isn't creating an inanimate message, but rather a living being? — frank
I said I was withholding my judgement. I have never claimed that the case is definitively settled for the reasons I've mentioned before regarding the state of our knowledge about both neuro and computer science. You clearly seem to disagree but I suppose we can just agree to disagree on that. — Mr Bee
I have never said it was binary. I just said that whatever the difference is between human and current AI models it doesn't need to be something magical. — Mr Bee
In any case, you seem to agree that there is a difference between the two as well. Now the question is what that means with regards to its "creativity" and whether the AI is "creative" like we are. Again, I think this is a matter we'll have to agree to disagree on. — Mr Bee
One may argue that for some, work is what gives their life meaning. It's unhealthy working conditions that are the problem, more so. — Mr Bee
It also removes the leverage that workers usually have over their employers. In a society that is already heavily biased towards the latter, what will that mean? That's a concern that I have had about automation, even before the advent of AI. It honestly feels like all the rich folk are gonna use automation to leave everyone else in the dust while they fly to Mars. — Mr Bee
History shows that that is rarely how things go. If that were the case then wealth inequality wouldn't have been so rampant and we would've solved world hunger and climate change by now. People are very bad at being proactive after all. It's likely any necessary changes that will need to happen will only happen once people reach a breaking point and start protesting. — Mr Bee
The irony is that it seems like AI is going after the jobs that people found more meaningful, like creative and white collar jobs. It's the more monotonous blue collar jobs that are more secure for now, at least until we see more progress made in the realm of robotics. Once automation takes over both then I don't know where that will leave us. — Mr Bee
We'll see. Maybe I've just been a pessimist recently (nothing about what's currently going on in the world is giving me much to be hopeful about) but I can just as easily see how this can end up going in a dystopian fashion. Maybe it's because I've watched one too many sci-fi movies. Right now, assuming people get UBI, Wall-E seems to be on my mind currently. — Mr Bee
Certainly a lot of them don't value their workers on a personal level a lot of the time, but I'd distinguish that from abuse. Of course, that isn't really the main concern here. — Mr Bee
I mean, if you want to just train a model and keep it in your room for the rest of its life, then there's nothing wrong with that, but like I said, that's not important. None of what you said seems to undermine the point you're responding to, unless I am misreading you here. — Mr Bee
All sorts of meaning could be projected onto it, but my intention probably wouldn't show up anywhere because of the way I made it. The words I entered had nothing to do with this content. It's a result of using one image as a template and putting obscure, meaningless phrases in as prompts. What you get is a never-ending play on the colors and shapes in the template image, most of which surprise me. My only role is picking what gets saved and what doesn't. This is a technique I used with Photoshop, but the possibilities just explode with AI. And you can put the AI images into Photoshop and then back into AI. It goes on forever. — frank
I think the real reason art loses value with AI is the raw magnitude of output it's capable of. — frank
You claim it's identical merely by appeal to a perceived similarity to private legal use. But being similar is neither sufficient nor necessary for anything to be legal. Murder in one jurisdiction is similar to legal euthanasia in another. That's no reason to legalize murder. — jkop
Corporate engineers training an Ai-system in order to increase its market value is obviously not identical to private fair use such as visiting a public library. — jkop
We pay taxes, authors or their publishers get paid by well established conventions and agreements. Laws help courts decide whether a controversial use is authorized, fair or illegal. That's not for the user to decide, nor for their corporate spin doctors. — jkop
If you do not want the AI to know a particular information, then simply do not put that information onto the internet. — chiknsld
I'll give you an example: You are a scientist working on a groundbreaking theory that will win you the Nobel Prize (prestige) but the AI is now trained on that data and then helps another scientist who is competing with you from another country. This would be an absolutely horrible scenario, but is this the AI's fault, or is it the fault of the scientist who inputted valuable information?
In this scenario, both scientists are working with AI.
We all have a universal responsibility to create knowledge (some more than others) but you also do not want to be a fool and foil your own plans by giving away that knowledge too quickly. It's akin to the idea of "casting pearls" — chiknsld
However, corporations do have a fiduciary requirement to generate profit for their shareholders. So, making money is the name of the game. The people who make up corporations, especially when engaged in new, cutting edge projects, clearly have lots of intellectual interests aside from the fiduciary requirement. I'm sure developing AI is an engrossing career for many people, requiring all the ingenuity and creativity they can muster. — BC
but the final uses to which this technology will be applied are not at all clear, and I don't trust corporations -- or their engineers -- to just do the right thing, — BC
Just an example. True enough, it's not quite the same as AI. But corporations regularly do things that turn out to be harmful--sometimes knowing damn well that it was harmful--and later try to avoid responsibility. — BC
I was thinking of intention as in a desire to create something meaningful. An artist might not have any particular meaning in mind, or even if they do, it's somewhat irrelevant to the meaning the audience finds. So it's obvious that AI can kick ass creatively. In this case, all the meaning is produced by the audience, right? — frank
Yea, an AI artist could create a century's worth of art in one day. I don't really know what to make of that. — frank
I don't know enough about the processes in detail, but just a first glance impression gives off the feeling they're different. — Mr Bee
I don't think Chat-GPT is as intelligent as a human. It doesn't behave the way a human intelligence would. Can I explain the basis for that? No. Does that mean I think it's magic then? Not at all. — Mr Bee
The thing is that for a lot of people, generative AI seems like a solution looking for a problem. — Mr Bee
I think this video sums it up pretty nicely: — Mr Bee
According to some reports, AI could replace hundreds of millions of jobs. If it doesn't replace them with anything else, then brushing off the economic disruption to people's lives without considering policies like UBI is the sort of thinking that sets off revolutions. — Mr Bee
When you're too cheap to hire someone to create something, then you're probably also too lazy to fix the inevitable problems that come with generated content. — Mr Bee
I can imagine the top art companies, like say a Pixar or a Studio Ghibli, focusing solely on human art, in particular because they can afford it. I don't see them relying on AI. Like a high-end restaurant that makes its food from scratch without any manufactured food, they'll probably continue to exist.
There will also be companies that use AI to a certain extent as well, and then companies that rely on it too much to the point of being low-end. — Mr Bee
None of this really addresses their concern about financial stability. I fear that this new technology just gives more leverage over a group of people who have been historically underpaid. I hope it ends up well, but I'm not gonna brush off these concerns as luddite behavior. — Mr Bee
Not at all. They're just not allowed to use their non-transformative output based on those references and expect to get paid for it. Like I said before, if you want to look at a bunch of copyrighted material and download it on your computer, that's fine since all you're doing is consuming. — Mr Bee
No one is allowed to do whatever they want. Is private use suddenly immune to the law? I don't think so.
Whether a particular use violates the law is obviously not for the user to decide. It's a legal matter. — jkop
Because a mere intention to want to create a painting of a park doesn't get to the interesting parts about how our brains generate that image in our heads from what we know. Of course I don't know much about creativity, neuroscience, or AI like I said before, so I'm gonna avoid deeper conversations and skip over the following paragraphs you've written for the sake of time. — Mr Bee
They certainly deal with a lot of criticism themselves, if you're implying they don't. Tracing isn't exactly a widely accepted practice. — Mr Bee
I'm willing to reject dualism as well, though I'm not sure why you're attributing this and indeterminism to people who just believe that human creativity and whatever is going on in diffusion models aren't equivalent. — Mr Bee
I'm not saying that the human brain isn't a machine. I'm just saying that there are differences between the architecture of human brains and AI diffusion models, something that may reveal itself with a further understanding of neuroscience and AI. — Mr Bee
Given how disruptive AI can be to all walks of society, I think that is a reason for pause, lest we end up creating a very large societal backlash. — Mr Bee
Those were more new mediums, new ways of making art and music, than something that could completely replace what artists do. I'd look at the invention of the camera and its relation to portrait artists as a better example. — Mr Bee
Honestly, we should all be concerned about that, because if they're fine with it then artists are out of a job and we as consumers will have to deal with sloppier art. Already I see advertisements where people have six fingers on a hand. — Mr Bee
Because they're the ones giving most artists a job at this point and they need those jobs. Unfortunately that's the society we live in. — Mr Bee
They're both related. If the output process is not considered transformative enough and the input contains copyrighted material, then it's illegal. — Mr Bee
I don't think intention is a requirement of artistic output. An artist may not have anything to say about what a particular work means. It just flows out the same way dreams do. Art comes alive in the viewer. The viewer provides meaning by the way they're uniquely touched. — frank
The painting of the Mona Lisa is a swarm of atoms. A forgery of the painting is also a swarm of atoms. But interpreting the nature of these different swarms of atoms is neither sufficient nor necessary for interpreting them as paintings, or for knowing that one of them is a forgery. — jkop
Whether something qualifies for copyright or theft is a legal matter. Therefore, we must consider the legal criteria, and, for example, analyse the output, the work process that led to it, the time, people involved, context, the threshold of originality set by the local jurisdiction and so on. You can't pre-define whether it is a forgery in any jurisdiction before the relevant components exist and from which the fact could emerge. This process is not only about information, nor swarms of atoms, but practical matters for courts to decide with the help of experts on the history of the work in question. — jkop
Regarding the training of Ai-systems by allowing them to scan and analyse existing works, then I think we must also look at the legal criteria for authorized or unauthorized use. — jkop
Doesn't matter whether we deconstruct the meanings of 'scan', 'copy', 'memorize' etc. or learn more about the mechanics of these systems. They use the works, and what matters is whether their use is authorized or not. — jkop
Just a random question. Had someone sold the database of all posts of a forum (not this one, in my mind), would that be considered theft or public information? — Shawn
#1. Make money. — BC
I do not know what percent of the vast bulk of material sucked up for AI training is copyrighted, but thousands of individual and corporate entities own the rights to a lot of the AI training material. I don't know whether the most valuable part was copyrighted currently, or had been copyrighted in the past, nor how much was just indifferent printed matter. Given the bulk of material required, it seems likely that no distinction was made. — BC
The many people who produce copyrighted material haven't volunteered to give up their ideas. — BC


I was unable to generate the image due to content policy restrictions related to the specific artistic style you mentioned. If you'd like, I can create an image inspired by a surreal landscape featuring a windmill and stone structures at sunset, using a more general artistic approach. Let me know how you'd like to proceed!

So your claim is that adding intentionality to current diffusion models is enough to bridge the gap between human and machine creativity? Like I said before I don't have the ability to evaluate these claims with the proper technical knowledge but that sounds difficult to believe. — Mr Bee
Okay, but in most instances artists don't trace. — Mr Bee
I don't see how originality is undermined by determinism. I'm perfectly happy to believe in determinism, but I also believe in creativity all the same. The deterministic process that occurs in a human brain to create a work of art is what we call "creativity". Whether we should apply the same to the process in a machine is another issue. — Mr Bee
Indeed, the definitions are very arbitrary and unclear. That was my point. It was fine in the past, since we all agree that most art created by humans is a creative exercise, but in the case of AI it gets more complicated, since now we have to be clearer about what creativity is and whether AI-generated art meets the standard to be called "creative". — Mr Bee
However, the problem is that in today's art industry we don't just have artists and consumers but middlemen publishers who hire the former to create products for the latter. The fact is a lot of artists depend on these middlemen for their livelihoods, and unfortunately these people 1) don't care about the quality of the artists they hire and 2) prioritize making money above all else. For corporations, artists merely create products for them to sell and nothing more, so when a technology like AI comes along which produces products for a fraction of the cost in a fraction of the time, they will more than happily lay off their human artists for what they consider to be "good enough" replacements, even if the consumers they sell these products to will ultimately consider them inferior.
There are people who take personal commissions but there are also those that do commissions for commercial clients who may want an illustration for their book or for an advertisement. Already we're seeing those types of jobs going away because the people who commissioned those artists don't care in particular about the end product so if they can get an illustration by a cheaper means they'll go for it. — Mr Bee
Of course the data collection isn't the problem, but what people do with it. It's perfectly fine for someone to download a bunch of images and store them on their computer, but the reason why photobashing is considered controversial is that it takes that data and uses it in a manner that some consider to be insufficiently transformative. Whether AI's process is like that is another matter that we need to address. — Mr Bee
Sorry if I missed some of your points but your responses have been quite long. If we're gonna continue this discussion I'd appreciate it if you made your points more concise. — Mr Bee
You ask "Why is B theft?" but your scenario omits any legal criteria for defining theft, such as whether B satisfies a set threshold of originality.
How could we know whether B is theft when you don't show or describe its output, only its way of processing information? Then, by cherry-picking similarities and differences between human and artificial information processing, you push us to conclude that B is not theft. :roll: — jkop
One difference between A and B is this:
You give them the same analysis regarding memorizing and synthesizing of content, but you give them different analyses regarding intent and accountability. Conversely, you ignore their differences in the former, but not in the latter. — jkop
The processors in AI facilities lack intention, but AI facilities are owned and operated by human individuals and corporations who have extensive intentions. — BC
AGI doesn't necessarily have to think exactly like us, but human intelligence is the only known example of a GI that we have, and with regard to copyright laws it's important that the distinction between an AGI and a human intelligence not be all that wide, because our laws were made with humans in mind. — Mr Bee
The question is whether or not that process is acceptable or if it should be considered "theft" under the law. We've decided as a society that someone looking at a bunch of art and using it as inspiration for creating their own works is an acceptable form of creation. The arguments that I've heard from the pro-AI side usually try to equate the former with the latter, as if they're essentially the same. That much isn't clear though. My impression is that at the very least they're quite different and should be treated differently. That doesn't mean that the former is necessarily illegal though, just that it should be held to a different standard, whatever that may be. — Mr Bee
Depends on what we're talking about when we say that this hypothetical person "takes parts of those files and makes a collage out of them". The issue isn't really the fact that we have memories that can store data about our experiences, but rather how we take that data and use it to create something new. — Mr Bee
Because a court looks at the work, that's where the content is manifest, not in the mechanics of an Ai-system nor in its similarities with a human mind. — jkop
What's relevant is whether a work satisfies a set threshold of originality, or whether it contains, in part or as a whole, other copyrighted works. — jkop
There are also alternatives or additions to copyright, such as copyleft, Creative Commons, Public Domain etc. Machines could be "trained" on such content instead of stolen content, but the Ai industry is greedy: snagging people's copyrighted works, obfuscating their identity while exploiting their quality, increases the market value of the systems. Plain theft! — jkop
A and B are set up to acquire writing skills in similar ways. But this similarity is irrelevant for determining whether a literary output violates copyright law. — jkop
You blame critics for not understanding the technology, but do you understand copyright law? Imagine if the law was changed and gave Ai-generated content carte blanche just because the machines have been designed to think or acquire skills in a similar way as humans. That's a slippery slope to hell, and instead of a general law you'd have to patch the systems to counter each and every possible misuse. Private tech corporations acting as legislators and judges of what's right and wrong. What horror. — jkop
If your claim is that similarity between human and artificial acquisition of skills is a reason for changing copyright law, then my counter-argument is that such similarity is irrelevant. — jkop
What is relevant is whether the output contains recognizable parts of other people's work. — jkop
One might unintentionally plagiarize recognizable parts of someone else's picture, novel, scientific paper etc., and the lack of intent (hard to prove) might reduce the penalty, but it would hardly be controversial as a violation. — jkop
There is data, and at a deeper level there is what the data means. Can an AI algorithm ever discover what data means at this deeper level? — RussellA
That's one of the main issues right? How comparable human creativity is to that of AI. When an AI "draws upon" all the data it is trained on is it the same as when a human does the same like in the two scenarios you've brought up?
At the very least it can be said that the consensus is that AIs don't think like we do, which is why we don't see tech companies proclaiming that they've achieved AGI. There are certainly some clear shortcomings to how current AI models work compared to human brain activity, though given how little we know about neuroscience (in particular the process of human creativity) and how much less we seem to know about AI, I'd say that the matter of whether we should differentiate human inspiration and AI's "inspiration" is currently at best unclear. — Mr Bee
It's not like photobashing isn't controversial too mind you. So if you're saying that AI diffusions models are equivalent to that practice then that probably doesn't help your argument. — Mr Bee
According to you, or copyright law? — jkop
If 'feeding', 'training', or 'memorizing' does not equal copying, then what is an example of copying? It is certainly possible to copy an original painting by training a plagiarizer (human or artificial) in how to identify the relevant features and from these construct a map or model for reproductions or remixes with other copies for arbitrary purposes. Dodgy and probably criminal. — jkop
You use the words 'feeding', 'training', and 'memorizing' for describing what computers and minds do, and talk of neural information as if that would mean that computers and minds process information in the same or a similar way. Yet the similarity between biological and artificial neural networks has decreased since the 1940s. I've never seen a biologist or neuroscientist talk of brains as computers in this regard. Look up Susan Greenfield, for instance. — jkop
Your repeated claims that I (or any critic) misunderstand the technology are unwarranted. You take it for granted that a mind works like a computer (it doesn't) and ramble on as if the perceived similarity would be an argument for updating copyright law. It's not. — jkop
The user of the system is accountable — jkop
possibly its programmers as they intentionally instruct the system to process copyright protected content in order to produce a remix. It seems fairly clear, I think, that it's plagiarism and corruption of other people's work. — jkop
What's similar is the way they appear to be creative, but the way they appear is not the way they function. — jkop
A machine's iterative computations and growing set of syntactic rules (passed off as "learning") are observer-dependent and, as such, very different from a biological observer's ability to form intent and create or discover meanings. — jkop
A) A person has a perfect photographic memory. They go to the library every day for 20 years and read every book in that library. They then write a short story drawing upon all that has been read and seen in that library during these 20 years.
B) A tech company lets an algorithm read through all the books at the same library, producing a Large Language Model with that library as its training data. It's then prompted to write a short story and draws upon all it has read and seen through the algorithm. — Christoffer
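As a toy illustration of scenario B, the following sketch builds a tiny text generator from a "library" of texts. The bigram Markov chain here is a hypothetical, drastically simplified stand-in for an LLM (real LLMs work very differently), but it makes the point under debate concrete: every word the machine emits is derived statistically from the works it was trained on.

```python
import random
from collections import defaultdict

def train(corpus_texts):
    """Build a bigram model: for each word, record which words followed it."""
    model = defaultdict(list)
    for text in corpus_texts:
        words = text.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def generate(model, start, length=10, seed=0):
    """Emit a 'story' by repeatedly sampling a recorded successor word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break  # dead end: the last word never had a successor in training
        out.append(rng.choice(successors))
    return " ".join(out)

# A hypothetical two-book "library" used as training data.
library = ["the cat sat on the mat", "the dog sat on the rug"]
model = train(library)
story = generate(model, "the")
```

Note that `story` can only ever contain words that appear in `library`; whether such derivation counts as reading-and-inspiration or as copying is exactly the question the A/B comparison raises.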
Neither man nor machine becomes creative by simulating some observer-dependent appearance of being creative. — jkop
I think you meant my comment was written with A.I., and it was in fact. Good looking out, Lionino. I am messing with the newly updated Bing Copilot. My post was translated here by me with the help of AI, but I did not directly copy and paste it from the chat. — Kizzy
