Is AI a philosophical dead-end? The belief with AI is that somehow we can replicate or recreate human thought (and perhaps emotions one day) using machinery and electronics. — Nemo2124
The modern native populations of Europe largely descend from three distinct lineages: Mesolithic hunter-gatherers, descended from populations associated with the Paleolithic Epigravettian culture; Neolithic Early European Farmers who migrated from Anatolia during the Neolithic Revolution 9,000 years ago; and Yamnaya Steppe herders who expanded into Europe from the Pontic–Caspian steppe of Ukraine and southern Russia in the context of Indo-European migrations 5,000 years ago. — https://en.m.wikipedia.org/wiki/Europe
Exponential population growth has been made possible by the exponential growth in technologies, notably medical technology. — Janus
If the private use is within law and identical to what these companies do, then it is allowed, and that also means that the companies do not break copyright law with their training process. — Christoffer
Why are artists allowed to do whatever they want in their private workflows, but not these companies? — Christoffer
Is it possible to have a healthy economy which is 'steady state'? Not expanding and not shrinking? — BC
Is socialism made moot through Universal Basic Income? — Shawn
Why is it irrelevant? — Christoffer
How does this prove we aren't a simulation though? — Benj96
Again, I ask... what is the difference in scenario A and scenario B? Explain to me the difference please. — Christoffer
So, what are you basing your counter arguments on? What exactly is your counter argument? — Christoffer
If the user asks for an intentional plagiarized copy of something, or a derivative output, then yes, the user is the only one accountable as the system does not have intention on its own. — Christoffer
But this is still a misunderstanding of the system and how it works. As I've stated in the library example, you are yourself feeding copyrighted material into your own mind that's synthesized into your creative output. Training a system on copyrighted material does not equal copying that material; THAT is a misunderstanding of what a neural system does. It memorizes the data in the same way a human memorizes data, as neural information. You are confusing the "intention" that drives creation with the underlying physical process. — Christoffer
Ever since I watched the movie "The Matrix" I have been troubled by how to tell what is real and what is not. — Truth Seeker
the way they function is so similar to the physical and systemic processes of human creativity that any ill-intent to plagiarize can only be blamed on a user having that intention. All while many artists have been directly using other people's work in their production for decades in a way that is worse than how these models synthesize new text and images from their training data. — Christoffer
How can an argument for these models being "plagiarism machines" be made when the system itself doesn't have any intention of plagiarism? — Christoffer
I don't even know what "I like ice cream" means when I think it, let alone say it. It is expressed and heard as a process which will have an effect. — ENOAH
Epistemology includes criticism about the limits of our scientific knowledge and it warns us against the idea that we can get ultimately objective knowledge. — Angelo Cannata
So what does it mean "epistemically objective"? — Angelo Cannata
Pray tell, what is your opinion on the state of global education? For me, the critical thinker is resilient to rhetoric and propaganda; the fact-learner, however, is not. — Benj96
Imagine that one day, you get the best idea in the world. You go to tell your friend, but then you realize something: You don't have any words to describe your idea. Is this scenario possible? — Scarecow
How do we decide what is fact and what is opinion? — Truth Seeker
There are more than 8.1 billion humans on Earth and our conflicting ideologies, religions, worldviews and values divide us. — Truth Seeker
I worry that we will destroy ourselves and all the other species with our conflicts. — Truth Seeker
I think that if we could work out what is fact and what is opinion, it would help us get on with each other better. — Truth Seeker
Searle believes that brain matter has some special biological property that enables mental states to have intrinsic intentionality, as opposed to the mere derived intentionality that printed texts and the symbols algorithmically manipulated by computers have. But if robots and people exhibited the same forms of behavior and made the same reports regarding their own phenomenology, how would we know that we aren't also lacking what it is that the robots allegedly lack? — Pierre-Normand
Are biologically active molecules not in some ways also "symbols", i.e. structures which "say" something: exert a particular defined or prescribed effect? — Benj96
However, my point was about the relevance of isomorphisms. Pointing out that there can be irrelevant isomorphisms, such as between a constellation and a swarm of insects, doesn't change the fact that there are relevant isomorphisms (such as between the shape of bird wings and airplane wings, or between biological neural nets and artificial neural nets). — wonderer1
Since artificial neural networks are designed for information processing which is to a degree isomorphic to biological neural networks, this doesn't seem like a very substantive objection to me. It's not merely a coincidence. — wonderer1
Consider the system reply and the robot reply to Searle's Chinese Room argument. Before GPT-4 was released, I was an advocate of the robot reply, myself, and thought the system reply had a point but was also somewhat misguided. In the robot reply, it is being conceded to Searle that the robot's "brain" (the Chinese Room) doesn't understand anything. But the operation of the robot's brain enables the robot to engage in responsive behavior (including verbal behavior) that manifests genuine understanding of the language it uses. — Pierre-Normand
I'm not sure how that follows. The authors of the paper you linked made a good point about the liabilities of iteratively training LLMs with the synthetic data that they generated. That's a common liability for human beings also, who often lock themselves into epistemic bubbles or get stuck in creative ruts. Outside challenges are required to keep the creative flame alive. — Pierre-Normand
their training data and interactions with humans do ground their language use in the real world to some degree. Their cooperative interactions with their users furnish a form of grounding somewhat in line with Gareth Evans' consumer/producer account of the semantics of proper names — Pierre-Normand
Unless consciousness is a product of complexity. As we still don't know what makes matter aware or animate, we cannot exclude the possibility that it is the complexity of information transfer that imbues this "sensation". If that is the case, and consciousness is indeed a high grade of negative entropy, then it's not so far-fetched to believe that we can create it in computers. — Benj96
...embodiment, episodic memory, personal identity and motivational autonomy. Those are all things that we can see that they lack (unlike mysterious missing ingredients like qualia or "consciousness", which we can't even see fellow human beings to have). Because they lack all of those things, the sorts of intelligence and understanding that they manifest are of a radically different nature than our own. But it's not thereby a mere simulacrum, and it is worth investigating, empirically and philosophically, what those differences amount to. — Pierre-Normand
Of course, this is all still quite different from the way human cognition works, with our [sic] biological neural networks and their own unique patterns of parallel and serial processing. And there's still much debate and uncertainty around the nature of machine intelligence and understanding.
But I think the transformer architecture provides a powerful foundation for integrating information and dynamically shifting attention in response to evolving goals and contexts. It allows for a kind of flexible, responsive intelligence that goes beyond simple serial processing. — Pierre-Normand
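The "dynamically shifting attention" described above has a concrete mechanical core. As an illustration only (not anything the quoted poster wrote), here is a minimal sketch of scaled dot-product self-attention, the operation at the heart of the transformer architecture: each token's query is compared against every token's key, and the resulting weights decide how much of each token's value flows into the output. All variable names and toy values are my own assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query mixes the value vectors, weighted by its
    similarity to every key (softmax over scaled dot products)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # query-key similarity matrix
    # numerically stable softmax over each row
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V               # weighted blend of values

# Toy example: three tokens with 4-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(x, x, x)  # self-attention
print(out.shape)  # (3, 4)
```

Because the weights are recomputed for every input, which tokens "attend to" which others shifts with context, which is the flexibility the quote gestures at, in contrast to a fixed serial pipeline.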
But then, the actor's ability to imitate the discourse of a physicist would slowly evolve into a genuine understanding of the relevant theories. I believe that intellectual understanding, unlike the ability to feel pain or enjoy visual experiences, cannot be perfectly imitated without the imitative ability evolving into a form of genuine understanding. — Pierre-Normand
there remains a stark distinction between the flexible behavior of an AI that can "understand" an intellectual domain well enough to respond intelligently to any question about it, and an actor who can only fool people lacking that understanding. — Pierre-Normand
But you could say the same about me. Am I a simulation or a duplication of what another human might say in response to your commentary? — Benj96
The second thing is how we give it both "an objective" and "free auto-self-augmentation" in order to reason. And curiously, could that be the difference between something that feels/experiences and something that is lifeless, programmed and instructed? — Benj96
"Time flies like an arrow; fruit flies like a banana." — Pierre-Normand
For them to see when standing what we see when hanging upside down, it must be that their eyes and/or brain work differently. — Michael
I’m saying that whether or not sugar tastes sweet is determined by the animal’s biology. It’s not “right” for it to taste sweet and “wrong” for it to taste sour. Sight is no different. It’s not “right” that light with a wavelength of 700nm looks red and not “right” that the sky is “up” and the ground “down”. These are all just consequences of our biology, and different organisms with different biologies can experience the world differently. — Michael
It is neither a contradiction, nor physically impossible, for some organism to have that very same veridical visual experience when standing on their feet. It only requires that their eyes and/or brain work differently to ours.
Neither point of view is "more correct" than the other.
Photoreception isn't special. It's as subjective as smell and taste. — Michael