Most of the solutions being sought by private enterprise are for the maximization of profit by various means and methods. That need not concern us, since the tasks do not require creative or original thinking, just even faster and more efficient computing and robot control. Most of the solutions sought by government agencies are for expediting and streamlining office functions (cutting costs) or increasing military capability. Again, not so much more clever than last year's computers and weapon systems.

So much money and effort is being put into it by governments as an investment for future solutions. — Jack Cummins
As an aid to research, of course it's embraced by scientists. Also, just for itself: the next generation of even more sophisticated tech. That's not quite the same thing as embracing it scientifically - at least, if I understand that phrase correctly.

Many of its developments involve medical technology and engineering diagnostics. This goes along with ideas of technological progress and makes it appear as an idea to be embraced scientifically. — Jack Cummins
How deep into what? I'm sure it can calculate more, better, faster than the previous generation. It can compare, collate, distill and synthesize existing human knowledge and theories faster than any human. It can apply critical analyses that humans have already worked out. Most humans are not original; they build on the knowledge of their predecessors. Whether an AI can add something new remains to be seen.

The technology may identify problems and look at solutions, but how deep does it go? — Jack Cummins
Mass and social media have already done that.

It may be a tool, but the danger is that it will be used to replace critical human thinking — Jack Cummins
About some things, yes. Wherever objective facts are available, a computer can draw objective conclusions. But that doesn't mean the owners will share those truths with the rest of us. If the information is incomplete or inaccurate, the computer can make even less sense of it than we can, since it can't fill in with intuition. About the things computers can't fathom, we each have some perception of a truth - but we're not objective.

Does the idea of artificial intelligence embrace the seeking of objective 'truth'? — Jack Cummins
As for AI, sentience and philosophy, the issue is that without sentience AI does not have life experiences. As it is, it doesn't have parents, a self-image or sexuality. It does not have reflective consciousness and is therefore not able to attain wisdom. — Jack Cummins
You are correct to say that the idea of artificial intelligence doesn't really reach 'intelligence' or consciousness. The problem may be that the idea has become mystified in an unhelpful way. The use of the word 'intelligence' doesn't help. Also, it may be revered as if it were 'magic', like a new mythology of gods. — Jack Cummins
We could start by defining "intelligence" and "consciousness".

How do you think that it may be examined and critiqued from an analytical and philosophical point of view? — Jack Cummins
Considering how many people today are lazy thinkers, I think there is a growing risk that people will allow AI to do all their thinking for them. The key is to realize that AI is a tool and not meant to take over all thinking, or else your mind will atrophy. I use it for repetitive and mundane tasks in programming so that I can focus more on higher-order thinking. When you do seek assistance in your thinking, you want to make sure you understand the answer given, not just blindly copy and paste the code without knowing what it is actually doing.

Also, how important is it to question its growing role in so many areas of life? To what extent does it compare with or replace human innovation and creativity? — Jack Cummins
There are monists who are neither materialists nor idealists. For them, intelligence is simply a process that anything can have if it fits the description. Don't we need to define the terms first to be able to say what has it and what doesn't?

I don't think it raises any questions about intelligence or consciousness at all. It is useful and interesting on its own merit, but people who take this to equal intelligence are, I think, deluding themselves into a very radical dualism which collapses into incoherence. — Manuel
Poor example. Cardiologists do not use a computer to simulate the pumping of blood. They use an artificial heart: a mechanical device that pumps and circulates actual blood inside your body.

To make this concrete and brief: suppose we simulate on a computer a person's lungs and all the functions associated with breathing. Are we going to say that the computer is breathing? Of course not. It's pixels on a screen; it's not breathing in any meaningful sense of the word. — Manuel
Then are we deluding ourselves whenever we use the term "intelligent" to refer to ourselves?

But it's much worse for thinking. We do not know what thinking is for us. We can't say what it is. If we can't say what thinking is for us, how are we supposed to do that for a computer? — Manuel
By saying that AI developers and computer scientists are deluding themselves, you seem to imply that AI computer scientists should be calling philosophers to fix their computers and software. — Harry Hindu
Cardiologists do not use a computer to simulate the pumping of blood. — Harry Hindu
Then are we deluding ourselves whenever we use the term "intelligent" to refer to ourselves? — Harry Hindu
You were talking about people who attribute terms like "intelligence" to LLMs as being deluded. My point is that philosophers seem to think they know more about LLMs than AI developers do.

We are talking about LLMs, not problems with software. — Manuel
What is understanding? How do you know that you understand anything if you never end up properly mimicking the thing you are trying to understand?

That's the point.
You seem to think that mimicking something is the same as understanding it. — Manuel
What goes on in the head and how do we show it?

The point is that mimicking behavior does nothing to show what goes on in a person's head. — Manuel
Straw man. That isn't what I am saying at all. Mirror-makers, botanists and astrophysicists haven't started calling mirrors, plants and planetary orbits artificially intelligent. AI developers are calling LLMs artificially intelligent, with the term "artificial" referring to how it was created - by humans instead of "naturally" by natural selection. I could go on about the distinction between artificial and natural here, but that is for a different thread.

Unless you are willing to extend intelligence to mirrors, plants and planetary orbits. If you do, then the word loses meaning. — Manuel
Why? What makes a mass of neurons intelligent, but a mass of silicon circuits not?

If you don't, then let's home in on what makes most sense: studying people who appear to exhibit this behavior. Once we get a better idea of what it is, we can proceed to extend it to animals.
But to extend that to non-organic things is a massive leap. It's playing with a word as opposed to dealing with a phenomenon. — Manuel
You were talking about people that attribute terms like "intelligence" to LLMs as being deluded. My point is that philosophers seem to think they know more about LLMs than AI developers do. — Harry Hindu
What is understanding? How do you know that you understand anything if you never end up properly mimicking the something you are trying to understand? — Harry Hindu
AI developers are calling LLMs artificially intelligent, with the term, "artificial" referring to how it was created - by humans instead of "naturally" by natural selection. I could go on about the distinction between artificial and natural here but that is for a different thread: — Harry Hindu
Why? What makes a mass of neurons intelligent, but a mass of silicon circuits not? — Harry Hindu
Fair point. The same could be said about philosophers not agreeing on what is intelligent and how to define intelligence. Even you have agreed that we may be deluding ourselves in the use of the term. What these points convey to me is that we need a definition to start with.

No, they do not. But when it comes to conceptual distinctions, such as claiming that AI is actually intelligent, that is a category error. I see no reason why philosophers shouldn't say so.
But to be fair, many AI experts also say that LLMs are not intelligent. So that may convey more authority to you. — Manuel
That there is more to being a dog than walking on four legs and sniffing anuses.

Understanding is an extremely complicated concept that I cannot pretend to define exhaustively. Maybe you could define it and see if I agree or not.
As I see it, understanding is related to connecting ideas together, seeing cause and effect, intuiting why a person does A instead of B, giving reasons for something as opposed to something else, etc.
But few, if any, words outside mathematics have full definitions. Numbers, probably.
We can mimic a dog or a dolphin. We can get on four legs and start using our nose, or we can swim and pretend we have capacities we lack.
What does that tell you though? — Manuel
How so? If we can substitute artificial devices for organic ones in the body, there does not seem to be much of a difference in understanding. The difference, of course, is the brain - the most complex thing (both organic and inorganic) in the universe. But this is just evidence that we should at least be careful in how we talk about what it does, how it does it, and how other things (both organic and inorganic) might be similar or different.

Yeah, it is artificial. But the understanding between something artificial and something organic is quite massive. — Manuel
The automated customer service ones usually come with a drop-box of questions you can ask, and if your problem isn't covered by those possibilities, the bot doesn't understand you. These are not at all intelligent programs; they're fairly primitive. It would be nice if you could pick up the phone and have your call answered - within minutes, not hours - by an entity who a) speaks your language, b) knows the service or industry they speak for, c) is bright and attentive enough to understand the caller's question even if the caller doesn't know the correct terms, and d) is motivated to help.

It would be good to think that it would be about efficiency, but my own experience of AI, such as telephone lines, has been so unhelpful. — Jack Cummins
What these points convey to me is that we need a definition to start with. — Harry Hindu
How so? If we can substitute artificial devices for organic ones in the body there does not seem like much of a difference in understanding. — Harry Hindu
I imagine that AGI will not primarily benefit humans, and will eventually surpass us in every cognitive way. Any benefits to us, I also imagine (best case scenario), will be fortuitous by-products of AGI's hyper-productivity in all (formerly human) technical, scientific, economic and organizational endeavors.

'Civilization' metacognitively automated by AGI so that options for further developing human culture (e.g. arts, recreation, win-win social relations) will be optimized – but will most of us / our descendants take advantage of such an optimal space for cultural expression or [will we] just continue amusing ourselves to death? — 180 Proof
Culturally, it is probably equivalent to experimenting with LSD. — Jack Cummins
The biggest problem is the creation of consciousness itself, which may defy the building of a brain and nervous system, as well as body parts. Without this, the humans fabricated artificially are likely to be like Madame Tussauds models with mechanical voices and movements, even simulated thought. Interior consciousness, or substance, is likely to be lacking. It comes down to the creation of nature itself and a probable inability to create the spark of life inherent in nature and consciousness. — Jack Cummins
It may be shown after great errors that AI is not as intelligent as human beings, as it is too robotic and concrete. — Jack Cummins
I don't have a good definition. Problem solving? Surviving? Doing differential calculus? Tricking people?
It's very broad. I'd only be very careful in extrapolating from these things we do which we call intelligent to other things. Dogs show intelligent behavior, but they can't survive in the wild. Are they smart or stupid?
It's tricky. — Manuel
So what else is missing if you are able to duplicate the function? Does it really matter what material is being used to perform the same function? Again, what makes a mass of neurons intelligent but a mass of silicon circuits not? What if engineers designed an artificial heart that lasts much longer and is structurally more sound than an organic one?

Sure, we have a good amount of structural understanding about some of the things hearts (and other organs) do. As you mentioned with the Chinese case above, it's nowhere near exhaustive. It serves important functional needs, but "function", however one defines it, is only a part of understanding. — Manuel