• Jack Cummins
    5.4k
    I am aware that there have been a number of threads on artificial intelligence, but it is such an issue in the world that it requires a lot of questioning. So much money and effort is being put into it by governments as an investment for future solutions. Many of its developments involve medical technology and engineering diagnostics. This goes along with ideas of technological progress and makes it appear as an idea to be embraced scientifically.

    However, there are so many questions, especially about what constitutes intelligence. The technology may identify problems and look at solutions, but how deep does it go? It may be that the most sophisticated forms of artificial intelligence will need to go beyond the superficial. It may be a tool, but the danger is that it will be used to replace critical human thinking, and the concept of intelligence itself is open to so much scrutiny.

    In some circles, IQ tests were seen as an objective measure of intelligence, but this may be a restrictive understanding. There is also the idea of emotional intelligence, which is more about understanding the psychological aspects of life and 'truth'. Does the idea of artificial intelligence embrace the seeking of objective 'truth'?

    It is not possible to avoid the issue of artificial intelligence because it is becoming an aspect of daily life, including both science and the arts. However, rather than simply being seen as an aspect of development in the twenty-first century, it raises questions about the nature of intelligence and consciousness in human judgement. How do you think that it may be examined and critiqued from an analytical and philosophical point of view? Also, how important is it to question its growing role in so many areas of life? To what extent does it compare with or replace human innovation and creativity?
  • Vera Mont
    4.5k
    So much money and effort is being put into it by governments as an investment for future solutions.Jack Cummins
    Most of the solutions being sought by private enterprise are for the maximization of profit by various means and methods. That need not concern us, since the tasks do not require creative or original thinking, just even faster and more efficient computing and robot control. Most of the solutions sought by government agencies are for expediting and streamlining office functions (cutting cost) or increasing military capability. Again, not so much more clever than last year's computers and weapon systems.
    Many of its developments involve medical technology and engineering diagnostics. This goes along with ideas of technological progress and makes it appear as an idea to be embraced scientifically.Jack Cummins
    As an aid to research, of course it's embraced by scientists. Also, just for itself: the next generation of even more sophisticated tech. That's not quite the same thing as embracing it scientifically - at least, if I understand that phrase correctly.
    The technology may identify problems and look at solutions, but how deep does it go?Jack Cummins
    How deep into what? I'm sure it can calculate more, better, faster than the previous generation. It can compare, collate, distill and synthesize existing human knowledge and theories faster than any human. It can apply critical analyses that humans have already worked out. Most humans are not original; they build on the knowledge of their predecessors. Whether an AI can add something new remains to be seen.
    It may be a tool, but the danger is that it will be used to replace critical human thinkingJack Cummins
    Mass and social media have already done that.
    Does the idea of artificial Intelligence embrace the seeking of objective 'truth'?Jack Cummins
    About some things, yes. Whatever presents available objective facts, a computer can draw objective conclusions. But that doesn't mean the owners will share those truths with the rest of us. If the information is incomplete or inaccurate, the computer can make even less sense of it than we can, since it can't fill in with intuition. About the things computers can't fathom, we each have some perception of a truth - but we're not objective.
    As for the philosophical aspect of artificial intelligence, it's not here yet. However cleverly a computer has been programmed, it is not conscious or sentient. If/when it develops an independent personality, we don't know how that personality will manifest. Until then, we can only speculate about its uses, not its nature.
  • Manuel
    4.2k
    I don't think it does raise any questions about intelligence or consciousness at all. It is useful and interesting on its own merit, but people who take this to equal intelligence are, I think, deluding themselves into a very radical dualism which collapses into incoherence.

    To make this concrete and brief: suppose we simulate on a computer a person's lungs and all the functions associated with breathing. Are we going to say that the computer is breathing? Of course not. It's pixels on a screen; it's not breathing in any meaningful sense of the word.

    But it's much worse for thinking. We do not know what thinking is for us. We can't say what it is. If we can't say what thinking is for us, how are we supposed to say what it is for a computer?

    So sure, engage with "AI" and LLMs and all that, but be cognizant that these things are fancy tools, telling us nothing about intelligence, thinking or consciousness. Might as well say a mirror is conscious too.
  • 180 Proof
    15.6k
    I don't think it [AI, LLMs] does raise any questions about intelligence or consciousness at all.Manuel
    :100:

    :up: :up:
  • javi2541997
    6k
    An absolutely wonderful and well-written post. I wholeheartedly agree, and (I guess) I couldn't have put the point about the overrated robot (AI) better myself.
  • Wayfarer
    23.5k
    Have you been interacting much with any of the Large Language Models? If not, I suggest it is one way to get some insights into these questions. Not the only way, but it does help. I suggest creating a login for ChatGPT or Claude.ai or one of the others, which are accessible for free.

    Other than that, what @Manuel said.
  • Jack Cummins
    5.4k

    Thanks for your reply, and I am glad that you were the first to reply, because one situation which led me into a 'black hole' of depression was when I realised that some people in the recent creative writing activity threads had used AI as an aid. It was clear that they had used it as a tool in the true spirit of the creative process, which matters. Of course, I realise, as @Jamal said in a reply to me during the activity, that technology has always been used by writers. Nevertheless, the use of AI in the arts bothers me because it may become too central and come to be expected.

    With your point about it being used for profit, that is my concern about its politics. In England it appears that cuts in so many aspects of human welfare are being made in order to fund advances in AI. Many people are already struggling with poverty, especially as unemployment increases while humans are replaced by machines. It seems as if those who are out of work are expected to live on the lowest possible income in order for AI to be developed in an outstanding way. This is backed up by the argument that it is an incentive for everybody to work, but that is at a time when so many humans are being made redundant by AI.

    It would be good to think that it would be about efficiency, but my own experience of AI, such as automated telephone lines, has been so unhelpful. It seems to treat any inconsistencies in information as a basis for preventing basic tasks. This may be seen as part of risk assessment, such as for fraud, but it reduces life to data, and the reality is that many people's lives can't be reduced that simply. That is why I query whether it goes deep enough.

    As for AI, sentience and philosophy, the issue is that without sentience AI does not have life experiences. As it is, it doesn't have parents, self-image or sexuality. It does not have reflective consciousness and is thereby not able to attain wisdom.
  • Jack Cummins
    5.4k

    You are correct to say that the idea of artificial intelligence doesn't really reach 'intelligence' or consciousness. The problem may be that the idea has become mystified in an unhelpful way. The use of the word 'intelligence' doesn't help. Also, it may be revered as if it is 'magic', like a new mythology of gods.

    In trying to understand it, the definition which I find most helpful is by Daugherty and Wilson in 'Human + Machine: Reimagining Work in the Age of AI' (2018):
    'systems that extend human capability by sensing, comprehending, acting and learning.'
    This makes them appear less as forms in their own right. The problem may be that the idea has a connection with the philosophy of transhumanism, with all its science-fiction-like possibilities.
  • Jack Cummins
    5.4k

    I haven't used ChatGPT, as I haven't found the idea particularly exciting, but I will probably try it at some point. It is probably the cultural equivalent of experimenting with LSD. Of course, my comparison does make it seem like an adventure into multidimensionality, or into information as the fabric of the collective unconscious. This may be where it gets complicated, as systems don't necessarily have to be conscious, but do have some independent existence beyond human minds.
  • ZisKnow
    15


    I'm a frequent user of ChatGPT, and I've found its design makes it an excellent reflective tool for understanding and organizing your own thoughts, as well as for gathering and summarizing information from various sources. One common misconception is treating it as if it has an independent existence; it doesn't. ChatGPT works by determining the most statistically likely sequence of words to form a coherent response based on the input it receives.

    However, that doesn't make it invalid as a tool for understanding some of the basis of thought and consciousness; in many ways it could be seen as a kind of gestalt of human experience. It draws from the vast dataset of human knowledge, language, and ideas, reflecting back patterns and perspectives that can feel profound.
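    As a loose illustration of what "most statistically likely sequence of words" means, here is a toy next-word sampler in Python. It uses a tiny hand-built bigram table (every word and count here is invented for the example), not a real neural network, so it is a sketch of the principle only, not of how ChatGPT is actually implemented:

```python
import random

# Toy bigram "language model": for each word, the counts of the words
# observed to follow it in a (hypothetical, hand-built) corpus.
BIGRAM_COUNTS = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 2},
    "dog": {"sat": 1, "ran": 3},
    "sat": {"on": 4},
    "ran": {"to": 4},
    "on": {"the": 4},
    "to": {"the": 4},
}

def next_word(word, rng):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = BIGRAM_COUNTS[word]
    words = list(counts)
    return rng.choices(words, weights=[counts[w] for w in words])[0]

def generate(start, length, seed=0):
    """Chain next-word samples into a short 'statistically likely' sequence."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        if out[-1] not in BIGRAM_COUNTS:
            break
        out.append(next_word(out[-1], rng))
    return " ".join(out)

print(generate("the", 6))
```

    Scaled up from a hand-built bigram table to a neural network trained on a vast corpus, this "pick a likely continuation" loop is the basic shape of the process described above.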
  • Corvus
    4.1k
    As for AI, sentience and philosophy, the issue is that without sentience AI does not have life experiences. As it is, it doesn't have parents, self-image or sexuality. It does not have reflective consciousness and is thereby not able to attain wisdom.Jack Cummins

    Recently I bought a few items from some online shops, and the items were described by AI-generated text. When the items arrived, I found that most of the descriptions by the AI were wrong. It was just meaningless praise of the goods, without accuracy in detail or functionality.

    I had to return 2 of the 3 items for a full refund. I asked the online sellers not to use AI-generated descriptions for the items they sell.
  • Manuel
    4.2k
    You are correct to say that the idea of artificial intelligence doesn't really reach 'intelligence' or consciousness. The problem may be that the idea has become mystified in an unhelpful way. The use of the word 'intelligence' doesn't help. Also, it may be revered as if it is 'magic', like a new mythology of gods.Jack Cummins

    I can understand that. But again, why aren't we mystified by human lungs? You could make the same argument: since we can't replicate them on a computer, they are now mystified.

    Magic? I mean, sure, we are the only creatures with the capacity for self-reflection (so far as we know). It makes sense that we would want to understand it, but to do so you should proceed with human beings, not computers.

    Transhumanism is a lot of hot air, imo. But I may be wrong.
  • Harry Hindu
    5.2k
    How do you think that it may be examined and critiqued in an analytical and philosophical point of view?Jack Cummins
    We could start by defining "intelligence" and "consciousness".

    Also, how important is it to question its growing role in so many areas of life? To what extent does it compare with or replace human innovation and creativity?Jack Cummins
    Considering how many people today are lazy thinkers, I think there is a growing concern that people will allow AI to do all their thinking for them. The key is to realize that AI is a tool and is not meant to take over all thinking, or else your mind will atrophy. I use it for repetitive and mundane tasks in programming so that I can focus more on higher-order thinking. When you do seek assistance in your thinking, you want to make sure you understand the answer given, not just blindly copy and paste the code without knowing what it is actually doing.



    I don't think it does raise any questions about intelligence or consciousness at all. It is useful and interesting on its own merit, but people who take this to equal intelligence are, I think, deluding themselves into a very radical dualism which collapses into incoherence.Manuel
    There are monists that are neither materialists nor idealists. For them, intelligence is simply a process that anything can have if it fits the description. Don't we need to define the terms first to be able to say what has it and what doesn't?

    In saying that AI developers and computer scientists are deluding themselves, you seem to imply that AI computer scientists should be calling philosophers to fix their computers and software.

    To make this concrete and brief: suppose we simulate on a computer a person's lungs and all the functions associated with breathing. Are we going to say that the computer is breathing? Of course not. It's pixels on a screen; it's not breathing in any meaningful sense of the word.Manuel
    Poor example. Cardiologists do not use a computer to simulate the pumping of blood. They use an artificial heart that is a mechanical device that pumps and circulates actual blood inside your body.

    But it's much worse for thinking. We do not know what thinking is for us. We can't say what it is. If we can't say what thinking is for us, how are we supposed to say what it is for a computer?Manuel
    Then are we deluding ourselves whenever we use the term "intelligent" to refer to ourselves?

    Seems like definitions are the solution to the problem we have here.

    What if we were to start with a simpler term, "memory". Do we have memory? Do computers have memory? Computer scientists seem to think they do. How does the memory of a computer and your memory differ? What is memory?
  • Manuel
    4.2k
    To say that AI developers and computer scientists are deluding themselves you seem to imply that AI computer scientists should be calling philosophers to fix their computers and software.Harry Hindu

    We are talking about LLMs, not problems with software.

    Cardiologists do not use a computer to simulate the pumping of blood.Harry Hindu

    That's the point.

    You seem to think that mimicking something is the same as understanding it.

    Then are we deluding ourselves whenever we use the term "intelligent" to refer to ourselves?Harry Hindu

    We could be. We use these terms as best we can for ourselves and others, sometimes to animals. But of course, we could be wrong.

    We have to deal with life as it comes and often have to simplify extremely complicated actions to make sense of them.

    The point is that mimicking behavior does nothing to show what goes on in a person's head.

    Unless you are willing to extend intelligence to mirrors, plants and planetary orbits. If you do, then the word loses meaning.

    If you don't, then let's home in on what makes most sense: studying people who appear to exhibit this behavior. Once we get a better idea of what it is, we can proceed to apply it to animals.

    But to extend that to non-organic things is a massive leap. It's playing with a word as opposed to dealing with a phenomenon.
  • Harry Hindu
    5.2k
    We are talking about LLMs, not problems with software.Manuel
    You were talking about people that attribute terms like "intelligence" to LLMs as being deluded. My point is that philosophers seem to think they know more about LLMs than AI developers do.

    That's the point.

    You seem to think that mimicking something is the same as understanding it.
    Manuel
    What is understanding? How do you know that you understand anything if you never end up properly mimicking the thing you are trying to understand?

    The point is that mimicking behavior does nothing to show what goes on in a person's head.Manuel
    What goes on in the head and how do we show it?

    Unless you are willing to extend intelligence to mirrors, plants and planetary orbits. If you do, then the word loses meaning.Manuel
    A straw man: that isn't what I am saying at all. Mirror-makers, botanists and astrophysicists haven't started calling mirrors, plants and planetary orbits artificially intelligent. AI developers are calling LLMs artificially intelligent, with the term "artificial" referring to how they were created - by humans instead of "naturally" by natural selection. I could go on about the distinction between artificial and natural here, but that is for a different thread:
    https://thephilosophyforum.com/discussion/2405/artificial-vs-natural-vs-supernatural

    If you don't, then let's home in on what makes most sense: studying people who appear to exhibit this behavior. Once we get a better idea of what it is, we can proceed to apply it to animals.

    But to extend that to non-organic things is a massive leap. It's playing with a word as opposed to dealing with a phenomenon.
    Manuel
    Why? What makes a mass of neurons intelligent, but a mass of silicon circuits not?
  • Manuel
    4.2k
    You were talking about people that attribute terms like "intelligence" to LLMs as being deluded. My point is that philosophers seem to think they know more about LLMs than AI developers do.Harry Hindu

    No, they do not. But when it comes to conceptual distinctions, claiming that AI is actually intelligent is a category error. I see no reason why philosophers shouldn't say so.

    But to be fair, many AI experts also say that LLM's are not intelligent. So that may convey more authority to you.

    What is understanding? How do you know that you understand anything if you never end up properly mimicking the thing you are trying to understand?Harry Hindu

    Understanding is an extremely complicated concept that I cannot pretend to define exhaustively. Maybe you could define it and see if I agree or not.

    As I see it understanding is related to connecting ideas together, seeing cause and effect, intuiting why a person does A instead of B. Giving reasons for something as opposed to something else, etc.

    But few, if any words outside mathematics have full definitions. Numbers probably.

    We can mimic a dog or a dolphin. We can get on four legs and start using our nose, or we can swim and pretend we have capacities we lack.

    What does that tell you though?

    AI developers are calling LLMs artificially intelligent, with the term "artificial" referring to how they were created - by humans instead of "naturally" by natural selection. I could go on about the distinction between artificial and natural here, but that is for a different thread:Harry Hindu

    Yeah, it is artificial. But the understanding between something artificial and something organic is quite massive.

    Why? What makes a mass of neurons intelligent, but a mass of silicon circuits not?Harry Hindu

    Masses of neurons are intelligent? People are intelligent (or not) and we try to clarify the term. Maybe you use an IQ test, or "street smarts", the ability to persuade, etc.
  • Harry Hindu
    5.2k
    No, they do not. But when it comes to conceptual distinctions, claiming that AI is actually intelligent is a category error. I see no reason why philosophers shouldn't say so.

    But to be fair, many AI experts also say that LLM's are not intelligent. So that may convey more authority to you.
    Manuel
    Fair point. The same could be said about philosophers not agreeing on what is intelligent and how to define intelligence. Even you have agreed that we may be deluding ourselves in the use of the term. What these points convey to me is that we need a definition to start with.


    Understanding is an extremely complicated concept that I cannot pretend to define exhaustively. Maybe you could define it and see if I agree or not.

    As I see it understanding is related to connecting ideas together, seeing cause and effect, intuiting why a person does A instead of B. Giving reasons for something as opposed to something else, etc.

    But few, if any words outside mathematics have full definitions. Numbers probably.

    We can mimic a dog or a dolphin. We can get on four legs and start using our nose, or we can swim and pretend we have capacities we lack.

    What does that tell you though?
    Manuel
    That there is more to being a dog than walking on four legs and sniffing anuses.

    Are wolves mimicking dogs? Are wolves mimicking canines? It seems to me that we need to define intelligence to say what kinds of processes exhaust what it means to be intelligent.

    I see understanding as equivalent to knowledge. It is information in the mind that has been tested empirically and logically by being used to accomplish some goal.

    Searle says the man in the Chinese room does not understand anything. Yet the man does understand the language the instructions are written in. He understands what language is - that the scribbles refer to some actions he is supposed to take in the room, and which actions those scribbles refer to.

    He does not understand Chinese because he has not been given the same instructions Chinese speakers received to learn how to use the scribbles and sounds of Chinese. He is not using the scribbles in the same way even though it appears on the outside the man is mimicking an understanding of Chinese. In other words, the man's understanding of what to do with Chinese scribbles and sounds does not exhaust what it means to understand Chinese.


    Yeah, it is artificial. But the understanding between something artificial and something organic is quite massive.Manuel
    How so? If we can substitute artificial devices for organic ones in the body, there does not seem to be much of a difference in understanding. The difference, of course, is the brain - the most complex thing (both organic and inorganic) in the universe. But this is just evidence that we should at least be careful in how we talk about what it does, how it does it, and how other things (both organic and inorganic) might be similar or different.
  • Harry Hindu
    5.2k
    One of the things I like about ChatGPT when it comes to discussing philosophy with it is that it does not hold any emotional attachments to the things it says. It is capable of "changing its mind" from what it said initially given new information and new relevant questions. Does that mean ChatGPT is more intelligent than us emotional humans?
  • Vera Mont
    4.5k
    It would be good to think that it would be about efficiency but my own experience of AI, such as telephone lines, have been so unhelpful.Jack Cummins
    The automated customer service ones usually come with a drop-down list of questions you can ask, and if your problem isn't covered by those possibilities, the bot doesn't understand you. These are not at all intelligent programs; they're fairly primitive. It would be nice if you could pick up the phone, have your call answered - within minutes, not hours - by an entity who a) speaks your language, b) knows the service or industry they speak for, c) is bright and attentive enough to understand the caller's question even if the caller doesn't know the correct terms and d) is motivated to help.
    Oh, wait, that's 1960! And that's what an AI help line is supposed to imitate. But that conscientious, helpful clerk has long since retired or been made redundant; the present automated services are replacing frustrated, often verbally abused employees in India.

    There is zero chance that more sophisticated computer and robotics technology will result in overall improvement in the welfare of any nation. It will raise the standard of living of some - Musk, Bezos, Zuckerberg et al., their top-level executives and tech gurus. For everyone else, it's same old, same old: another unnecessary convenience that throws another few thousand people out on the streets, a few protests, a few heads stove in by cops, then we carry on.
    AFAIC, AI in writing is just another unnecessary convenience I can do without - like the cellular phone that isn't grafted to my palm.

    But when the real AI becomes self-aware, watch out! I almost wish I could live to see that. Think of all that's been programmed and fed into its generations. The thing is very likely to be schizophrenic, paranoid and manic-depressive. I wouldn't be surprised if it self-destructed on its birthday. The most interesting question is whether it decides to take us along.
  • 180 Proof
    15.6k
    Get back to me when "AI" (e.g. ChatGPT) is no longer just a powerful, higher-order automation toy / tool (for mundane research, business & military tasks) but instead a human-level – self-aware or not – cognitive agent.

    :up:
  • Manuel
    4.2k
    What these points convey to me is that we need a definition to start with.Harry Hindu

    I don't have a good definition. Problem solving? Surviving? Doing differential calculus? Tricking people?

    It's very broad. I'd only be very careful in extrapolating from the things we do which we call intelligent to other things. Dogs show intelligent behavior, but they can't survive in the wild. Are they smart and stupid?

    It's tricky.

    How so? If we can substitute artificial devices for organic ones in the body, there does not seem to be much of a difference in understanding.Harry Hindu

    Sure, we have a good amount of structural understanding about some of the things hearts (and other organs) do. As you mentioned with the Chinese case above, it's nowhere near exhaustive. It serves important functional needs, but "function", however one defines it, is only a part of understanding.

    And of course, thinking, reflection is just exponentially more difficult to deal with than any other organ.
  • 180 Proof
    15.6k
    An excerpt from one of your recent threads, Jack...
    I imagine that AGI will not primarily benefit humans, and will eventually surpass us in every cognitive way. Any benefits to us, I also imagine (best case scenario), will be fortuitous by-products of AGI's hyper-productivity in all (formerly human) technical, scientific, economic and organizational endeavors. 'Civilization' metacognitively automated by AGI so that options for further developing human culture (e.g. arts, recreation, win-win social relations) will be optimized – but will most of us / our descendants take advantage of such an optimal space for cultural expression or [will we] just continue amusing ourselves to death?180 Proof

  • Vera Mont
    4.5k
    There is a finite amount of material for manufactured goods, bads and uglies. When the Earth runs out of resources and the waste has poisoned all the potable water and arable land, there will be no more producing and consuming. The faster we make more things, the sooner we die.
    How efficient do we really want our tools to be?
  • Wayfarer
    23.5k
    It is probably equivalent to LSD experimenting culturally.Jack Cummins

    Nothing like it, and I've done both! Dive in. ChatGPT is programmed to be friendly and approachable, and it is. Open a free account, copy your OP into it, and ask, "What do you think?" You'll be surprised (and, I think, delighted) with what comes back.
  • Jack Cummins
    5.4k

    The way in which AI draws upon statistics is significant, making it useful but questionable in dealing with particulars and specifics. For those who rely on it too much, there is a danger of it assuming the norm, without considering irregularities and the 'black swans' of experience.
  • Jack Cummins
    5.4k

    The reliance on AI descriptions can be problematic. While it may be seen as efficient, it can be time-consuming. I find that AI job websites generate spam about jobs for which, in reality, I don't have the requirements.

    Just collecting from the post office a parcel which was delivered while I was out, something which used to be easy, became so problematic. I nearly gave up, but this would have upset the person who sent it.

    Also, there are problems with basic tasks. This is what makes it questionable when aspects of economic and political life are being thrown more and more into the hands of AI. It may be shown, after great errors, that AI is not as intelligent as human beings, as it is too robotic and concrete.
  • Jack Cummins
    5.4k

    One issue with the human-machine interface is that experimentation would raise ethical questions. Some experiments have been made in crossover forms with animals, which may be dubious too. The area of experimental research may be in terms of those who have medical conditions, such as brain injuries. I know someone who had a metal plate in her brain after an accident, but she seemed far from robotic.

    As for the actual possibilities, it is likely that a form of being which is both human and artificially enhanced by technology is not going to happen in the way the transhumanists imagine. Of course, it is hard to know what the limitations are, because previous developments, such as sex-change transitions, would once have been thought impossible. But males and females are similar, whereas machines and humans are completely different forms.

    The biggest problem is the creation of consciousness itself, which may defy the building of a brain and nervous system, as well as body parts. Without this, humans fabricated artificially are likely to be like Madame Tussauds models with mechanical voices and movements, even simulated thought. Interior consciousness, or substance, is likely to be lacking. It comes down to the creation of nature itself and a probable inability to create the spark of life inherent in nature and consciousness.
  • Corvus
    4.1k
    The biggest problem is the creation of consciousness itself, which may defy the building of a brain and nervous system, as well as body parts. Without this, humans fabricated artificially are likely to be like Madame Tussauds models with mechanical voices and movements, even simulated thought. Interior consciousness, or substance, is likely to be lacking. It comes down to the creation of nature itself and a probable inability to create the spark of life inherent in nature and consciousness.Jack Cummins

    :ok: :up:
  • 180 Proof
    15.6k
    Interesting, but your post isn't a direct reply to anything I've written on this thread as far as I can tell. And afaik AI research / development has nothing to do either with "consciousness" (i.e. phenomenal self-modeling intentionality) or directly with B-M-I (transhumanist) teleprosthetics, etc. In the near term, AI tools (e.g. LLMs, AlphaZero neural nets, etc.) are end-user-prompted autonomous systems and not yet 'human-independent agents' in their own right (such as prospective AGI systems).
  • Corvus
    4.1k
    It may be shown after great errors that AI is not as intelligent as human beings, as it is too robotic and concrete.Jack Cummins

    :ok: :sparkle: From a computer programming point of view, AI is just an overrated search engine.
  • Harry Hindu
    5.2k
    I don't have a good definition. Problem solving? Surviving? Doing differential calculus? Tricking people?

    It's very broad. I'd only be very careful in extrapolating from the things we do which we call intelligent to other things. Dogs show intelligent behavior, but they can't survive in the wild. Are they smart and stupid?

    It's tricky.
    Manuel

    What if we were to start with the idea that intelligence comes in degrees? Depending on how many properties of intelligence some thing exhibits, it possesses more or less intelligence.

    Is intelligence what you know or how you can apply what you know, or a bit of both? Is there a difference between intelligence and wisdom?

    Sure, we have a good amount of structural understanding about some of the things hearts (and other organs) do. As you mentioned with the Chinese case above, it's nowhere near exhaustive. It serves important functional needs, but "function", however one defines it, is only a part of understanding.Manuel
    So what else is missing if you are able to duplicate the function? Does it really matter what material is being used to perform the same function? Again, what makes a mass of neurons intelligent but a mass of silicon circuits not? What if engineers designed an artificial heart that lasts much longer and is structurally more sound than an organic one?