• Philosophy of AI
    Is AI a philosophical dead-end? The belief with AI is that somehow we can replicate or recreate human thought (and perhaps emotions one day) using machinery and electronics.Nemo2124

    What's a dead-end, I think, is the belief that an artificial replication of human thought is or could become an actual instance of thought just by being similar or practically indistinguishable.
  • The Idea That Changed Europe
    The modern native populations of Europe largely descend from three distinct lineages: Mesolithic hunter-gatherers, descended from populations associated with the Paleolithic Epigravettian culture; Neolithic Early European Farmers who migrated from Anatolia during the Neolithic Revolution 9,000 years ago; and Yamnaya Steppe herders who expanded into Europe from the Pontic–Caspian steppe of Ukraine and southern Russia in the context of Indo-European migrations 5,000 years ago. — https://en.m.wikipedia.org/wiki/Europe

    The first creation story was probably expressed when languages emerged, say 350,000 years ago, and it became practically possible to talk about causes and effects.
  • Is a Successful No-Growth Economic Plan even possible?
    Exponential population growth has been made possible by the exponential growth in technologies, notably medical technology.Janus

    Yeah, more babies survive. But when the standard of living increases, women have fewer children. So initially the population will grow, but then stabilize and decrease, as the majority will be older people.
  • The "AI is theft" debate - An argument
    If the private use is within law and identical to what these companies do, then it is allowed, and that also means that the companies do not break copyright law with their training process.Christoffer

    You claim it's identical merely by appeal to a perceived similarity to private legal use. But being similar is neither sufficient nor necessary for anything to be legal. Murder in one jurisdiction is similar to legal euthanasia in another. That's no reason to legalize murder.

    Corporate engineers training an AI system in order to increase its market value is obviously not identical to private fair use such as visiting a public library.

    We pay taxes, authors or their publishers get paid by well established conventions and agreements. Laws help courts decide whether a controversial use is authorized, fair or illegal. That's not for the user to decide, nor for their corporate spin doctors.
  • The "AI is theft" debate - An argument
    Why are artists allowed to do whatever they want in their private workflows, but not these companies?Christoffer

    No one is allowed to do whatever they want. Is private use suddenly immune to the law? I don't think so.

    Whether a particular use violates the law is obviously not for the user to decide. It's a legal matter.
  • Is a Successful No-Growth Economic Plan even possible?
    Is it possible to have a healthy economy which is 'steady state'? Not expanding and not shrinking?BC

    Expanding economies include parts which are steady state or shrinking. For example, corporate profits can be high at the same time as the number of jobs is steady or shrinking. Younger generations remain in education or occupy themselves with entertainment because no new job opportunities arise. They live with their parents until they're 30–40 and have no children. Populations are ageing, with increasing costs for society.

    Would this change if we somehow limit the expanding parts of the economy?
  • Does Universal Basic Income make socialism, moot?
    socialism moot through Universal Basic Income?Shawn

    Not only socialism but also capitalism exploits the fact that we need a sufficient income to live.

    A universal basic income means that there will be no more starving, homeless, uneducated or uninsured individuals to exploit.

    However, there will still remain plenty of inequalities for the political interests to exploit in their pursuit of power.
  • The "AI is theft" debate - An argument


    OK, if your opponent's arguments are also about the nature of the information processing, then they cannot say whether B is theft. No one can, from looking only at the information processing.

    The painting of Mona Lisa is a swarm of atoms. A forgery of the painting is also a swarm of atoms. But interpreting the nature of these different swarms of atoms is neither sufficient nor necessary for interpreting them as paintings, or for knowing that one of them is a forgery.

    Whether something qualifies for copyright or theft is a legal matter. Therefore, we must consider the legal criteria and, for example, analyse the output, the work process that led to it, the time, the people involved, the context, the threshold of originality set by the local jurisdiction and so on. You can't pre-define whether something is a forgery in any jurisdiction before the relevant components from which that fact could emerge actually exist. This process is not only about information, nor swarms of atoms, but practical matters for courts to decide with the help of experts on the history of the work in question.

    Addition:
    When the producer of a work is artificial and without a legal status, it will be its user who is accountable. If the user remains unknown, the publisher is accountable (e.g. a gallery, a magazine, a book publisher, an ad agency etc.).

    Regarding the training of AI systems by allowing them to scan and analyse existing works, I think we must also look at the legal criteria for authorized or unauthorized use. That's why I referred to licenses such as Copyleft, Creative Commons, Public Domain etc. It doesn't matter whether we deconstruct the meanings of 'scan', 'copy', 'memorize' etc. or learn more about the mechanics of these systems. They use the works, and what matters is whether their use is authorized or not.
  • The "AI is theft" debate - An argument


    You ask "Why is B theft?" but your scenario omits any legal criteria for defining theft, such as whether B satisfies a set threshold of originality.

    How could we know whether B is theft when you don't show or describe its output, only its way of information processing? Then, by cherry-picking similarities and differences between human and artificial ways of information processing, you push us to conclude that B is not theft. :roll:
  • The "AI is theft" debate - An argument


    One difference between A and B is this:

    You give them the same analysis regarding memorizing and synthesizing of content, but you give them different analyses regarding intent and accountability. Conversely, you ignore their differences in the former, but not in the latter.

    They should be given the same analysis.
  • Realistically, could a free press exist under a dictatorship?
    We already have free press under multinational dictatorships called 'corporations'.
  • The "AI is theft" debate - An argument
    Why is it irrelevant?Christoffer

    Because a court looks at the work; that's where the content is manifest, not in the mechanics of an AI system, nor in its similarities with a human mind.

    What's relevant is whether a work satisfies a set threshold of originality, or whether it contains, in part or as a whole, other copyrighted works.

    There are also alternatives or additions to copyright, such as Copyleft, Creative Commons, Public Domain etc. Machines could be "trained" on such content instead of stolen content, but the AI industry is greedy: snagging people's copyrighted works, obfuscating their identity while exploiting their quality, increases the market value of the systems. Plain theft!
  • Why The Simulation Argument is Wrong
    How does this prove we aren't a simulation though?Benj96

    If a picture cannot become a duplication of what it depicts, we have little reason to expect that increased sophistication of the depiction (e.g. a computer simulation) could change the logic (asymmetry) of their relation. Therefore we have little or no reason to believe that we are in a simulation.

    A version of the argument might look like this:

    Assume that simulations are not duplications.
    Simulations of experiences are not duplications of experiences.
    Therefore, our experiences are not simulations.

    Some might want to add that our experiences are real, but the objects and states of affairs that we experience are simulations. But if we are in a simulation, then how could the word “simulation” refer to an actual simulation? If we are in a simulation, then the word 'simulation' doesn't refer to anything actual. Therefore, the claim “we are in a simulation” (i.e. an actual simulation) is false.
  • The "AI is theft" debate - An argument
    Again, I ask... what is the difference in scenario A and scenario B? Explain to me the difference please.Christoffer

    A and B are set up to acquire writing skills in similar ways. But this similarity is irrelevant for determining whether a literary output violates copyright law.

    You blame critics for not understanding the technology, but do you understand copyright law? Imagine if the law were changed to give AI-generated content carte blanche just because the machines have been designed to think or acquire skills in a way similar to humans. That's a slippery slope to hell: instead of a general law, you'd have to patch the systems to counter each and every possible misuse. Private tech corporations acting as legislators and judges of what's right and wrong. What horror.


    So, what are you basing your counter arguments on? What exactly is your counter argument?Christoffer

    If your claim is that similarity between human and artificial acquisition of skills is a reason for changing copyright law, then my counter-argument is that such similarity is irrelevant. What is relevant is whether the output contains recognizable parts of other people's work.

    One might unintentionally plagiarize recognizable parts of someone else's picture, novel, scientific paper etc., and the lack of intent (hard to prove) might reduce the penalty, but it is hardly controversial that it counts as a violation.
  • Why The Simulation Argument is Wrong


    You're right, our bodies, sense organs, and interactions with frogs amount to our ability to identify them. The causal history, however, is what makes it necessary to experience the frog as a frog and not as a hopping constellation of colored shapes.

    Another argument against the simulation hypothesis might be this:

    A simulation is a representation, and a representation is selective and asymmetric relative to what it is a representation of. For example, a painting of Mona Lisa represents Mona Lisa, but Mona Lisa doesn't represent the painting. It is impossible to produce a complete representation of Mona Lisa in the sense that the representation becomes equivalent to, or a duplication of, the real Mona Lisa. Photocopies of the painting represent the painting and perhaps also Mona Lisa, but they are only duplications of each other, as copies, not of the original painting, nor of Mona Lisa. Although this example only considers visual features, the argument applies to any of her features, e.g. the sound of her voice, her scent, the feel of her skin etc. Therefore, it is impossible to produce a complete representation or simulation of Mona Lisa.

    Yet many people seem to believe that the whole universe, or at least our experienced part of the universe, is or could be a simulation.
  • Why The Simulation Argument is Wrong


    If a simulation exists, then there must exist at least one more thing (or set of things) which is constitutive for the simulation, e.g. a brain, a computer, their materials and properties and surrounding conditions of satisfaction. Therefore, everything cannot be a simulation.

    Furthermore, if the simulation (e.g an emergent property within a network of electrical circuits) is about something (e.g. our world at the level of humans and mid sized objects), then we have at least three things to consider: the simulation (emerging from electric circuits), what causes it (a brain and computers etc), and what it is about (a part of our world). So, not only is it impossible for everything to be a simulation, the simulation is just one thing among many other things in our world.

    To know whether the things that we experience belong to the simulation or to the non-simulated parts of our world we can investigate what's necessary for something to be experienced as a frog, for instance.

    A frog is not just a constellation of coloured shapes that hop around for no apparent reason. Simulations, pictures, or descriptions of frogs are syntactically disjoint and detachable in a way that real frogs are not. Real frogs are continuous, recalcitrant, and seamlessly connected to other creatures and environments, which in turn are connected to chemistry, physics, astrophysics, cosmology or everything. Our ability to identify frogs, as frogs, has a causal history that arguably amounts to everything, but everything cannot be a simulation.
  • Is Nihilism associated with depression?


    To expect life to be meaningless has its perks, because whenever life appears meaningless the expectation is satisfied, and whenever life appears meaningful you'll be surprised and enjoy the fact that you were wrong. An optimist who expects life to be meaningful does not enjoy being proved wrong. Therefore, I'd rather be the pessimist, but I wouldn't call it nihilism.

    Regarding nihilism, I don't think there is good reason to believe that life is meaningless everywhere and always.
  • The "AI is theft" debate - An argument
    If the user asks for an intentional plagiarized copy of something, or a derivative output, then yes, the user is the only one accountable as the system does not have intention on its own.Christoffer

    According to you, or copyright law?


    But this is still a misunderstanding of the system and how it works. As I've stated in the library example, you are yourself feeding copyrighted material into your own mind that's synthesized into your creative output. Training a system on copyrighted material does not equal copying that material, THAT is a misunderstanding of what a neural system does. It memorize the data in the same way a human memorize data as neural information. You are confusing the "intention" that drives creation, with the underlying physical process.Christoffer


    If 'feeding', 'training', or 'memorizing' does not equal copying, then what is an example of copying? It is certainly possible to copy an original painting by training a plagiarizer (human or artificial) to identify the relevant features and, from these, construct a map or model for reproductions or remixes with other copies for arbitrary purposes. Dodgy and probably criminal.

    You use the words 'feeding', 'training', and 'memorizing' to describe what computers and minds do, and talk of neural information as if that meant that computers and minds process information in the same or a similar way. Yet the similarity between biological and artificial neural networks has decreased since the 1940s. I've never seen a biologist or neuroscientist talk of brains as computers in this regard. Look up Susan Greenfield, for instance.

    Your repeated claims that I (or any critic) misunderstand the technology are unwarranted. You take it for granted that a mind works like a computer (it doesn't) and ramble on as if the perceived similarity would be an argument for updating copyright law. It's not.
  • We don't know anything objectively


    Sorry, below is Putnam's argument against global skepticism. The argument is based on the assumption that words don't magically have meanings; they have causal histories and constraints (CC) connecting them to things, in virtue of which they have the meanings that they have.

    1. Assume we are brains in a vat

    2. If we are brains in a vat, then “brain” does not refer to brain, and “vat” does not refer to vat (via CC)

    3. If “brain in a vat” does not refer to brains in a vat, then “we are brains in a vat” is false

    Thus, if we are brains in a vat, then the sentence “We are brains in a vat” is false (1,2,3)
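    The inference from (2) and (3) to the conclusion is just a chain of implications, which can be sketched formally. Here is a minimal Lean sketch; the proposition names (BIV, Refers, TrueBIV) are my own labels, not Putnam's:

    ```lean
    -- BIV      : we are brains in a vat (the state of affairs)
    -- Refers   : "brain in a vat" refers to brains in a vat
    -- TrueBIV  : the sentence "we are brains in a vat" is true
    variable (BIV Refers TrueBIV : Prop)

    -- h2 encodes premise 2 (via CC): if we are brains in a vat,
    --   the vat-language term fails to refer.
    -- h3 encodes premise 3: if the term fails to refer,
    --   the sentence is not true.
    example (h2 : BIV → ¬Refers) (h3 : ¬Refers → ¬TrueBIV) :
        BIV → ¬TrueBIV :=
      fun hBIV => h3 (h2 hBIV)
    ```

    The conclusion is a conditional: the assumption (1) is discharged, so being a brain in a vat would make the very sentence asserting it untrue.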
  • We don't know anything objectively
    Ever since I watched the movie "The Matrix" I have been troubled by how to tell what is real and what is not.Truth Seeker

    Here's why we cannot be brains in a vat: https://iep.utm.edu/brain-in-a-vat-argument/
  • The "AI is theft" debate - An argument
    the way they function is so similar to the physical and systemic processes of human creativity that any ill-intent to plagiarize can only be blamed on a user having that intention. All while many artists have been directly using other people's work in their production for decades in a way that is worse than how these models synthesize new text and images from their training data.Christoffer

    What's similar is the way they appear to be creative, but the way they appear is not the way they function.

    A machine's iterative computations and growing set of syntactic rules (passed off as "learning") are observer-dependent and, as such, very different from a biological observer's ability to form intent and create or discover meanings.

    Neither man nor machine becomes creative by simulating some observer-dependent appearance of being creative.
  • The "AI is theft" debate - An argument
    How can an argument for these models being "plagiarism machines" be made when the system itself doesn't have any intention of plagiarism?Christoffer

    The user of the system is accountable, and possibly its programmers, as they intentionally instruct the system to process copyright-protected content in order to produce a remix. It seems fairly clear, I think, that this is plagiarism and corruption of other people's work.
  • Are there any ideas that can't possibly be expressed using language.
    I don't even know what "I like Ice cream" means when I think it, let alone say it. It is expressed and heard as a process which will have an effect.ENOAH

    Hence, its meaning is expressed.


    Epistemology includes criticism about the limits of our scientific knowledge and it warns us against the idea that we can get ultimately objective knowledge.Angelo Cannata

    I don't know about you, but some (e.g. postmodernists) refer to epistemological problems not because they care about epistemology but because knowledge can be decisive: it can change beliefs, authority, privileges etc. Some fear knowledge more than death.

    So what does it mean "epistemically objective"?Angelo Cannata

    In a general sense, it means that the knowledge is about something, i.e. that there exists some object that the knowledge refers to or is directed towards.

    So, for example, my experiences exist in a subjective domain within the objective world, and their mode of existing is unlike the objective mode in which mountains and molecules exist. But our experiences are just as real as mountains and molecules: we have them, think and talk about them, and express them in various ways, e.g. in the arts, theatre etc. Thus we can accumulate epistemically objective knowledge about subjectivity, i.e. knowledge that refers to something that exists.
  • Philosophy as a prophylaxis against propaganda?
    Pray tell, what is your opinion on the state of global education. For me, the critical thinker is resilient to rhetoric and propaganda, the fact learner is however....not.Benj96

    While many facts are results of critical thinking, critical thinking without fact-learning is anti-intellectual.

    Lots of propaganda masquerades as "critical thinking", where the sole purpose of the "thinking" is to cast suspicion or doubt on the facts, e.g. to undermine the possibility of criticizing false or nonsensical claims.
  • Are there any ideas that can't possibly be expressed using language.


    What do you expect from an expression? Expressing my subjectivity is to exemplify some property that it has, e.g. its first person point of view. So, I'll draw a perspective picture of what I see from my point of view, or describe it with words. Its subjective mode of existing doesn't prevent me from expressing it in epistemically objective ways.
  • Are there any ideas that can't possibly be expressed using language.
    Imagine that one day, you get the best idea in the world. You go to tell your friend, but then you realize something: You don't have any words to describe your idea. Is this scenario possible?Scarecow

    It's possible to forget words, stutter, or have a neurological disorder, paralysis, brain damage etc. that makes it difficult or impossible to express thoughts.

    It is also possible that you feel that you get the best idea, but when you're about to express it, there is nothing to express. The feeling was just evoked by a wish or fantasy about what it might be like to have the best idea.

    Theoretically, however, anything can be expressed.
  • How do we decide what is fact and what is opinion?
    How do we decide what is fact and what is opinion?Truth Seeker

    We might have different beliefs about the current market value of a house, for instance, but we can list the house for sale in order to find out whether our beliefs correspond to its current market value. What is fact and what is opinion in this case is not something we decide but find out.

    There are more than 8.1 billion humans on Earth and our conflicting ideologies, religions, worldviews and values divide us.Truth Seeker

    There are many more things that unite us as living organisms than there are divisive ideas. The ideas of power-mad ideologues, preachers, poets etc. are irrelevant compared to the wonders of nature.

    I worry that we will destroy ourselves and all the other species with our conflicts.Truth Seeker

    Possibly, yet never before in human history has there been so much public attention on sustainability and global climate change. Many of us reduce damage by avoiding unsustainable products, foods, and lifestyles, and many businesses are desperately trying to greenwash their unsustainable products or replace them with better alternatives.

    I think that if we could work out what is fact and what is opinion, it would help us get on with each other better.Truth Seeker

    1 + 1 = 2 is a fact. "Pizza tastes good" is an opinion. What needs to be worked out here? Look at the philosophers who study the nature of facts: do they seem to get on with each other better? :cool:
  • AGI - the leap from word magic to true reasoning
    Searle believes that brain matter has some special biological property that enables mental states to have intrinsic intentionality as opposed to the mere derived intentionality that printed texts and the symbols algorithmically manipulated by computers have. But if robots and people would exhibit the same forms of behavior and make the same reports regarding their own phenomenology, how would we know that we aren't also lacking what it is that the robots allegedly lack?Pierre-Normand

    I suppose we could still have good theoretical reason to suspect that they lack genuine understanding. So far the true test has not been empirical but conceptual (e.g. some assume functionalism or a computational theory of mind, others don't).

    I don't know if brain matter or an exclusively biological property is necessary for consciousness to arise. It seems to be an emergent property, and it arises in very different kinds of biology, e.g. primates, cephalopods. So in a functional sense it could arise elsewhere. But I think the functional theory of consciousness is too narrow. Consciousness is related to a background, a body, action, perception, hormone levels, and a lot of other conditions that together leave some biological forms of life as the only plausible candidates for having conscious states.

    So, perhaps consciousness is not dependent on biological matter per se, but on the conditions in which the ability evolved, which might then exclude non-biological systems from duplicating it.


    Are biologically active molecules not in some ways also "symbols" ie structures which "say" something - exert a particular defined or prescribed effect.Benj96

    Molecules exist independent of us. We discover them or their meanings, and refer to them with the help of symbols. Symbols, however, don't exist independent of us. There's nothing in a molecule that symbolizes unless we choose to use some feature in the molecule for symbolization. But the molecule doesn't care about our symbolic practices.


    However, my point was about the relevance of isomorphisms. Pointing out that there can be irrelevant isomorphisms such as between a constellation and a swarm of insects, doesn't change the fact that there are relevant isomorphism. (Such as between the shape of bird wings and airplane wings, or between biological neural nets and artificial neural nets.)wonderer1

    Bird wings and airplane wings have many similarities and many differences. Artificial neural networks have become increasingly different from their biological counterparts since the 1940s or 50s.



  • AGI - the leap from word magic to true reasoning
    Since artificial neural networks are designed for information processing which is to a degree isomorphic to biological neural networks, this doesn't seem like a very substantive objection to me. It's not merely a coincidence.wonderer1

    Whether the processing is designed or coincidental doesn't matter. The objection refers to isomorphism and the false promise that, by being like the biological process, the artificial process can be conscious. However, a conscious person with a speech defect can fail the Turing test, while smooth-talking chatbots pass the test, or win the game Jeopardy!, without being conscious in the sense that the person is conscious. Isomorphism is neither sufficient nor necessary for being conscious.


    Consider the system reply and the robot reply to Searle's Chinese Room argument. Before GPT-4 was released, I was an advocate of the robot reply, myself, and thought the system reply had a point but was also somewhat misguided. In the robot reply, it is being conceded to Searle that the robot's "brain" (the Chinese Room) doesn't understand anything. But the operation of the robot's brain enables the robot to engage in responsive behavior (including verbal behavior) that manifests genuine understanding of the language it uses.Pierre-Normand

    It seems likely that we will soon encounter robots in our daily lives that can perform many practical and intellectual tasks, and behave in ways that manifest a sufficient understanding of our language. But I wouldn't call it genuine. A lack of genuine understanding can be buried under layers of parallel processes, and being hard to detect is no reason to reinterpret it as genuine. According to Searle, adding more syntax won't get a robot to semantics, and its computations are observer-relative.

    One might also add that authenticity matters. For example, it matters whether a painting is genuine or counterfeit, not necessarily for its function, but for our understanding of its history, of the conditions under which it was produced, and for our evaluation of its quality etc. The same could be true of simulated and genuine understanding.
  • AGI - the leap from word magic to true reasoning


    One process or pattern may look like another. There can be a strong isomorphism between a constellation of stars and a swarm of fruit flies. That doesn't mean that the stars thereby possess a disposition for behaving like fruit flies.

    I'm not sure how that follows. The authors of the paper you linked made a good point about the liabilities of iteratively training LLMs with the synthetic data that they generated. That's a common liability for human beings also, who often lock themselved into epistemic bubbles or get stuck in creative ruts. Outside challenges are required to keep the creative flame alive.Pierre-Normand

    I assumed that LLMs would identify and preserve actual and relevant diversity, but the paper shows that the reduction of diversity is systematic. The LLMs follow rules regardless of what is actual and relevant. That's basically what Searle's Chinese Room shows.

    We might also reduce diversity in our beliefs and descriptions e.g. for convenience or social reasons, but the false and misleading ones are naturally challenged by our direct relation with reality.
  • AGI - the leap from word magic to true reasoning
    their training data and interactions with humans do ground their language use in the real world to some degree. Their cooperative interactions with their users furnish a form of grounding somewhat in line with Gareth Evans' consumer/producer account of the semantics of proper namesPierre-Normand

    Their training data is, I think, based on our descriptions of the world, or their own computations and remixes of our descriptions. In this sense their relation to the world is indirect at best.

    There's some research showing that when LLMs remix their own remixes, the diversity of the content decreases and becomes increasingly similar. I'm guessing it could be fixed with some additional rule to increase diversity, but then it seems fairly clear that it's all an act, and that they have no relation to the world.


    Unless, consciousness is a product of complexity. As we still don't know what makes matter aware or animate, we cannot exclude the possibility that it is complexity of information transfer that imbues this "sensation". If that is the case, and consciousness is indeed high grades of negativity entropy, then its not so far fetched to believe that we can create it in computers .Benj96

    Computer code is a bunch of symbols, recall. Could a bunch of symbols become consciously alive? The idea seems as far fetched as voodoo magic.

    It seems probable that the biological phenomenon that we call consciousness is an emergent property. Many emergent properties are simple; others are complicated but have simple underlying chemical reactions, as in photosynthesis. Perhaps the underlying mechanisms from which consciousness arises are relatively simple yet enable us to think and speak infinitely many meanings (hence the immense network of nerve cells in the brain)?
  • AGI - the leap from word magic to true reasoning
    ..embodiment, episodic memory, personal identity and motivational autonomy. Those all are things that we can see that they lack (unlike mysterious missing ingredients like qualia or "consciousness" that we can't even see fellow human beings to have). Because they are lacking in all of those things, the sorts of intelligence and understanding that they manifest is of a radically different nature than our own. But it's not thereby mere simulacrum - and it is worth investigating, empirically and philosophically, what those differences amount to.Pierre-Normand

    Yes, they are radically different. Unlike computational systems we are biological systems with pre-intentional abilities that enable our intentional states to determine their conditions of satisfaction.

    Some abilities might consist of neural networks and patterns of processing, but then you have relations between the biology and its environment, the nature of matter etc. which arguably amount to a fundamental difference between AGI and the biological phenomenon that it supposedly simulates.

    Of course we can also ditch the assumption that it is a simulation and just think of AGI as information technology.

    Of course, this is all still quite different from the way human cognition works, with our [sic] biological neural networks and their own unique patterns of parallel and serial processing. And there's still much debate and uncertainty around the nature of machine intelligence and understanding.

    But I think the transformer architecture provides a powerful foundation for integrating information and dynamically shifting attention in response to evolving goals and contexts. It allows for a kind of flexible, responsive intelligence that goes beyond simple serial processing.
    Pierre-Normand

    It's a leap forward in information technology, for sure.
  • AGI - the leap from word magic to true reasoning
    But then, the actor's ability to imitate the discourse of a physicist would slowly evolve into a genuine understanding of the relevant theories. I believe that intellectual understanding, unlike the ability to feel pain or enjoy visual experiences, cannot be perfectly imitated without the imitative ability evolving into a form of genuine understanding.Pierre-Normand

    A human actor already has the ability to understand things; that is how an actor can come to understand physics by acting like a physicist. An artificial actor is different: it is a computational system that doesn't have the ability. Acting as if it had the ability doesn't evoke the ability.

    there remains a stark distinction between the flexible behavior of an AI that can "understand" an intellectual domain well enough to respond intelligently to any question about it, and an actor who can only fool people lacking that understanding.Pierre-Normand

    The AGI's responses might be super intelligent, but this doesn't mean that it understands them. I suppose it doesn't have to in order to be a useful assistant.
  • AGI - the leap from word magic to true reasoning
    But you could say the same about me. Am I a simulation or a duplication of what another human might say in response to your commentary?Benj96

    It's certainly possible, but why would anyone set up an AI assistant here just to fool me or other members into believing that we're talking with another human? It seems probable that it would make the forum less interesting (even if it isn't revealed, and especially if it is).

    I was impressed by Anthropic's Claude 3 Opus (thanks to @Pierre-Normand for the link), and I'm occasionally asking ChatGPT about things instead of looking them up myself. It's efficient, but I find some of the recurring expressions that make it appear human-like superfluous, or even insincere.

    Artificial general intelligence is something else. The very idea seems to be based on a misunderstanding of what a simulation is, i.e. the assumption that somehow, e.g. with increased complexity, a simulation would suddenly become a duplication. It won't.
  • AGI - the leap from word magic to true reasoning
    The second thing is how do we give it both "an objective" but also "free auto-self-augementation" in order to reason. And curiously, could that be the difference between something that feels/experiences and something that is lifeless, programmed and instructed?Benj96

    The difference is, I think, in what makes a simulation different from a duplication. We can instruct a simulation to respond to words and objects in ways that appear non-instructed, spontaneous, emotional etc. But what for? Is indiscernibility from being human worth striving for? A simulation is never a duplication.
  • Indirect Realism and Direct Realism
    "Time flies like an arrow; fruit flies like a banana."Pierre-Normand

    Talk of things on two levels can easily become ambiguous :halo:


    For them to see when standing what we see when hanging upside down it must be that their eyes and/or brain work differently.Michael

    Must they, though? Some of us who have the same type of eyes / brains may stand up and others hang upside down. Are we having different experiences? Initially, yes, but after a few hours, no. We know this from experiments, and from the fact that we see the world upright even though it is projected upside down on the retina as the light travels through the eye's lens.

    I’m saying that whether or not sugar tastes sweet is determined by the animal’s biology. It’s not “right” for it to taste sweet and “wrong” for it to taste sour. Sight is no different. It’s not “right” that light with a wavelength of 700nm looks red and not “right” that the sky is “up” and the ground “down”. These are all just consequences of our biology, and different organisms with different biologies can experience the world differently.Michael

    Then you're analysing the biology in isolation, as if the causal chains of chemicals, radiation, pressure, etc. from the environment suddenly stopped in the organism, and instead each individual organism created its own experience.

    I'd say seeing a colour is neither right nor wrong; it's just a causal fact: a particular wavelength in the visible spectrum causes a particular biological phenomenon in organisms that have the ability to respond to wavelengths in the visible spectrum. This raw conscious experience can then be used in many different ways, conventions etc. But the experience is a fact, not a convention.
  • Indirect Realism and Direct Realism
    It is neither a contradiction, nor physically impossible, for some organism to have that very same veridical visual experience when standing on their feet. It only requires that their eyes and/or brain work differently to ours.

    Neither point of view is "more correct" than the other.

    Photoreception isn't special. It's as subjective as smell and taste
    Michael

    Well, they have the same veridical experience when the object of the experience is the same. But why would that require that their eyes / brain work differently from ours?

    You postulate that we (humans) have the experience with our kind of eyes / brain, so why do you say that another organism must have differently working eyes and brain to have the same experience?

    Also, even among humans we have somewhat differently working eyes / brains, and other organisms might have very different eyes / brains, e.g. the octopus, the mantis shrimp etc. However, these differences matter little when the object that we see is the same, not some figment of our different eyes / brains.

    What do you mean by saying that photoreception is subjective yet not special?

    I'd say photoreception is open to view in plants, animal vision, machine vision etc. The experience, however, that arises in animal vision is not open to view (ontologically subjective).
  • Indirect Realism and Direct Realism


    World maps are indeed conventional, like many other artificial symbols, but misleading as an analogy for visual perception. Visual perception is not an artificial construct relative to conventions or habits. It is a biological and physical state of affairs, which is actual for any creature that can see.

    For example, an object seen from far away appears smaller than when it is seen from a closer distance. Therefore, the rails of a railroad track appear to converge towards the horizon, and for an observer on the street the vertical sides of a tall building appear to converge towards the sky. These and similar relations are physical facts that determine the appearances of the objects in visual perception. A banana fly probably doesn't know what a railroad is, but all the same, the further away something is, the smaller it appears for the fly as well as for the human.
  • Indirect Realism and Direct Realism

    The example of seeing rain shows how the content of the visual experience is related to the rain, and how the presentational intentionality of seeing differs from the representational intentionality of believing. The content of the visual experience and the rain are inseparable in the sense that it is the visible property of the rain that determines the phenomenal character of the visual experience. The fact that they are separate things is beside the point.
  • Indirect Realism and Direct Realism
    Why not?Michael

    Because perception is direct.

    Try this:Banno

    That guy is taking rain dancing to the next level :cool: