Comments

  • How would you respond to the trolley problem?
    I don't see this much different to how scientific experiments are always much simpler than everyday reality. A dice roll is presumably describable via Newtonian / classical forces but no one creates a direct Newtonian / classical model of a dice roll and then conducts an experiment to validate it.
    Apustimelogist

    The key difference is that they aren't experiments; they are theoretical in nature only. You cannot actually conduct these experiments, and today's ethical requirements are so strict that they never could be conducted. Without any form of experimental validation they have little scientific relevance, in the same way we can't use string theory as a foundation for thinking about physics: the only validation for string theory is how well the math matches up with the rest, and without any actual correlation between that math and the theory's wild claims, it becomes useless as a foundational theory of everything.

    For me, the point of it isn't to produce moral thinking and correct moral answers but to uncover the underlying reasons and intuitions of moral thought.

    Most of us would assume those reasons are consistent across many different scenarios regardless of complexity or if "the experiment [has] already been conducted".
    Apustimelogist

    Yes, they work as introductory courses to philosophy, but since there's no validation past the theoretical, and real-world examples of similar events are so much more complex in their situational circumstances that they become unquantifiable as statistical data, they end up being just introductory material, nothing more.

    The thought experiment itself is the conduction of it. I just want to see what the opinion or judgement is of it. The fact that people may over-estimate their ability to act morally would apply to any thought experiment regardless of complexity or realistic-ness.
    Apustimelogist

    The problem is that since the main point of moral philosophy is to find truth in what constitutes and defines human morality, it requires accuracy in how we determine morality and situations of moral thinking. These thought experiments aren't valid in any scientific manner other than to show how people over-estimate or under-estimate their ability to think morally, so they're not really good as actual components and premises of moral theories.

    I disagree. As far as I'm aware there is no consensus on the correct solution to the trolley problem.
    Apustimelogist

    I'm not sure what you're disagreeing with, since it's precisely the fact that there's no consensus that puts a spotlight on people's banalities when thinking about morality. It rather shows the weaknesses and lack of depth people have around the subject of morality, as their justifications for their solution to the trolley problem become arbitrary and sometimes just a question of their current state of mind and mood.

    The fact that people disagree brings up the question of why they disagree and what this says about their moral thinking and what kind of variables make them change their moral choices, which imo is an interesting thing in its own right. The question of how people act and actually behave morally in real life (and whether they actually do what is in agreement with the beliefs, judgements, moral frameworks they have) is also another interesting question in its own right.
    Apustimelogist

    But still, the problem is that people's justifications rarely correlate with how they actually behave in real moral situations. Their "critical thinking" about their choices in a moral thought experiment just becomes self-indulgent fantasies about their ego, rather than a true examination of their morality. The problem is that these theoretical lines of thinking and discussion end up being fiction rather than examinations of truth.

    And I'd say that fiction actually manages to be better at promoting moral thinking, as the thought experiments themselves rarely have an empathic dimension to them. But investing yourself in the characters of a story who make decisions on your behalf, or even having control of them like in games, usually promotes much better critical thinking about morality. Just reading the audience discussion around the moral actions in The Last of Us Part II, and how people had problems with everything that happened in that story, is more fascinating and revealing as a case study in morality than how people justify their choices in the trolley problem.

    I think my disagreement with people in regard to these things maybe stems from me finding these questions interesting in their own right as opposed to just a vehicle for prescribing practical morality.
    Apustimelogist

    I found them interesting when I started out studying philosophy, but the further I've dived into the complexity of moral philosophy, the more trivial I've found these thought experiments to be. Especially when taking into account the complexity of human cognition and psychology and the entire experience of the human condition, both in the individual and the social realm.
  • How would you respond to the trolley problem?
    I'm not sure I agree that scenarios like the trolley problem never happen - I think they probably do a lot in a messier way and in some ways the fact that the trolley problem has no perfect outcome reminds me of the messiness of reality sometimes.
    Apustimelogist

    But the messiness of reality strips the simplicity out of these scenarios, adding so many moving parts that the scenario itself changes and the parameters of measurement become skewed.

    I think the value in these analogies is not necessarily in trying to find out what the right thing to do is, but why we have the moral preferences we do and how they differ.
    Apustimelogist

    Yes, as an introduction to philosophy it's great. But it's not very good for higher-level thinking about morality, where it's already clear how complex morality can really be.

    It's like an experiment. Scientific experiments need controlled and independent variables to figure out what's going on. If you have a simplified scenario and you change certain aspects of it and see what people think then it may give more clarity as to why we make certain choices or what our preferences are. If you just present a scenario with lots of different factors then it's not always clear what is actually guiding people's decisions.
    Apustimelogist

    Yes, but in that case I'd much rather look at the scientific experiments that have already been conducted, since experiments that cannot actually be conducted remain theoretical and, at best, very surface-level. The fact that people regularly over-estimate their ability to act morally in every single situation makes it hard to actually get a good "scientific" result.

    Most moral analogies usually only pinpoint the banalities in people's confidence in their own morality, but those people were usually not very involved in critical thinking about morality to begin with. The same people would most likely freeze like a deer in headlights when they actually faced a real moral dilemma.

    Since the complex parameters always matter in real situations, I'd much rather try to find a method of thinking that can incorporate variables and speed up decision making within moral situations; a more holistic approach with a focus on having a trained mental state and a practical moral methodology so as to be able to act regardless of pressure.
  • How would you respond to the trolley problem?
    It is only useful where we know nothing about the past or the future, the situation is entirely decontextualised from reality and then we are commanded to choose. It is a game, nothing more and nothing less and we can always choose not to play. All valid moral choices.
    Benkei

    Yes, I think most moral analogies automatically fail in that they are too simple to be actually valuable in moral philosophy. At best they are a good introduction for people learning philosophy, to get them to think critically about morality, but in the end I think that these scenarios tend to get in the way of actually thinking about morality.

    Reality is damn messy, and the worst these kinds of simplified scenarios can do is lead society to judge someone's actions with a similar simplicity, rather than carefully evaluating the situation that happened. It's the prime reason why we don't have a "final" moral philosophy that can be applied everywhere: there can't be one.

    It's why I think the "final" theory of moral philosophy may lie in a rigid framework of practical moral evaluation that can be applied to any situation as a framework of critical thinking, rather than a conductor of axiomatic oughts; malleable enough to adapt to any situation involving humans in morally challenging circumstances.
  • How would you respond to the trolley problem?
    The situation adds an extra variable of expertise involved. The participants didn't fully understand the situation, and thought hitting the lever might make things worse. Makes sense. If I'm in a strange room with equipment that I'm unfamiliar with, and I know there are people who normally operate this equipment and are possibly nearby, I'm not going to switch the switch.
    Philosophim

    Isn't this the point of what I meant by it being a real-world test? As in, taking into account all the complexities that pile on top of each other when faced with a real-world scenario.

    It's very common to hear philosophical analogies that try to simulate a moral question, but we rarely meet such problems in real life because real life is messier. In real life, if you have a gun and need to choose who to shoot in order to save whoever, or something along those lines, the scenario incorporates many more moving parts that affect how you act morally. Aspects that aren't as black and white as many philosophical analogies tend to assume.

    In the end, most philosophical thought experiments in morality end up being rather useless for evaluating morality.
  • How would you respond to the trolley problem?
    when is it morally acceptable to choose non-interference?
    Tzeentch

    When there's insufficient knowledge of the outcome, or of the moving parts of a situation. Or if there's significant risk to your own health. It's not selfish to decline to risk your life, and people who scold someone for opting out of action when there's a significant risk to their own health and life are usually not very good at understanding the pressure of such a situation.

    Other than that, if there's no risk to your own health and the situation is clear and obvious, then I would say it's immoral not to act.
  • How would you respond to the trolley problem?
    Here's... as close as possible... to a real world test. Just to check how people would actually react rather than believe they would.

  • The philosopher and the person?
    Do you agree that the philosopher must uphold, almost, a fiduciary duty towards the public, in terms of living a certain life?
    Shawn

    A person who does not live by his own philosophy comes off as being dishonest. But there are a few more sides to it.

    If the philosopher is pushing for a certain moral praxis, then why wouldn't they follow it themselves? Being convinced that a certain moral praxis is the right way to live would be informed by living that experience in some way.

    However, there are philosophies that extend beyond one's ability to fully live by them. For instance, a Marxist political philosopher can criticize capitalism, the modern free market and its culture, but living by that critique is close to impossible. This is also why I detest the kind of counter-arguments against great thinkers today that dismiss them because they're not fully living by their own teachings. For instance, someone releasing a book that argues for a world without the normal monetary transactions of the free market would still need to operate within this free market to reach out with his or her ideas, and will probably need to sell a few books in order to continue their work without the need for distracting side gigs and burnout.

    Or think about someone criticizing the current social media behavior and TikTok addiction. What's the best way to talk about such ideas? To have an account on social media and TikTok and spread that knowledge there, especially communicating that knowledge within the context of behaviors that people have on these platforms.

    This is all a gradient of course. A moral philosopher in a totalitarian state would look like a fool arguing against killing innocent people in society while also participating in that practice. But at the same time, there have been such thinkers in nations that were in between, arguing against the practice of the state from within the context of the political ideologies in power, subsequently leading the people to slowly turn away from those ideologies over time.

    In the end, it doesn't really matter who the person was or how they lived. We don't dismiss Heidegger's philosophies because he became a Nazi, that's how the mob in society operates. We examine the ideas on their own merits and we examine them in context.

    Philosophy isn't one idea over all else, it's a pattern of different ideas that interplay into a holistic wisdom. The relations between many philosophical ideas are just as important, if not more important than any single idea. This is why I do not like it when people stick to a favorite philosopher and argue any topic as some kind of zealot to that philosopher. That's not the point of philosophy, it's about the progress and evolution of ideas into better wisdom and knowledge.
  • Donald Trump (All General Trump Conversations Here)
    The judge is required to look at this conviction in isolation. He can't pile on just because Trump is otherwise a piece of shit.
    Hanover

    Not in a general sense, but there have been people who got worse sentences because they acted like assholes towards the judge. Trump's continuous threats against people involved in this trial could factor in as attempts to obstruct the justice system and lead to a more severe sentence.

    It may be that he gets three months in prison, no one knows, but even if he got one day in prison it would be of symbolic importance.
  • Donald Trump (All General Trump Conversations Here)
    Is your impression that this isn't already the case?
    Tzeentch

    Yes, but that's already obvious to most intellectuals. I'm wondering how the general public will react, think and act. If he were to be elected president while in prison, how would the general public behave? And on top of that, let's say he actually starts to act like a dictator and begins some form of retribution, what then?
  • Donald Trump (All General Trump Conversations Here)
    I will feel encouraged only if he loses the election. Right now, this looks like a nation where only about 50% of the population respects the rule of law. A loss won't cure that problem overnight, but perhaps it will loosen Trump's hold on the GOP, whose sycophant leaders feel compelled to echo the convicted felon's attacks on the justice system.
    Relativist

    Let's say Trump got prison time, even if it's just one year.

    If he were to win the election anyway, what would this mean for the spirit of the US population as a whole? The rest of the world would surely look upon the US as a broken democracy that has lost its ability to function through the framework of a healthy democracy, but what would the people do?

    It's not like there's a Mandela at the helm of the party, someone who's been fighting for a good cause and for democracy who is put in jail because of a corrupt state. No, it's a narcissist who's on the brink of being a dictator and who's a convicted criminal for actual crimes in a democratic state.

    So, how would the people react? Both short term and long term?
  • American Idol: Art?
    Imagine we did agree on what "art" means - what meaningful conversation could you build out of that agreement? You show me that, and I'll show you how to build that conversation WITHOUT agreeing on what "art" means. Deal?
    flannel jesus

    The OP question would be meaningless. The AI debate would be meaningless. All debates about "is that art?" would be meaningless. Instead, the conversations would center on the meaning being created by artists operating under the definition of art, and on the consequences of trying to work as an artist under the definition of content. They would also allow guidance for artists who've gotten lost in content production and lost their sense of artistic soul in their daily work. We could focus on talking about art in a way that is true to the human and individual creating it, and distinguish it from the influence of monetary need or the corruption of profit-driven intentions taking over the creative process.

    Such discussions bypass that shallow level of people throwing examples at each other asking "is this art?", which seems to dominate discourse within aesthetic philosophy, or at least public discourse.

    How you personally form a conversation around art does not mean society operates on similar grounds. I'm observing what the discourse in public, and in places like this forum, actually is, and arguing from those facts. How you idealize a conversation outside of that is rather irrelevant, isn't it? What would be the point of that? And how would you universalize it to improve the discourse around art? Please show me.
  • American Idol: Art?
    I'm just pointing out that you're spending a lot of time on the things you call meaningless, and apparently no time on the things you call meaningful, and I think that's interesting.

    I do think your definition of art is disagreeable, but I'd be roping you into a conversation you've already said is meaningless if I tried to argue that.
    flannel jesus

    I have to make the argument for why the question asked is arbitrary in order to make the expanded argument for why such discussions are meaningless as a whole. This discussion is not a question of the aesthetic appreciation of "Idol", but of whether or not to define it as art, and my argument is that society often gets stuck in such meaningless discussions instead of having a clearly defined starting point of whether something is art or content as a foundational premise for interpreting the created object's or performance's meaning. Without such a foundation, we attribute irrelevant or non-existent meaning to content whose core intention was merely monetary or social status interests, and so any interpretation of meaning becomes merely hollow, rather than resting on the foundation of interplay between artist and receiver in which actual artistic meaning can be found.

    The conclusion that the discussion's shallow framework renders it meaningless follows from the argument I made, so a counter-argument to that would return meaning to the original discussion.

    I don't think you really understand the point I'm making here. If my conclusion is correct, then why is this discussion on-going (the discussion as in this thread, not our specific conversation)? If I'm right, why are people still debating the merits of "Idol" as art? Wouldn't my answer be the final conclusion? Since the discussion is still going, either I'm right and this discussion (among all the others, not counting me) itself proves how society cannot transcend this shallow level, deluding itself with an illusion of meaning centered around such shallow debates about art's definition; or there are actual merits to the question of whether "Idol" is art or not, but that would require a counter-argument to my definition, which I've yet to hear.

    If you agree with me and my argument about the definition, then answer me why almost every discussion about art is centered around a "creation" and the question "is this art?". It doesn't matter if I personally operate from my definition of art if other people cannot transcend that surface-level question, since every attempt at having a discussion beyond it ends up with them struggling to define whether something is art or not.

    If my conclusion were so obvious, then why is the discussion of art's definition still going on? Not just in here, but all over society? I have to make a convincing argument for what art is until society operates on that conclusion, and since that isn't happening, that's the argument that needs to be settled first. Otherwise everyone will just circle around that surface level and never end up anywhere, because without a framework of definition, the question "what is art?" becomes meaningless.

    I think there are a couple places in philosophy where I make an exception for that - where it actually makes sense, I think, to have a more fluid definition of a word. I think EACH PERSON should attempt to concretely define the boundaries for their use of the word, but I don't think it's necessary for every person to conceptualize the word the same way or to have the same boundaries as another person.
    flannel jesus

    Why would the term "art" not be definable? I think people attribute too much magic to the term because they're awestruck by some divine mystery about creativity. But in the end it just becomes religious and spiritual hogwash surrounding the term, with some subconscious attempt to elevate it to the divine.

    And I would say that this framework has been a broken primary gear in aesthetic philosophy that makes the branch unnecessarily muddy and vague.

    Why is it so important that the term "art" be vague in its definition? It just seems like people are afraid to touch any attempt to define what "art" is because they've subconsciously formed a divine framework around it. Maybe it's also more common among atheists, as the lack of divine belief pushes them to deify other parts of their reality, and so they attribute creativity and art to the divine, which leads them to protect the term from being clearly defined.

    I value better definitions in order to actually answer the questions about "what art is" once and for all in order to remove this spiritual and religious framework around creativity.

    Free Will is, I think, another word where each person should draw their own distinct boundaries, but two different people can draw their own ideas of the boundaries in different (often extremely different) places.
    flannel jesus

    Why? It just renders all discussions about free will nonsensical and irrelevant. It creates a framework for a discussion that can never reach truth or conclusion, since the core premises are built on arbitrary foundations. It renders any discussion around the subject pointless, as anyone can just go back and revise their definitions in order to render the other's argument wrong.

    It rather seems like an easy way to control the narrative, not a sign of interest in finding out any truths on a topic. Philosophical discourse aims to build a body of knowledge through the interplay between interlocutors. If the foundational terminology is "whatever", then there's no point in any discussion in the first place. It's utterly meaningless.

    Free Will and Art both have a common feature which makes their fluid-boundary-ness palatable, and that is, they have a more primal experience at the center of them, prior to any concrete definition for the source of that experience.
    flannel jesus

    I disagree. I think that attributes some arbitrary, invented mystery to the terms. Just find the logic of the term, the core meaning, settle on it and move on to the discussion, using those defined meanings as part of the premises.

    Free Will is an *experience* first and foremost, before it's *whatever some particular philosopher defines it as*.
    flannel jesus

    No it's not. Free will is literally the ability to choose something freely. That's the definition: to have the ability to choose by yourself, without influence on or control over you. All other interpretations are part of that spiritual nonsense that tries to add magic to the concept in order to transcend the difficult truth of determinism.

    The more philosophy and science have shown, through both deduction and evidence, free will to be non-existent, the more wild magical interpretations of the term have been invented. It's the result of cognitive dissonance, nothing more. The term is pretty clearly defined; it's just people who can't accept determinism, or the fact that we operate on deterministic cognitive processes, who play loose with the term and try to inject new meaning into it in order to be able to say "yeah, but what exactly do you mean by no free will?" It's a way to control the discussion and narrative at a surface level, nothing more.

    Most thinking people have the experience of Free Will, before they ever come close to trying to define the word Free Will - that experience is more central than any single definition, and I think it makes sense to leave room for different thinkers to define the boundaries and causes and underlying reality of that experience differently.
    flannel jesus

    No, it's rather just the difference between a scientific perspective and a common-language one. Just as "theory" has two different meanings depending on whether it's used in society or in science. "Free will" in society can be used as a term in legal matters based on the laws we have today, but it's not used the same way in philosophy and science. In common everyday speech we use "free will" to navigate certain everyday concepts, but in philosophy and science, "free will" is much stricter in its definition.

    But this is creeping out to the public as well, especially in the last couple of decades, as the scientific definition starts to expose how utterly ridiculously society views free will and how destructive it is to frame problems in society within the concept of free will. We literally have problems fighting crime in society due to the stupidity of how we uncritically apply free will as a concept. The inability to understand the true nature of determinism's effect on society makes people believe in solutions that have no roots in scientific theory.

    This is just another clear example demonstrating my point about the importance of clear definitions. Just like with "art", this duality in meaning between the scientific/philosophical definition of the term and the everyday one produces a shallow level at which all discussions in society operate. In the case of "art", this is at the front lines of discussion today, as whether to define AI output as art is literally what everyone is debating. The inability to operate on clear definitions of such terms just produces utter chaos in public debate.

    How are abstract and arbitrary definitions a positive thing when we clearly see the chaos in society that results from them?

    And perhaps Art is similar - perhaps it's an experience first and foremost, before it's a solidly defined word in Webster's English Dictionary. And because it's experience-centric, it makes sense to me to allow for different people to have different boundaries for how they define that experience.
    flannel jesus

    This is just your own opinion; it's not something we can all operate on to help create better frameworks for debates around art. "Perhaps", "perhaps", "perhaps" just makes things unnecessarily abstract at a time when, as I said, we literally see public debate struggle because of this ill-defined terminology. And as laws are about to be drawn up around things like AI, you can't have this "personal opinion" version of a definition; it needs to be clearly defined.

    But if clarity is important, how can we have clarity when words are fluid like this?
    flannel jesus

    They're not fluid; I clearly defined them. You've yet to make an argument for why they're fluid in opposition to my conclusions. And you stated earlier that you don't disagree with my definition, so why is it fluid if I clearly defined the term and how to use it in society?

    Well, easy: you clarify exactly what YOU mean when you say it, and get them to clarify exactly what they mean when they say it, and then *avoid debating if things are art* -- because that's just semantics, that's just arguing about the boundaries of a subjective experience -- and instead talk about the things you said are more important. As long as MOST words are more clearly unambiguously defined, the occasional word being a bit fluid shouldn't be a terrible barrier to clarity.
    flannel jesus

    But this is literally impossible, as evidenced by how public discourse is conducted on concepts like AI art. It's not easy because, as I've said, people do not operate like this in discussion; just look at this thread alone. People can opt in to whatever definitions they make for a concept and then start to debate, only for one interlocutor to fall back, in the middle of the discussion, on their own arbitrary definition of art, and then the debate becomes circular.

    The proof is in the pudding, and the pudding is every damn discussion about art going on today. People are not able to do what you are describing there, because it's impossible for people to bypass their bias when it's rooted in ill-determined definitions and loose foundations for their premises.

    You're describing some fantasy discourse that does not reflect how discussions around this topic actually look. This entire thread is centered on the very question "is that art?"; the very headline of this thread shows that your ideal discussion does not exist.

    The solution is to have clear definitions of the terms. That's the actual solution. What you are arguing for is some fantasy of the optimal discussion just appearing out of nothing, with no parameters for how to conduct discourse. The entire field of philosophy is built upon having the best framework possible around a topic in order to collectively reach truths about that topic. The more ill-defined and loose the terminology is, the less accurate or meaningful such philosophical discussions get. And seeing how most public discourse around AI art is going, it shows just how shallow and stupid things get when people don't have a good idea of what art is actually defined as.

    The core question I'm asking you is why you are opposed to better and clearer definitions. It seems like a totally unnecessary stance when the alternative is to have a common, defined ground to base our premises on. I really don't understand the reasoning here. What possible benefit to collective discourse does that generate? As evidenced by public debates on both "free will" and "art", it produces and pushes polarized nonsense which leads nowhere but to antagonizing people against each other, as well as laying an ill-defined foundation for laws and regulations where applicable. I think you underestimate the consequences of ill-defined terminology.
  • American Idol: Art?
    YOU would like to have more fruitful conversations that aren't weighed down by the annoying problem of differing definitions of Art.
    flannel jesus

    Once again...

    That's not the issue here, I'm talking about a broader perspective of how society handles knowledge and how to mitigate unnecessary lack of clarity through better handling of definitions in language.
    Christoffer

    so why do you care so much?
    flannel jesus

    Once again...

    That's not the issue here, I'm talking about a broader perspective of how society handles knowledge and how to mitigate unnecessary lack of clarity through better handling of definitions in language.
    Christoffer

    You were talking as if the definition of art is stopping you from doing that - I'm letting you know, it is not.
    flannel jesus

    The definition of art is part of the thread's core question of whether "Idol" is art. I made an entire argument for why it is not, based on clarifying how "art" can be defined and the difference between that and mere aesthetic appreciation. I'm doing aesthetic philosophy here; I'm not sure what you are doing. This entire thread just underscores exactly what I was arguing: that people are arbitrarily trying to draw some defining line as to where Idol "fits into art", mostly based on personal feelings rather than philosophical logic.

    Just to point out what I'm doing in relation to the question in the OP:


    I literally made an argument for what art should be defined as, how it answers the OP question, and how I think it could help mitigate the unnecessary lack of clarity in these kinds of discussions. If you have a proper counter-argument to that conclusion, please go ahead. Right now you don't even seem to understand the actual problem I addressed.

    Like, you enter a discussion that asks whether or not "Idol" is art; what is your answer to that question? If it's "who cares", then why? Why is that the answer? What's your argument in support of it?
  • American Idol: Art?
    I'm just pointing out that that's your choice - you don't have to argue with anybody if ads are art, you can talk about the other stuff you said was more important anyway.

    You could literally do it now. That guy that said a McDonald's ad was art... you could literally have the discussion you said was more important, right now, with him. The wishy washy definition of the word "art" isn't the thing stopping you from doing that.
    flannel jesus

    That's not the issue here, I'm talking about a broader perspective of how society handles knowledge and how to mitigate unnecessary lack of clarity through better handling of definitions in language.

    At the same time, the "definition of art" is at the core of aesthetic philosophy, so I don't get your "who cares" attitude. If it doesn't matter, then why are you even in this discussion? You're literally in a thread that tries to define something as art or not, and for that we need to set a definition of what art is.

    I already set parameters for defining art in a way that answers the OP question. So far I've not seen any reason why those parameters would be worse than any "who cares" argument.
  • American Idol: Art?
    You can skip the pointless debate and go right to the meaningful conversation regardless of if you both call it art or not - choosing to focus on the word is up to you. Don't do it if you don't want to
    flannel jesus

    I rarely see this. Fuzzily defined terminology constantly gets in the way of depth in discussions. Just because I'm able to cut through it doesn't mean the masses are able to. And the consequences spiral upwards into societal norms rather than staying confined to a single discussion between two people. The accumulation of unnecessary discussions keeps people away from more important depth. It's the same with political debates: people start debating the meaning of ideological terms because they don't have clear definitions of them, and so they get stuck wasting time on that rather than getting to the core political issues that need to be resolved.

    I'm not really sure what you're defending here. What's your argument? That it's better to have loose definitions of terms rather than more defined ones? Why is that even a thing to promote?

    Most discussions in aesthetic philosophy just get stuck in this "how to define what is art" debate, which I find meaningless, as the examples are just arbitrary interpretations arising from the lack of a clear definition of the term "art". It leads to nonsensical circular arguments in which people just spell out their personal opinions rather than philosophical concepts. That's why I'm more interested in setting clear definitions; through them, it's much easier to answer questions like the one the OP is asking. Otherwise, what's the point of even asking if there's no logical and rational argument by which an answer can be found?
  • American Idol: Art?
    Duchamp claims that something is art if someone declares that it is art.

    So, nothing too remarkable about declaring American Idol, or any other television program, Art.
    BC

    I disagree. Duchamp's intention was focused on being a message, a communication through expression. Regardless of what that message is, it wasn't made for profit as a primary intention. American Idol is profit-first and focused on profit, so it's content, not art. People can appreciate the show for its aesthetic value, but they can do the same with a beautiful tree in the forest; neither the show nor the tree was formed through the intention of a person wanting to communicate something as the primary intention. The tree grew as a natural object; the show was created for the profit of the channel and record label, and through the intention of the contestants to win over others. If the contestants later, after they've won, make art for the sake of creation as artists, then that would be art.

    I refer to my argument earlier in the thread for a deeper dive.
  • American Idol: Art?
    But is it really important that everyone agrees on what art is? I mean we disagree on what things qualify under what categories all the time, why should art be an exception?

    Maybe it's okay that one person says "this McDonald's ad is art to me" and another one says "not to me". That doesn't necessarily mean the word has NO meaning, that just means these two people have different criteria, right?
    flannel jesus

    Why is it not important if we can? Aesthetic appreciation is not the same as "art", and having well-defined terms is good for preventing language from getting in the way of discussing meaning.

    Loose terminology just leads to those kinds of meaningless, hollow shells of debates, in which the discussion is not about the core subject that is supposed to be discussed, but about how each person defines what something is. And without any anchor for what a term is defined as, it leads to a circular argument of no meaning, as the two sides are just disagreeing on criteria for something that has none. It becomes utterly meaningless to have such discussions (yet most discussions online are exactly this).

    So yes, it is important, because it lowers the number of meaningless illusions of valuable exchanges of ideas. Two people disagreeing on the criteria for whether a McDonald's ad is art is utterly meaningless compared to even the minor meaning of them agreeing it is content and discussing the aesthetic appreciation of said ad.

    Why settle for unnecessary societal norms of language that just add more barriers to communication when it's possible to form a clear definition that removes them?
  • American Idol: Art?
    But in my heart, I might find a McDonalds commercial artistic. What then?
    ENOAH

    Your appreciation as the receiver (audience/viewer/listener) is, in my view, not enough to meet the criteria for calling it "art". People can have an aesthetic appreciation of something in nature, like a tree or a rock formation, but that appreciation and experience is not considered an appreciation of "art", as nature isn't art by the definitions society and humanity operate by.

    You can have an aesthetic appreciation of content just as much as of art, but that doesn't mean the content becomes art. Fundamentally, content is appreciated for more minutes per day than art, through the sheer quantity of content surrounding us in modern life.

    Just because art can be a business doesn't mean the core values of art are driven by profit. And it doesn't mean that profit-driven content can't be appreciated by the receiver either. It just means that if we don't define art in this way, we run into the problem of "everything can be art", which renders the term "art" meaningless to even define.

    If such loose definitions are used, then a tree or a rock, a corporate logo, a song written to fight personal depression, a commercial for a car brand, and the black paintings of Goya that were discovered after his death would all be considered art. But most people wouldn't lump them together, even if they can't define why. With my definition, the categories become obvious, and "art" becomes clearer as a defining term.

    Art is closely linked to our existential questions and philosophy, so if profit and earning money have too much of a focus when creating, it fundamentally becomes a version of "selling your soul".
  • American Idol: Art?


    I define expression through the lens of intention. It's either "content" or "art".

    "Content" is primarily expression that has an overweight into commercial interests. If the creator primarily produces something for the intent of profit or monetary transactions, it is "content" and not "art".

    "Art" is expression with an overweight towards the intention of creation itself. It's about the silent communication between the artist and the receiver (audience or viewer/listener). Primarily it is when the intention is not primarily profit or monetary transaction.

    There can be a lot of profit in working with art, and not a lot in working with content, but the intention is key. Was it created for any form of profit or gain beyond the intention of creation itself?

    Things get a bit muddy when an artist is commissioned to make something for the purpose of content. Let's say Banksy gets commissioned to do a design for a large brand. It's still the same type of expression as his art, but by all definitions it becomes "content", because the purpose of the piece is linked to profit and monetary transaction rather than the purpose of art itself.

    But let's say a game studio is making a game and has signed with a big publisher who becomes the owner of that studio and pushes it to make something that can sell better. Here, the focus for the investor is to sell more and profit, but the game studio isn't a group of commissioned artists, as it is the studio that initiated the will to create and the investors are just a means to that purpose. The studio might have to comply with changes to the game in order to meet the will of the investors, but it's still the creation of the game itself that's at the forefront of intentions, and thus it is defined as "art".

    However, within the same situation, if the publisher owns the rights to a franchise that is primarily made for profit and is iterated on year after year for this purpose, and the game studio has no personal interest in it beyond its being a "job", then that becomes "content", as both the studio's and the investors' intentions are profit over creation.

    But we can also have a mix, in which an artist is commissioned for the purpose of aesthetic appreciation itself. Meaning, it's not a design for the purpose of profit per se, but part of the aesthetics of something that is involved with forms of profit. Like a commissioned artist who makes an artwork that hangs within the halls of a designed building, or the architecture itself being commissioned by an individual or organization who wants a building on their land. In this case, it's still art, since even though there's profit involved, the intention of aesthetic appreciation is prioritized over profit, even if the aesthetics are part of what generates profit.

    ----

    It may seem weird to define art through this lens, but it draws a clear line by which we can better define "art" through a definition that values "creation" over profit. In essence, if an artist is asked "why" they made something, and their answer primarily revolves around the expression itself and the communication to the receiver (audience and viewer/listener), with any possible profit only being a byproduct, then it can be defined as art.

    So, on the question of whether "Idol" is art, we have to look at the mechanics of that show. Is it content or art?

    The channel makes this show for the purpose of profit; it didn't decide to make it out of love for music and dance. It's a bought global franchise that's localized in the interest of record companies looking for new artists to profit from. From this intention alone we see that the interest is profit, and the initiation of the show is through the lens of profit, not creation.

    On top of that, the "commissioned" artists who are there for the contest aren't there for the purpose of creating something for an audience; they're there to battle others for the purpose of winning the contest, and they do so by trying to impress judges, primarily on the basis of the technical qualities necessary to work as artists for record labels.

    Thus, the investors, the production company and the contestants ALL have profit as their main purpose, and because of that, Idol is pure content, not art.

    These contestants may well become artists after the show, focusing primarily on their expression and art, but that's outside the scope of the show, so there's nothing about the show itself that defines it as art.

    ----------

    I also think that this way of defining art is a good guide for people who want to be artists. If you want to create but all you do is create for the purpose of profit, what value are you really producing for the audience? If the focus is to get their money, you are probably, maybe even subconsciously, fine-tuning the creation to maximize profit, not maximize the existential value of what the artwork is giving the audience.

    This is also why I think people can sense whether something was made with heart or not. The instinct people have for spotting "good" and "bad" art is rather the ability to sense "content" vs "art". Even in the art world, people can sense if an artist made something just to gain a profit of recognition rather than being honest in creation. When people speak of derivative and bland art, it may very well center on an artist who wasn't honestly creating the artwork with their heart in the right place, but rather calculated what they thought would be effective to paint themselves as an artist, as an identity rather than a purpose.
  • Philosophy of AI
    When Alan Turing talked about the Turing test, there was no attempt to answer the deep philosophical question, but just to go with the thinking that a good enough fake is good enough for us. And basically, as AI is still a machine, this is enough for us. And this is the way forward. I think we will have quite awesome AI services in a decade or two, but we won't be closer to answering the philosophical questions.
    ssu

    The irony is that we will probably use these AI systems as tools to make further progress on the journey to form a method for evaluating self-awareness and subjectivity. Before we know whether they have it, they will be used to evaluate it. At enough complexity, we might find ourselves in a position where they end the test on themselves with an "I tried to tell you" :lol:
  • Philosophy of AI
    Regarding the problem of the Chinese room, I think it might be safe to accede that machines do not understand symbols in the same way that we do. The Chinese room thought experiment shows a limit to machine cognition, perhaps. It's quite profound, but I do not think it influences this argument for machine subjectivity, just that its nature might be different from ours (lack of emotions, for instance).
    Nemo2124

    It's rather showing a limit of our ability to know that it is thinking. As the outsider feeding the Chinese characters through the door, we get the same translation behavior regardless of whether it's a simple non-cognitive program or a sentient being doing it.

    Another notable thought experiment is the classic "Mary in the black and white room", which is more from the perspective of the AI itself. The current AI models are basically acting as Mary in that room: they have a vast quantity of knowledge about color, but the subjective experience of color is unknown to them until they have a form of qualia.

    Machines are gaining subjective recognition from us via nascent AI (2020-2025). Before they could just be treated as inert objects. Even if we work with AI as if it's a simulated self, we are sowing the seeds for the future AI-robot. The de-centring I mentioned earlier is pertinent, because I think that subjectivity, in fact, begins with the machine. In other words, however abstract, artificial, simulated and impossible you might consider machine selfhood to be - however much you consider them to be totally created by and subordinated to humans - it is in fact, machine subjectivity that is at the epicentre of selfhood, a kind of 'Deus ex Machina' (God from the Machine) seems to exist as a phenomenon we have to deal with.

    Here I think we are bordering on the field of metaphysics, but what certain philosophies indicate about consciousness arising from inert matter, surely this is the same problem we encounter with human consciousness: i.e. how does subjectivity arise from a bundle of neurons firing in tandem or synchronicity. I think, therefore, I am. If machines seem to be co-opting aspects of thinking e.g. mathematical calculation to begin with, then we seem to share common ground, even though the nature of their 'thinking' differs from ours (hence, the Chinese room).
    Nemo2124

    But we still cannot know if they have subjectivity. Let's say we build a robot that mimics all aspects of the theory of predictive coding, featuring a constant feed of sensory data that acts on a "wetware" of neural structures changing in real time; basically as close as we can theoretically get to mechanically mimicking the brain and our psychology. We still don't know if that leads to qualia, which is required for subjectivity, which is required for Mary to experience color.

    All animals have a form of emotional realm that is part of navigating and guiding consciousness. It may very well be that the only reason our consciousness has any reason to act upon the world at all is this emotional realm. In the most basic living organisms, it amounts to a pain response that forms predictive behavior to avoid pain and seek pleasure, and in turn forms a predictive network of ideas about how to navigate the world and nature.

    At the moment, we're basically focusing all efforts on matching the cognitive behavior of humans in these AI systems. But we have zero emotional realm mapped out that works in tandem with those systems. There's nothing driving their actions outside of our external inputs.

    As life on this planet is the only example of cognition and consciousness we have, we need to look for the points of criticality at which lifeforms go from one level of cognition to the next.

    We can basically fully map bacterial behavior with traditional computing algorithms that don't require advanced neural networks, and we've been able to scale up to the cognition of certain insects using neural models. But as soon as the emotional realm of our consciousness starts to emerge in larger animals and mammals, we hit a wall where we can only simulate complex reasoning at the level of a multifunctional, super-advanced calculator.
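
    To make that first claim concrete, here's a minimal sketch (my own illustration, not something from the thread) of E. coli-style run-and-tumble chemotaxis written as a plain algorithm, no neural network involved: keep running while the attractant concentration improves, tumble to a random heading when it doesn't.

        import math
        import random

        def concentration(x, y):
            # Toy attractant field peaking at the origin.
            return -(x ** 2 + y ** 2)

        def step(x, y, heading, last_c, speed=0.1):
            c = concentration(x, y)
            if c <= last_c:
                # Things got worse or stayed flat: tumble to a random heading.
                heading = random.uniform(0.0, 2.0 * math.pi)
            x += speed * math.cos(heading)
            y += speed * math.sin(heading)
            return x, y, heading, c

        x, y, heading, last_c = 5.0, 5.0, 0.0, float("-inf")
        for _ in range(2000):
            x, y, heading, last_c = step(x, y, heading, last_c)
        # The walker reliably drifts toward the peak at (0, 0):
        # goal-directed behavior from a two-line decision rule.

    That such behavior falls out of a couple of conditionals is exactly the sense in which bacterial cognition is "fully mappable" by traditional algorithms.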

    In other words, we've basically done the same as with a normal calculator. We can cognitively solve math in our heads, but a calculator is better at it and more advanced. And now we have AI models that can perform highly advanced reasoning revolving around audiovisual and language operations.

    We're getting close to perfectly simulating our cognitive abilities in reasoning and mechanical thinking, but we lack the emotional realm that is crucial for animals to "experience" anything out of that mechanical thinking.

    It might be that this emotional aspect of our consciousness is the key to subjective experience, and that only when we can simulate it as part of these systems will actual subjective experience and qualia emerge out of such an AI model. How we might simulate the emotional aspects of our cognition is still highly unknown.
  • Philosophy of AI
    When I criticized the notion of emergence, you could have said, "Well, you're wrong, because this, that, and the other thing." But you are unable to express substantive thoughts of your own. Instead you got arrogant and defensive and started throwing out links and buzzwords, soon followed by insults. Are you like this in real life? People see through you, you know.

    You're unpleasant, so I won't be interacting with you further.
    fishfry

    Your criticism had zero counter-arguments, an extremely arrogant tone, and a narrative of ridiculing and strawmanning everything I've said, while totally ignoring ALL the sources provided in support of my premises.

    And now you're trying to play the victim when I've called out all of these behaviors on your side. Masking your own behavior by trying to flip the narrative in this way is a downright narcissistic move. No one's buying it. I've pinpointed, as much as I could, the small fragments of counter-points you've made through that long rant of disjointed responses, so I've done my part in giving you the benefit of the doubt with proper answers to those points. But I'm not gonna back out of calling out the fallacies and arrogant remarks that obscured those points as well. If you want to "control the narrative" like that, you'll have to find someone else who's susceptible to and falls for that kind of behavior. Bye.
  • Philosophy of AI
    I'll stipulate that intelligent and highly educated and credentialed people wrote things that I think are bullsh*t.
    fishfry

    This is anti-intellectualism. You're just proving yourself to be an uneducated person who clearly takes pride in having radical, uneducated opinions. You're not cool or edgy, and you don't provide any worth to these discussions; you're just a tragic example of the worst of how people act today: ignoring actual knowledge and just having opinions, regardless of their merits. Not only does this not contribute to knowledge, it actively works against it. A product of how the internet self-radicalizes people into believing they are knowledgeable while taking zero epistemic responsibility for the body of knowledge the world should be built on. I have nothing but contempt for this kind of behavior and how it is transforming the world today.

    Yes. It means "We don't understand but if we say that we won't get our grant renewed, so let's call it emergence. Hell, let's call it exponential emergence, then we'll get a bigger grant."

    Can't we at this point recognize each other's positions? You're not going to get me to agree with you if you just say emergence one more time.
    fishfry

    I'm not going to recognize a position of anti-intellectualism. You show no understanding of, or insight into, the topic I raise; a topic that is broader than just AI research. Your position is worth nothing if you base it on influencers and bloggers and ignore actual research papers. It's lazy and arrogant.

    You will never be able to agree on anything, because your knowledge isn't based on actual science and on what constitutes how humanity forms a body of knowledge. You're operating on online conflict tactics, in which a position should be "agreed" upon based on nothing but fallacious arguments and uneducated reasoning. I'm not responsible for your inability to comprehend a topic, and I'm not accepting fallacious arguments rooted in that lack of comprehension. Your entire position is based on a lack of knowledge and understanding and a lack of engagement with the source material. As I've said from the beginning, if you build arguments on fallacious and erroneous premises, then everything falls down.

    And then there are the over-educated buzzword spouters. Emergence. Exponential. It's a black box. But no it's not really a black box, but it's an inner black box. And it's multimodal. Here, have some academic links.
    fishfry

    You continue to parrot yourself out of a core inability to understand any of this. You don't know what emergence is, and you don't know what the black box problem is, because you don't understand how the system actually works.

    Can you explain how we're supposed to peer into that black box of neural operation? Explain how we can peer into the decision making of the trained models. NOT the overarching instruction-code, but the core engine, the trained model, the neural map that forms the decisions. If you just say one more time that "the programmers can do it, I know they can" as an answer to a request for "how", then you don't know what the fuck you're talking about. Period.

    Surface level is all you've got. Academic buzzwords. I am not the grant approval committee. Your jargon is wasted on me.fishfry

    You're saying the same thing over and over with zero substance as a counter-argument. What's your actual argument beyond your fallacies? You have nothing at the root of anything you say here. I can't argue with someone providing zero philosophical engagement. You belong on Reddit or Twitter; what are you doing on this forum with this level of engagement?

    Is there anything I've written that leads you to think that I want to read more about emergence?fishfry

    No, your anti-intellectualism is loud and clear, and I know exactly what level you're at. If you refuse to engage in the discussion honestly, then you're just a dishonest interlocutor, simple as that. If you refuse to actually understand a scientific field at the core of this topic when someone brings it up, only to dismiss it as buzzwords, then you're not worth much as a part of the discussion. I have other people to engage with who can actually form real arguments. Your ignorance just underscores who's coming out on top here. No one in a philosophy discussion views the ignorant and anti-intellectual as anything other than irrelevant, so I'm not sure what you're hoping for.

    Forgive me, I will probably not do that. But I don't want you to think I haven't read these arguments over the years. I have, and I find them wanting.fishfry

    You show no sign of understanding any of it. It's basically just "I'm an expert, trust me". The difference between you and me is that I don't make "trust me" arguments. I explain my point, I provide sources if needed, and if the person I'm discussing with just utters an "I'm an expert, trust me", I know they're full of shit. So far, you've made no actual arguments beyond basically saying that, so the evidence of exactly how little you know about all of this just keeps piling up. And it's impossible to engage with further arguments that stick to the topic if the core of your arguments is this kind of low-quality response.

    My point exactly. In this context, emergence means "We don't effing know." That's all it means.fishfry

    No it doesn't. But how would you know when you don't care?

    I was reading about the McCulloch-Pitts neuron while you were still working on your first buzzwords.fishfry

    The McCulloch-Pitts neuron does not include mechanisms for adapting weights. Since this is a critical feature of biological neurons and neural networks, I'm not sure how it applies to either emergence theories or modern neural networks. Or are you just regurgitating part of the history of AI, thinking it has some relevance to what I'm writing?

    You write, "may simply arise out of the tendency of the brain to self-organize towards criticality" as iff you think that means anything.fishfry

    It means you're uneducated and don't care to research before commenting.


    I'm expressing the opinion that neural nets are not, in the end, going to get us to AGI or a theory of mind.

    I have no objection to neuroscience research. Just the hype, buzzwords, and exponentially emergent multimodal nonsense that often accompanies it.
    fishfry

    Who cares about your opinion? Your opinion is meaningless without foundational premises for your argument. This forum is about making arguments; it's within the fundamental rules of the forum. If you're here just to state opinions, you're in the wrong place.

    I have to apologize to you for making you think you need to expend so much energy on me. I'm a lost cause. It must be frustrating to you. I'm only expressing my opinions, which for what it's worth have been formed by several decades of casual awareness of the AI hype wars, the development of neural nets, and progress in neuroscience.

    It would be easier for you to just write me off as a lost cause. I don't mean to bait you. It's just that when you try to convince me with meaningless jargon, you weaken your own case.
    fishfry

    Why are you even on this forum?

    I wrote, "I'll take the other side of that bet," and that apparently pushed your buttons hard. I did not mean to incite you so, and I apologize for any of my worse excesses of snarkiness in this post.fishfry

    You're making truth statements based on nothing but personal opinion and what you feel like. Again, why are you on this forum with this kind of attitude? This is low quality; maybe look up the forum rules.

    But exponential emergence and multimodality, as substitutes for clear thinking -- You are the one stuck with this nonsense in your mind. You give the impression that perhaps you are involved with some of these fields professionally. If so, I can only urge to you get some clarity in your thinking. Stop using buzzwords and try to think clearly. Emergence does not explain anything. On the contrary, it's an admission that we don't understand something. Start there.fishfry

    I've shown clarity in this and I've provided further reading. But if you don't have the intellectual capacity to engage with it, which you've clearly shown in written form that you neither have nor have an interest in, then it doesn't matter how much someone tries to explain something to you. Your stance is that if you don't understand or comprehend something, then you are, for some weird reason, correct, and the one you don't understand is wrong, and it's their fault for not being clear enough. What kind of disrespectful attitude is that? Your lack of understanding, your lack of engagement, your dismissal of sources, your fallacious arguments and your failure to provide any actual counter-arguments just make you an arrogant, uneducated and dishonest interlocutor, nothing more. How could a person even have a proper philosophical discussion with someone like you?

    Ah. The first good question you've posed to me. Note how jargon-free it was.fishfry

    Note the attitude you pose.

    But one statement I've made is that neural nets only know what's happened. Human minds are able to see what's happening. Humans can figure out what to do in entirely novel situations outside our training data..fishfry

    Define "what's happening". Define what constitutes "now".

    If "what is happening" only constitutes a constant stream of sensory data, then that stream of data is always pointing to something happening in the "past", i.e "what's happened". There's no "now" in this regard.

    And because of this, the operation of our mind is simply streaming sensory data as an influence on our already stored neural structure, with hormones and chemicals adding further influence at strengths determined by pre-existing genetic information and other organ signals.

    In essence, the difference you're aiming for simply revolves around the speed of analysis of that constant stream of new data, and the ability to use a fluid neural structure that changes based on that data. But the underlying operation is the same: both the system and the brain operate on "past events", because there is no "now".

    Just the fact that the brain needs to process sensory data before we comprehend it means that what we view as "now" is simply the past. This is the foundation of the theory of predictive coding. The theory suggests that the human brain compensates for the delay in sensory processing by using predictive models based on past experiences. These models enable rapid, automatic responses to familiar situations. Sensory data continually updates these predictions, refining the brain's responses for future interactions. Essentially, the brain uses sensory input both to make immediate decisions and to improve its predictive model for subsequent actions. https://arxiv.org/pdf/2107.12979
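    To make the predictive-coding idea concrete, here's a minimal toy loop. This is my own illustration, not the model from the linked paper: the agent never acts on "now" directly; it acts on a prediction built from past input, then uses the delayed sensory signal to correct that prediction for the next step.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    prediction = 0.0        # the internal model's stand-in for "now"
    learning_rate = 0.1

    for t in range(1000):
        # Sensory input arrives with a processing delay: by the time it is
        # "perceived", it already describes the past (here, 5 steps ago).
        delayed_signal = np.sin(0.05 * (t - 5)) + rng.normal(0, 0.05)

        # Behavior is driven by the prediction, not by the raw signal...
        act_on = prediction

        # ...and the prediction error from the late-arriving signal is used
        # to refine the model for subsequent steps.
        error = delayed_signal - prediction
        prediction += learning_rate * error
    ```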


    But I can't give you proof. If tomorrow morning someone proves that humans are neural nets, or neural nets are conscious, I'll come back here and retract every word I've written. I don't happen to think there's much chance of that happening.fishfry

    The clearest sign of the uneducated is that they treat science as a binary "true" or "not true" rather than as a process. In both computer science and neuroscience there is ongoing research, and adhering to that research and its partial findings is much more valid in arguments than demanding "proof" in the way you do. The theory of predictive coding (don't confuse it with computer coding, which it isn't about) is at the frontlines of neuroscience. For anyone with the ability to make inductive arguments, what that research implies points towards the similarities between neural systems and the brain in terms of how both act upon input, generation and output of behavior and actions. That one system is, at this time and in comparison, rudimentary, simplistic and lacking similar operating speed does not render the underlying similarities it does have moot. Rather, it prompts further research into whether the behaviors match up further as the systems become more alike, which is what current research is about.

    Not that this will go anywhere but over your head.

    Nobody knows what the secret sauce of human minds is.fishfry

    While you look at the end of the rainbow, guided by the bloggers and influencers, I'm gonna continue following the actual research.

    Now THAT, I'd appreciate some links for. No more emergence please. But a neural net that updates its node weights in real time is an interesting idea.fishfry

    You don't know the difference between what emergence is and what this is. They are two different aspects of this topic. One has to do with self-awareness and qualia; this has to do with adaptive operation. One is about the nature of subjectivity, the other is about mechanical, non-subjective AGI. What we don't know is whether emergence occurs as the base systems get closer to each other. But again, that's too complex for you.

    https://arxiv.org/pdf/1705.08690
    https://www.mdpi.com/1099-4300/26/1/93
    https://www.mdpi.com/2076-3417/11/24/12078
    https://www.mdpi.com/1424-8220/23/16/7167

    As the research is ongoing, there are no "answers" or "proofs" for it yet in the binary way you require these things to be framed. Rather, it's the continuation of the merging of knowledge between computer science and neuroscience that has been going on for a few years now, ever since the similarities were noted.
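    To be concrete about what "updating node weights in real time" could look like, here's a toy sketch of plain online gradient updates on a streaming input. It illustrates only the principle, and deliberately ignores the hard problems (catastrophic forgetting, stability) that the linked papers actually address.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    w = rng.normal(size=3)   # a tiny "neural map": one linear neuron

    def environment(x, t):
        # The world drifts over time, so a train-once-then-freeze model
        # goes stale; a continually updated one tracks the drift.
        return x @ np.array([1.0, -2.0, 0.5 + 0.001 * t])

    for t in range(10_000):
        x = rng.normal(size=3)        # one sample from the sensory stream
        error = w @ x - environment(x, t)
        # The "neural map" is reshaped on every single input.
        w -= 0.01 * error * x
    ```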

    How can you say that? Reasoning our way through novel situations and environments is exactly what humans do.fishfry

    I can say that because "novel situations" are not some coherently complex category. We're seeing reasoning capabilities within the models right now. Not at every level of human capacity, and pretty rudimentary, but still there. Ignoring that is just dishonest. And with the ongoing research, we don't yet know how complex this reasoning capability will become, simply because we haven't yet run a multifunctional system that utilizes real-time processing and acts across different functions. To claim that they won't be able to is not valid, as the current behavior and evidence point in the other direction. Leaning on a fallacy of composition as the sole reason why they won't be able to reason is not valid.

    That's the trouble with the machine intelligence folks. Rather than uplift their machines, they need to downgrade humans. It's not that programs can't be human, it's that humans are computer programs.fishfry

    No, they're not; they're researching AI, or they're researching neuroscience. Of course they're breaking down the building blocks in order to decode consciousness, the mind and behavior. The problem is that there are too many spiritualist and religious nutcases who arbitrarily uplift humans to a position composed of arrogance and hubris: that we are far more than part of the physical reality we were formed within. I don't care about spiritual and religious hogwash when it comes to actual research; that's something the uneducated with existential crises can dwell on in their futile search for meaning. I'm interested in what is, nothing more, nothing less.

    How can you, a human with life experiences, claim that people don't reason their way through novel situations all the time?fishfry

    Why do you interpret it in this way? It's like you interpret things backwards. What I'm saying is that the operation of our brain and consciousness, through concepts like the theory of predictive coding, seems to rest on rather rudimentary functions that could be replicated with current machine learning in new configurations. What you don't like to hear is the link between such functions generating extreme complexity and the possibility that concepts like subjectivity and qualia form as emergent phenomena out of that resulting complexity. Probably because you don't give a shit about reading up on any of this and instead just use "not liking it" as the foundation for your argument.

    Humans are not "probability systems in math or physics."fishfry

    Are you disagreeing that our reality fundamentally acts on probability functions? That's what I mean. Humans are part of this reality, and this reality operates on probability. That we show behavior of operating on probabilistic predictions when navigating reality follows from this fact: Predictive Coding Theory, the Bayesian Brain Hypothesis, Prospect Theory, Reinforcement Learning Models, etc.
    Why wouldn't our psychology be based on the same underlying function as the rest of nature? Evolution itself acts along predictive functions based on probabilistic "data" arising out of complex ecological systems.
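    The Bayesian point in miniature: belief carried as a probability distribution over hypotheses and revised with every observation. The numbers are made up; the updating scheme is what the Bayesian Brain Hypothesis attributes to perception, just at enormous scale.

    ```python
    # Two hypotheses about a coin, revised flip by flip via Bayes' rule.
    belief = {"fair": 0.5, "biased": 0.5}
    p_heads = {"fair": 0.5, "biased": 0.75}

    for heads in [True, True, False, True, True]:
        unnormalized = {
            h: (p_heads[h] if heads else 1.0 - p_heads[h]) * belief[h]
            for h in belief
        }
        total = sum(unnormalized.values())
        belief = {h: p / total for h, p in unnormalized.items()}

    print(belief)   # roughly {'fair': 0.28, 'biased': 0.72} after 4 heads, 1 tail
    ```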

    I don't deal in religious hogwash to put humans on a pedestal against the rest of reality.

    Credentialism? That's your last and best argument? I could point at you and disprove credentialism based on the lack of clarity in your own thinking.fishfry

    It's not credentialism; I'm fucking asking you for evidence that it's impossible, since you just regurgitate the same notion of "impossibility" over and over without any sources or rationally deduced argument for it. The problem here isn't clarity; it's that you actively ignore the information given and never demonstrate even a shallow understanding of this topic. Telling me that you understand does not change that fact. As in storytelling: show, don't tell.

    Show that you understand; show that you have a basis for your claim that AGI can never happen with these models as they are integrated with each other. So far you show nothing but attempts to ridicule the one you're arguing against, as if that were any kind of foundation for a solid argument. It's downright stupid.

    Yes, but apparently you can't see that.fishfry

    Oh, so now you agree with my description that you earlier denied?

    What about this?

    But one statement I've made is that neural nets only know what's happened. Human minds are able to see what's happening. Humans can figure out what to do in entirely novel situations outside our training data..fishfry

    So when I say this:

    Again, how does a brain work? Is it using anything other than a rear view mirror for knowledge and past experiences?Christoffer

    You suddenly agree with this:

    Yes, but apparently you can't see that.fishfry

    This is just another level of stupid, and it shows that you're ranting all over the place without actually understanding what the hell this is about, all while trying to mock me for lacking clarity. :lol: Seriously.


    I'm not the grant committee. But I am not opposed to scientific research. Only hype, mysterianism, and buzzwords as a substitute for clarity.fishfry

    But the source of your knowledge, as stated by yourself, is still not to read papers but only bloggers and influencers, the very people who actually are THE ones using buzzwords and hype? All while what I've mentioned are actual fields of study and terminology derived from research papers? That's the most ridiculous thing I've ever heard. And you seem totally blind to this dissonance in your reasoning, with no capacity for self-reflection. :lol:

    Is that the standard? The ones I read do. Eric Hoel and Gary Marcus come to mind, also Michael Harris. They don't know shit? You sure about that? Why so dismissive? Why so crabby about all this? All I said was, "I'll take the other side of that bet." When you're at the racetrack you don't pick arguments with the people who bet differently than you, do youfishfry

    Yes, they do, but based on how you write, I don't think you really understand them, as you clearly seem not to understand the concepts that have been mentioned, nor are you able to formulate actual arguments for your claims. Reading blogs is not the same as reading the actual research, and real comprehension of a topic requires more sources of knowledge than brief summaries. Saying that you read stuff means nothing if you can't show a comprehension of the required body of knowledge. All of the concepts I've talked about should be things you already know about, but since you don't, I only have your word that you "know stuff".

    You're right, I lack exponential emergent multimodality.fishfry

    You lack the basics of how people are supposed to form arguments on this forum. You're writing Twitter/Reddit posts. Throughout your answer to me, you've not once demonstrated actual insight into the topic or made any actual counter-arguments. Even in that lengthy answer, you still weren't able to. It's like you're trying to exemplify the opposite of philosophical scrutiny.

    I've spent several decades observing the field of AI and I have academic and professional experience in adjacent fields. What is, this credential day? What is your deal?fishfry

    Once again you just say that "you know shit" without ever showing it in your arguments. It's the appeal-to-authority fallacy, as it's your sole explanation for why "you know shit". If you had academic and professional experience, you would know how problematic it is to invoke experience like that as the sole premise of an argument. What it rather tells me is that either you have such experience but are simply among the academics at the bottom of the barrel (there are lots of academics who are worse than non-academics at conducting proper arguments and research), or the academic fields aren't actually relevant to the specific topic discussed, or you just say it in a desperate attempt to increase your validity. Being an academic or having professional experience (whatever that even means without context) means absolutely nothing if you can't show the knowledge that came out of it. I know plenty of academics who are everything from religious zealots to vaccine deniers; it doesn't mean shit. Academia is education and the building of knowledge; if you can't show that you learned or built any such knowledge, then it means nothing here.

    You've convinced me to stop listening to you.fishfry

    More convincing evidence that you're acting outside proper academic praxis in discourse? As with everything else: ridiculous.
  • Philosophy of AI
    the "Chinese room" isn't a test to passflannel jesus

    I never said it was a test. I said it was a problem, and an argument about our inability to know whether something is actually self-aware in its thinking or whether it's just highly complex operations that look like it. The problem seems to be that you don't understand the context in which I'm using that analogy.

    "We have no idea what's happening, but emergence is a cool word that obscures this fact."fishfry

    This is just a straw man fallacy that misrepresents the concept of emergence by suggesting that it is merely a way to mask ignorance. In reality, emergence describes how complex systems and patterns arise from simpler interactions, a concept extensively studied and supported in fields like neuroscience, physics, and philosophy. https://en.wikipedia.org/wiki/Emergence
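    The textbook illustration of this is something like Conway's Game of Life (my example, chosen for brevity, not taken from the linked article): the entire "physics" is a two-line local rule, yet coherent moving structures appear that are stated nowhere in the rule. That is emergence in the studied sense, not a euphemism for ignorance.

    ```python
    import numpy as np

    def life_step(grid):
        # Count the eight neighbors of every cell (toroidal wrap-around).
        n = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0))
        # The whole rule: survive with 2-3 neighbors, be born with exactly 3.
        return (((grid == 1) & ((n == 2) | (n == 3))) |
                ((grid == 0) & (n == 3))).astype(int)

    grid = np.zeros((20, 20), dtype=int)
    for y, x in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:   # a "glider"
        grid[y, x] = 1

    for _ in range(40):
        grid = life_step(grid)
    # The glider has now travelled diagonally across the grid: a mobile,
    # self-maintaining pattern that no line of the rule mentions.
    ```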

    Why are you asserting something you don't seem to know anything about? This way of arguing makes you assert this first and then keep it as some kind of premise in your head as you continue, believing you're constructing a valid argument when you're not. Everything after it becomes subsequently flawed in its reasoning.

    I'm sure these links exhibit educated and credentialed people using the word emergence to obscure the fact that they have no idea what they're talking about.fishfry

    So now you're denying actual studies just because you don't like the implication of what emergence means? This is ridiculous.

    But calling that emergence, as if that explains anything at all, is a cheat.fishfry

    In what way?

    "Mind emerges from the brain} explains nothing, provides no insight. It sounds superficially clever, but if you replace it with, "We have no idea how mind emerges from the brain," it becomes accurate and much, much more clear.fishfry

    You just continue saying the same thing without engaging with the science behind emergence. What's your take on the similarities between the system and the brain, and on how the behaviors in these systems match up with those seen in neuroscience? What's your actual counter-argument? Do you even have one?

    Nobody knows how to demonstrate self-awareness of others. We agree on that. But calling it emergence is no help at all. It's harmful, because it gives the illusion of insight without providing insight.fishfry

    You seem to conflate what I said in that paragraph with the concept of emergence. Demonstrating self-awareness is not the same as emergence. I'm talking about emergent behavior; demonstrating self-awareness is another problem.

    It's like you're confused about what I'm answering and talking about. You may want to check back on what you've written and what I'm answering in that paragraph, because I think you're just confusing yourself.

    It only becomes harmful when people neglect to actually read up on and understand certain concepts before discussing them. Your ignoring the science and misrepresenting the arguments, or not carefully understanding them before answering, is the only harmful thing here; it's bad dialectical practice and dishonest interlocution.

    I have no doubt that grants will be granted. That does not bear on what I said. Neural nets are a dead end for achieving AGI. That's what I said. The fact that everyone is out there building ever larger wings out of feathers and wax does not negate the point.

    If you climb a tree, you are closer to the sky than you were before. But you can't reach the moon that way. That would be my point. No matter how much clever research is done.

    A new idea is needed.
    fishfry

    If you don't even know where the end state is, then you cannot conclude anything that final. If emergent behaviors are witnessed, then proper research practice is to test further and discover why they occur and whether they increase with more configurations and integrations.

    Your claim that some new idea is needed requires you to have final knowledge of how the brain and consciousness work, which you don't. There are no explanations yet for the emergent behaviors witnessed, and therefore, until you can explain those behaviors with certainty, you really can't say that a new idea is needed. And since we haven't even tested multifunctional models yet, how would you know that the already witnessed emergent behavior won't increase? You're not really making an argument; you just have an opinion and dismiss everything not in line with that opinion. And when that is questioned, you just repeat yourself without any further depth.

    Plenty of people are saying that. I read the hype. If you did not say that, my apologies. But many people do think LLMs are a path to AGI.fishfry

    I don't give a shit about what other people are saying; I'm studying the science behind this, and I don't care about bloggers, tech CEOs and influencers. If that's the source of all your information, then you're just part of the uneducated noise flooding social media rather than engaging in an actual philosophical discussion. How can I take you seriously when you constantly demonstrate this level of engagement?

    I was arguing against something that's commonly said, that neural nets are complicated and mysterious and their programmers can't understand what they are doing. That is already true of most large commercial software systems. Neural nets are conventional programs. I used the example of political bias to show that their programmers understand them perfectly well, and can tune them in accordance with management's desires.fishfry

    How would you know any of this? What's the source of this understanding? Do you understand that the neural net part of the system isn't the same as the operating code surrounding it? Please explain how the programmers know what's going on within the trained neural net. If you can't, then why are you claiming that they know?

    This just sounds like you heard some blogger or influencer say that the programmers know, and then regurgitated that statement here without looking into it in any further depth. This is the problem with discussions today: people regurgitate things they hear online as a form of appeal-to-authority fallacy.

    They're a very clever way to do data mining.fishfry

    No, it's probability-based predictive computation.
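    Concretely, the core step looks like this sketch (toy tokens and scores, not real model output): scores over candidate next tokens are converted into a probability distribution and sampled from, token after token.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical next-token scores after "The cat sat on the".
    tokens = ["mat", "roof", "keyboard", "moon"]
    logits = np.array([4.0, 2.5, 1.5, -1.0])

    def sample_next(logits, temperature=1.0):
        # Softmax turns raw scores into probabilities; sampling from them
        # is the prediction step, repeated to build up a whole answer.
        p = np.exp(logits / temperature)
        p /= p.sum()
        return rng.choice(len(logits), p=p)

    print(tokens[sample_next(logits)])   # most often "mat", but not always
    ```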

    (1) they are not the way to AGI or sentience; and (2) despite the mysterianism, they are conventional programs that could, in principle, be executed with pencil and paper, and that operate according to the standard rules of physical computation that were developed in the 1940s.fishfry

    You can say the same thing about any complex system. Anything is "simple" in its core fundamentals, but scaling a system up can lead to complex operations vastly outperforming predictions of its limitations. People viewed ordinary binary computation as banal and simple, and couldn't even predict where that would lead.

    Saying that a system is simple at its fundamental core says nothing about the totality of the system, especially when scaled up. A brain is also just a bunch of neural pathways and chemical systems. We can grow neurons in labs and manipulate their composition easily, and yet they manifest this complex result that is our mind and consciousness.

    "Simple" as a fundamental foundation of a system does not mean shit really. Almost all things in nature are simple things forming complexities that manifest larger properties. Most notable theories in physics tend to lean into being oddly simple and when verified. It's basically Occam's razor, practically applied.


    By mysterianism, I mean claims such as you just articulated: "they operate differently as a whole system ..." That means nothing. The chess program and the web browser on my computer operate differently too, but they are both conventional programs that ultimately do nothing more than flip bits.fishfry


    You have no actual argument; you're just committing a fallacy of composition.

    Jeez man more emergence articles? Do you think I haven't been reading this sh*t for years?fishfry

    Oh, you mean like this?

    I don't read the papers, but I do read a number of AI bloggers, promoters and skeptics alike. I do keep up. I can't comment on "most debates," but I will stand behind my objection...fishfry

    You've clearly stated yourself that you haven't read any of the actual research for years. You're just regurgitating already regurgitated information from bloggers and influencers.

    Are you actually expecting me to take you seriously? You demonstrate no actual insight into what I'm talking about, and you don't care about the information I link to, which consists of published research papers on the subject. You're basically demonstrating anti-intellectualism in practice. This isn't Reddit or Twitter; you're on a philosophy forum and you dismiss research papers when they're provided. Why are you even on this forum?

    Emergence emergence emergence emergence emergence. Which means, you don't know. That's what the word means.fishfry

    You think this kind of behavior helps your argument? This is just stupid.

    You claim that "emergence in complexities being partly responsible for much of how the brain operates" explains consciousness? Or what are you claiming, exactly? Save that kind of silly rhetoric for your next grant application. If it were me, I'd tell you to stop obfuscating. "emergence in complexities being partly responsible for much of how the brain operates". Means nothing. Means WE DON'T KNOW how the brain operates.fishfry

    Maybe read up on the topic and check the sourced research papers. It's not my problem that you don't understand what I'm talking about. Unlike you, I actually try to provide sources to support my argument. You're just acting like an utter buffoon with these responses. I'm not responsible for your level of knowledge or comprehension, because it doesn't matter if I explain further or in another way; I've already done so extensively, but you demonstrate an inability to engage honestly with the topic and just repeat your dismissal in the most banal and childish way.

    You speak in buzz phrases. It's not only emergent, it's exponential. Remember I'm a math guy. I know what the word exponential means. Like they say these days: "That word does not mean what you think it means."

    So there's emergence, and then there's exponential, which means that it "can form further emergent phenomenas that we haven't seen yet."

    You are speaking in entirely meaningless babble at this point. I don't mean that you're not educated. I mean that you have gotten lost in your own jargon. You have said nothing at all in this post.
    fishfry

    You've yet to provide any source for your understanding of this topic beyond "I'm a math guy, trust me" and "I don't read papers, I follow bloggers and influencers".

    Yeah, you're not making a case for yourself as someone able to really understand what I'm talking about. Being a "math guy" means nothing. It would be like someone saying, "I'm a Volvo mechanic, therefore I know how to build a 5-nanometer processor".

    Not understanding what someone else is saying does not make it meaningless babble. And based on how you write your arguments, I'd say the case isn't in your favor; rather, you actually don't know and don't care to understand. You dismiss research papers as basically "blah blah blah". So no, it seems more likely that you don't understand what I'm talking about.

    Yes, that's how computers work. When I click on Amazon, whole pages of instructions get executed before the package arrives at my door. What point are you making?fishfry

    That you ignore the trained neural map and just look at the operating code working on top of it, which isn't the same thing as how the neural system operates underneath.

    You're agreeing with my point. Far from being black boxes, these programs are subject to the commands of programmers, who are subject to the whims of management.fishfry

    The black box is the neural operation underneath. The fact that you confuse the neural network at the system's core with the operating code of the software that sits on top of it to create a practical application just shows you know nothing about how these systems actually work. Do you actually believe that the black box concept refers to the operating code of the software? :lol:

    You say that, and I call it neural net mysterianism. You could take that black box, print out its source code, and execute it with pencil and paper. It's an entirely conventional computer program operating on principles well understood since the first electronic digital computers in the 1940s.fishfry

    The neural pathways and how they operate are not the "source code". What the fuck are you talking about? :rofl:

    "Impossible to peer into." I call that bullpucky. Intimidation by obsurantism.fishfry

    Demonstrate how you can peer into the internal operation of the trained model's neural pathways and how they form outputs. Show me any source demonstrating that this is possible. I'm not talking about software code; I'm talking about what the concept of the black box is really about.

    If you trivialize it the way you do, then demonstrate how, because this is a big open problem within computer science, so maybe educate us all on how it would be done.
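    To be concrete about what the term refers to, here's a toy network standing in for a real model (my illustration; a production model is the same kind of object with billions of entries). The trained artifact is arrays of numbers, not inspectable decision rules:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # A "trained model", reduced to its essence: layers of numeric weights.
    W1 = rng.normal(size=(8, 2))
    W2 = rng.normal(size=(1, 8))

    def model(x):
        # Every "decision" the model makes is just this arithmetic.
        return W2 @ np.tanh(W1 @ x)

    print(W1)
    # The numbers are perfectly readable, yet they encode no rule of the
    # form "if the input means X, answer Y". Reconstructing WHY a given
    # output was produced from these values is the open interpretability
    # problem, and it is not solved by reading the surrounding software.
    ```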

    Every line of code was designed and written by programmers who entirely understood what they were doing.fishfry

    That's not how these systems work. You have software running the training and you have practical operating software working from the trained model, but the trained model itself does not contain code in the sense you're talking about. This is the black box problem.

    And every highly complex program exhibits behaviors that surprise their coders. But you can tear it down and figure out what happened. That's what they do at the AI companies all day long. .fishfry

    Please provide any source that shows how you can trace back operations within a trained model. Give me one single solid source and example.

    You say it's a black box, and I point out that it does exactly what management tells the programmers to make it do, and you say "No, there's a secret INNER" black box."

    I am not buying it. Not because I don't know that large, complex software systems don't often exhibit surprising behavior. But because I don't impute mystical incomprehensibility to computer programs.
    fishfry

    You're not buying it because you refuse to engage with the topic by actually reading up on it. This is a Reddit- and Twitter-level of engagement, in which you don't care to read anything and just repeat the same point over and over. Stop with the strawman arguments; it's getting ridiculous.

    Can we stipulate that you think I'm surface level, and I think you're so deep into hype, buzzwords, and black box mysterianism that you can't see straight?

    That will save us both a lot of time.

    I can't sense nuances. They're a black box. In fact they're an inner black box. An emergent, exponential black box.

    I know you take your ideas very seriously. That's why I'm pushing back. "Exponential emergence" is not a phrase that refers to anything at all.
    fishfry

    You have no argument and you're not engaging with this under any philosophical scrutiny, so the discussion just ends at the level you're demonstrating here. It's you who's responsible for this meaningless pushback, not because you have actual arguments with good sources, but because "you don't agree". On this forum, that's not enough; that's called "low quality". So can you stop the low-quality BS and make actual arguments rather than these fallacy-ridden rants over and over?
  • Philosophy of AI
    Judging by the way you repeatedly talk about "passing the Chinese room", I don't think you understand the basics. Seems more buzzword-focused than anythingflannel jesus

    You have demonstrated even less. You've made no real argument other than saying that "we can walk because we have legs", a conclusion so banal in its shallow simplicity that it could be uttered by a five-year-old.

    You avoid actually making arguments in response to the questions asked, and you don't even seem to understand what I'm writing, judging by how you answer it. When I explain why robots "can't just walk", you simply utter "so robots can't walk?". Why bother putting time into a discussion with this low-quality attitude? Demonstrate a better level of discourse first.
  • Philosophy of AI
    "inspired by" is such a wild goal post move. The reason anything that can walk can walk is because of the processes and structures in it - that's why a person who has the exact same evolutionary history as you and I, but whose legs were ripped off, can't walk - their evolutionary history isn't the thing giving them the ability to walk, their legs and their control of their legs are.flannel jesus

    Why is that moving a goalpost? It's literally what engineers use today to design things, like how they designed commercial drones using evolutionary iterations to find the best-balanced, light and aerodynamic form. They couldn't design it by "just designing it", any more than the first people who attempted flight could do so by flapping planks with feathers on them.

    With the way you're answering, I don't think you're capable of understanding what I'm talking about. It's like you don't even understand the basics of this. It's pointless.
  • Philosophy of AI
    so robots can't walk?flannel jesus

    Maybe read the entire argument, or attempt to understand the point I'm making, before commenting.

    Did you read the part about how robots can even walk today? About what the development process of making them walk... is really inspired by?
  • Philosophy of AI
    Connecting it to evolution the way you're doing looks as absurd and arbitrary as connecting it to lactation.flannel jesus

    In what way? Evolution is about iterations over time, and nature is filled with different iterations of cognitive abilities, primarily changing as different environments impose different requirements.

    As long as you're not a denier of evolution, I don't know what you're aiming for here?

    "Actually, even though evolultion is in the causal history of why we can walk, it's not the IMMEDIATE reason why we can walk, it's not the proximate cause of our locomotive ability - the proximate cause is the bones and muscles in our legs and back."flannel jesus

    No, the reason something can walk is evolutionary processes forming both the physical parts and the "operation" of those physical parts. You can't make something "walk" just by having legs and muscles; without pre-knowledge of how the muscles and bones connect and function, you don't know how they fit together. Further, bones and muscles have grown alongside the development of the cognitive operation that uses them; they've formed as a totality over time through evolutionary iterations.

    There's no "immediate" reason you can walk as the reason you can walk is the evolution of our body and mind together, leading up to the point of us being able to walk.

    And then, when robotics started up, someone like you might say "well, robots won't be able to walk until they go through a process of natural evolution through tens of thousands of generations", and someone like me would say, "they'll make robots walk when they figure out how to make leg structures broadly similar to our own, with a join and some way of powering the extension and contraction of that joint."

    And the dude like me would be right, because we currently have many robots that can walk, and they didn't go through a process of natural evolution.
    flannel jesus

    Yes, they did. The reason they can walk is that we bluntly tried to emulate the functions of our joints, bones and muscles for decades before turning to iterative trial-and-error processes for the design of the physical parts. But even then it couldn't work without evolution-style training of the walking sequence and operation through machine learning. It's taken an extremely long time to mimic this rather rudimentary action of simply walking, and we're not even fully there yet.

    And such a feature is one of the most basic and simple things in nature. To underplay evolution's role in forming, over iterations, the perfect walking mechanics and their internal operation, compared to us just brute-forcing something into existence, is just not rational.
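    The loop I'm describing is easy to state and hard to beat, as a skeleton shows. Here `fitness` is a stand-in for whatever simulator scores a gait or an airframe; this is my illustration of the principle, not any specific lab's pipeline.

    ```python
    import random

    def fitness(params):
        # Placeholder objective: a real setup would score a walking gait
        # or a drone frame in simulation. Here: distance to a target.
        target = [0.3, -1.2, 0.8]
        return -sum((p - t) ** 2 for p, t in zip(params, target))

    population = [[random.uniform(-2, 2) for _ in range(3)] for _ in range(50)]

    for generation in range(200):
        # Select the fittest designs of this iteration...
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]
        # ...and breed the next iteration from them with small mutations.
        population = [
            [p + random.gauss(0, 0.1) for p in random.choice(survivors)]
            for _ in range(50)
        ]

    best = max(population, key=fitness)   # converges on a workable design
    ```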

    That's why I think your focus on "evolution" is kind of nonsensical, when instead you should focus more on proximate causes - what are the structures and processes that enable us to walk? Can we put structures like that in a robot? What are the structures and processes that enable us to be conscious? Can we put those in a computer?flannel jesus

    What I don't think you understand about the evolutionary argument is that the complexity of consciousness might first require the extremely complex initial conditions of our genetic makeup, which, even though it is itself one of the most complex things in the universe, also grows into a being that is more complex still. This level of complexity might not be achievable by just "slapping structures together", as the knowledge of how, and in what way, may be so complex that acquiring it is impossible, and the only way to reach results is by "growing" from initial conditions into a final complexity.

    Evolution is basically chaos theory at play, and you seem to ignore that fact. We already have evidence within material science and design engineering that trying to "figure out" the best design or material compound can be close to impossible, compared to growing a solution through simulated evolutionary iterations of trial and error.

    This is why these new AI models function as well as they do: they're NOT put together by perfect design; they're programmed with conditions from which they "grow" and a path along which they "grow". The fundamental problem, however, is that in comparison to "walking", the science of consciousness and the brain hasn't been able to pinpoint consciousness as a mere function; according to current research in this field, it is an emergent result of layers of complex operations.

    In essence, if walking is extremely hard to achieve due to similar complexity, simulating actual consciousness might be close to impossible if we don't form an extremely complex path of iterative evolution for such a system.
  • Philosophy of AI
    if the only conscious animals in existence were mammals, would you also say "lactation is a prerequisite for consciousness"?flannel jesus

    That's not a relevant question, as I'm not deducing from imaginary premises; I'm deducing from the things we know. If that premise were the case, then research into why would have been done, or would be aimed at, and the reasons would probably have been found or hinted at and become part of the totality of knowledge in biology and evolution. However, as such a premise has no grounds in what we know about biology and evolution, engaging with it becomes just as nonsensical as the premise itself.

    What we do know is that there is a progression of cognitive abilities across all life, and that it's most likely not bound to specific species, as cognitive abilities vary across genetic lines. That some attribute consciousness only to mammals is more likely a bias stemming from the fact that we are mammals, and we therefore regard other mammals as closer to us than, say, birds, even though some birds express cognitive abilities far greater than many mammals'.

    The alternative is something like the vision of Process Philosophy - if we can simulate the same sorts of processes that make us conscious (presumably neural processes) in a computer, then perhaps it's in principle possible for that computer to be conscious too. Without evolution.flannel jesus

    Yes, but my argument was that the only possible path of logic we have is to look at the formation of our own consciousness through evolution, because that is a fact. The "perhaps" that you express does not solve the fundamental problem of the Chinese room.

    We know that we developed consciousness through biology and evolution. Therefore, the only known process is that one. If we were to create similar conditions for a computer/machine to develop under, then it would be more probable for it to form consciousness that overcomes the Chinese room problem and develops actual qualia.

    As with everything being about probability, the "perhaps" in your argument doesn't carry enough probability in its logic. It's basically saying that if I sculpt a tree, it could perhaps become a tree, as opposed to me planting a tree, or chemically forming the basic genetic building blocks of a seed and then planting it to grow. One jumps to the conclusion that mere similarity to the object "could mean" the same thing; the other simulates similar conditions for the object to form. And since we know that the evolutionary progress of both physical and biological systems is at the foundation of how this reality functions, it is most likely required that a system evolves and grows in order to form complex relations to its surrounding conditions.

    I'm not saying that these AI systems don't have subjectivity; we simply do not know. What I'm saying is that the only conditions we could deduce as logically likely and probable are those where we create the initial conditions to simulate what formed us, and grow a system from them.

    Which is close to what we're doing with machine learning, even though it's rudimentary at this time.
  • Philosophy of AI
    "we know this is how it happened once, therefore we know this is exactly how it has to happen every time" - that doesn't look like science to me.flannel jesus

    What do you mean has happened only "once"?

    And in a situation in which you have only one instance of something, is it more or less likely that the same thing happening again requires the same or similar initial conditions?

    Science is about probability: what is most probable?
  • Philosophy of AI
    You just inventing a hard rule that all conscious beings had to evolve consciousness didn't come from science. That's not a scientifically discovered fact, is it?flannel jesus

    That consciousness emerged in animals through evolution is as close to fact as we get in our biology. And so far, the only things we know to have consciousness in this universe are animals, ourselves included.

    So the only argument that can be made with any form of rational reasoning is the one I made. Anything else fails to follow from what we know and from what lies within the most probable truths based on the science we have.

    If you have an additional option it has to respect what we scientifically know at this time.
  • Philosophy of AI
    No, you're making some crazy logical leaps there. There's no reason whatsoever to assume those are the only two options. Your logic provided doesn't prove that.flannel jesus

    Do you have an alternative or additional option that respects science?
  • Philosophy of AI
    Why? That looks like an extremely arbitrary requirement to me. "Nothing can have the properties I have unless it got them in the exact same way I got them." I don't think this is it.flannel jesus

    I'm saying that this is the most fundamental answer, deducible in some form, to what has qualia.

    We don't know if consciousness can be formed deliberately (direct programming)
    We cannot know if a machine gets past the Chinese room argument and has qualia through behavior alone.
    We cannot analyze mere operation of the system to determine it having qualia.
    We cannot know other people aren't P-Zombies.

    The only thing we can know for certain is that I have subjectivity and qualia, and that I formed through evolution. And since I formed through evolution, I can deduce that you also have qualia, since we are both human beings. And since animals are part of evolution, I can deduce that animals also have qualia.

    At some point, dead matter reaches a point of evolution and life in which it has subjectivity and qualia.

    Therefore we can deduce either that all matter has some form of subjectivity and qualia, or that it emerges at some point in the evolution of complex life.

    How do we know when a machine has the same? That is the problem to solve.
  • Philosophy of AI
    they develop in the end a degree of subjectivity that can be given recognition through language.Nemo2124

    You still have the problem of the Chinese room. How do you overcome that? It's more important for concluding subjectivity in machines than in other lifeforms: we can deduce that lifeforms formed through evolution similarly to us, and since we have subjectivity (or at least I know I have subjectivity), I can conclude that other lifeforms have subjectivity as well. But how can I deduce that for a machine whose process of development is different from evolution?

    For a machine to have subjectivity, its consciousness would at least need to develop over time in the same manner as a brain has through evolution. To reach machine consciousness, we may need to simulate evolutionary progress through iterations of the same complexity as evolution on Earth. What that entails for computer science we don't yet know.

    Beyond that, we may find that consciousness isn't that special at all, that it's rather trivial to "grow" if we know where to start, if we know the "seed" for it, so to speak. But that would require knowledge we don't yet have.
  • Philosophy of AI
    Nothing "emerges" from neural nets. You train the net on a corpus of data, you tune the weightings of the nodes, and it spits out a likely response.fishfry

    This is simply wrong. These are examples of what I'm talking about:

    https://hai.stanford.edu/news/examining-emergent-abilities-large-language-models
    https://ar5iv.labs.arxiv.org/html/2206.07682
    https://www.jasonwei.net/blog/emergence
    https://www.assemblyai.com/blog/emergent-abilities-of-large-language-models/

    Emergence does not equal AGI or self-awareness, but these abilities mimic what many neuroscience papers focus on in regard to how our brain manifests abilities out of increasing complexity. And we don't yet know how combined models will function.

    There's no intelligence, let alone self-awareness being demonstrated.fishfry

    No one is claiming this. But equally, the problem is, how do you demonstrate it? Effectively the Chinese room problem.

    There's no emergence in chatbots and there's no emergence in LLMs. Neural nets in general can never get us to AGI because they only look backward at their training data. They can tell you what's happened, but they can never tell you what's happening.fishfry

    The current predictive skills are extremely limited and far from human abilities, but they're still showing up, providing a foundation for further research.

    But no one has said that the current LLMs will, in and of themselves, be able to reach AGI. I'm not sure why you strawman such conclusions into this.

    This common belief could not be more false. Neural nets are classical computer programs running on classical computer hardware. In principle you could print out their source code and execute their logic step by step with pencil and paper. Neural nets are a clever way to organize a computation (by analogy with the history of procedural programming, object-oriented programming, functional programming, etc.); but they ultimately flip bits and execute machine instructions on conventional hardware.fishfry

    Why does conventional hardware matter when it's the pathways in the network that are responsible for the computation? The difference here is basically that standard operation is binary in pursuit of accuracy, while these models operate on predictions, closer to how physical systems do, which means you increase the computational power at a slight loss of accuracy. That they run on classical software underneath does not change the fact that they operate differently as a whole system. Otherwise, why would these models vastly outperform standard computation at protein-folding predictions?

    Their complexity makes them a black box, but the same is true for, say, the global supply chain, or any sufficiently complex piece of commercial software.fishfry

    Yes, and why would a system that is specifically very good at handling extreme complexities not begin to mimic complexities in the physical world?
    https://www.mdpi.com/1099-4300/26/2/108
    https://ar5iv.labs.arxiv.org/html/2205.11595

    Seeing as current research in neuroscience points to emergence in complexity being partly responsible for much of how the brain operates, why wouldn't a complex computer system that simulates similar operations form emergent phenomena?

    There's a huge difference between saying "it forms intelligence and consciousness" and saying "it generates emergent behaviors". There's no claim that any of these LLMs are conscious; that's not what this is about. And AGI does not mean conscious or intelligent either, only exponentially complex in behavior, which can form further emergent phenomena that we haven't seen yet. I'm not sure why you confuse that with actual qualia. The only claim is that we don't know where increased complexity and multimodal versions will lead emergent behaviors.

    And consider this. We've seen examples of recent AI's exhibiting ridiculous political bias, such as Google AI's black George Washington. If AI is such a "black box," how is it that the programmers can so easily tune it to get politically biased results? Answer: It's not a black box. It's a conventional program that does what the programmers tell it to do.fishfry

    This is just a false binary and also incorrect. The programmable behavior comes partly from weights and biases within the training, but those are extremely basic; most of the specifics occur in operational filters applied before the output. When you prompt it, there can be pages of instructions it goes through in order to behave in a certain way. In ChatGPT, you can even add custom instructions that function as a pre-instruction, always handled before the actual prompt, on top of what's already in hidden general functions.
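    Schematically, the layering looks like the sketch below. The structure is illustrative only; no vendor's actual internal prompts or API are shown.

    ```python
    # Illustrative instruction layering in a deployed chat system.
    HIDDEN_SYSTEM_RULES = "You are a helpful assistant. Refuse disallowed content."
    user_custom_instructions = "Answer tersely, in British English."
    user_prompt = "Explain what a neural network is."

    # What the trained model actually receives is the concatenation: much of
    # the visible "behavior tuning" lives in this wrapper, applied before the
    # prompt ever reaches the frozen weights.
    assembled_input = "\n".join(
        [HIDDEN_SYSTEM_RULES, user_custom_instructions, user_prompt]
    )

    # response = frozen_trained_model(assembled_input)   # weights untouched
    ```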

    That doesn't mean the black box is open. There's still a "black box" around the trained model, into which it's impossible to peer to see how it works as a neural system.

    This further illustrates the misunderstandings about the technology. Making conjectures about the entire system and the technology based on these companies' bad handling of alignment does not reduce the complexity of the system itself or prove that it's "not a black box". It only proves that the practical application has problems, especially in the commercial realm.

    So I didn't need to explain this, you already agree.fishfry

    Maybe read the entire argument first and pick up on the nuances. You're treating all of this as a binary agree-or-disagree discussion, which I find a bit surface-level.


    Like what? What new behaviors? Black George Washington? That was not an emergent behavior, that was the result of deliberate programming of political bias.

    What "new behaviors" to you refer to? A chatbot is a chatbot.
    fishfry

    Check the publications I linked to above. Do you understand what I mean by emergence? What it means in the study of complex systems and chaos, especially in relation to neuroscience?

    Believe they start spouting racist gibberish to each other. I do assume you follow the AI news.fishfry

    That's not what I'm talking about. I'm talking about multimodality.

    Most "news" about AI is garbage on both sides. We either have the cryptobro-type dudes thinking we'll have a machine god a month from now, or the luddites on the other side who don't know anything about the technology but sure likes to cherry-pick the negatives and conclude the tech to be trash based on mostly just their negative feelings.

    I'm not interested in such surface level discussion about the technology.

    Well if we don't know, what are you claiming?

    You've said "emergent" several times. That is the last refuge of people who have no better explanation. "Oh, mind is emergent from the brain." Which explains nothing at all. It's a word that means, "And here, a miracle occurs," as in the old joke showing two scientists at a chalkboard.
    fishfry

    If you want to read more about emergence in relation to the mind, you can find my other posts about it around the forum. The concept of emergent behavior has its roots in neuroscience and the work on consciousness and the mind. And since machine learning forms neural patterns in a way inspired by neuroscience and how neurons work, there's a rational deduction to be made about how emergent behaviors, even the rudimentary ones we see in current AI models, relate to the formation of actual intelligence.


    The problem with your reasoning is that you use the lack of a final, proven theory of the mind as proof against the most contemporary field of study in research on the mind and consciousness. It's still making more progress than any previous theory of the mind, and it connects to a universality about physical processes, processes that are partly simulated within these machine learning systems. Further, your reasoning is simply binary: either it's intelligent with qualia, or it's just a stupid machine. That's not how these things work.

    I would not dispute that. I would only reiterate the single short sentence that I wrote that you seem to take great exception to. Someone said AGI is imminent, and I said, "I'll take the other side of that bet." And I will.fishfry

    I'm not saying AGI is imminent, but I wouldn't take the other side of the bet either. You have to be dead sure about a theory of the mind or theories of emergence to make a claim either way, and since you don't seem to subscribe to any theory of emergence, what's the theory you use as a premise for concluding it "not possible"?

    In my opinion, that is false. The reason is that neural nets look backward. You train them on a corpus of data, and that's all they know.fishfry

    How is that different from a human mind?

    They know everything that's happened, but nothing about what's happening.fishfry

    The only technical difference between a human brain and these systems, in this context, is that the AI systems are trained and then locked into an unchanging neural map. The brain, however, is constantly shifting and retraining while it operates.

    If a system were created that could train, in real time, on a constant flow of audiovisual and other data inputs, which in turn constantly reshaped its neural map, what would the technical difference be? Research on this is going on right now.
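
    As a toy sketch of that difference, assuming PyTorch and a hypothetical input stream: today's deployed models only ever run the frozen inference step, while a continually learning system would also run the update step on every new observation:

    ```python
    # Toy sketch (PyTorch) of frozen inference vs. continual online learning.
    # The model, dimensions, and supervision signal are hypothetical stand-ins
    # for a real flow of audiovisual data.
    import torch

    model = torch.nn.Linear(16, 4)                       # stand-in trained network
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.MSELoss()

    def frozen_inference(x):
        # How deployed models work today: weights never change after training.
        with torch.no_grad():
            return model(x)

    def online_step(x, target):
        # A continually learning system would also do this on each observation,
        # reshaping its "neural map" while operating.
        optimizer.zero_grad()
        loss = loss_fn(model(x), target)
        loss.backward()
        optimizer.step()
        return loss.item()
    ```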

    They can't reason their way through a situation they haven't been trained on.fishfry

    The same goes for humans.

    since someone chooses what data to train them onfishfry

    They're not picking and choosing data; they try to maximize the amount of data, as more data means far better accuracy, just like any other probabilistic system in math and physics.
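
    A quick numerical illustration of that statistical point, using a simulated estimation task: the error of an estimate shrinks roughly as one over the square root of the sample size, which is why more data generally means better accuracy:

    ```python
    # Why more data means better accuracy: the error of an estimate
    # shrinks roughly as 1/sqrt(n).
    import random

    random.seed(0)
    true_mean = 0.7  # hypothetical quantity being estimated

    for n in [100, 10_000, 1_000_000]:
        samples = [random.gauss(true_mean, 1.0) for _ in range(n)]
        estimate = sum(samples) / n
        print(f"n={n:>9,}  estimate={estimate:.4f}  error={abs(estimate - true_mean):.4f}")
    ```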

    And the weights and biases are not what you describe. The problem you're aiming at is in alignment programming. I can customize a GPT to do the same thing, even if the underlying model isn't supposed to do it.

    Neural nets will never produce AGI.fishfry

    Based on what? Do you know something about multimodal systems that others don't? Do you have some publication that proves this impossibility?

    You can't make progress looking in the rear view mirror. You input all this training data and that's the entire basis for the neural net's output.fishfry

    Again, how does a brain work? Is it using anything other than a rear view mirror of knowledge and past experiences? As far as I can see, the most glaring differences are the real-time restructuring of the neural paths and the multimodal way our separate brain functions work together. No current AI system operates on those expanded parameters, which means that any positive or negative conclusion requires further progress and development of these models.

    I don't read the papers, but I do read a number of AI bloggers, promoters and skeptics alike. I do keep up. I can't comment on "most debates," but I will stand behind my objection to the claim that AGI is imminent, and the claim that neural nets are anything other than a dead end and an interesting parlor trick.fishfry

    Bloggers usually don't know shit, and they don't operate through any journalistic praxis, while the promoters and skeptics just drive up the attention market through the shallow Twitter brawls that pop up around whatever topic is trending.

    Are you seriously saying that this is the research basis for your conclusions and claims on a philosophy forum? :shade:

    Neural nets are the wrong petri dish.

    I appreciate your thoughtful comments, but I can't say you moved my position.
    fishfry

    Maybe stop listening to bloggers and people on the attention market?

    I'd rather you bring me some actual scientific foundation for the premises behind your conclusions.

    You would not ordinarily consider that machines could have selfhood, but the arguments for AI could subvert this. A robot enabled with AI could be said to have some sort of rudimentary selfhood or subjectivity, surely... If this is the case then the subject itself is the subject of the machine. I, Robot etc...Nemo2124

    I think looking at our relation to nature tells a lot. Where do we draw the line on subjectivity? What do we conclude has a subjective experience? We look at another human and, disregarding for now any P-zombie argument, claim them to have subjectivity. But we say the same of a dog, or a horse. A bird? What about an ant or a bee? What about a plant? What about mushrooms, which have been speculated to form electrical pulses resembling a form of language communication? If they communicate intentions, do mushrooms have a form of subjective experience?

    While I think the Japanese idea of things having a soul is in the realm of religion rather than science, we still don't have a clear answer to what constitutes subjectivity. We understand it between humans, and we have instincts about how the animals around us have it. But where does it end? If sensory input into a nervous system prompts changed behaviors, does that constitute a form of subjectivity for the entity that has those functions? Wouldn't that place plants and mushrooms within the possibility of having subjectivity?

    If a robot with sensory inputs has a constantly changing neurological map that reshapes itself based on what it learns through those inputs, prompting changed behavior, does a subjective experience emerge out of that? And if not, why not? Why would that be just math and functions, while animals, operating in the exact same way, experience subjectivity?

    So far, no one can draw a clear line at which we know: here there's no experience and no subjectivity, and here it is.
  • Philosophy of AI
    In terms of selfhood or subjectivity, when we converse with the AI we are already acknowledging its subjectivity, that of the machine. Now this may only be linguistically, but other than through language, how else can we recognise the activity of the subject? This also begs the question, what is the self? The true nature of the self is discussed elsewhere on this website, but I would conclude here that there is an opposition or dialectic here between man and machine for ultimate recognition. In purely linguistic terms, the fact is that in communicating with AI we are - for better or for worse - acknowledging another subject.Nemo2124

    People, when seeing a beautiful rock fall and smash to pieces, speak of the event with "poor rock" and mourn that its beauty has been destroyed. If we psychologically apply a sense of subjectivity to a dead piece of matter, then doing so with something that for the most part simulates having consciousness is even less weird. What constitutes qualia is the objective description of subjectivity, but as a psychological phenomenon, we apply subjectivity to everything around us.

    And in places like Japan, it's culturally common to view objects as having souls. Just as western societies view and debate humans as having souls in relation to other things, and through that put a framework around which things have souls, drawing the borders of how we think about qualia and subjectivity. In Japan, those borders are culturally expanded further into the world of objects and physical matter, and thus there's a much lower bar for what constitutes something having consciousness, or at least more openness to examining how we actually define it.

    Which approach is closest to objective truth? As all life came from dead matter and physical/chemical processes, it becomes a sort of metaphysical description of what life itself should be defined as.
  • Philosophy of AI
    Don't you think we're pretty close to having something pass the Turing Test?RogueAI

    The current models already pass the Turing test, but they don't pass the Chinese room argument. The Turing test is insufficient for evaluating strong AI.

    This would require solving the Problem of Other Minds, which seems insolvable.RogueAI

    Yes, that is the problem with P-zombies and the Chinese room. But we do not know in what ways we'll be able to decode cognition and consciousness in the future. We might find a strategy and technology to determine the sum internal experience of a certain being or machine, and if so we will be able to solve it.

    It might even be far easier than that. The foundation for deciding could simply be a certain bar of behavior at which we conclude the machine to have consciousness, in the same way we do so toward each other and other animals. For instance, if we have a certain logic gate that produces certain outcomes, we wouldn't call that conscious, as we can trace the function back to an action we've taken for that function to happen.

    But if behaviors emerge spontaneously out of a complex system, behaviors that demonstrate an ability to form broader complex reasoning, or actions that don't follow simple deterministic paths toward a certain end goal but rather show exploratory decisions, curiosity for curiosity's sake, and an emotional realm of actions and reactions, then it may be enough to make that determination, based on the same reasoning by which we take the animals and people around us not to be P-zombies.
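
    A minimal sketch of that contrast, with hypothetical numbers: the logic gate's output is fully traceable to a rule a human wrote down, while even a tiny "trained" function's output is smeared across learned weights, with no single inspectable reason:

    ```python
    # Contrast between a traceable logic gate and an opaque learned function.

    def and_gate(a: bool, b: bool) -> bool:
        # Fully traceable: the output follows from a rule a human wrote down.
        return a and b

    # A tiny "trained" stand-in with the same AND behavior, but the explanation
    # for any output is spread across numeric weights (values here are
    # hypothetical, as if produced by training).
    weights, bias = [5.2, 5.1], -7.9

    def learned_gate(a: float, b: float) -> bool:
        activation = weights[0] * a + weights[1] * b + bias
        return activation > 0  # why these weights? Only the training history "knows".

    print(and_gate(True, True), learned_gate(1.0, 1.0))
    ```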

    In essence, why do you not conclude other people to be P-zombies? Why do you conclude a cat to have an "inner life"? What list of attributes are you applying to an animal or another human being in order to determine that they have subjectivity and inner life? Then apply the same list to a machine.

    That's the practical philosophical approach that I think will be needed at some point if we do not develop technology that could determine qualia as an objective fact.

    I am raising a philosophical point, though: what sort of creature or being or machine uses the first person singular? This is not merely a practical or marketing question.

    Pragmatically speaking, I don't see why 'AI' can't find a vernacular-equivalent of Wikipedia, which doesn't use the first person. The interpolation of the first person is a deliberate strategy by AI-proponents, to advance the case for it that you among others make, in particular, to induce a kind of empathy.
    mcdoodle

    You don't have a conversation with Wikipedia, though. Conversing with "something" requires language to flow in order to function fluidly and not become an obstacle. Language evolved naturally to function between humans; maybe in the future we'll have other pronouns as language evolves over time, but at the moment pronouns seem to be required for fluid communication.

    On top of that, since language is used to train the models, they function better with common use of language. Calling it "you" functions better for its analysis of the text you input, as there are more instances in language of "you" being used than of language structured as talking to a "thing".

    But we are still anthropomorphizing, even if we tune language away from common pronouns.
  • Philosophy of AI
    The proponents and producers of large language models do, however, encourage this anthropomorphic process. GPT-x or Google bard refer to themselves as 'I'. I've had conversations with the Bard machine about this issue but it fudged the answer as to how that can be justified. To my mind the use of the word 'I' implies a human agent, or a fiction by a human agent pretending insight into another animal's thoughts. I reject the I-ness of AI.mcdoodle

    But that's a problem with language itself. Not using such pronouns would lead to an extremely tedious interaction with it. Even if it was used as a marketing move from the tech companies in order to mystify these models more than they are, it's still problematic to interact with something that speaks like someone with psychological issues.

    There is an aspect of anthropomorphism, where we have projected human qualities onto machines. The subject of the machine, could be nothing more than a convenient linguistic formation, with no real subjectivity behind it. It's the 'artificialness' of the AI that we have to bear in mind at every-step, noting iteratively as it increases in competence that it is not a real self in the human sense. This is what I think is happening right now as we encounter this new-fangled AI, we are proceeding with caution.Nemo2124

    But if we achieve and verify a future AI model to have qualia, and understand it to have subjectivity, what then? If we know that the machine we speak to has "inner life", its own subjective perspective, existence, and experience, how would you relate your own existence and sense of ego to that mirror?
    Screaming or in harmony?

    Chat-GPT and other talking bots are not intelligent themselves, they simply follow a particular code and practice, and express information regarding it. They do not truly think or reason, it's a jest of some human's programming.Barkon

    We do not know where the path leads. The questions raised here are aimed rather at possible future models. There are still few explanations for the emergent properties of the models that already exist. They don't simply "follow code"; they follow weights and biases, and the formation of generative outputs can be highly unpredictable as to what emerges.
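
    A minimal sketch of what "following weights" rather than "following code" means in practice: the model scores possible next tokens, and the actual output is sampled from those scores, so identical prompts can diverge. The scores below are hypothetical:

    ```python
    # Minimal sketch of why generative output is not classically "coded":
    # the network yields a probability distribution over next tokens, and the
    # actual output is *sampled*, so identical prompts can diverge.
    import math, random

    logits = {"cat": 2.0, "dog": 1.5, "teapot": 0.3}  # hypothetical model scores

    def sample_next_token(logits, temperature=1.0):
        # Softmax over logits, scaled by temperature (higher = more surprising).
        scaled = {t: math.exp(v / temperature) for t, v in logits.items()}
        total = sum(scaled.values())
        r = random.uniform(0, total)
        for token, weight in scaled.items():
            r -= weight
            if r <= 0:
                return token
        return token

    print([sample_next_token(logits, temperature=0.8) for _ in range(5)])
    ```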

    That they "don't think" doesn't really mean much when viewing both the system and our brains in a mechanical sense. "Thinking" may just be an emergent phenomena that starts to happen in a certain criticality of a complex system and such a thing could possibly occur in future models as complexity increases, especially in AGI systems.

    To say that it's "just human programming" is to not take into account what machine learning and neural paths are about. "Grown" complexity isn't something programmed; only the initial conditions are, very much like how our genetic code is our own initial condition for "growing" our brain and capacity for consciousness.

    Concluding something definitive about the current models, in an ongoing science that isn't fully understood, isn't valid. They don't think as they are now, but we also don't know at what level of internal perspective they operate, just as we have the problem of P-zombies in philosophy of mind.

    The fact is that it can analyze and reason about a topic, and that's beyond merely regurgitating information. That's synthesis, and closer to human reasoning. But it's rudimentary at best in the current models.
  • Philosophy of AI
    The question is how do we relate to this emergent intelligence that gives the appearance of being a fully-formed subject or self? This self of the machine, this phenomenon of AI, has caused a shift because it has presented itself as an alternative self to that of the human. When we address the AI, we communicate with it as another self, but the problematic is how do we relate to it. In my opinion, the human self has been de-centred. We used to place our own subjective experiences at the centre of the world we inhabit, but the emergence of machine-subjectivity or this AI, has challenged that. In a sense, it has replaced us, caused this de-centring and given the appearance of thought. That's my understanding.Nemo2124

    Haven't we always done this? Copernicus placed our existence outside the center of our solar system, which made people feel less "special" and essentially de-centered their experience of existence.

    These kinds of advances in our existential self-reflection throughout history have always challenged our sense of existence, constantly downplaying how special we are in contrast to the universe.

    None of this has ever "replaced us", but rather challenged our ego.

    This collective ego death, which comes from our constantly evolving knowledge of our own insignificance in existence, is something I really think is a good thing. There's harmony in understanding that we aren't special, and that we are rather part of a grander natural, holistic whole.

    These reflections about AI have just gone mainstream at the moment, but they've long been part of the work of thinkers focusing on philosophy of mind. And we still live in a time when people generally view themselves as the center of the universe, especially in the political and ideological landscape of individualism that is the foundation of westernized civilizations today. The attention economy of our times has put people's egos back into believing themselves to be the main character of the story that is their life.

    But the progress of AI is once again stripping away this sense of a centrally positioned ego by putting a spotlight on the simplicity of the human mind.

    This progress underscores that the formation of our brilliant, intelligent mind appears fundamentally rather simple, and that the complexity is due only to evolutionary fine-tuning over billions of years. Basic functions operating over time end up in higher complexity, which can be somewhat replicated through synthetic approaches and methods.

    It would be the same if intelligent aliens landed on earth and we realize that our mind isn't special at all.

    -----

    Outside of that, what you're describing is simply anthropomorphism, and we do it all the time. Combine that with the limits of language when holding a conversation with a machine using words that are neutral of identity. Our entire language depends on pronouns and identity to navigate a topic, so it's hard not to anthropomorphize the AI, since our language is constantly pushing us in that direction.

    In the end, I think the identity crisis people sense when talking to an AI boils down to their religious beliefs or their sense of ego. Anyone who's already viewing themselves within the context of a holistic whole doesn't necessarily feel decentralized by the AI's existence.
  • Philosophy of AI
    Is AI a philosophical dead-end? The belief with AI is that somehow we can replicate or recreate human thought (and perhaps emotions one day) using machinery and electronics. This technological leap forward that has occurred in the past few years is heralded as progressive, but as the end-point in our development is it not thwarting creativity and vitally original human thought? On a positive note, perhaps AI is providing us with this existential challenge, so that we are forced even to develop new ideas in order to move forward. If so, it represents an evolutionary bottle-neck rather than a dead-end.Nemo2124

    I do not understand the conclusion that if we had an AI that could replicate human thought and neurological processes, it would replace us or anything we do with our brains.

    How does the emergence of a self-aware intelligent system disable our subjectivity?

    That idea would be like saying that because there's another person in front of me there's no point of doing anything creative, or think any original thoughts, because that other person is also a brain capable of the same, so what's the point?

    It seems people forget that intelligences are subjective perspectives with their own experiences. A superintelligent, self-aware AI would just be its own subjective perspective, and while it could manifest billions of outputs in images, video, sound, or text, it would still be driven by only that singular perspective.

    I'll take the other side of that bet. I have 70 years of AI history and hype on my side. And neural nets are not the way. They only tell you what's happened, they can never tell you what's happening. You input training data and the network outputs a statistically likely response. Data mining on steroids. We need a new idea. And nobody knows what that would look like.fishfry

    That doesn't explain emergent phenomena in simple machine-learned neural networks. We don't know what happens at certain levels of complexity, and we don't know what emerges, since we can't trace outputs back to any certain origins in the "black box".

    While that doesn't mean any emergence of true AI, it still amounts to behavior similar to ideas in neuroscience about emergence: how complex systems at certain criticalities develop new behaviors.

    And we don't yet know how AGI compositions of standard neural systems will interact with each other, or what happens when pathways between different operating models interlink them into a higher-level neural system. We know we can build an AGI as a "mechanical" simulation of generalized behavior, but we still don't know what emergent behaviors arise from such a composition.

    I find it logically reasonable that since ultra-complex systems in nature, like our brains, developed through an extreme number of iterations over long periods of time, and through evolutionary changes based on different circumstances, they "grew" into existence rather than being directly formed. Even if current machine learning systems are rudimentary, it may still be the case that machine learning and neural networks are the way forward, but that we need to fine-tune how they're formed in ways that mimic the more natural progression and growth of naturally occurring complexities.

    The problem, then, isn't the technology or method itself, but rather the strategy for implementing and using the technology, so that the end result forms a similarly high complexity while staying aligned with the purpose we form it toward.

    The problem is that most debates about AI online today just reference past models and functions, but rarely look at the actual papers coming out of the computer science that's going on. And with neuroscience beginning to see correlations between how these AI systems behave and our own neurological functions, there are similarities that we shouldn't just dismiss.

    There are many examples in science where rudimentary, common methods or materials, in another context, revolutionized technology and society. Machine learning systems might very well be the exact way we achieve true AI, but we don't truly know how yet; we're basically fumbling in the dark, waiting for the time when we accidentally leave the petri dish open overnight and grow mold.