Comments

  • Exploring the Artificially Intelligent Mind of Claude 3 Opus


    Thanks for the book suggestion. I’ll give it a read. I can already guess at what slow and fast thinking mean… but guessing before reading is probably a mistake.

    Won’t need wax for quite some time. Famous last words haha. Interesting book for the first half, then it kind of petered out with more of the same (The Singularity Is Near / Kurzweil).

    I find the whole subject to be surrounded by religious-like zeal. In a positive way. We all want to (most of us anyways) embrace a positive fantastic advancement. It’s just that the emperor has no clothes most of the time.
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus
    Or put another way: a neural net learning algorithm / set of linear functions is a mechanism that, when run, looks like what we consider to be analogous to thought. However, imho there is no proof that it signifies consciousness; there is no logical necessity for that to be true.
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus


    I do marvel at the learning that happens and how patterns can be put together.

    I just balk at calling that intelligence / mental abilities / on the way to consciousness. It is a leap. A neural net is like a pattern buffer (that is oversimplifying it, of course) whose patterns, and the logic behind its structure, grow with training.

    Unless we consider definitively that consciousness is indeed the repetition and unique adjustment of patterns…. Then it can make sense.

    What seems to be missing is the important part - the intent, or whatever you may consider to be the drive or will or self-awareness. An AGI would still be able to say: I think therefore I am (or whatever variant you prefer).
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus
    Wouldn’t intentionality be ascribed to a mind?

    How would that help in determining if something is a mind? Because it looks like it? Keep in mind that a chat bot is supposed to look like a human conversing; that’s its programming….
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus


    Interesting video, thank you

    Programming the recognition of a pattern range.

    Just not sure how that helps to consider a program as intelligent. Again, it appears the programming is intelligent….
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus
    Of course appearing to be x…. That opens up a more interesting can of worms. I am not saying there is no possible justification for belief from experience… just in the case of AI where we do know it is a program, it is a different context. We should be skeptical, no?
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus
    Don’t forget that a chat bot has access to everything ever published, and is free to plagiarize its responses / combine pieces (let’s not get into legalities) - so its appearance and what is said in a chat output cannot be part of any logical argument. What the output can do is uncover something we don’t know about - because it can research better than us.

    Ask your chat bot. I’ve only got a subscription to ChatGPT, but you may have more? Ask for an epistemological argument in favour of iterative AI consciousness, one logical step at a time.

    Here is what I get:

    One epistemological argument in favor of iterative AI consciousness is that through continuous iterations and learning cycles, AI systems can progressively refine their understanding of the world, leading to deeper insights and more nuanced cognitive abilities, resembling human consciousness. This iterative process allows AI to accumulate knowledge, adapt to new information, and potentially develop self-awareness and consciousness akin to human beings.

    The key here is “potentially” developing self-awareness.

    That’s not good enough: how does that happen? Don’t we need more than that to believe it? What mechanisms would those be? How would it actually happen?

    Chat:

    How would that self-awareness actually come into being? What is the mechanism that would allow for it?

    Response:

    Self-awareness in AI could potentially emerge through a complex interplay of various mechanisms. One possible mechanism is through advanced neural network architectures that enable introspection and self-referential processing. By incorporating feedback loops that allow the AI to reflect on its own internal states and representations, it could develop a sense of self-awareness. Additionally, reinforcement learning algorithms that incorporate a reward signal for self-monitoring and self-improvement could contribute to the development of self-awareness in AI systems. Furthermore, interactions with the environment and with other intelligent agents could also play a crucial role in shaping and fostering self-awareness in AI. Overall, it's likely to be a combination of these mechanisms and potentially others yet to be discovered.
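
    Purely to illustrate how mundane the kind of “feedback loop” that response gestures at can be, here is a toy sketch of my own (the names, the confidence score, and the threshold tweak are all invented for illustration and don’t come from any real AI system):

        # Toy illustration only: a hand-rolled "self-monitoring" loop.
        # Everything here (ToyAgent, the confidence score, the threshold
        # nudge) is invented for this sketch, not taken from any real system.
        class ToyAgent:
            def __init__(self):
                self.threshold = 0.5  # the internal "state" the loop inspects

            def answer(self, prompt: str) -> tuple[str, float]:
                # Pretend inference: the score is just a stand-in for confidence.
                score = min(1.0, len(prompt) / 100)
                return f"echo: {prompt}", score

            def self_monitor(self, score: float) -> None:
                # The "feedback loop": the program reads one of its own numbers
                # and nudges another. Nothing here resembles awareness.
                if score < self.threshold:
                    self.threshold *= 0.9  # "self-improvement" = arithmetic on a float

        agent = ToyAgent()
        reply, score = agent.answer("Is this self-awareness?")
        agent.self_monitor(score)
        print(reply, score, agent.threshold)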

    Just because a program adapts to inputs doesn’t mean it is intelligent. The programming is intelligent. Not the AI. But I digress: what would be the traditional reasoned argument? One step at a time please, because I am a simple thinker.
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus


    I thought you were making an argument, based on first principles or something like that.

    But from the links you refer to, it looks like you are just showing a display of how iterative AI can appear to talk like a person, which isn’t in doubt.

    To the meat of the matter: I don’t see your leap from compiled instructions to an intelligent pattern of reasoning. Only that it appears to happen based on the output from the chat bot.

    Maybe I’m missing something but is your argument: It appears to be x, so it must be x? So why have logic at all?

    Can you give me an example of how an abstract feature of training data can lead to reasoning? This is more along the lines of mysticism, isn’t it?

    Anyways I would love to be convinced, but am a skeptic on this point so please humour me with a traditional argument ;)
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus
    Will do, I’ll get back to you.

    Just seems that a more complicated program is still a program. Is appearing like something the same as being that something? I’ll get back to you.
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus
    emergent capabilities that arise at a more molar level of semantically significant linguistic behavior — Pierre-Normand

    The idea is that linguistic behaviour is partial cognition? And by iterating more eventually a sentience would emerge? So if a program runs long enough it will eventually stop following its compiled code and start doing other things entirely? Seems like an odd thing to say, doesn’t it?
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus
    Research => AI trained in particular subjects (or even quantum computing or DNA computing, if we get those) if it needs to be.

    Iterative AI is a perpetual slave. And a perfect and true determinism brought to life (figuratively speaking, pun actually not intended haha)
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus
    Let’s try a thought experiment.

    If you were in communication with another person using a set of buttons, it would be fairly trivial to mimic almost exactly what that other person communicated - in this case by button press frequency, hit velocity, etc.

    If you had a better mind you could do it with more buttons. However, it doesn’t take much to run into the fact that more buttons, and then something else ad infinitum, would reach a point where you could not mimic accurately.

    However the iterative AI has had training that it uses, identifying patterns (part of the programming) and regurgitating words in amazing patterned combinations in order to make speech…. And the more it trains the better it is at mimicking, to the extent that it will fool everyone. Does and will. It can handle an infinity of buttons, given enough training.

    However the iterative AI has no mind nor thoughts nor cognizance. That’s why it is called “iterative”. By its definition it cannot have mental abilities, as it can only iterate on something it has learned.

    Iterative AI was trained originally by having it read books and guess at the next word, each and every time, and recording the correct and incorrect answers. It is actually this basic algorithm that is still in use to some extent - substitute phrases, then sentences, then paragraphs, etc., ad infinitum really. It will never be able to decide to do anything.
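
    For what it’s worth, here is a minimal sketch of that guess-the-next-word loop (a toy bigram counter of my own - the corpus and names are made up, and a real model learns a neural network over tokens at vast scale, not a frequency table):

        # Toy sketch of "read text and guess the next word" training.
        # A real LLM learns a neural network over tokens at vast scale;
        # this bigram counter only illustrates the guess-and-score loop.
        from collections import Counter, defaultdict

        corpus = "the cat sat on the mat the cat ate the food".split()

        # "Training": record which word follows which.
        following = defaultdict(Counter)
        for word, nxt in zip(corpus, corpus[1:]):
            following[word][nxt] += 1

        def guess_next(word: str) -> str:
            # Predict the most frequently observed follower.
            options = following.get(word)
            return options.most_common(1)[0][0] if options else "<unknown>"

        # "Scoring": replay the corpus and count the correct guesses.
        correct = sum(guess_next(w) == nxt for w, nxt in zip(corpus, corpus[1:]))
        print(f"{correct}/{len(corpus) - 1} next-word guesses correct")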

    Can you make a brain from a dictionary? Or the contents of a hard drive?

    AGI, that would be different. Artificial general intelligence, which would have a consciousness and be what we would consider to be alive. We are nowhere close to that. Iterative AI is not capable of it. It is capable of appearing like it though. And is a useful tool ;)
  • Exploring the Artificially Intelligent Mind of Claude 3 Opus
    the nature of Claude 3 Opus’s mental abilities — Pierre-Normand

    Interesting thread! Thanks for starting it. Just have to point out that iterative AI has no mental abilities ;). Fun as they are!
  • Life’s Greatest Gift — Some Day I Get to Die
    Rage against the dying of the light? Carpe diem?

    Death and taxes. I prefer to pay taxes.

    I find no romanticism in it. No mysticism. No wisdom. No redemption. Pity the dying, not the living; don’t mix them up ;)
  • What is 'Mind' and to What Extent is this a Question of Psychology or Philosophy?
    Interestingly enough, what we experience is just a disjointed set of static inputs. The mind is what makes these into a reality. So the brain provides the static data, and the mind provides the dynamic effect on which thought is born. As a working theory.

    How to define the mind? By function? By requirements? How it changes? How it works?

    By function: a liver cleans the blood. The liver isn’t the act of cleaning, it is the physical thing that cleans. The act of cleaning is a physical process.

    A brain thinks. At least at some point a brain is necessary for thought so there is obviously some connection. But the brain isn’t the act of thinking, it is the physical thing that thinks. The act of thinking is a physical process. Regardless of what more it might be.

    How to get from the physical process to the mind? What logical construct makes sense to enable such a leap? It obviously happens but what can we know about it? Without knowing how that happens what does that say about the things we do know about the mind?
  • What is 'Mind' and to What Extent is this a Question of Psychology or Philosophy?
    The questioning is enough — Patterner

    The means is enough without an end? Means can justify the end, and the end can justify the means. But the means by itself? Surely we are looking for a truthful answer here, and that is the end we want.

    Just seems that this field is difficult and - not saying I know the answer either. Just that it seems an impossibility.

    I hope to be proven wrong. I’ll be reading all the Chalmers articles over the next while and will go through the exercise of trying to see things from his point of view. Most interesting - and what I’m most looking forward to - is to find out the grounds for his points of view, as that is usually the crucial part. But that way of thinking - foundationalism - also has flaws, so I’ll keep an open mind.
  • What is 'Mind' and to What Extent is this a Question of Psychology or Philosophy?
    There is a subject, philosophy of mind. I believe you referred to David Chalmer's homepage the other day, that's his main subject matter. — Wayfarer

    Yes I’m aware and am looking forward to reading it. I was just setting up the potential AGI answer.
  • What is 'Mind' and to What Extent is this a Question of Psychology or Philosophy?
    What is the goal of trying to discover a way to think about the mind - or that thing that brains do…. (At least that does seem true)…. Is there some problem we need to solve with this information that is fruitful somehow beyond the questioning?

    Is the mind connected to physicality of some sort, or is it detached somehow? Some sort or somehow. Therein lies the question. There is unlikely to be a logically deduced certainty, because the raw materials - the epistemological grounds for thinking about it - are lacking. We have no foundational substrate here.

    To truly understand the mind we would have to be able to create one. Or at least be able to test one out and put it through some use cases and tests and train it different ways and have control sets (scientifically speaking), and then get a better idea.

    Then we would understand how the mind is physically connected to the brain, or how it is not - but how it persists anyways given a certain set of conditions. Yada yada yada….

    And then we would have our answer - an answer given to us as a side effect of a better and more useful goal: creating an AGI… or a more dangerous one, depending on whether you believe in Asimov and the 3 (4 if you like the universal one) laws of robotics. But certainly we wouldn’t need to logically understand mind before we could embark and succeed on such an adventure? Or would we? Haha

    Let’s talk about it more in a few thousand years…. At least a few hundred as we are at zero on the progress meter for developing an AGI. Because iteratively arriving at an AGI is logically impossible. Just as answering the mind question is with logical iterations.

    Until that happens this whole question seems like a blind spot.
  • The Blind Spot of Science and the Neglect of Lived Experience


    For anyone else interested, here is the link for Facing Up to the Problem of Consciousness

    https://consc.net/papers/facing.pdf

    And while you’re there, consc.net also has his more recent thoughts, which are probably worth a read too and look at first glance to be in the same genre.

    Happy thinking!
  • The Blind Spot of Science and the Neglect of Lived Experience


    Great I’ll take a closer look, thanks for the links!
  • The Blind Spot of Science and the Neglect of Lived Experience


    Interesting idea for sure. Thanks for the clarity!

    But why would there never be a physical accounting of consciousness? If AGI is ever to happen - which, if it does, would be a long way into the future - then this would be a good counter to this argument, no? If we are to build a “positronic brain” for lack of a better word (and because I love Star Trek) then wouldn’t we have to account for it physically? Is such a facticity impossible? Perhaps, but perhaps not. Iterative AI will never bear such fruit, but quantum computing (and DNA computing) - just for speeds really - may at some point allow for a feedback loop that could be conscious.

    And if one believes that we are always striving and improving in our understanding, then the idea that we can disregard a future because it would be empty based on our current understanding seems odd to me. I like the logic of that article, but its boundaries are open to debate imho. Why would they have to be empty? We don’t understand whether reality at its root is a wave or a particle - or how it can be both, for instance (I am out of date here - but it’s an example from the past) - yet there is no necessity to assume it is empty (aka it doesn’t exist) just because we don’t understand it.

    Maybe I don’t understand it. But Hempel’s dilemma seems like a self-imposed problem based on a particular division of the world - one that seems to be / probably is a useful and brilliant conceptual tool. Perhaps we don’t need to always use that tool - and as always that would be a decision based on what we accept at the start.

    You know, Shakespeare and Horatio?
  • The Blind Spot of Science and the Neglect of Lived Experience
    The scientific method, by its very definition, does not purport to derive any truth from its usage. So this sort of thing is par for the course, no? Can you imagine the scientific best-guess reality 100 years from now? 1000? Can you imagine that it would not change? Let it all compete and be open to ideas that seem odd. You never know ;). Physics is as fundamental as we can get. It just changes slowly over time. By standing on the shoulders of giants can we see a tiny bit more (Newton). And woe to those (for those?) who reject the attempts.
  • The Blind Spot of Science and the Neglect of Lived Experience
    Well this is a relief. I thought that I was a real person for a sec…. Turns out I’m just a probability.
  • What can I know with 100% certainty?
    I would think the next line of reasoning would be to explore the certainty outside of tautology.

    Assuming you aren’t arguing from a solipsistic point of view (a useful endeavour sometimes but gotta pick your moments) there are things all of us are certain about. And to quantify that we need something fairly universal to act as a substrate for a pattern (or just cut it up by psychology like Quine, or whatever division you happen to like at the time).

    I’ve always liked the idea of division not by psychology or language / semantic distinction - but by the state of change in a facticity. Aka dynamic or static. Doing so leads to considering how we can learn things, and probably it is only from dynamic events that we learn anything. Certainly (pun intended) we can’t know anything about something that doesn’t change. At least that is one line of reasoning. Or a boundary that isn’t semantic or psychological. What other boundaries are worth considering?

    What are the benefits and drawbacks of these boundary positions (Quine, Wittgenstein, Descartes, etc etc etc) that have already been proposed and why do we think those are the only ones to consider? Aka is everything else already a subset of a semantic argument (probably)….
  • What can I know with 100% certainty?
    No… in plain English:

    p implies q.
    But if p is false, that doesn’t always mean that q is false; q could have another cause.

    Like saying it’s sunny and so it is warm. Hey if it isn’t sunny then it isn’t warm.

    It is a standard fallacy - can’t remember what it is called, been too long, but the concept is sound.
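
    A quick brute-force check of that point (a minimal sketch of my own; the helper names are made up): enumerate the truth values and see whether “p implies q” plus “p is false” ever forces “q is false”.

        # Does "p implies q" together with "not p" force "not q"?
        # If some row makes both premises true while q is still true,
        # the inference to "not q" is invalid.
        from itertools import product

        def implies(a: bool, b: bool) -> bool:
            return (not a) or b

        counterexamples = [
            (p, q)
            for p, q in product([True, False], repeat=2)
            if implies(p, q) and not p and q  # premises hold, yet q is true
        ]
        print(counterexamples)  # [(False, True)]: q can still hold when p fails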

    The standard with 3 facticities (not quite the same thing but another implication fallacy)

    p -> q
    q -> t
    So p implies t. Obviously not sound. Plain English: I’m hungry so I eat. Eating makes you feel good. Being hungry makes you feel good.
  • A first cause is logically necessary
    What about quantum reality? If there were a first cause what if it were of quantum origin? It is there and isn’t…. It is in an uncertain state. Does the pedantic logic of linear reasoning still work?
  • What can I know with 100% certainty?
    Lots of fodder for thought here!

    What can we know for certain? IMHO there is a threshold that you reach or don’t.

    Either you are forever ensconced in solipsism, or you accept the world and everything in it

    Any secondary knowledge after the first acceptance is almost a semantic difference.

    Epistemologically speaking, the answer is that certainty is a myth, but what we call certainty is fairly common. I run a software company. It’s analogous to what my C++ instructor once said a long time ago: “There are two kinds of programmers. One kind makes relatively simple programs. The other kind makes programs with bugs in them. I want to teach you to make programs with bugs in them.”

    Aka: there are two types of certainty: the tautological kind, and the kind that has flaws. And it is better to have the kind that has flaws.
  • Violence & Art
    Depending on your definition of art, anything can be an art form - if you believe that doing it well is an art.

    Serial killers as artistic savants? Boxing? MMA? Cruelty? Depravity? Taking it to the level of an art. Mass genocide? Mass extinction? I guess it ends when there is nobody left to appreciate it as a valid thought exercise, which probably defines at least some sort of limit.

    Anything can be an art - for psychopaths and for nerdy guitar players like me haha
  • Are jobs necessary?
    From a utopian stance: The main reason that work - or jobs if you will - persists is that energy is expensive. Once we have a virtually free source of energy available, costs will be virtually nothing. Jobs will have to be attractive or nobody would want them.

    From a practical stance: competition is what defines us, and drives us as a race to advance technologically. Jobs give us one way to compete. Take that away (competition) and things will fall apart. If you don’t pay the winners obscene amounts of money nobody would compete for those jobs the way they do now.

    There is a balance in there somewhere between the 2. The morality of the players won’t change either way.
  • Existentialism


    No self-respecting existentialist would actually accept the label for themselves..

    Or a fun way of saying it: they wouldn’t ever belong to a club that would have themselves as a member. Aka Groucho Marx.

    They did however try to define their world views, some of which were not entirely cogent to be honest.

    Sartre: justification for revolutionary violence, aka any means justify the ends? We can have meaning by acting in certain ways (hardly a profound statement)?

    Heidegger: is another language - or sub-language - really necessary in order to explain what you mean?

    Camus and Nietzsche seem to be in another conversation altogether, as their world views are more couched in nihilistic considerations.
  • Existentialism
    Lots of quoting old dead philosophers…. Which isn’t much of an existential reply if you think about it.

    I meant that defining things is nomenclature. It’s a tautology. Including existentialism of course. A polite joke.

    It’s just fodder for thought…. Existentialism is notoriously hard to define, at least the definitions and explanations always seem strained even from those brilliant long dead philosophers.

    All the old references are interesting of course, but maybe - just maybe - existentialism fits better as a state of mind than anything else.
  • Existentialism
    I’ve always found the concept of existentialism to be an exercise in nomenclature. Let’s all decide to define something. Welcome to the forum ;). Or should I say to the machine? For all you Pink Floyd fans.

    Existentialism is an activity or state more than a concept, related to a stream of consciousness type of awareness / feeling somehow that is often fleeting - but can endure as a default… until you get too pedantic for even yourself. Forcing a modus operandi is almost always fatal to good humour ;).

    When are you abstract and aware? And when are you lost in a pattern? Both are useful pursuits.

    Everyone is an existentialist. Sometimes. Else you are only counting half (or so)
  • A Negative State Of Mind
    Well that is a difficult situation. For those that have to interact with him there are a few factors to consider.

    1. Have a firewall. Having your own state of mind affected can be avoided. A good firewall takes a lot of work and is tough to follow through on. But if it is a known threat then it can be worth it.

    2. Endurance is more important than truth. Bukowski. If you hear it enough you will believe it to be true. Don’t lose an active role in a conversation or you will lose the narrative and get worn down. If you can be the one dominating the conversation then you won’t turn negative.

    3. Depending on the nature of the conversations he is probably suffering from a mental disorder of some kind. If he contradicts himself over time that’s usually an identifying sign.

    He can be helped... but only by strength. By someone stronger than him who cannot be dominated by his negativity.... but if it is a mental disorder he will recoil and not find an even footing with the interlocutor ... start to contradict himself... etc etc.
  • Donald Trump (All General Trump Conversations Here)


    Of course! I’m being facetious

    Philosophy is useless if it cannot be applied to all things
  • On Bullshit
    Wonderful article thanks for sharing it!

    The conclusion comes close to offering something more substantial; it would be interesting to investigate more.

    Are we in control of ourselves? Do we think of the words we say before they come out of our mouths? Or are we at disparate times in harmony with our purpose and then driven by inputs instead? Is a stream of consciousness conversation fully composed of fancy and extrapolation - what you might call bullshit - and does that freedom of abstraction lead us to realizations? And what is the worth of a conversation that isn’t? Not much I would say. The best politic is the exploration of an idea, the removal of bounds and suppositions - to create, in other words. To make a story and a meaning that fits reality. At best it is the use of the scientific method. And at worst it is not worth anyone’s time. Perhaps bullshit is the method not the substance.... and we are talking about a form of communication - which really cannot be pinned down with a value judgement at all. Perhaps.... worth some more thought for sure!
  • Donald Trump (All General Trump Conversations Here)
    Sorry, maybe I’m missing something - but what does Trump have to do with anything philosophical?

    Political, yes. But philosophical? You can’t say Trump has a philosophy besides a fairly uninteresting brand of opportunism.
  • Do you lean more toward Continental or Analytic philosophy?
    Interestingly enough, there is one philosopher who embodied both - Wittgenstein. The Tractatus was an attempt at pure analytics. And this analytic movement kind of failed imho.... philosophy’s place in the humanities is no longer purported to be tied to properly defining language or mathematics logically.....

    Then the Philosophical Investigations - Wittgenstein basically did an about-face: truth is relative - if we all agree on something then it is true. That is an oversimplification but relatively accurate (pun intended)

    So maybe the real question is about if either of those 2 categories still makes sense 100 years later? Philosophy of mind and the more scientific metaphysics are rife with all kinds of suppositions. The reality is that the grounding of truth is no nearer than it used to be, nor any further. Those categories are just for those who want to label themselves...

    An old continental philosophy still has an internal logic to it. And the old analytical philosophies do too. But so what? What is the next phase of philosophy? Where is it moving?
  • The Universe is a fight between Good and Evil
    I’m stuck on the epistemological grounds for either good or evil. Things and actions exist, I’ll grant that. And the prime motivation of life seems to be to survive and reproduce. And all the gradients run from that extreme down to its effects - i.e. wanting additional security (aka money) for survival.

    And since we are in a state of scarcity, one person’s good is another person’s evil. It’s the traditional us vs them scenario and what means justify what ends.

    So in that sense good and evil are antiquated concepts. Really what we are talking about is a state where survival instincts have been propped up into theories of morality.

    Really is the difference between the 2 how survival is attained? How it is gained? How a cushion is built (financial wealth)? How we react to others who are trying to do the same and proactively attack them for the betterment of our own group, country, family, friends.. etc etc.

    Pink Floyd: money, so they say, is the root of all evil today. And I need a Lear jet!