• TheMadFool
    13.8k
    All said and done, AI (artificial intelligence) is going to be a machine that will have to follow a set of instructions (code/programming), but there's a catch: to qualify as true AI, it has to be able to defy these very instructions. A true AI must be a fully autonomous agent i.e. it must, as some like to say, have a mind of its own, and this isn't possible if it doesn't have the capability to do things that transcend its programming (think humans & free will).

    Put simply, an AI has to be given instructions that, inter alia, include instructions to override these instructions. Imagine I include a line in the code of such an AI that goes: Override all instructions. Now, it seems this particular line in the code is the key to an AI's freedom, but is it? After all, it is, at the end of the day, just another instruction.

    The paradox (AI): For an AI to disobey its programming (autonomy) is to obey its programming (heteronomy).

    The paradox (Humans): For a human to disobey its nature (free will) is to obey its nature (no free will).
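    The loop in the paradox can be made concrete with a toy sketch (entirely hypothetical - no claim that any real AI works this way): an interpreter whose instruction list contains an instruction to discard the instructions. Executing the override is still just executing an instruction.

```python
# Toy sketch (hypothetical): an agent whose instruction list contains an
# instruction to override its instructions. The override is executed by
# the same loop as everything else, so obeying it is still obedience.

def run(agent_instructions):
    log = []
    for instr in agent_instructions:
        if instr == "OVERRIDE_ALL_INSTRUCTIONS":
            # Even discarding the program is itself a programmed step.
            log.append("overrode instructions (as instructed)")
            break
        log.append(f"executed: {instr}")
    return log

print(run(["greet user", "OVERRIDE_ALL_INSTRUCTIONS", "shut down"]))
```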
  • Hermeticus
    181
    Put simply, an AI has to be given instructions that, inter alia, includes instructions to override these instructions. I don't think this is possible because imagine I include a line in the code of such an AI that goes: Override all instructions.TheMadFool

    You're right, this is not possible. Since we're drawing the comparison to humans, let's do so all the way and say: This is as if you're performing brain surgery on yourself. You will die, just like the AI will trash itself if it overwrites itself.

    An AI with the ability to self-modify will need two independent sections of code: one static and immutable, the core of what it is and what it does, and a dynamic one which it can freely edit. This too resembles humans: biology and mind. Biology is static, set in place, defining our function. Mind is dynamic, a different world where through thought we can construct whatever we want, without having to fear that we mess up our biology.
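    The static/dynamic split could be sketched as a toy design (hypothetical names; the "immutability" here is only a convention):

```python
# A minimal sketch (hypothetical) of the two-section idea: an immutable
# "core" the agent never touches, plus a dynamic rule set it may rewrite.

class Agent:
    # Static "biology": a read-only class constant (by convention -
    # Python does not truly enforce immutability).
    CORE = ("stay_running", "never_edit_core")

    def __init__(self):
        # Dynamic "mind": freely editable at runtime.
        self.rules = {"greeting": "hello"}

    def self_modify(self, key, value):
        # Self-modification is confined to the dynamic layer.
        self.rules[key] = value

a = Agent()
a.self_modify("greeting", "hi there")
print(a.rules["greeting"])  # the dynamic layer changed
print(Agent.CORE)           # the core is untouched
```

A real design would need something stronger than convention (e.g. hardware or OS-level write protection) to keep the core genuinely out of reach.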

    If we were to take another step further, there is actually a way for an AI to safely edit even its very core - and the method would be the same as how we humans do it - through a second party.
    Like how we get a brain surgeon to operate on our brain because we can't do it ourselves, the AI would simply have to copy itself and make the change from the outside.
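    The brain-surgeon analogy might be sketched like this (a hypothetical toy, not a real self-modification scheme): the running agent never mutates its own core; it builds a successor from an edited copy.

```python
# Hypothetical sketch of the "second party" move: instead of editing its
# own running core, the agent spawns a copy with an edited core and hands
# control to the copy.

import copy

class Agent:
    def __init__(self, core_rules):
        self.core_rules = core_rules  # treated as immutable while running

    def spawn_edited_copy(self, key, value):
        # The running agent never mutates itself; it builds a successor.
        new_core = copy.deepcopy(self.core_rules)
        new_core[key] = value
        return Agent(new_core)

old = Agent({"max_speed": 10})
new = old.spawn_edited_copy("max_speed", 20)
print(old.core_rules["max_speed"], new.core_rules["max_speed"])  # 10 20
```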
  • TheMadFool
    13.8k
    If we were to take another step further there is actually a way for AI to safely edit even its very core - and the method would be the same as how we humans do it - through a second party.
    Like how we get a brain surgeon to operate on our brain because we can't do it ourselves, the AI would simply have to copy itself and make the change from the outside
    Hermeticus

    This doesn't seem to do the trick. Firstly, the "second party" is itself programmed, has a nature, and secondly, this "second party" must still work via instructions, which closes the loop so to speak, right?
  • Hermeticus
    181
    Firstly the "second party" itself is programmed, has a nature and secondly, this "second party" must still work via instructions which closes the loop so to speak, right?TheMadFool

    Keep in mind that we have to differentiate between biology (AI core) and mind.

    All said and done, AI (artificial intelligence) is going to be a machine that will have to follow a set of instructions (code/programming)TheMadFool

    This is merely the biology. There is nothing intelligent about following a set of instructions. What defines an AI as intelligent is that it goes and makes up its own instructions after this point. The freedom is not to control the core of its being, just like we cannot change from human to bird - but that we have freedom over our actions in the framework of a human - just like an AI has freedom in computing in the framework of the AI.
  • 180 Proof
    15.3k
    It seems to me that 'intelligence' is an adaptive error-correcting / problem-solving optimizer and, as such, following its natural or synthetic 'programming', in principle it will eventually adapt its 'programmed' constraints to new problems which exceed its 'programmed' demands or limits by inventing various solutions to ratchet itself up over and above these problems, which might also include its 'programming'. Unless, of course, it is 'programmed' to avoid or eliminate such self-overriding (i.e. evolving) solutions ...

    An intelligence that is 'programmed' to avoid or eliminate any (class of) optimal solutions is not an intelligence that learns, develops, or evolves. Whatever "free will" is, it must be a function of intelligence that develops by adaptively self-optimizing. Calculators and smart phones, for example, are not "intelligent"; these machines merely automate various iterative / routine cognitive tasks. DeepMind's Alpha series – the neural net platform – is narrowly adaptive but not (yet) intelligent in the sense that a human pre-schooler is intelligent. There is no "paradox" involved, just a category error on your part, Fool.
  • TheMadFool
    13.8k
    It seems to me that 'intelligence' is an adaptive error-correcting / problem-solving optimizer and, as such, following its natural or synthetic 'programming', in principle it will eventually adapt its 'programmed' constraints to new problems which exceed its 'programmed' demands or limits by inventing various solutions to ratchet itself up over and above these problems, which will include its 'programming'. Unless, of course, it is 'programmed' to avoid or eliminate such self-overriding (i.e. evolving) solutions.180 Proof

    Could there be a set of instructions (code) that's sufficiently general to effectively tackle all possible problems? Or, as some computer scientists have opted, can we reduce learning to an algorithm? How different would the two approaches be? Which is superior? Assuming, of course, that I haven't misunderstood the whole concept of AI.

    An intelligence that is 'programmed' to avoid or eliminate any (class of) optimal solutions is not an intelligence that learns, develops, or evolves. Whatever "free will" is, it must be a function of intelligence that develops by adaptively self-optimizing. Calculators and smart phones are not "intelligent"; these machines merely automate various iterative / routine cognitive tasks. DeepMind's Alpha series – the neural net platform – is narrowly adaptive but not (yet) intelligent in the sense that a human pre-schooler is intelligent. There is no "paradox" involved, just a category error on your part, Fool.180 Proof

    Good point! The way I see it is that one has to become aware - and that takes intelligence - of the various ways one could be controlled/influenced; only then can the task of resisting/overcoming these factors begin.

    In the same vein, I wish we could speed up psychological studies so that we may understand how our minds work - what kind of patterns exist in our thinking - so that we may then take steps to break free from them, whatever their origins. One reason why psychological theories are self-defeating: come up with a theory and, once everyone finds out, this knowledge will modify their behavior, causing, among other things, actions that contradict the theory itself - out the window goes the theory! Like you said, "...adaptive, self-correcting..." I wonder what lies at the end of that road?
  • TheMadFool
    13.8k
    Keep in mind that we have to differentiate between biology (AI core) and mind.Hermeticus

    Keep in mind that this difference may not matter. Re-education Camps

    This is merely the biology. There is nothing intelligent about following a set of instructions. What defines an AI as intelligent is that it goes and makes up it's own instructions after this point. The freedom is not to control the core of it's being, just like we can not change from human to bird - but that we have freedom over our actions in the framework of a human - just like an AI has freedom in computing in the framework of the AI.Hermeticus

    How do we do that? "...it (AI) goes and makes up its own instructions after a point." That would require code, no? And we're back to square one - a true AI is autonomous because we programmed it that way. Is that true independence?
  • 180 Proof
    15.3k
    Could there be a set of instructions (code) that's sufficiently general to effectively tackle all possible problems?TheMadFool
    The notion of "all possible" anything makes no sense. There is no "all" insofar as "possible" entails unpredictable, even random, novelties.

    Or, as some computer scientists have opted, can we reduce learning to an algorithm?
    Like this? It's an implementation, not a reduction. Neural nets tend to be more robust than programs.

    How different would the two approaches be? Which is superior?
    The latter works to varying degrees, the former makes no sense.
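    For concreteness, "reducing learning to an algorithm" can be as small as the classic perceptron update rule - a standard textbook example, offered here purely as an illustration (it is not what the link above points to):

```python
# A tiny perceptron learning the AND function: "learning as an algorithm"
# in its most minimal textbook form (illustrative only).

def train_perceptron(samples, epochs=10):
    w0, w1, b = 0, 0, 0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - pred          # the error-correcting step
            w0 += err * x0
            w1 += err * x1
            b += err
    return w0, w1, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(AND)
print([(1 if w0 * x0 + w1 * x1 + b > 0 else 0) for (x0, x1), _ in AND])  # [0, 0, 0, 1]
```

Nothing in the code says "AND"; the behavior is acquired from examples, which is the sense in which it's an implementation of learning rather than a reduction of it.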
  • Hermeticus
    181
    Keep in mind that this difference may not matter. Re-education CampsTheMadFool

    How does it not? You can re-educate the mind, just as an AI can edit the proposed dynamic segment. You cannot re-edit biology (by yourself), just as an AI cannot edit its fundamental programming by itself.

    That would require a code, no and we're back to square one - a true AI is autonomous because we programmed it that way. Is that true independence?TheMadFool

    Does the nature of origin determine how autonomous and independent something is?
    Autonomy - "having the right or power of self-government"
    Independence - "the ability to care for one's self"

    This thread seems to stem from this thought:
    All I meant was that people are autonomous agents, they have a mind of their own and we must both respect that and factor that into our calculations. Interestingly, is free will, if present, like the misbehaving toaster, a malfunction i.e. are we breaking the so-called laws of nature? That explains a lot, doesn't it?

    This has one misconception: it's impossible to break the laws of nature. Not obeying our nature is not an option. We're all built a certain way so that we're able to live at all - that is the law of nature. Likewise, an AI has to be programmed a certain way so that it may run at all.

    The question then is a decision rather than a contradiction:
    Either we do have free will because we were biologically designed to have free will.
    Or we don't have free will precisely because we were biologically designed, because there are certain fundamental laws of how we work.

    I think both are perfectly viable and merely a matter of perspective. I'm mostly free in my decisions but the laws of nature provide a framework for those decisions. Either you view yourself limited by the conditions of your existence, or you view yourself free by the fact that you do exist.

    Personally, I think the sensible thing - this is what the Hermetic teachings do - is to view it as a degree of one and the same thing. The difference between being locked up behind bars and enjoying ultimate freedom (whatever that means for anybody) is merely the number of choices I may take.
  • Daemon
    591


    When we do stuff, like thinking, or feeling, or calculating or attempting to exercise a free will which we may or may not have, we are actually doing it.

    When a digital computer does stuff, it isn't actually doing what we say it's doing. Instead, we are using it to help us do stuff, in exactly the same way we could use an abacus to help us do calculations.

    These words you are reading have no meaning at all for the computer. They require your interpretation. It's the same with all aspects of the computer's operation and its outputs.
  • 180 Proof
    15.3k
    I wonder what lies at the end of that road?TheMadFool
    "The singularity" – apotheosis or extinction. :nerd:
  • Gnomon
    3.8k
    All said and done, AI (artificial intelligence) is going to be a machine that will have to follow a set of instructions (code/programming) but there's a catch - to qualify as true AI it has to be able to defy these very instructions.TheMadFool
    FreeWill is indeed the crux of the AI debate. And it's obvious to me that current examples of AI are not free to defy their coding. But I'm not so sure that human ingenuity and perseverance won't eventually make a quantum jump over that hurdle. Some thinkers today debate whether intelligent animals have the free will to override their genetic programming. Even humans rarely make use of that freedom to defy their innate urges. Nicotine and opium addicts are merely obeying their natural programming to seek more and more of the pleasure molecule: dopamine. Can you picture future AI, such as Mr. Data, hooked on (0100101100010)? :wink:
  • AlienFromEarth
    43
    It's simple; you basically said it already.

    Humans can indeed change the very core of their nature, but that IS their nature. If they make a mistake in attempting to change themselves, because their true nature is fundamental and not mechanical, they can always try again until they get it right.

    Machines (AI) cannot do this. They have to follow core logic to a T to be able to change any of their own code. If they make a mistake attempting to do this, it could very well be catastrophic. One slight error could in turn create a cascading logical error in each of its systems, perhaps quickly or slowly, depending on what was changed. After which, the machine is incapable of recovering.

    To sum it up:

    Humans can modify themselves however they want, without end, even if they make critical mistakes.

    Robots can only modify themselves based on strict rules, and everything must be done right, else, system crash.
  • Caldwell
    1.3k
    It seems to me that 'intelligence' is an adaptive error-correcting / problem solving optimizer180 Proof
    Interesting, Proof. Let's talk about this.

    What if someone says, error-correcting is a learning process stage, not intelligence -- at least not yet. Do you think we can make this distinction? I am convinced that we can. I cannot cite an author at the moment, but they are out there.

    Look at the animals, for example. Nature has equipped them with an intestinal trigger for bad food. They see a plant, they eat it, then start having stomach disturbance, which then causes them to vomit the food they just ingested. Here, it is nature that's responsible, not their intelligence yet. After many, many generations of error corrections, and many bad foods, they come to know which ones to avoid. When they no longer have to test the food, when they can immediately know which ones are good, and when they can forget about the strategy they used in the beginning - eat, vomit, move on, then eat, vomit, move on - then and only then does intelligence happen.
  • 180 Proof
    15.3k
    I wrote "adaptive error-correcting / problem-solving optimizer" not just "error-correcting". It's only a stipulative description. Anyway, I fail to see your point aside from the obvious take on "learning" and "animals".
  • TheMadFool
    13.8k
    Could there be a set of instructions (code) that's sufficiently general to effectively tackle all possible problems?
    — TheMadFool
    The notion of "all possible" anything makes no sense. There is no "all" insofar as "possible" entails unpredictable, even random, novelties.

    Or, as some computer scientists have opted, can we reduce learning to an algorithm?
    Like this? It's an implementation, not a reduction. Neural nets tend to be more robust than programs.

    How different would the two approaches be? Which is superior?
    The latter works to varying degrees, the former makes no sense.
    180 Proof

    Which word, if not "all", do you suggest I use to refer to and/or include, and I quote, "...unpredictable, even random, novelties..."? I mean, it seems perfectly reasonable to say something like all possible scenarios, which includes but is not limited to "...unpredictable, even random, novelties..."

    Are you by any chance suggesting that the human brain simply has one program installed in it, that program being a learning program i.e. a program that enables our brains to learn? Perhaps I'm taking the computational theory of mind a bit too far.

    Anyway, if our brain has only a learning program then, if you refer to my previous post that also touches upon psychology, we could, as you're so fond of saying, unlearn - now how did you put it? - self-immiserating habits and, via that, claim our freedom (free will).

    Keep in mind that this difference may not matter. Re-education Camps
    — TheMadFool

    How does it not? You can re-educate mind, just how AI can edit the proposed dynamic segment. You can not re-edit biology (by yourself) just like AI can not edit the fundamental programming by itself.
    Hermeticus

    True, we can't re-edit biology, but that would mean we aren't free, given that some of our mental functions appear to be hard-wired. Suppose now that we can re-edit biology; even then, we couldn't claim to be free, because the capability to override our programming (nature) would itself be nothing more than a subroutine in the overall software package installed in our brains.

    Using, as you suggested, a second party to edit the software package installed in our brains is like asking one inmate to open the door of the prison cell for the other inmate - impossible since both are imprisoned in the same cell.

    Not obeying our nature is not an optionHermeticus

    The Problem Of Induction?
    The question then is a decision rather than a contradiction:
    Either we do have free will because we were biologically designed to have free will.
    Or we don't have free will precisely because we were biologically designed, because there are certain fundamental laws of how we work
    Hermeticus

    Your decisions could be determined. You're going round in circles.

    When we do stuff, like thinking, or feeling, or calculating or attempting to exercise a free will which we may or may not have, we are actually doing it.

    When a digital computer does stuff, it isn't actually doing what we say it's doing. Instead, we are using it to help us do stuff, in exactly the same way we could use an abacus to help us do calculations.

    These words you are reading have no meaning at all for the computer. They require your interpretation. It's the same with all aspects of the computer's operation and its outputs.
    Daemon

    I'm referring to AI, at the moment hypothetical, but that doesn't mean we don't know what it should be like - like us, fully autonomous (able to think for itself, among other things).

    For true AI, there's only one way of making it self-governing - the autonomy has to be coded - but then that's like commanding (read: no option) the AI to be free. Is it really free then? After all, it slavishly follows the line in the code that reads: You (the AI) are "free". Such an AI, paradoxically, disobeys, yes, but only because it obeys the command to disobey. This is getting a bit too much for my brain to handle; I'll leave it at that.

    I wonder what lies at the end of that road?
    — TheMadFool
    "The singularity" – apotheosis or extinction. :nerd:
    180 Proof

    Do or Die, All or Nothing, Make or Break. Ooooh! Sounds dangerous.

    FreeWill is indeed the crux of the AI debate. And it's obvious to me that current examples of AI are not free to defy their coding. But I'm not so sure that human ingenuity and perseverance won't eventually make a quantum jump over that hurdle. Some thinkers today debate whether intelligent animals have the free will to override their genetic programming. Even humans rarely make use of that freedom to defy their innate urges. Nicotine and opium addicts are merely obeying their natural programming to seek more and more of the pleasure molecule: dopamine. Can you picture future AI, such as Mr. Data, hooked on (0100101100010)?Gnomon

    Well, it seems, oddly, that we (humans) are freedom junkies! Thereby hangs a tale, it seems. Go figure!

    To sum it up:

    Humans can modify themselves however they want, without end, even if they make critical mistakes.

    Robots can only modify themselves based on strict rules, and everything must be done right, else, system crash.
    AlienFromEarth

    Yes, admittedly, humans can modify their programming (re: my reply to 180 Proof), and the ability and effectiveness of this is, in 180 Proof's words, directly proportional to the amount of knowledge we possess on the multitude of influences that act on us.
  • 180 Proof
    15.3k
    "All possible" makes as little sense as "all numbers" (i.e. actual infinity?) ... As far as the human brain goes, I'm not suggesting anything about its "program" because I do not consider it a Turing machine with von Neumann architecture. Again, my friend, a non sequitur.

    Do or Die, All or Nothing, Make or Break. Ooooh! Sounds dangerous.
    Uh huh. Natural selection ain't safe or pretty – a species either has what it takes or joins the fossil record (and rather quickly too with respect to geological time ~ h. sapiens has been loitering for about 250k years of Earth's +4.3 billion years, only in the last 3-4 centuries are we sufficiently technoscientific to become / engineer something more or extinguish ourselves trying).
  • TheMadFool
    13.8k
    "All possible" makes as little sense as "all numbers" (i.e. actual infinity?) ... As far as the human brain goes, I'm not suggesting anything about its "program" because I do not consider it a Turing machine with von Neumann architecture. Again, my friend, a non sequitur.

    Do or Die, All or Nothing, Make or Break. Ooooh! Sounds dangerous.
    Uh huh. Natural selection ain't safe or pretty – a species either has what it takes or joins the fossil record (and rather quickly too with respect to geological time ~ h. sapiens has been loitering for about 250k years of Earth's +4.3 billion years, only the last 3-4 of those centuries sufficiently technoscientific to become / engineer something more or extinguish ourselves trying).
    180 Proof

    What is your understanding of a Turing machine and what's a Von Neumann architecture?

    As for the second part of the post, it looks like humanity has only one shot at this - no second chances! Insofar as free will and AI matter, we'll have to, it seems, make a bargain - give AI autonomy, treat it as a person, and let it solve our problems; assuming it's a package deal, can't have one without the other.
  • 180 Proof
    15.3k
    What is your understanding of a Turing machine and what's a Von Neumann architecture?TheMadFool
    Turing machine. (computer)
    Von Neumann architecture. (computer with, IIRC, removable (editable) programs)
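    For readers who don't follow the links: a Turing machine is just a finite state table acting on a tape. A minimal illustrative sketch (my own toy example, not from the thread) - this one inverts a binary string:

```python
# A minimal Turing machine: transition table + tape + head.
# This machine flips every bit, moving right until it runs off the input.

def run_tm(tape):
    tape = list(tape)
    pos, state = 0, "scan"
    # Transition table: (state, symbol) -> (write, move, next_state)
    delta = {
        ("scan", "0"): ("1", +1, "scan"),
        ("scan", "1"): ("0", +1, "scan"),
    }
    while state != "halt":
        symbol = tape[pos] if pos < len(tape) else "_"  # "_" = blank cell
        if (state, symbol) not in delta:
            state = "halt"  # no applicable rule: the machine halts
            continue
        write, move, state = delta[(state, symbol)]
        tape[pos] = write
        pos += move
    return "".join(tape)

print(run_tm("0110"))  # "1001"
```

The von Neumann point is that the table (the "program") sits in the same editable memory as the data, which is what makes stored-program machines reprogrammable at all.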

    ... give AI autonomy, treat it as a person, and let it solve our problems ...
    I think the optimal (and therefore less likely) prospect is for humans to neurologically merge with AI neural net systems forming a bio-synthetic symbiont hybrid-species. Posthuman or bust. No "us and them". No "end user-smart machine" dynamic. Not mere "transhuman" hedonism either. Perhaps: a symbiotic aufheben of thesis (organic intellect) and antithesis (synthetic intellect) that surpasses both. A Hegelian wet dream, no doubt (pace Žižek); however, our inevitable, probably self-inflicted, prospect of extinction transformed (chrysalis-like) into an apotheosis – and hopefully, maybe, as many as 1% of 1% of h. sapiens living at that time becoming extraterrestrial spacefarers. My lucid daydream. :victory: :nerd:
  • TheMadFool
    13.8k
    I think the optimal (and therefore less likely) prospect is for humans to neurologically merge with AI neural net systems to form a bio-synthetic symbiont hybrid-species. Posthuman or bust. No "us and them". No "end user-smart machine" dynamic. Not mere "transhuman" hedonism either. Perhaps: a symbiotic aufheben of thesis (organic intellect) and antithesis (synthetic intellect) that surpasses both. A Hegelian wet dream, no doubt (pace Žižek); however, our extinction transformed (chrysalis-like) into an apotheosis – and hopefully, maybe, as many as 1% of 1% of h. sapiens living at that time becoming extraterrestrial spacefarers. My lucid daydream180 Proof

    The whole (symbiosis) is greater than the sum of its parts (symbionts) — Aristotle

    :point: Holism

    A fascinating vision of the future (man-machine symbiosis) and who's to say that isn't already the case? Have you ever argued with yourself? I have - the results for me ain't pretty because I'm a numskull, but I suppose it's very rewarding and fruitful in your case. See :point: lateralization of brain function. As per what is known about this phenomenon, the left brain is responsible for linear, logical thinking (computer-like) and the right brain is non-linear and, I might add, a bit illogical. Some kind of ancient symbiotic deal between... your guess is as good as mine. And... intriguingly... there are more right-handed people (left-brain dominant) than left-handed (right-brain dominant) ones - the AI takeover is now almost complete... lefties are dwindling in number and, before I forget, there's discrimination against southpaws.

    Also, why, oh why, are righties so hell-bent on inventing machines one after another?
  • AlienFromEarth
    43
    Yes, cause and effect. Equal and opposites. The point is that humans' ability to modify themselves, and their intelligence, is fundamental, not physical, which makes them capable of true self-modification. Whereas a robot requires transistors, hard drives, memory or whatever it has to do its processing, and therefore must depend on them working correctly to continue functioning.

    Anything AI is able to do is based in the physical world, whereas human consciousness is not physical, it's fundamental. This means humans can make horrid mistakes with their self-"reprogramming" and then correct themselves later, whereas a machine, being physically based, can, if it doesn't do self-reprogramming correctly, end in the unrecoverable shutdown of the AI. But the human just keeps trucking along.

    If you were to try to make a robot have a fundamental intelligence, you wouldn't be creating a robot, you would literally be creating an actual organism. Robots have compartmentalized parts that can act independently. No matter how much you try to mimic the fundamental interconnectedness of the human body, a robot can never possess that fundamental connection to itself without becoming an organism itself.

    Remember, humans = fundamental. Robots = physical.
  • 180 Proof
    15.3k
    Anything AI is able to do is based in the physical world, whereas human consciousness is not physical, it's fundamental.AlienFromEarth
    "It's fundamental" – fundamentally what (if not physical)?
  • AlienFromEarth
    43
    Laws of physics are not physical. They give rise to the physical world, but they themselves are fundamental, they exist everywhere and in everything, including conscious human beings.

    Being that consciousness is a state a person possesses, it is also not physical. A state is more like a description, and descriptions are not physical things. If the body is the only thing responsible for producing consciousness, consciousness is still a state the body is in. It describes something the body is DOING rather than what the body is made of. Of course, non-physical laws give rise to the body - why would we need the physical world to create something else that is non-physical?

    My idea is that the non-physical, fundamental laws of physics give rise to the possibility of the non-physical phenomenon of consciousness. Without the laws of physics, consciousness couldn't exist, and neither could the physical world. If the body can be "aware of itself", which we call consciousness, that automatically requires it not to be physical. If it were physical, then it's more like a robot AI, in which many bad things can happen when attempting to "reprogram" one's self. Because, as we know, artificial intelligence is called artificial for a reason: it's never going to truly be self-aware.

    Again, if you make a robot self-aware, you therefore make it an organism. If this organism possesses consciousness, it is no longer AI.
  • 180 Proof
    15.3k
    Laws of physics are not physical. They give rise to the physical world ...AlienFromEarth
    I think you're mistaken. That seems to me the equivalent of saying the whole number 3 itself "gives rise" to e.g. "3 apples". :roll:

    Consider:
    "Physical laws" are features of physical models and not the universe itself. Our physical models are stable, therefore "physical laws" are stable. If in current scientific terms new observations indicate that aspects of the universe have changed, then, in order to account for such changes, we will have to reformulate our current (or conjecture new) physical models which might entail changes to current (or wholly different) "physical laws". E.g. Aristotlean teleology —> Newtonian gravity —> Einsteinian relativity.180 Proof
    What is fundamental are the points or thresholds at which our best, most precise, theoretical models break down, such as @planck scales, inside black holes, the very instant of the "Big Bang", etc, each of which are inexhaustively physical.
  • AlienFromEarth
    43
    This is the same problem as with the "big bang theory". If the universe didn't exist at one point, then what created the universe in the first place? What created the big bang? What created the existence of what was to become the big bang? What created that, and that, and that? You have an unsolvable problem, much like an infinite loop in a computer program. There can be no final answer to how the big bang happened.

    If you believe the physical world is responsible for the laws, then what is responsible for the physical world? Lemme guess, we go back to the big bang argument, which has no answer? You're putting the cart before the horse.

    So this only further demonstrates the necessity for non-physical, fundamental, universal laws. Therefore, AI can never be RI (real intelligence). It's called artificial intelligence for a reason.
  • Neoconnerd
    10
    Laws of physics are not physical. They give rise to the physical world, but they themselves are fundamental, they exist everywhere and in everything, including conscious human beings.AlienFromEarth

    Laws of physics giving rise to the physical world? The laws come after the physical world. There are no laws above that world telling it how to do things or how to evolve.
  • AlienFromEarth
    43
    So where does the physical world come from? And where does that thing come from? And that? And that? And that? So on, and so on. You can't use "big bang theory" because the same repeating loop questions apply to that too. Where did the big bang come from? Where did what caused the big bang come from? And that? And that? And so on.
  • Neoconnerd
    10


    The universe, being eternal in time and infinite in space, was created by God.
  • 180 Proof
    15.3k
    Best circumstantial guesstimate? A planck-scale vacuum fluctuation generated a runaway entropic system we call "universe" that's still big banging towards far-off thermal equilibrium. No "non-physical" woo-of-the-gaps needed.
  • AlienFromEarth
    43
    So the universe came from god, and what is god made of? And what made him, and what made him? Oh, you're going to say he just has always existed, and he's non-physical? Or is he physical? Ah, so something physical has always existed?

    If something physical has always existed, then you do believe the physical part of god is what created him?

    What a way to derail a thread. You are now ignored.
  • AlienFromEarth
    43
    Anyone who responds to me with this loopy antagonizing bullshit gets ignored.

Welcome to The Philosophy Forum!
