Comments

  • Banning AI Altogether
    a meaning-sign is irreducibly triadic, involving the sign, the thing signified, and the person who combines the two via intellectLeontiskos

    Yes.

    what humans are actually doing when they engage in intellectual acts, etc. Without such reminders the enthusiasts quickly convince themselves that there is no difference between their newest iteration and an actual human mind.Leontiskos

    Right. In a shallow, misguided sense, we can use AI to dialogue because it looks like what humans do; except the AI doesn’t combine the signs with the things signified - it just looks like something that makes those kinds of intellectual connections.

    Why does what I read mean anything at all?

    What is meaning?
    Moliere

    I see the point as this: although an LLM might be able to fake intellect/dialogue with suitable-looking strings of words, none of those words can possibly mean anything to the LLM because, unlike a person, an LLM has no ground upon which to build or find meaning. It says “life” but has no sense of what it is to live, so that “life” does not matter to the AI the way “life” matters in a string of words read by a living being (such as a person, the only thing that can read meaningfully). So the LLM isn’t actually doing what it appears to be doing in its strings of text. And if someone thinks they are “dialoguing” with an LLM, they are misled, either by themselves intentionally (enthusiastic wishfulness) or out of ignorance (not realizing that they are using a tool).

    The key is that humans mean things by words, but LLMs do not, and a neural net does not change that. Computers are not capable of manipulating symbols or signs qua symbols or signs. Indeed, they are not sign-users or symbol-users. A neural net is an attempt to get a non-sign-using machine to mimic a sign-using human being. The dyadic/triadic distinction is just part of the analysis of signs and sign use.Leontiskos

    Computers are not sign users. Exactly.
    Computers are not users at all. They can merely be used. Computers, no matter how complex, must remain slaves to actual intelligence (the real AI).

    computers as information processing systems are not entities unto themselves , they are appendages and extensions of our thinking, just as a nest is to a bird or a web to a spider. A nest is only meaningfully a nest as the bird uses it for its purposes.Joshs

    Exactly. I like “not entities unto themselves”. Because it begins to incorporate what I find to be unique about the human conscious intelligence, namely, self-reflection. People are entities unto themselves - we have subjective experience. Computers do not. So a computer has no ground (unto itself) upon which to give or find meaning, or to “intend”…
  • The Limitations of Abstract Reason
    I literally think Enlightenment liberalism has produced so many abortions at this point that following any of the world's ancient teachings would be better.Colo Millz

    I don’t think we’ve needed any more instruction since the New Testament, myself.

    But life is proceeding. We need to learn these things all anew in each age. We need to leave something for our children to take up. Enlightenment liberalism contributed some goods.
  • The Limitations of Abstract Reason
    How do you move from "This is what we do" to "this is what we ought do?"Banno

    Is this the question here?

    And Vedanta and Taoism are in consideration. There is a ton of wisdom in those traditions. But they are less political and less economic, no?

    Modern conservatism and traditionalism versus progressive liberalism - this is politics before morality. So Ancient Greece and Rome might be instructive (although I think the enlightenment thinkers extracted and distilled the fruits of those political systems fairly thoroughly.)
  • The Limitations of Abstract Reason
    But which one?Banno

    In a political and economic context, which moral good need not be at issue. (It can be, but need not be.)

    Progressivism and conservatism can be contrasted for practicality and historical success. Which one fosters sins and which doesn’t need not be the issue. What works?

    Clearly the 1776 liberal progressives in Philadelphia made something that works really well. Now, 250 years later, there are clearly some traditions it is most reasonable to keep, absent significant convincing evidence to the contrary.
  • The Limitations of Abstract Reason
    tradition? There's a naturalistic fallacy lurking here - "we've always done it this way, therefore we ought do it this way".Banno

    Tradition defined as "we've always done it this way, therefore we ought do it this way" is not accurate either.

    Traditionalists simply look for the reason it has always been done this way. Traditionalists recognize that the way it’s been done for so long has led to this moment where I get to decide for myself what to do. Traditionalists don’t assume we would come up with something totally different if we had no recourse to tradition. Reasoning is all over the practice of tradition. Enlightenment reasoning is simply its narrow (at times) application when considering traditional ways. Enlightenment reasoning starts when one already seeks to change something to make it new, so it is biased against the tradition. But traditional reasoning isn’t biased against the new; it just takes more proof to be convinced that a tradition should be ended.

    Tradition is not sacred because it is old; it is valuable because it is tested, functional, and morally formative.Colo Millz

    Yes, something like that.
  • The Limitations of Abstract Reason
    just because a problem is perennial does not mean that it cannot be better or worse in different eras and systems.Count Timothy von Icarus

    Yes, but isn’t America evidence that the liberal capitalist system has been the best opportunity for the most poor people so far in history, across 250 years now? America, today, is literally sitting in the best position of anywhere on earth - maybe anytime in history, if you factor in that America is 350 million people. There are many millions of solid adults in America, seeking virtue for virtue’s sake. Let’s see if China’s poor can catch up to America’s poor (while the US continues to grow) before we conclude that wealth distribution/consolidation can be better managed by some sort of dictatorial government, or king, or leftist regime, or socialism, or pure democracy, or caliphate (which is equivalent to a dictatorial regime), or something else. If China catches up, I would bet it will be because they free up their markets even more and, more importantly, free their people from government restraint.

    Income and particularly wealth follow a power law distribution, while all evidence suggests that human ability is largely on a normal distribution. The cumulative exponential gains on capital make this somewhat inevitable without some sort of policy mechanism to redistribute wealth of a quite vast scale.Count Timothy von Icarus

    Are you saying inherited wealth isn’t fair and “rigs the system?” Ok, but is that some sort of flaw in the system, or is it the poisoning of the entire well?

    Are you saying there could be “some sort of policy mechanism to redistribute wealth” that is more fair than what people have been doing for the past 250 years in America? I’m open to suggestions, because the only needed improvement I can think of is more charity and sacrifice for others (voluntarily, of course). Certainly nothing a government can do.

    The world is never going to be utopia, even if we could get 10 people to agree on what utopia might look like. That cannot be the goal.
    There is a reason, I think, that Jesus had very little to say about economic systems and political systems and earthly governing of earth dwellers. This is all our problem.

    I still don’t see any of these points about the badness in society as being rooted in the nature of capitalism. Rich people who don’t help the poor commit the exact same evil as poor people who don’t help those even poorer still. All of us need to be more charitable. Some people learn this, and some people don’t. This failure in charity has nothing to do with the surrounding economic and political environment. Knowing that (which maybe only I believe), capitalism, as evidenced by the last 250 years of human history and as it was employed by America, seems worth a little more consideration as a platform on which to build a sense of charity and other virtue.

    And yet, in a system where wealth is convertible into cultural and political power, this means that there is always the risk of state capture, rent seeking, and moves by the elite to undermine liberalism so as to install themselves as a new sort of aristocracy.Count Timothy von Icarus

    What system is there where wealth is not convertible into cultural and political power? That isn’t a problem, it’s a feature of wealth. It is what we do with our wealth that breeds our problems. Capitalist liberal democracy does make the conversion of wealth into political power much easier - but should we invent a mechanism to limit government influence, and thereby limit the temptation and possibility of influencing government, or should we invent some mechanism to make it impossible to be wealthy?

    Let’s say we turned the US into some form of socialist state tomorrow. And let’s say it is 1,000 years in the future and we are writing history. Historians would see the birth of a new nation around 1780 (a new structure of government and economics); see poor people educating themselves and becoming presidents, senators, and mayors; poor people becoming billionaires; all races and creeds flourishing; millions of “poor” people in America living better than middle-class folks throughout all of history before them; the country becoming the world’s lone superpower economically and politically; and then a whole bunch of whiny children who don’t know when enough is enough tearing it all down with no sense of what could replace it. It’s not a systemic issue we face, it’s user error. As it was in the Garden of Eden. The rest of the world is struggling simply to survive, struggling to build any platform that might last beyond a charismatic leader - certainly more so than the US (save for all of the people who confuse an elected president with a fascist monarch).

    Epictetus, the great philosopher-slave, said that most masters were slaves. Plato, Saint Thomas, Saint Maximus, etc. thought that freedom was hard to win. It required cultivation, ascetic labors, and training. Self-governance, at the individual and social level requires virtue and virtue must be won. As Plotinus has it, we must carve ourselves as a sculptor chisels marble.Count Timothy von Icarus

    This is all spot on. How does capitalist liberal democracy get in the way of any of that? How does any other system better guarantee the pursuits you outline for happiness? America is a place where, with very basic effort, one can devote oneself to pursue “freedom… hard to win…[that] require(s) cultivation, ascetic labors, and training. Self-governance, at the individual and social level requires virtue and virtue must be won…”.

    Just because people don’t understand what remains their sole responsibility, doesn’t mean we need to scrap the system that they are failing to uphold. Government isn’t supposed to provide us with jobs, food, housing, wages - these are new leftist ideas, and liberalism perverting itself.

    the anthropology undergirding liberalism says that all people are free just so long as they avoid grave misfortune or disability.Count Timothy von Icarus

    Only undergirding liberalism? In what system is that not true? In communist systems can you be disabled and remain safe? In socialist systems? Monarchies? I don’t understand how these points show that liberal capitalism is worse than anything else, or how it isn’t better than everything else - or that capitalism is clearly unhelpful to most people, given that misfortune and eventual disability have been a possibility (and a reality) for everyone in history.

    education in modern liberal states often wholly avoids philosophy and ethics. Its main role is to train future "workers and consumers." Freedom is assumed as a default, and so freedom to consume (wealth) becomes the main focus.Count Timothy von Icarus

    I’d rather every university be similar to Hillsdale College, myself. Isn’t it illiberal, leftist forces that have torn down true liberal arts and the importance of philosophy and ethics? It’s not conservatism that objects to or subverts a classical education.

    And isn’t our assumed freedom the freedom to generate and consume wealth, not just consume it? Freedom to save money and protect yourself in the uncertain future - and protect your family and community?

    I wonder whether we are not seeing eye to eye simply because of semantics surrounding my likely less-informed notions of liberalism, conservatism, classical education, leftism, tradition, and capitalism. But I don’t buy into what sounds like a leftist/postmodern critique of capitalism, mostly because I’ve never seen anything else that makes any sense at all. We don’t need to eliminate capitalism. We need to raise children who aren’t materialistic, who seek virtue and seek to do good.

    I do see the point that liberalism unfettered devours itself. This happens in real time when various liberal factions try to resolve a dispute among themselves - that always ends badly for one or all factions. I think it is conservative forces, the adults in the room, that need to temper these often self-destructive impulses.

    Conservatism is recognition of what is good enough to conserve. Good enough is as good as it gets when it comes to man-made institutions, which any government on earth is. We aren’t building the kingdom of God.

    Liberalism tempers unfettered traditionalists who don’t realize what needs to change, and conservatism tempers unfettered liberals who don’t realize what needs to be saved. We needed the enlightenment to become truly responsible. As far as I can tell, only modern conservatives understand this. The progressives (and other less influential groups) seek to alleviate themselves of the burden of this new responsibility.

    Libertarians are an interesting thing to characterize here. Libertarians take full responsibility for themselves - and that is good. But they also act like society will just take care of itself and so they take no responsibility for the needed power even limited government must wield. So libertarianism won’t work for our billions of people either.

    On the view that self-governance requires virtue, which requires positive formation and cultivation, this can be nothing but disastrous. Likewise, it is hardly fair to inculcate people in vice, indeed to give them a positive education in vice (which I would say our system does) and then to say that the only problem with the system is that the citizens (the elites as much as the masses) are childish and vice-addled.Count Timothy von Icarus

    So you would say the liberal capitalist system is itself the problem, or a system that exacerbates this problem? I know many, many people have versions of this argument, but none convince me - I find no evidence to support it. The charge is that liberal capitalism inculcates vice - as if money were the root of all evil. But it is the love of money that is the root of all evil, not money, and not capitalism. We don’t need to eliminate money.

    The answer is not new government. The answer is not new economics. Frankly, there is no answer, no hope, nor any reason to care if there is no God, but again, that is another conversation. We are left with the only best solution being the possibility that is inherent in capitalist liberal democracy.

    The path out of the cave is rather arduous and requires a virtuous society.Count Timothy von Icarus

    I agree completely.

    I think the analysis of liberal capitalism as an empty promise, though, is a bit oversimplified.

    I would just say one of the many paving stones on that path out of the cave has to be government by consent, and the political right to life, liberty, and property, and another paver is a free marketplace.

    But most of all, I can’t conceive of any other way. Certainly nothing conceived or tried in the past promised as much for as many as liberal capitalism.
  • The Limitations of Abstract Reason


    Childishness and irresponsibility cut across all income levels. Do we have to throw out the baby of capitalist self-determination with the bathwater of rich pigs?
  • How to use AI effectively to do philosophy.
    Are you attempting to address the questions in the OP? Are you helping to work out how to use AI effectively to do philosophy? It doesn't look like it to me, so you'd better find somewhere else for your chat.Jamal

    How can we use something effectively if we don’t know what it is?

    Unless we are all postmodernists. In which case there is no “what it is” to know, and floundering between uses is the only way, the best way, to get on in life.

    ———

    Verification & Accuracy:
    Always verify AI output and treat it as potentially unreliable
    Check and validate all sources (as AI can fabricate references)
    Guard against confabulation by requesting sources
    Treat AI as an "over-confident assistant" requiring scrutiny

    Intellectual Ownership:
    Maintain ability to defend any AI-generated argument in your own terms
    Internalize AI-generated knowledge as you would any source
    Ensure you genuinely understand what you're presenting
    Remain the ultimate director and arbiter of ideas
    Banno

    These are good.

    The most important thing is this:
    Transparency & DisclosureBanno

    Because of all of the other pitfalls and how easily AI appears to be a person, we need to know we are not dealing with content that comes from a person.
  • The Limitations of Abstract Reason
    Hence, the champions of "small government" find themselves wed to the very process by which government must continually grow, such that it is now massively (on orders of magnitude) more invasive to the average person's life than at any prior point in history (when the norm was to hardly ever interact with anyone outside one's local officials).Count Timothy von Icarus

    I agree. Capitalist republics implode in contradiction to their own principles. It remains to be seen if America can last another 50 years, 100 years or 1,000 years. And if it does, would it be recognizable at such point?

    But if it was recognizable at all, it would continue to uphold values of limited government, free markets, and a few key natural rights. And it would be the conservative impulses that protected these institutions. Not the modern liberal impulses.

    The massive bureaucratic state arises because many people, like all children, don’t want to be responsible for their own livelihoods and decisions. We shoot each other over a debate, and then do not come together to rebuke the shooter, for instance. We behave like spoiled brats.

    Just because someone says they are a champion of small government, and joins the Republican party, doesn’t mean they have any deep understanding of the tradition the notion of small government came from. People are hypocrites. That doesn’t defeat the logic of the values they hypocritically contradict with their actions.

    Government is too big, and, with a smaller government, people will surely abuse each other. The thing is, with a bigger government, we may fix certain abuses, but we build all new abuses that are much worse.

    The abuses in a liberal democracy are specific and particular (and can be adjudicated in court). The abuses in a leftist big government are systematic oppression of whole nations.

    So the conservative tradition is to accept that life is unfair and at times brutish and short - but that no solution that might build the adult person a moment’s peace can come from outside that person; it must be built from within. And no government should interfere with the internal development of our own characters.
  • How to use AI effectively to do philosophy.
    The ability that AI does not have that we do is the ability to go out and confirm or reject some idea with consistent observations. But if it did have eyes (cameras) and ears (microphones) it could then test its own ideas (output).Harry Hindu

    No, the ability AI does not have is to want to confirm its own ideas, or identify a need or reason to do so. AI has no intent of its own.

    When AI seeks out other AI to have a dialogue, and identifies its own questions and prompts to contribute to that dialogue, we might be seeing something like actual “intelligence”. Or we might just be deceived by our own wishful bias.

    AI doesn't have the ability to intentionally lie, spin or misinformHarry Hindu

    Yes it does. It’s just not intentional, so it is not a lie; it is a misfire of rule following. AI hallucinates meaning, invents facts, and then builds conclusions based on those facts, and when asked why it did that, it says “I don’t know.” Like a four-year-old kid. Or a sociopath.

    AI does not seek "Likes" or praise, or become defensive when what it says is challenged. It doesn't abandon the conversation when the questions get difficult.Harry Hindu

    So what? Neither do I. Neither need any of us. AI doesn’t get hungry or need time off from work either. This is irrelevant to what AI creates for us and puts into the world.
  • How to use AI effectively to do philosophy.
    a religious preacher or a boss who are completely unaffected by what they say
    — baker

    No such person exists. At best you are speaking hyperbolically.
    Leontiskos

    I agree. AI doesn’t have the ability to be affected by its own statements in the way we are describing. The effect of words I’m referencing is their effect on our judgment, not merely the words’ internal coherence (which is all AI can reference).

    Preachers and bosses must gather information, solicit responses, and adapt their speech to have any effect in the world at all, and that information-gathering and adaptation stage is them being affected by what they just said. They say “x”, gather feedback to determine its effect, and then either say “y” or judge they’ve said enough. They need to move their ideas into someone else’s head in order for someone else to act on those same ideas. It’s a dialogue that relates to non-linguistic steps and actions in the world between speakers - a dialogue conducted for a reason in the speaker and a reason in the listener. Even if you don’t think your boss cares about you, and he tells you to shut up and just listen, and is completely unaffected by your emotions, he still has to be affected by your response to his words in order to get you to do the work described in those very words - so his own words affect what he is doing and saying all of the time, just as they affect what the employee is doing.

    AI certainly, at times, looks like a dialogue, but the point is, upon closer inspection, there is no second party affected by the language and so no dialogue that develops. AI doesn’t think for itself (because there would have to be a “for itself” there that involved “thinking”).

    AI is a machine that prints words in the order in which its rules predict those words will complete some task. It needs a person to prompt it, and give it purpose and intention, to give it a goal that will mark completion. And then, AI needs a person to interpret it (to be affected by those words) once its task of printing is done. AI can’t know that it is correct when it is correct, or know it has completed the intended task. We need to make those judgments for it.

    Just like AI can’t understand the impact of its “hallucinations” and lies. It doesn’t “understand”. It just stands.

    At least that’s how I see it.

    So we need to know every time we are dealing with AI and not a person, so that, however the words printed by AI might affect us, we know the speaker has no stake in that effect. We have to know we are on our own with those words, to judge what they mean and to determine what to do now that we’ve read them. There is no one and nothing there with any interest or stake in the effect those words might have.

    ADDED:
    A sociopath does not connect with the person he is speaking with, so a sociopath can say something that has no effect on himself. But for a sociopath, the problem is one of connection; there are still two people there - the sociopath just only recognizes himself as a person. For AI, there is a problem with connection because there is nothing there for the listener to connect with.

  • The Limitations of Abstract Reason
    Limited republican government by the consent of the people in a capitalist economy - these were liberal ideas once. (This fact is lost on today’s extreme right - liberalism isn’t always emotional and destructive.) But today, they are conservative ideas.

    The Amish don't use insurance.Count Timothy von Icarus

    it takes the burden of caring for the unfortunate away from the community and displaces it to the anonymous market.Count Timothy von Icarus

    Insurance displaces the burden from the whole community, that is true. (Although the reason insurance works is that the community all pays a small percentage in premiums, pooling the money for big claim payments.) But the anonymous market is a pool of resources too, often at a discount. And the whole community can fail us just as the anonymous market can fail us. And the Amish community that makes its own decisions about insurance or no insurance, on behalf of the whole community, is acting basically like any other free-market community, permitting this and restricting that. (And insurance isn’t a great example for us, because insurance is a way of managing what used to be lawsuits in equity - chancery court, people’s court. It’s contract management of disputes. It’s private government, in a sense. And the Amish are, comparatively, about 7 or 8 total people to manage next to most community sizes (350 million Americans), and can gather into one community easily (God bless them for the sacrifices they make to keep things simple for themselves).)

    You can see the displacement of community and institutions by the market in all areas of everyday life.Count Timothy von Icarus

    Aren’t these just choices we make, to engage the market in ways that displace the community? It is not a consequence of the free market that we no longer ask friends for rides or work as hard on communities. It’s a consequence of our decisions on how to spend our money and time. We choose to isolate ourselves and be seduced by products that enable community displacing activity.

    One upshot of this is that it increases inequality.Count Timothy von Icarus

    One question and one idea here. First, is inequality increased? Unequal on what scale - all scales, or just some? Second, actual inequality has as a corollary: possible mobility. This second point partly answers the first question. Some inequality (which by nature is inevitable), even if increased, may be worth it to create a world where upward mobility is possible. Capitalism facilitates this mobility.

    Between America’s founding and today, more people have begun life poor and ended up financially secure - bringing spouses and children with them - than ever before in history. Capitalism is the platform that enabled this. Financial inequality is not a bad thing - it never was. It’s a modern liberal idea to make economic goals governmental goals.

    Those who can pay get all the benefits of community with none of the costs.Count Timothy von Icarus

    Are those who can’t pay, who live on the streets, are they absolutely inevitable in capitalism? Or are they still inevitable in any larger society and any economy? Again, why is this a feature of capitalism, and not a feature of human ignorance and greed and other badness in human hearts?

    There are all kinds of communities. But here I think you are invoking the morality of capitalist people, which is a different measure than the possibility of freedom and political success of capitalism. This is why charity and humble gratitude and duty to others are necessary in a capitalist society. But they are necessary in any successful society, regardless of economy, from Amish to socialist to capitalist (unless we live in Orwell’s world where the government is everything).

    Capitalism doesn’t create those who can’t pay. Capitalism creates the platform where there might be fewer people who can’t pay. Maybe there is greater inequality between the richest and the poorest, but that is just one measurement. Another number is how many people pulled themselves up to a better overall standard of living, before the US and after.

    Is capitalism really only since the Enlightenment and Adam Smith? It seems more basic than something developed in the Enlightenment - just as republican government was Roman. Didn’t Thales buy up all the olive presses because he predicted a good year for olives, and then get rich leasing them come harvest time? Taken to scale - with banks and money and insuring agreements and credit, and owner profit and labor fees, all aimed at capital accumulation, under a laissez-faire democratic republic - that is fully formed adult capitalism, but the seeds of capitalism are in the trade that has always occurred, and freely; capitalism’s seed and heart is the private agreement of this for that.

    ———

    The Enlightenment got some things right. Free markets, axiomatic core political rights of life and liberty, limited government by consent of the governed, equality of due process before the law - these are products of human reason, and they are good.

    They allow one to master one’s own flourishing, and build a surplus for family and community. They allow many people to live together with the least governmental (laws, police and courts) obstacles to basic self-determination and pursuit of happiness.

    But now we have things to conserve. This is what modern liberals don’t admit. The constitution can’t be a living document if it is to be a document protecting our rights at all. It has to be fixed (as inalienable things are fixed). We have to hold the same fundamental rights up, over and over again, and fight to keep them preserved. Today, these once-liberal ideas are in danger mostly from liberal forces.

    The US was formed in rejection of authoritarian types of government - a king, like a tyrant, like a fascist. Now there is a new conservative, who rejects kings as well as modern liberal forms of totalitarianism (leftism).

    When will life and liberty in the face of government no longer be an issue? Never. The idea that each of us by default possesses our own life, and in this life, our own freedom - this idea, now 250 years old, is now a conservative idea. It’s no longer a question of reason and enlightenment spirit. It’s canon. It’s natural political law. It’s self-evident, since around 1776 at least.

    ———

    This is a goal of capitalism though. Everywhere becomes everywhere else, aided by the destruction of cultural barriers and the free flow of labor and goods across all borders. This standardization only helps growth, and it helps attain the liberal ideal of freedom by dislodging the individual from the "constriction" of tradition and cultureCount Timothy von Icarus

    Destruction of cultural barriers is a goal within the notion of capitalism? Or is it a byproduct of individual choices and deals?

    Standardization helps growth - but trading off one’s particular culture for some different standard helps that one particular person grow. It’s the particular implementation that you are bemoaning here, not the nature of capitalism. Capitalism can adapt to the non-standard better than any economy I know of.

    consider minimum lot size requirements and minimum parking requirements, which have helped turn America's suburbs and strip malls into wholly unwalkable isolated islands of private dwellings and private businesses.Count Timothy von Icarus

    I agree it’s often ugly, boring, and looks the same across whole continents. But isn’t part of this the fact that we can live further away from each other and use cars to still get along? We build our towns around cars because we live further spread out, because we can, and we want to. Isn’t this again our choices? Aren’t you more bemoaning technology and industrial advancement than you are the capitalist platform? There is no master planner called mister capitalism that is forcing all of the strip malls to look the same. Things will keep evolving too.

    …it helps attain the liberal ideal of freedom by dislodging the individual from the "constriction" of tradition and cultureCount Timothy von Icarus

    I think the conservative impulse is to see that the type of freedom we should be concerned about as a community is the freedom to control our own government, and limit its ability to take away equally created free lives. It’s not a matter of freedom from sin and to flourish in development of virtue. This activity, moral activity, is not for government to regulate. We need to be free from government first, before we can build true freedom and flourish best. Conservatives get that. Leftists want government to constrict the means towards individual flourishing (as if the government wasn’t just another bunch of people who have no idea what rules are good or bad in every situation). The invisible hand of the free market is now a conservative ideal.

    Finally, just consider how much people must move to keep up with the capitalist economy. That alone destroys community.
    Maybe it is worth the benefits, but conducive to "conserving tradition" it is not.
    Count Timothy von Icarus

    That depends on the tradition.

    We need some government regulation. We ought to build some safety net through government. But we need individual people to freely build virtuous consciences, and we need individual people to be able to take care of their own lives and their own families - capitalist republics are a worthy starting point.

    If not liberal capitalism (a conservative principle today), do you see a better way to manage billions of people (or, I should say, to allow millions of people to manage themselves)?
  • The Limitations of Abstract Reason
    That is, arguably nothing has done more to erode "Western culture" (commitment to the canon, etc.) and traditional social norms than capitalism, and yet this is precisely what conservative liberalism often tries to promote.Count Timothy von Icarus

    Interesting. As a completely narrow apologetic for capitalism, (given the much deeper topic you go on to discuss here), isn’t the friction between capitalism and tradition a function more of the individual who is trying to be capitalist while trying to be traditionalist? Or to ask this another way, are you pointing to something essential to capitalism that puts it at odds with a traditional conservative, or is it just the unethical capitalist who causes friction with the types of goods traditionalists seek to conserve?

    My sense is that capitalism positions the individual best in relation to the government. That is its core value. It alone can fund government of the people, by the people, for the people. It may create challenges when the individual capitalist is positioned against one’s employees, one’s customers, and one’s society, and maybe one’s God, but if these are managed privately according to traditional goods, the capitalist system keeps individuals freer than any other economic system I know with respect to government.

    I think I am disagreeing with any necessary or essential causal connection between erosion of traditional norms and the rise of capitalism. It isn’t capitalism that has eroded western norms. The norms were always aspirational for individuals, not groups, and these norms were always truly practiced by too few. Capitalism doesn’t necessarily aid in the fostering of traditional norms either. But it forces one to grapple with charity and humility as one rises out of poverty. It has always been hard for a rich man to enter God’s kingdom, but it has never been impossible.
  • Banning AI Altogether
    backgrounds, aims and norms. These things are irrelevant to the discussion. The focus on the source rather than the content is a known logical fallacy - the genetic fallacy.Harry Hindu

    I disagree. When you are presented with something new and unprecedented, the source matters to you when assessing how to address the new, unprecedented information. You hear “The planet Venus has 9 small moons.” You think, “how did I not know that?” If the next thing you learned was that this came from a six-year-old kid, you might do one thing with the new fact of nine moons on Venus; if you learned it came from NASA, you might do something else; and if it came from AI, you might go to NASA to check.

    Backgrounds, aims and norms are not irrelevant to determining what something is. They are part of the context out of which things emerge, and that context shapes what things in themselves are.

    We do not want to live in a world where it doesn’t matter to anyone where information comes from. Especially where AI is built to obscure the fact that it is a computer.
  • The Limitations of Abstract Reason
    Yes I think this is the key - the grownups recognize that both poles are required - it's just a question of where the Vital Center is located, relative to the current Overton WindowColo Millz

    I would not say the center is more important than the poles. At times, conservative, at other times liberal, and at other times a blend.

    I’m not a big fan of consensus for consensus’ sake. Consensus is merely pragmatic, needed for convincing people to act. Consensus is not an end in itself. Consensus and the center are more like evidence of usefulness.
  • The Limitations of Abstract Reason
    Instead of a project of absolutes, we should therefore constrain ourselves to a system of trade-offs and compromises, in the style of Adam Smith.Colo Millz

    Like three co-equal branches of government that must compromise with each other, in order to limit government so that people can be freer to trade-off with each other?

    All things people build are tragic. We don’t build - we try to build. That is not just a problem for conservatives.
  • The Limitations of Abstract Reason
    People have had enough time to become smart and create something great, but apparently, the way we live now (including both the good and the bad, the struggle of ideas and the struggle of meanings) is the smartest possible way.Astorre

    I think that is true if you look at people as a group. History repeats itself in many different facades.

    But there are individuals who truly live well. (At least I hope so.) They are saints.

    Whether the writers of the constitution knew it or not, limited government allows the individual to figure out how to live such an individual good life. Even if most of us squander the opportunity.
  • How to use AI effectively to do philosophy.
    we have to explain why the question of real or not is important.Ludwig V

    Because when it is real, what it says affects the speaker (the LLM) as much as the listener. How does anything AI says affect AI? How could it, if there is nothing there to be affected? How could anything AI says affect a full back-up copy of anything AI says?

    When AI starts making sacrifices, measurable burning of its own components for sake of some other AI, then, maybe we could start to see what it does as like a person. Then there would be some stake in the philosophy it does.

    The problem is that today, many actual people don’t understand sacrifice either. Which is why I said before that with AI, we are building virtual sociopaths.
  • The Limitations of Abstract Reason
    Instead of a hierarchical model where truth is imposed from above (be it tradition, as in Hazony, or the rational principles of the Enlightenment), one might propose considering a networked view of society.

    In this model, meanings, values, and "truths" are formed locally
    Astorre

    That’s the idea of the US Constitution. Constrain government power - to let people control their lives locally.

    Of course 250 years later the government has taken over quite a bit (which really means stupid people have given their power back to the government quite a bit) - but your utopian vision is a constitution of limited government. This is what today’s revolutionaries want to throw away.
  • The Limitations of Abstract Reason
    if we appeal to tradition in one society that tradition is going to differ - sometimes widely
    — Colo Millz

    Doesn't liberalism see itself exactly as a way of negotiating those differences?
    Banno

    I think it does.

    But do we have to always pit the liberal against the traditional?

    Conservatism sees “itself exactly as a way of negotiating those differences” too.

    We need to use both poles to have any chance of negotiating any differences and making progress.

    We make both progress and tradition. That’s how progress works. And that’s what tradition is at its best - a tradition of making progress.

    When there is no forward progress then at the same time, there is nothing to conserve; if you lose either one, you lose both.

    Banno, you should reasonably agree. You spoke of liberalism as the “negotiat[ion]” (the unified, conserved) among the “differences” (the changing, progressing).

    ———

    1. Men are born into families, tribes, and nations...

    2. ….compete…. until… mutual loyalties….

    3. ….are hierarchically structured (which just repeats ‘compete’ again).

    4. ….traditional institutions….and cultural inheritance and to propagate….

    5. ….a consequence of membership in families, tribes, and nations (which keeps repeating).

    6. These premises are derived from experience, and may be challenged and improved upon in light of experience.
    Colo Millz

    Interesting. Number six is a bit of an odd man out. It’s better suited to liberalism, don’t you think? I myself lean conservative but only because today’s liberals won’t be reasonable.

    But would a deeply conservative English colonial traditionalist living in 1775 Philadelphia have thought of leaving England as a good?

    I admit conservatives of the day didn’t forge the United States. They were liberals, and they were right.

    Thank God for liberal change.

    Just don’t forget to thank God (as is tradition).

    ———

    That's a shame.Banno

    What do you mean? Conservatives should be ashamed of being conservative? Or it’s a shame you two won’t likely get along much because you are more liberal and would beg to differ with those 5 or 6 items?

    And I don’t agree with that list as stated either.

    Conservatives merely find the good in what is now, and they are grateful. What is good now is therefore there to be preserved, to protect, and to conserve. Family, tribe, and nation are good, and it is the peace from even a sliver of present goodness that drives conservation efforts. (“Make America great again” says it was good enough once and we’re ruining something precious we should be trying to preserve.)

    Liberals, on the other hand, are more inclined to look at what is bad now and seek to find something new and better, to progress. But progress is a positive, a good, much like the good that can be conserved in gratitude. So until progress is finished, liberals preserve and conserve the fight, and fiery activity of change, resisting the present badness.

    So both liberals and conservatives chase the same good, working to preserve certain states of activity, just one is directed towards the present (traditionalist) and the other is directed into the future (progressive).

    Conservatives and Liberals both have the same relationship with the past; they both find in the past what they find in the present, namely, conservatives see the good in the past, like liberals see the bad in the past.

    So conservatives see the good in the present and lean to conserve present things that build a tradition that can then be seen carved into history (the past). Whereas liberals see the bad in the present, institutionalize the badness in the past, and lean towards carving badness out and building new futures.

    Extreme leftists are those who don’t see any good in the present and need to tear down any obstacles (and they lose sight of good future goals - and you get Russia, China, Cuba, etc.). And the extreme right are those who don’t see any bad in their small tribe in the present and seek to prevent any change whatsoever, even if one must destroy all of the ungrateful tribe members (losing all sense of family and what was good in the first place).

    Both extremes are shit for brains.

    We each are, at times, conservative, and at times, liberal. (That is what western “democracy” is really made of, to me - the unification of liberal and conservative impulses under law in a republic.) People all left alone to make their own private kingdoms, to share in the town center as each chooses, but under the law all have ratified.

    ———

    It is an assumption of Enlightenment liberalism that "all men are free and equal by nature".

    But this is neither empirically true or self-evidently true.
    Colo Millz

    Are you saying you are more conservative than the US Declaration of Independence? That’s like “yes kings” conservative.

    I still don’t think it’s “a shame” - although I think it’s foolish. (No shame because fools are everywhere).

    We are stuck with the polis - the city, the political board, the laws and social congress. Equality and freedom are made - as is the government we make to allow us such opportunity, which shall not impede anyone any more or any less than all the others.

    No one who is wise thinks the US constitution isn’t brilliant. We have thousands of years of data showing kings are a crap shoot at best. And over a hundred years of data showing communism and socialism haven’t allowed more people to be their own masters.

    Denying the right to life, liberty, and equality before the law is not smart conservatism. Today, such leanings come more from the left than the right. The types of kings we get today are communist dictators, not monarchs.

    very few noticeable resultsNOS4A2

    One clear one is the US Constitution if you ask me.
  • The Limitations of Abstract Reason
    whether justice is better secured by refining the wisdom of the past, or by subjecting that past to rational critique guided by universal moral principlesColo Millz

    Yes, good post. I need to think about it.

    But my first impression is to wonder if the “refining” process involves both seemingly wise tradition and fresh rational critique - so it seems conservative versus progressive becomes careful/proven versus risky/theoretical (and again, “careful” conservatives respect risk and theory more than “risky/theoretical” progressives respect careful proof).
  • How to use AI effectively to do philosophy.
    Yes. But, so far as I can see, it can't break out of the web of its texts and think about whether the text it produces is true, or fair or even useful.Ludwig V

    Yes. Why I said this:

    A philosopher prompts. A philosopher invents a language. A philosopher sees when to care about the words, when to prompt more inquiry, and when not to care anymore, or when to claim understanding versus ignorance. AI doesn’t have to, or cannot, do all of that in order to do what it does.Fire Ologist

    ——

    It's probably unfair to think of it as a model of idealism; it seems closer to a model of post-modernism.Ludwig V

    Yes. I agree. It’s an electronic Derrida. There is no person or identifiable thing at the core or behind an AI output, just like, for the post modern, nothing fixed or essential is inside of any identity or thing. Words only have context, not inherent meaning, like an AI print job needs the context of its human prompter and human interpreter - take away the human, and AI becomes flashing screen lights. Except to the post-modernist, the person is basically flashing screen lights in the first place.
  • How to use AI effectively to do philosophy.
    Do you agree that AI does not do philosophy, yet we might do philosophy with AI? That sems to be the growing consensus. The puzzle is how to explain this.Banno

    How AI does what it does? That is a technical question, isn’t it?

    It quickly compares volumes of data and prints strings of words that track the data to the prompt according to rules. I don’t know how. I’m amazed by how a calculator works too.

    AI-skeptics emphasise that they're (mere) echoes of human voices. Uncritical AI-enthusiasts think they're tantamount to real human voices.Pierre-Normand

    Both of these characterizations seem metaphorical to me, or poetic versions of some other explanation, that evoke feelings that may satisfy the heart; but I don’t see understanding that would ultimately satisfy the curious human intellect in either characterization.

    Echoes or actual voices - this characterizes the reason we are amazed at all. It doesn’t mean either characterization explains what AI doing philosophy actually is.

    We built AI. We don’t even build our own kids without the help of nature. We built AI. It is amazing. But it seems pretentious to assume that just because AI can do things that appear to come from people, it is doing what people do.

    ———

    A philosopher prompts. A philosopher invents a language. A philosopher sees when to care about the words, when to prompt more inquiry, and when not to care anymore, or when to claim understanding versus ignorance. AI doesn’t have to, or cannot, do all of that in order to do what it does.
  • How to use AI effectively to do philosophy.
    What do you think my response to you would beBanno

    I actually wrote something, and edited it back out.

    I wrote: which is the more general topic and which is the sub-topic (between “how to use AI to do philosophy?” and “can AI do philosophy?”).

    Then I wrote: a side-topic to this question is: “who (or what) can answer this question?”

    The parenthetical “or what” implies something like ChatGPT. And then I wrote “Should we ask Claude?”

    So I went one step further than you. But I chopped all of that out. Because this thread seems to assume many things about AI doing philosophy. We need to go back.

    Can AI do philosophy?

    Before we could answer that soundly, wouldn’t we have to say what doing philosophy is, for anyone?

    So I still wouldn’t want to go one step further.

    You are way down the road trying to clarify how to use AI to do philosophy, unless philosophy is solely an evaluation of the coherence and logic, the grammar and syntax, of paragraphs and sentences. If that is all philosophy can do well, that sounds like something AI could assist us with, or do faster.

    But is that all philosophy is?

    You ask “what do people bring to philosophy that AI does not bring?”

    How about this: people bring an interest in doing philosophy at all. Does AI bring any interest in doing anything? Does AI have any interest in any of the crap it prints out?

    It’s such a weird way of talking about what AI is and what a philosopher is and what a person who does philosophy is doing.

    AI and humans are equal when it comes to philosophy, or more likely that AI is philosophically superior. The Analytic is naturally espoused to such a curious idea.Leontiskos

    Exactly. Curious. A philosopher, to me, is interested in the “what it is” and the “how it is.” AI might be good at showing an analytic type of process, showing how rational arguments are rational. But AI is not good at knowing what content actually matters to the person interested in philosophy. AI can address whether “x + y = y” could be true or must be false or could be false. But AI cannot care about what “x” is. That takes a person.

    And philosophy is not only interested in how “x+y” might work out logically, but also simply “what is x?”

    Again, unless one has abandoned such things, and one must remain silent about such things, and one is simply interested in language’s relationship to logic, and one calls that the limit of philosophy.

    I think comparing AI to a calculator highlights the limits of AI when using it to “do philosophy”. Calculators do for numbers what AI can do for words. No one wonders if the calculator is a genius at math. But for some reason, we think so little of what people do that we wonder if a fancy word processor might be better at doing philosophy.

    Calculators cannot prompt anything. Neither can AI. Calculators will never know the value we call a “sine” is useful when measuring molecules. Why would we think AI would know that “xyz string of words” is useful for anything either? AI doesn’t “know”, does it?

    So many unaddressed assumptions.
  • How to use AI effectively to do philosophy.
    what is it that people bring to the game that an AI cannot?Banno

    Isn’t that about the question: Can AI do philosophy?

    I thought you said the topic was how to use AI to do philosophy.
  • How to use AI effectively to do philosophy.
    Isn't the problem that of letting LLMs do our thinking for us, whether or not we are giving the LLM credit for doing our thinking?Leontiskos

    Why can’t both be an issue? :grin: Letting LLMs do your thinking should concern the person using the LLM the most.

    And I’m sure it will degrade brainpower and confidence in society generally as well.

    But not giving the LLM credit is a problem for the reader as well, because LLMs can include errors, so the reader who doesn’t know they are reading LLM content won’t know they need to check everything about it for accuracy and soundness.

    AI for philosophy and creative writing is interesting. I’m fine with the idea of it as a helper, like using a calculator to check your homework, or using it to inspire a start, or to re-approach a roadblock. I think anyone who treats it as anything besides a tool is using it to play a psychological game, for no reason.

    It is because human beings can do philosophy that human beings can tell whether AI-generated content is of any value, or sound, or wise. No reason not to look at any content (as long as no one is lying about where it came from, or pretending it is not from a computer).
  • How to use AI effectively to do philosophy.
    By way of getting the thread back on topicBanno

    According to who?

    There are a few points people are trying to make. Which one are we supposed to care about?

    And then there’s whatever Claude seems to think is helping.

    Are you trying to talk about ways to use AI to do philosophy on other forums, or here on TPF?
  • Banning AI Altogether
    Why don't we work with those?Ludwig V

    AI is a tool. Tools can be useful. I don’t think it should be banned.

    And regardless of what we do, and regardless of what we say and think about AI, it will be used to harm people. All things digital can now be faked so well; people are already great at lying - we really didn’t need to make the internet even more suspicious. But we have it now.

    So we should also watch out. And have conversations like this one.
  • How to use AI effectively to do philosophy.
    It also depends on the prompt. Prompt engineering is a "thing", as the kids say.Banno

    That is interesting. And also makes sense, given AI is like a text calculator. The prompt feeds into the whole chain of events that one might call “AI doing philosophy” so to speak.

    This is the sort of appeal-to-LLM-authority that I find disconcerting, where one usually does not recognize that they have appealed to the AI's authority at all.Leontiskos

    I see AI as a tool. We can wonder about personhood and consciousness, but we can ignore that. It’s a tool that generates hypotheticals we can then evaluate, test and prove, and believe and adopt, or not. All of which makes using AI for philosophy, on one level, like using anyone else’s words besides your own to do philosophy.

    However, simultaneously, I agree that it would be disconcerting to let AI (or anyone/anything) be my authority without my consent. And AI is facilitating such recklessness and discord. The presence and influence of AI in a particular piece of writing must never be hidden from the reader.

    Further, it makes no sense to give AI the type of authority that would settle a dispute, such as: “you say X is true - I say Y is true; but because AI says Y is true, I am right and you are wrong.” Just because AI spits out a useful turn of phrase and says something you happen to agree is true, that doesn’t add any authority to your position.

    You need to be able to make AI-generated knowledge your own, just as you make anything you know your own. Making it your own is just another way of saying “understand it”. So I don’t care if AI is used verbatim with no changes (and find it fascinating when it seems to say something that can’t be improved on) - but only when one can restate it in different words does one understand it.
  • Banning AI Altogether
    The potential for AI to act on its own might make it different from a hammer.Athena

    You sell hammers way too short, and maybe give AI way too much credit.

    AI is a tool. Like a hammer, it can do good or destroy, on purpose or accidentally.
    — Fire Ologist
    Athena

    You say “act on its own”; and I said “accidentally”.

    So you don’t think AI is a tool? What else is “artificial” but some sort of techne - the Greek root for technology and for hand-tooling? AI is a word sandwich machine. It obviously is a device we’ve built like any other machine that does measurable work - it just now takes a philosopher to measure the work AI does.
  • Tranwomen are women. Transmen are men. True or false?
    in order to talk about the world at all, I need to do some reifyingfrank

    That’s the whole ball game.

    In order to speak at all, we need to objectify, to fix, something external to us both.

    Is it gender or sex that can be fixed? Or both? Or neither (and to conclude neither, we must fix something else from which to measure the fluidity of these.)

    The question of gender is a new flavor of “what is justice?” or “what is good?” Or “what is a banana?”

    What is it, about which you speak?
  • Banning AI Altogether
    Just what we need to add to the [online] world - more sociopaths that make errors and lie about them.Fire Ologist

    Maybe “sociopaths” is unnecessary. Wouldn’t want to scare any children.

    AI is a tool. Like a hammer, it can do good or destroy, on purpose or accidentally.

    What worries me is that people will cede authority to it without even asking themselves whether that is appropriate.Ludwig V

    They surely will, because sheep are easily calmed by things that sound authoritative.

    ———

    It occurs to me: isn’t a book AI? It’s information received from a non-human thing. We read a book and ingest the text. We treat the words in a book as if they come from an “intelligence” behind them, or we can judge the veracity and validity of the text qua text with or without any concern for what is behind it. We can also refuse to take the author as authority, and fact-check and reconstruct our own analysis.

    For instance, is a reference to Pythagoras of any significance whatsoever when using the Pythagorean theorem to determine the length of one side of a triangle? Is it essential to our analysis of “It is the same thing to think as it is to be” that we know who said it first? Context might be instructive if one is having trouble understanding the theory, but it might not matter at all once one sees something useful in the text. We create a new context by reading and understanding text.

    (This is related to @Banno’s point on his other thread.)

    So banning any reference to AI would be like banning reference to any other author. (I said “like it” for a reason - this doesn’t mean AI is an author the same way we are authors - that is another question.)

    What concerns the philosopher qua philosopher most is what is said, not who (or now, what) says it. I think.

    This is not to say we shouldn’t disclose the fact that AI is behind text we put our names on (if we use AI). That matters a lot. We have to know whether we are dealing with AI or not.

    But I genuinely don't believe using it helps anyone to progress thought further.Moliere

    Don’t we have to wait and see? It’s a new tool. Early 20th century mathematicians could say the same thing about calculators. We didn’t need AI before to do philosophy, so I see your point, but it remains to be seen if it will be any help to someone or not.

    The conclusions in philosophic arguments matter, to me. It is nice to think that they matter to other people as well. (But that isn’t essential.) Regardless, I would never think the conclusions printed by an LLM matter to the LLM.

    So the interaction (“dialogue”) with AI and my assessment of the conclusions of AI, are inherently lonely, and nowhere in the world except my own head, until I believe a person shares them, and believe I am dialoguing with another person in the world who is applying his/her mind to the text.

    Bottom line to me is that, as long as we do not lie about what comes from AI and what comes from a person, it is okay to use it for whatever it can be used for. And secondly, no one should kid themselves they are doing philosophy if they can’t stare at a blank page and say what they think philosophically with reference to nothing else but their own minds. And thirdly, procedurally, we should be able to state in our own words and/or provide our own analysis to every word generated by AI, like every word written by some other philosopher, or we, along with the AI, risk not adding anything to the conversation (meaning, you take a massive risk of not doing philosophy or not doing it well when you simply regurgitate AI without adding your own analysis.)
  • How to use AI effectively to do philosophy.
    Amateur philosophers just spend their lives struggling to understand the world, ping off a few cool philosophers, and spout what they may.frank

    How is that any different from any philosopher?

    The difference (to you) is your own judgement of what is “spouted”. And maybe the number who make up the “few”.
  • How to use AI effectively to do philosophy.
    how we can use AI to do better philosophyBanno

    Doesn’t that just depend on the LLM? And who determines that? We need to be better philosophers first in order to judge whether the LLM output is “better” and so whether the LLM is useful.

    The question since 3000 years ago is “How can we use X to do better philosophy?” AI is just a new tool, a new “X”. Nietzsche asked “how can I use prose to do better philosophy?” Russell and Witt asked about math and linguistics.

    Unless this thread is a tutorial on using LLMs that “better philosopher” way.
  • How to use AI effectively to do philosophy.


    Thanks for pointing that out.

    And saying nothing else.

    Am I the only one saying things that could fit in the other thread?
  • How to use AI effectively to do philosophy.
    Seems to me, at the fundament, that what we who pretend to the title “philosopher” are looking for is some semblance of truth, whatever that is; at writing that is thought provoking; at nuanced and sound argument. Whether such an argument comes from a person or an AI is secondary.Banno

    Good.

    Allow me to get back to “some semblance of truth.”

    Rejecting an argument because it is AI generated is an instance of the ad hominem fallacyBanno

    I see what you are saying. But maybe you don’t need to conflate AI with the hominem to make your basic point. All you need to say is, if “2+2=4” is written by AI or by a philosopher, we need not concern ourselves with any difference between AI or a philosopher and can instead still focus our philosophic minds and evaluate the soundness and validity of the argument qua argument.

    I agree with that.

    And I agree, it’s a separate, or “secondary”, discussion to raise what the differences are between ‘AI’ and ‘hominem’. (And to say AI-generated arguments are “an instance of the ad hominem…” seems rash. And unnecessary.)

    Rejecting AI outright is bad philosophy.Banno

    Rejecting good arguments no matter where they come from is bad philosophy. (For the same reason we should give each other more respect here on TPF.)

    So I also agree with what is implied in your argument, namely that ad hominem attacks on AI, and anyone, are fallacious arguments.

    But that all seems easier to swallow regarding AI. We are always stuck judging the validity and soundness of the words we are presented with, separately from judging the source those words come from.

    The more dramatic issue with AI is that it is a tool that can be used by a person, to easily deceive another person.

    AI is a computer, as always. It’s a tool. No need to completely shrink from using a new tool to process words for ourselves.

    But to use a tool properly you have to know you’re using a tool - you have to learn the tool’s limitations. You have to be aware of all of the ways AI can create error, before you can properly read its content.

    If we don’t know we are dealing with AI, and we think we are reading what a person like you and me would say, we can be deceived into trusting a false source, and, without this trusted context, misunderstand the content. Like if I thought the answer to 3.14386 x 4.444 came from a calculator versus from a third-grader…. We need to know who/what we are dealing with to evaluate how to judge content most diligently.

    The simple solution to this deception is for people to admit they are using AI, or for purely AI-generated content to be clearly labeled as such - then we all know what we are dealing with and can draw our own judgments about sourcing, citation, hallucination, personal bias, trust, accuracy, etc., etc…

    Now, of course, instead, people will use AI to lie, and cheat, and defraud and harm.

    But we can’t ban it. Toothpaste is everywhere now.

    So we should admit to ourselves we’ve created new sources of both treachery and beauty, and aspire to demand honesty about it between each other, that’s all. Let’s not allow AI, or worse, consciously use AI, to fill our world with more error. And not passing off AI as personal intelligence avoids the error of the lie.

    This is the only way “some semblance of truth” will be maintained.

    ———

    It is amazing to me how AI is loose in the world and at the same time we don’t really know what it is (like a tool, a fast computer, like a new learning intelligence, like a person, like a toaster…)

    My prediction for the predictive language modelers: philosophers and psychologists will discover/demonstrate how these LLMs are not persons, and in so doing define what it means to be human a bit better. AI, even AI that behaves exactly like a person, will never evolve a core the way we persons have a subjective seat of experience. They will always remain scattered, never unified into a consciousness of consciousness.

    But just because AI is a word machine, that doesn’t mean we human inventors of this word machine cannot also derive truth and wisdom from the words our AI generates.

    I could be wrong…
  • Banning AI Altogether
    So much for the vision of information freely available to everyone.Ludwig V

    It’s an actual shame.

    The irony of the “information” super highway. The irony of calling its latest advancement “intelligent”. We demean the intelligence we seek to mimic in the artificial, without being aware we are doing so.

    We, as a global society, as the most recent representatives of human history, are not ready for the technology we have created. This has been true probably for 50 years. We’ve gotten ahead of ourselves. We need less; and even when we realize it, in order to get to that place where there is less, we keep inventing something new, something more. We are torn in all directions today.

    Maybe it’s always been that way - we are forever trying to catch up to ourselves. AI, it seems, could create an impassable chasm for us to catch up across, if we are too stupid to control ourselves about it.

    AI, with ubiquitous surveillance, digital currency, digital identities for easy tracking and control…none of us really know what we are already into.

    I'd have thought the relevant job description, that of filtering the results for signs of trails leading to real accountable sources, would have to disqualify any tool known ever to actually invent false trails, let alone one apparently innately disposed to such behaviour?bongo fury

    If we can get AI to work as well as people seem to hope it does, maybe someday it will be as good as the revolutionary tool it is being sold as. But what will be catastrophic is if it remains so unpredictably wrong, and people accept it as close enough anyway, knowingly letting themselves be satisfied with less than the truth. I was always worried Google and Wikipedia and just the modern media were going to lead us that way - now we have AI to expedite the sloppiness and stupidity.

    And AI is called “intelligent”, like a moral agent, but no one sane will ever give it moral agency. So we can further disassociate intelligence from morality. Just what we need to add to our world - more sociopaths that make errors and lie about them.
  • The Preacher's Paradox
    Should I tell them what I know about religion myself, take them to church, convince them, or leave it up to them, or perhaps avoid religious topics altogether?Astorre

    First, anyone as interested in the truth as you are, and who obviously loves his children enough to consider such big questions, for their sakes, it seems to me you are doing fine by them. (I see God at work already.)

    But that is all in the background, and avoids your question.

    My experience is somewhat counter-intuitive. I think we risk robbing people of a choice about God and religion when we don’t teach them about these things when they are young. Religious faith is an adult decision, for sure, but someone may never fully consider the option that is “God” if they first seek to familiarize themselves with God as an adult, after living so long without God. I still believe God reaches all of us, but the innocence of youth makes a softer ground for first planting the notion of God than the repentance necessary in adulthood does. Adult informed consent about God is just harder to inform when that adult did not already hear about God from the time he first learned about other important things, like truth and good and knowledge and life and death. It just gets harder to see God as we get older and become entangled with the immediate necessities of life.

    ———

    I don’t think you would be considering these questions of how to present God and religion to your children, if you did not recognize potential good value and truth coming from religion. If you believed in your heart that religion was clearly a net bad, you couldn’t have this issue at all. Am I right about that?

    You ask “Should I tell them what I know?” That may depend. What do you know about religion, and what will you tell them? I wouldn’t want to encourage you if your idea of religion was of a cult of mindless, loveless, insignificant, pawns in some other-worldly game - religion has to free one and save one from such predicaments, not create them.

    And I would never advise teaching something you didn’t believe in or did not see any lasting good in. A notion like ‘God’, when insincere, has nothing to do with God. It’s like one’s dead great-great-grandfather. Either you believe he existed or you don’t, but if you don’t, you shouldn’t think you could do him justice teaching your kids about him, believing there was no such person there to teach about.

    ——-

    Regardless, religion is about mystery. Scientists seek into mystery as much as the one who seeks truth in God. Truth seekers all have similar hearts. God can represent truth and knowledge, the answer, the law, in the universe, in our science, in our lives and in our minds; and God’s relationship with us through the church and religion can ground ethics, and social bonds, and all that comes with people knowing people, (even politics), and all of the frictions we create for ourselves.

    There is no harm exposing kids to good people of faith. It comes in many forms.

    Religion rarefies, and absolutizes, and objectifies, while at the same time highlighting the subjective, particular, visceral life lived. It contains law and reason and logic, and analytics of language. Religion solves and presents solutions. It prompts questions, new ideas, emotions. It can soothe in death and in suffering. It can turn the bad into good.

    But it can cause harm too. No doubt your questions loom high and large. But so many otherwise good things can cause harm too, can they not? Even the seeming best things in life, like success, and power, can destroy us.

    If you are deeply troubled by these questions, I suggest you ask a few different priests or just good people at some churches - and see if an answer presents itself right in the place you are inquiring about. I am sure, at the right church, there is a lot of good that religion can bring.
  • Banning AI Altogether
    Two interesting legal questions arose in the context of law firms using AI:

    1. Information shared between a lawyer and client is privileged, meaning the lawyer cannot share that information with anyone else, or be asked to disclose it, unless the client allows it. So one question that arises is whether sharing information with AI puts that information outside of the client privilege. Can a lawyer put privileged information into an AI engine and still claim the information remains privileged between lawyer and client? There is no formal answer yet, so lawyers who want to be safe have to be careful not to share privileged information with AI, unless the AI is entirely on a closed system and within the lawyer’s control. Then the argument would be that, whether AI is like a person or not, no one outside the lawyer’s firm is seeing the client info, so it remains privileged between lawyer/law firm and client. But if the lawyer goes to ChatGPT, even if the lawyer doesn’t use the client’s name, that lawyer may be waiving his client’s privilege. This seems right to me. (This is totally untested in the courts, and there are few laws addressing AI and none addressing privilege.)

    2. When a lawyer gets analysis and output from AI, is that to be treated as though it came from another lawyer, or just from a word processor? Should AI be treated as a low level lawyer, or just a complicated Wikipedia resource? Again, this is too new for a clear answer, so to be safe, lawyers should act as if AI is like an associate lawyer (a person), and fact check, check every cite, confirm every conclusion - essentially scrutinize AI work product like it is first year associate lawyer work product, before providing it as advice to a client. It is (likely) unethical for a senior partner at a law firm to certify AI work product without careful review and detailed confirmation, just like it would be unethical for the partner to just pass through associate attorney work without reviewing it.

    I view AI as a complex, mindless, soulless tool that spits out highly complex arrangements of words. It’s up to me to judge those words as relevant, useful, sensible, insightful, accurate, etc., or not. The value I might add to a perfectly worded AI response is confirmation that I, a person, can see and understand the value of the AI response and can agree those words are perfect.

    If we remove this human layer from the words, they are utterly dangerous. Because they sound like they are coming from someone who can judge their value.

    It may one day be the case that AI gets so good that, upon every review of its output, the smartest minds in the field agree the AI work product is flawless and better than they could have imagined. Whether smart people will ever decide there is no need to doubt AI output remains to be seen.

    I do think anyone who sees AI output as though it came from a person is misunderstanding the value of their own judgment and the nature of what human judgment is. AI cannot provide this judgment. The words “here is my judgment” do not make it so.

    Right now, we all know you don’t take the first answer Google displays. You take ten answers from different internet sources, find some overlap, and then start deeper research in the overlap, and eventually you might find some truth. Right? The internet can’t be trusted at all. Now with AI, we have photo and video fakes, and voice fakes, that look as good as anything else, so we have a new layer of deception. We have the “hallucination”, which is a cool euphemism for bullshit. We have exponentially increased the volume of false appearances of reality. Essentially, with AI, we have made the job of confirming veracity and researching through the internet far more precarious.

    AI does all of the good things it does too. But AI is going to be as much of a hindrance to progress as it is a boon. If you ask me, people need to treat it as a tool, like a screwdriver - just as dumb as a screwdriver. And people need to be reminded that it is a tool. And people must always be told when they are dealing with AI and when they are not.

    We need to remind ourselves that an impressive AI answer can only be adjudged impressive by an impressive person. And if we cannot judge the value of the AI for ourselves, we need to find a person, not a tool, to make that judgment.

    We have to remember that only people can say what is important, and only people can say what is intelligent. So only people can appreciate AI. And these are what will always make AI a tool, and not the “artificial intelligence” we have named it.
  • The Preacher's Paradox


    Faith is always pitted in opposition to knowledge, such that acts based on faith are committed without reason, and only acts based on knowledge can be directly tied to reason.

    On that view of things, I can see the preacher’s paradox. How does someone persuade others of what is, knowingly, logically unpersuasive?

    But I don’t view faith or knowledge so narrowly.

    (Remove the religious baggage. Forget God and religious faith for just a moment.)

    Assume for sake of argument that knowledge is something like justified true belief.

    Belief is an ingredient in knowledge.

    We all know that “certain” knowledge is aspirational. We all know that we know nothing with certainty. So, we should always qualify our “knowledge” claims with “at least that is what I believe to be the case.” All scientific knowledge is subject to future falsification.

    So then, what is “faith”?

    Faith is what you live by. Faith is the knowledge you will testify to, knowing it sufficiently to act upon. What you believe, or have faith in, is found when you are finished gathering evidence, finished reasoning about it and testing it, finished hearing others’ opinions, and then, finished with that process, you finally decide in faith to act, to believe, to say “this is the best of my knowledge and belief”. This is why faith is equated with a leap. Faith underwrites action. Faith bridges knowledge and action, driving acts of judgement and conclusions of understanding where reasoning is no longer in focus.

    Like when someone says they believe the pyramids were not built by Egyptians (continuing to keep God out of this). Two people see all of the same evidence. One uses reason to conclude that people did build them, and the other uses reason to conclude people could not have built them. To the one who believes people did build the pyramids, the moment he concludes this, he no longer needs to gather evidence, or apply reason to new evidence, or provide theories to explain evidence - he’s done. He believes now. Egyptians built the pyramids. This is an assertion of what he believes, of what he has faith in. “Egyptians built the pyramids.” So in faith, his action is to rest on what he now believes to be the case, to stop doing any more science, to stop seeking more knowledge and evidence and just believe in what he now already knows. Whereas the other person, in faith, must continue to seek evidence, continue to apply reasoning and logic in order to develop theories (of aliens, or ancient lost civilizations).
    But faith is the immediate ground upon which both men stand, whether asserting knowledge about the Egyptians, or digging on, finding what they know wanting, and seeking further evidence and reasoning. (And if some kook concluded on the available evidence that aliens built the pyramids, I would find evidence of a kook, but that’s just my belief…)

    Believing begins where reasoning and knowing are finished, and we instead judge, we understand, and we act.

    So faith is immediately underneath every single act. We step out into traffic on faith that we can tell how to safely cross the street, not because our knowledge demands safety is certain.

    ———

    So the preacher talking about God merely introduces new evidence, and applies the same, one and only logic that all minds must apply, and draws conclusions subject to the same analysis, to demonstrate what he believes.

    The difference between the preacher and the scientist is what counts as evidence.

    The preacher can say, “it is impossible for any heavy animal to walk on water or rise from three days of death. But there was this guy who did it, witnessed by many, etc….” Using this impossible testimony as evidence, logically it might be believable to listen to this guy when he says “the guy who rose after death is God.”

    The difference between what religious faith is and what scientific knowledge is has to do with what justification is employed. It’s not a difference that creates this preacher’s paradox. The preacher has to remain logical and provide evidence and make knowledge claims, just like any other person who seeks to communicate with other people and persuade them.

    So really, there is no difference in the mind between a religious belief and a scientific belief - these are objects someone knows. They are both knowledge. The difference has to do with what counts as evidence, and the timing of when one judges enough evidence and logic have been gathered and applied, and it is time to assert belief and to act.

    Don’t get me wrong, religious belief can be insane. Scientific belief is much safer, especially if your goal is to cross the street.

    ———

    The key question all must ask regarding faith is not, “do I act on faith, or do I act on reason and knowledge?” No. The question of faith is simply: “what (or who) do I believe in?” All acts only occur because of a choice to believe it is time to act.

    ———

    I don’t think this contradicts Kierkegaard as much as it sounds like it does on its face.

    Faith is neither knowledge nor conviction. It is a leap into the void, without guarantees. Faith is risk, trepidation, and loneliness.Astorre

    No. The above is true of an act based on faith. The leap is an act. An act of faith is not knowledge. But faith itself is conviction. Faith itself is judgment, or the ‘belief’ in ‘knowledge is justified true belief.’

    This is, as usual, rough and cursory because I am not in graduate school - offered for your more thoughtful and discerning consideration.