    2023 will forever be the year AI technology went mainstream. With the release of ChatGPT by OpenAI, heavily backed by Microsoft, the world has for the first time both had access to and been exposed to what the not-so-distant future may hold with this technology, and to the pace at which it is accelerating.

    I am interested in debating the ethics of this technology, particularly in the context of its ownership and use by corporations. America is defined by its capitalistic culture, but if data is currency, and large corporations use their ability to harvest and ingest that data to inevitably profit from it (by offering services we may not need now, but which eventually become normative and thus a necessity), how do the scales not continue to tip in favor of large entities with superior neural network models, pursuing an arms race for dominance? In turn, what impact could this have geopolitically?

    My interest is in what philosophical ethics and morality can teach us about this problem. How can safeguards be implemented? A recent survey of people who work on AI found that roughly 10% of them believe AI may become too powerful and in fact wipe us all out. 10%. While this might be hyperbole, I am interested to hear from anyone who is tracking this issue: important viewpoints, particularly from a philosophical context, that might be directing both political and private enterprise.

    I am fascinated by a logical argument that I heard recently on the podcast Your Undivided Attention, in the episode "The AI Dilemma".

    The argument runs:
    1) When a new technology is invented, there emerges a new level of responsibility for those who invented it
    2) If that technology is powerful, an arms race to use it inevitably follows
    3) If collaboration and cooperation are not foundational to this, it will end in tragedy

    Citing Cambridge Analytica as an example, I wonder whether we are not on the same trajectory with AI, and what the philosophical rules of ethics currently being laid down can do to mitigate it. It is very observable that the US Congress is out of touch, and it will likely take highly capable, independent entities that can exert force on legislation, overpowering corporate lobbyists, to manage this situation. I believe this problem is too complex to consider unilaterally; ethics is vastly outnumbered by enterprise and the desire to innovate, creating a David vs. Goliath moral quandary.

    I include some recently produced materials that provide additional background:
    -White House AI Bill of Rights
    -Microsoft Responsible AI

    -Role of Arts and Humanities in Thinking about AI
