• Climate change denial
    Simon expertly describes much of the background of gullible people's anti-climate science arguments in this:




    It's stuff like this that really gets my blood pumping. I absolutely hate how gullible people are, how easily they're manipulated by anti-climate-science agendas, and I hate the people who push their idiotic worldviews onto masses of easily manipulated people who know absolutely nothing about how science works, its methods, or how to read actual data.

    There's no debate, there's no discussion, there are only idiots raising their voices so high that they disturb the public space to the degree that normally functioning people have to deal with it. :vomit:
  • TPF Haven: a place to go if the site goes down
    Add as a contact/preferred sender or to your email client's whitelist. One of those should do the trick.Outlander

    Thanks, this did do the trick. :up: Glad it wasn't a widespread thing.
  • TPF Haven: a place to go if the site goes down


    Another thing I noticed, and I sent a message to you about it but didn't get a reply, is that when I request a password change I don't get the email for it. It's not in the trash or spam folder either. So if there's ever a wave of hackers harvesting our passwords, it might be impossible for some of us to change them.

    Is this something to maybe look into so that people won't be locked out or unable to secure their account if the site gets hacked in the future?
  • Perception of Non-existent objects
    Yet again, isn't Hallucination totally different way of seeing non existent objects? You see images of the objects which are existent or non-existent in the external world, but the cause of the seeing is the abnormal state of your brain due to the chemically induced condition?Corvus

    The hallucination is the only state. Your perception, that is, the experience through your senses, isn't a 1-to-1 process. You aren't registering photons with your eyes which then produce an image internally. The experience of seeing is your brain constructing a predicted image, hallucinated into existence through the interplay between sensory information and the expectations it grounds, which are rooted in memory.

    In essence, when you see a cup of coffee, it forms a constant stream of information that holds that shape and form in place and time, while your memory has categorized what a cup of coffee is from past experiences, and the interplay between the two forms a hallucinatory state of predictions about the next step in time that we experience.

    This way, we see a cup of coffee not as an unknown stream of information, but as a stream of information that is evaluated against memory categories of similar objects, producing a constant prediction of what to expect from this experience.

    Without the sensory information grounding experience, this interplay is cut off and our prediction hallucinations start to flow without grounding, forming the abstract and surreal experience that is our dreams or psychedelic trips.
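
    To make that loop concrete, here's a minimal sketch of the idea (a toy illustration of predictive processing, not a model of the brain; every name and number in it is made up for the example). Prediction corrected by sensory error settles on the real object; cut the sensory check and the same machinery drifts freely, which is roughly the dream case described above.

```python
import random

def sensory_input(true_value, noise=0.05):
    # Stand-in for the stream of sensory data (e.g. the sight of the cup).
    return true_value + random.gauss(0, noise)

def perceive(steps, grounded=True):
    true_value = 1.0      # the "real" cup out there
    prediction = 0.0      # the brain's current best guess
    learning_rate = 0.3   # how strongly prediction errors correct the guess
    for _ in range(steps):
        if grounded:
            # Waking perception: the prediction is checked against the senses
            # and nudged by the prediction error.
            error = sensory_input(true_value) - prediction
            prediction += learning_rate * error
        else:
            # Dream / psychedelic case: no sensory check, so the prediction
            # just feeds on itself plus noise and wanders.
            prediction += random.gauss(0, 0.2)
    return prediction

print("grounded:  ", perceive(50, grounded=True))   # settles near the real value
print("ungrounded:", perceive(50, grounded=False))  # ends up anywhere
```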

    Asking and discussing on seeing non-existence images in dreams and also daily life could tell us more on our perception how it works, which could allow us to explore on the way mind works.Corvus

    But you are also saying:

    I am not too sure on the details of technicality of hallucination on why and how it occurs. But that is my idea on it. Anyway, it is not the OPs interest here.Corvus

    You can't ignore the actual scientific research about perception and consciousness which points directly towards explanations on how dreams work, and then say that we can understand how the mind works by discussing in the way you want.

    You're asking a question about how we perceive abstractions in dreams, but you don't like the answer so you want to steer it in another direction that ignores the science.

    If you think it has no more scope of discussion than talking about hallucination and making predictions, then maybe you are not interested in the topic of the workings of mind and perception.Corvus

    It's one and the same process. It just seems like you ignore what's being said here because it doesn't align with what you believe about the subject.

    The images in our dreams are simply based on our past experiences and our memory, our mind forming a predictive hallucination without grounding it through sensory data. You aren't seeing anything; you are perceiving a free-flowing predictive process that uses memory as a bucket of raw data.

    Not sure what more needs to be said to explain it. Even if the science of consciousness doesn't have a final objective answer on all of it, there's no point ignoring existing research and scientific theories that are as close to an explanation as is currently possible. Anything else is just arbitrary, unfounded speculation and belief.

    Here are some medieval paintings of animals the artists never saw. They dreamt up the visuals based on descriptions. The more "data" we have before we imagine or dream something, the more accurate those prediction hallucinations become. It's evolutionarily logical for a predictive function to work this way: the more experience, the better we are at predicting accurately. You can imagine a white tiger looking just like a white tiger if you've seen tigers before. Our mind can easily switch out a color and basic attributes, but if you had actually never seen a tiger, you would have major dissonance if this is what you imagined and you then saw a real tiger.

    Ejl3U1uX0AEW0sJ.jpg

    Or an elephant:

    Ejl3XkHXYAAnHYG.jpg

    Or some lions and bears:

    Ejl3WCqWoAY_DVG.jpg

    Or this poor leopard:

    Ejl3YnVXcAELKWC.jpg


    In essence, those were imagined and dreamt up through descriptions of these animals, or the artists got a glimpse in the heat of the moment on some crusade somewhere and the emotions affected their experience. But they're all attempts to form a prediction of what an animal looks like using previous visual experiences and fusing them with others' descriptions. Without any prior visual information, a description will only use what's available in memory.

    It's the same process as with AI models forming images. If they don't have enough data on tigers in lots of situations, they will not be able to predict an image into an accurate depiction of what a tiger looks like. The more memory data of tigers, the more accurate those images become. Our mind works in the same way. It's the reason we began experimenting with neural nets in computer science in the first place: they correlate with how neurons and the brain work. It's only now that we're starting to form theories of why this is.
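
    As a rough toy of the "more data, better prediction" point (this is not how image models actually work internally, just the statistical intuition): predict an unknown quantity from a growing pile of noisy "experiences" and the expected error shrinks.

```python
import random

random.seed(1)

def observe(true_value=10.0, noise=2.0):
    # One noisy "experience" of the thing we're trying to predict.
    return true_value + random.gauss(0, noise)

for n in (1, 10, 100, 1000):
    samples = [observe() for _ in range(n)]
    prediction = sum(samples) / n   # predict from the accumulated "memory"
    print(f"{n:4d} experiences -> prediction {prediction:6.2f}, error {abs(prediction - 10.0):.2f}")
```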

    In some cases artists have very little to draw from.

    Poor guy:

    https://i.redd.it/gdcua9l535681.jpg


    And this is why religion forms so easily. A lack of information and explanations leads to extremely abstract ideas that try to predict why something is happening around the individual. And it's why the comfort of someone spreading an explanation lessens the strain on the mind of constructing accurate predictions.

    It may even be the reason why we form social groups. That in order to efficiently speed up the process of prediction in cognition, a group of people spread ideas among the group rather than each individual having to learn on their own.
  • Donald Trump (All General Trump Conversations Here)
    It's that people absolutely despise the Bluegeoisie and the destruction they've wrought on society.

    They destroy livelihoods and lives, destroy bonds between family and friends, destroy any and every institution they govern.

    They're tone policing, cry-bullying, joy-killing, emotionally incontinent hacks, whose attempts at imposing their "Progressive" theocracy onto the rest of us has created a society in which we're more lonely, loveless, depressed and stupid than ever, and in which our youth are more lost and hopeless than ever.

    All this, while maintaining 100% confidence in their intellectual and moral supremacy over everyone else.

    The support for Trump extends far beyond the man himself—it's that people want to see a peevish, arrogant, and nakedly contemptuous pseudo-aristocracy punished for its abuses, and re-electing the Orange Man is clearly THE most effective way to do it.
    Chisholm

    And this is their false narrative, perpetuated by pseudo-intellectuals online. It's primarily just white men who're angry that their patriarchal power has been diminished, so they construct this conspiracy narrative that all this progress is some intentional plan by some organized "enemy" on the left.

    No, it's society slowly adjusting to rid itself of past injustices and some people who were favored by the old ways can't cope with this modern life. So they lash out in any direction that resembles a representation of this societal progress, slowly turning themselves into white supremacist, racist, transphobic, homophobic extremists who cluster around evangelist influencers, techno-kings and tech bros who look like them and think like them.

    It's the same making of extreme ideologies as at any other time in history. Take the part of the population that is angry and resentful about progress, present yourself as their beacon of hope, and use their anger to radicalize them into a cult following. With enough of them it is possible to take control, and some of them will follow you into death.

    This plague of ideas from Curtis Yarvin (see above) and similar people is pure extremist ideology. Not recognizing it, and how it affects US politics, is dangerous.
  • Donald Trump (All General Trump Conversations Here)
    Trump is just a populist in the purest sense of the word. Populism is a reaction to a failed political establishment.Tzeentch

    Except the driving forces underneath are far more spread out than just Trump. And the inability to guard against these people being granted so much power risks destabilizing the entire system. Populists in other nations, like here in Sweden, are part of a parliamentary structure in which, if they took things too far, it would simply prompt previously unheard-of collaborations between parties to snuff out their stupid ideas. It's close to impossible for them to push their populist ideas into reality because the parliament actually represents the democratic voice of the people. Enough people stand up against their bullshit, and so enough politicians do as well.

    But in the US there's an undercurrent of white-supremacist Christian fundamentalism that is infecting the halls of power more and more over time. And the people and other political figures are becoming more and more desensitized to it, slowly pushing the limits of what's tolerated in politics.

    This is why I say that Trump is just a symptom; he's become a front figure and "mascot" of the movement. But surrounding him, supporting him and working their way into more power are the evangelical fundamentalists, together with pure capitalists, who take advantage of the uneducated masses to the point that it's forming an actual cult. It's no longer voters simply choosing between political ideas; it's a fundamentalist cult brewing beneath the Capitol.

    It's the kind of thing that is a joke... until it isn't.

  • Donald Trump (All General Trump Conversations Here)
    I don't agree with that. Trump is someone who is an absolute expert at exploiting democratic systems and also financial systems for his own advantage. If there is a fault, it's that a satisfactory anti-Trump hasn't emerged - someone who is also charismatic, bombastic, and telegenic, but who has at least a core of common decency which has long died inside DJT.Wayfarer

    You can't find someone who's both equipped to do what's necessary to fix problems in a nation (which more often than not angers the people negatively affected by the changes) and who fundamentally ignites cult-like behavior in gullible people.

    Most people are easily tricked, easily manipulated. A decent person who is charismatic will always have to balance their personality with what's necessary for the good of the nation; but a manipulator can always have their cult followers stay, regardless of behavior. They are essentially protected by the delusions while the "good" person is always scrutinized.

    All of this is part of the systemic problem, since one of the pillars of this problem is the fanatical focus the US population has on the personality of the president over competence as a leader.

    Trump is a symptom, not the cause. And there will be more symptoms in the future.

    But on top of that, when a two-party system offers only one valid choice, that in itself is not a good sign for the future either.

    Why is the US public so obtuse about improving the political system? This is part of the fanatical belief that the US is the best nation in the world, which forms the delusion that, because of that, it also has the best political system. And then people pour all their personal voting effort into something that essentially functions like a sham democracy, given that there's little to no choice in how to be represented.

    Remove the presidential power as it is now, install a proper representative democracy with a parliament that includes more parties, and reduce the extreme lobbying culture, especially by criminalizing lobbying money, seeing as it can easily be corrupted into a form of bribery, giving more power to the rich than to the people.

    There are so many ways to improve a political system, but all I can see is a patchwork trying to calm everyone into a bureaucratic system that obscures the cogs.
  • Donald Trump (All General Trump Conversations Here)
    If you mean, why is it possible that Donald Trump has come to dominate American politics, f***ed if I know. It makes zero sense.Wayfarer

    A free nation will always have bad actors popping up, factions of society trying to install some fascist ideology, etc.

    But enabling such a person to reach as high as the presidency, in a nation in which the leader has almost autocratic power, speaks to a systemic problem with how politics is handled.

    Looking at the whole system, looking at the lack of actual guardrails... I think it makes a lot of sense.

    The problem isn't Trump, it's a badly patched system that enables Trump to happen. A nation's freedom does not get lost by guarding against such people; that is what protects it.
  • Donald Trump (All General Trump Conversations Here)
    If our society collapsed, I doubt the population would be any better at examining and judging the individual pieces than they are at choosing leaders today. I'd expect that the situation would be more likely than ever to search for simple solutions to the complex problems.Relativist

    Post-WWII, the world pretty much gathered around trying to figure out a better way forward. While it can be argued to death what worked and what didn't, good things came out of having a true philosophical debate in the rubble of the war. It spawned such an extreme introspection into how things could turn so bad that much of the progress we've seen since then can be attributed to this absolute horror lurking in the back of everyone's mind.

    Facing actually bad consequences for mishandling democracy is a wake-up call to the sleepwalkers who just shrug at warning signs. People slowly become desensitized to it all, and that's a wide-open door for fascism to take hold.

    The irony is that the very point people have tried to make out of WWII is to be vigilant in the face of fascism, yet we're collectively eroding away our ability to spot it.

    And the usual counterargument to this is, "oh, what happens if a Trump win doesn't lead to actual fascism?". But there's no reason whatsoever to balance on that knife's edge. There's no reason to allow society to even come close to the notion of that becoming true. It's the public's mission to force society towards a better future for all, because that's essentially what history has been moving towards for thousands of years.

    Over the course of history we've seen the rise of absolute terrors, but little is said about the time after such horrors: how societal discourse aimed to change so that it would not happen again. And while this could be a cycle repeating over and over, if we look at history as a whole, it has eventually started to form a betterment of society. But it all requires people to recognize the bad and work for the better.

    When people stop being able to tell the difference, we either get lucky or things collapse and a new cycle of building a better place begins.

    The US collapsing will not become some Mad Max scenario. I would say that it doesn't even have to be something like the Civil War movie; it could be a total collapse of how politics are run, leading to millions marching for change, to unrest and justice being demanded. But through that turmoil, the bad actors will show their faces and the people who once sleepwalked through it all would finally take up the responsibility and be part of trying to fix things.

    History repeats itself for a reason; with the right circumstances and the right amount of work, a cycle can be avoided.

    So far, I see none of that behavior; all I see is apathy and good people ignoring what's necessary to change the status quo. And no, the "necessary" is not some call to violence. It's a call to restructure politics, update the constitution to reflect the modern world and 200 years of progress in moral philosophy, and separate church and state for real. Leave behind the manifest-destiny cult behavior and form an actual parliament with better representative democracy. There are enough template examples in the world to build from.

    There is much anti-American sentiment, on the streets and on this forum. I don't buy that 'it's all f***ed anyway, no point in either party, they're all equally bad.' The anti-democratic forces feed on that sentiment.Wayfarer

    I don't either, but when there's only one functioning party and candidate to choose from, it's important to ask whether there's actually a democracy left. Why not ditch the two-party system in favor of an actual parliament with actual representatives for the people? With the Republicans having transformed themselves into an actual cult, where's the possibility for democracy?

    It's either go with democrats or risk fascism at the hands of a cult.

    They're fighting like hell. They're trying desperately to do everything possible to prevent it.Wayfarer

    Why is it even possible in the first place? There are no guardrails whatsoever against the corruption and incompetence of Trump and his kin. There's no actual separation of powers, no actual separate entities that can evaluate and block such risks. When Trump goes on a lying rampage, when his followers and senators say things that are actual fascist statements, they should be removed from power. This is the very point of protecting democracy.

    If you tolerate the intolerable, the tolerating society will erode.

    How is it so hard to draw the line? Are people so morally illiterate that they cannot judge whether Trump is suitable as a presidential candidate or not?

    People are so bad at understanding how to balance free speech in a free society, with protecting that society from bad actors.

    You cannot fight against a manipulator, you cannot fight against someone who turns truth into whatever he wants it to be... it's the damn lesson learned from WWII that should have been in the back of everyone's head. Regardless of the consequences of a Trump win, it's this manipulation of truth that shouldn't happen in the first place. Such people should be blocked from political careers. It's not silencing them; they can spew whatever hate and bullshit they want, but they can't be given the keys to the nation if they're actively eroding truth and law to the point where democracy implodes.

    Which is why I said in an earlier post that I think he really is actually evil. He's become like a window through which a great number of social evils are manifesting. I don't know if you heard the racist crap that was being spouted at his NY convention the other night, but he's creating a permission structure, an 'Overton window', to enable millions of people to indulge in their darkest instincts. One of the contributors on the old forum said it best: Trump is the manifestation of the American Id.

    It's very clear: this election is hope vs hate.


    Let's hope.
    Wayfarer

    I don't want to hope, I want democracy to have fail-safes against that which can destroy democracy, so that this dichotomy does not happen. The people, the majority of people, are unfortunately too uneducated or too stupid to realize the importance of keeping democracy healthy... every day. People just view democracy as a one-time election and then don't care about it until the next one four years later.

    Society shouldn't end up in a position like this, it speaks to a fundamental problem with how politics are handled.

    Stop just voting for hope and start working for a better system. It doesn't matter if hope wins this time if society erodes even further into the next election.

    At a certain point in the future, if the system isn't fixed into a more healthy state, there will be someone who takes things too far.

    Part of the Civil War movie warns about this. It's not a warning about what happens if Trump wins, or an attempt to paint some picture of Trump like that, but a warning about what eventually happens if this erosion of truth and of a stable democracy tips over the knife's edge.

    The polarisation in the US is on par with how it was before the Civil War. Having hope for the next four years will not fix things; it will just postpone the inevitable until the people actually wake up and start a movement to improve the fundamental political system and remove corruption and bad actors from its halls.
  • Donald Trump (All General Trump Conversations Here)
    I don't agree that a total collapse of the nations is either needed or is likely. What is needed is better education.Relativist

    I agree in practice, of course, but what we're seeing right now is a convoluted system that cannot heal itself. It doesn't matter if you introduce, for lack of a better term, "better people" into this system; a bad actor can take advantage of the problems in the system to take control of it.

    The protective measures that are meant to safeguard the system from hostile takeover do not work; otherwise we would have seen Trump blocked from running for the presidency. There's enough evidence that he is unsuited for the job, and the protections he gets are corrupted to the point where they serve him more than the system itself.

    One single person in politics should never be more protected than the system of democracy. The idea behind democracy is power from the people, but if the system represents that lineage of power and one single individual gets more protection than the system, democracy is fundamentally dead.

    So, education does not matter anymore as the bad actor would always be able to manipulate past it. And if the risk is that all it takes is one bad actor to take power in order to change the system further in his/her personal favor, then education is too slow to function against such events.

    Trump is an opportunist, and the opportunity he takes advantage of is the disconnect between detailed policy and political rhetoric. Candidates can't win an election by presenting detailed policies; they need to dumb it down into slogans and soundbites. So the vast majority makes their decision on these soundbites, not by carefully examining the pros/cons of competing detailed policy positions. In many cases with Trump, he just has the soundbites that appeal to many - with little or no details.Relativist

    But there are plenty of functioning democracies in the world in which a single bad actor cannot screw up the nation regardless of manipulation.

    On top of that, this is what doesn't work about democracy when it's centered around personality. The way to improve democracy is to move away from making it about personality traits. That's not democracy in my opinion, but demagogy.

    A form of state that does not represent the people, or that has manipulated the people before hearing their will.

    What good is a democracy if you have programmed the people into a certain opinion? It's as easy as any other form of marketing. There's a reason why marketing agencies pour money into commercials for products: it actually works. And since it works, why not use the same methods? Why not create a whole simulacrum of a landscape that produces a consensus ideal of what a nation is, and then use that as the foundation to steer the population into the political corner that benefits your political ideas, so they vote for you?

    Most democracies function by these principles, and so it's important to know this in order to install as many guardrails as possible to mitigate it. This is what the most healthy democracies in the world have done, and what the US entirely lacks. But it's also a fundamental problem with the concept of democracy.

    In essence, how can a democracy be about actual choice when the illusion of choice is the preferred strategy for the people in power? In the worst case, it just becomes another form of autocracy, plutocracy or feudalism within the illusion of a free and democratic society.

    This will be a learning opportunity for the American public.Relativist

    The same thing was being thrown around in 2016 and then again during Jan 6th. But the population does not learn; they do not care, and they keep being shuffled around like the sheep they are. Until people prove to be better and more thoughtful than easily manipulated zombies, they will remain easily manipulated zombies, and they will never see a learning opportunity even if it slammed a sledgehammer into their face.

    The system is so fundamentally broken that it needs to collapse so that all can examine the individual pieces, throw away the bad and rebuild with the working parts. It's too much of a patchwork at the moment; it needs a reset, and a new, better-protected democratic system needs to be built by philosophers and thinkers who know their stuff, not by emotional narcissistic clowns and uneducated, manipulated sheep.
  • Perception of Non-existent objects
    However, the OP was more interested in discussing and find out the nature of the visual images we see in our dreams, rather than how dreams work, and why we dream.Corvus

    It's in there in the post. All the images in our dreams are the result of ungrounded hallucinations based on past memories in our short-term and long-term memory.

    But some of the images are the ones that we never came across in daily lives, or have anything to do with our experience and memories. The white tiger I have seen my dream for example, was a clear vivid image of a tiger, but I have never seen it in my entire life in real world.Corvus

    This makes little sense, as hallucinations are failures of prediction. If your brain tries to predict a tiger and you know that white tigers exist, the ungrounded prediction function may produce such a hallucination. It's at the core of what happens when predictions aren't verified by a flow of sensory data.

    It also makes little sense when you just consider art. Artists do this all the time. Imagination is a form of controlled manipulation of our predictions. Are you saying that you cannot possibly imagine a pink elephant, even though you have never seen one?

    It's just a merge of previously known concepts that you mash up internally. You know pink and you know elephants, and now you can expand that hallucinatory imagination into a highly detailed rendition of the pink skin on that elephant.

    The difference is that dreaming and psychedelic drugs enable a much more intense experience of it, since they dislocate you from the constant flow of sensory data; that, together with the lack of ability to take action in sync with our prediction function, makes the flow of that experience very abstract and nonsensical.

    It's why people seem to have a better ability to imagine something if they close their eyes. They essentially subdue the visual flow of sensory data and free up that grounding mechanism, making imagination easier.

    So where does it come from? How is it different from the images we see in daily life from the real objects? Are they same type of images? Then how it does not have its real existence of the object?Corvus

    When you look at a cup of coffee, your eyes and your sense of smell constantly feed your brain with sensory data. Your brain processes this in relation to memories of cups, coffee, the table on which it stands and so on. It uses the sensory data to verify that our internal prediction is correct so as to move our experience forward in time. If we cut off that verification data, nothing prevents our predictions from running out of control: reshaping the color of that cup because we've seen other cups in other colors, or imagining new forms of a cup, since nothing grounds our categorization of what "a cup" means to us.

    So the question of "where does it come from" and how it differs from real objects becomes somewhat of a nonsense question. Your experience of real life is a hallucination that is verified by the real object. That process forms memory categories that become the foundation of how we think about reality and the world around us.

    But it's still just a hallucination stored in memory, and hallucinations can take any form if nothing grounds them.

    And artists create things out of their imagination all the time, and these all come from their internal manipulation of memorized concepts, tapping into a similar form of ungrounded hallucination.

    I'm not sure where you're going with the OP question, what you are aiming for, but there's not much more to it than what I described. Our experience is a hallucination bound by a flow of sensory data. Cutting that flow makes us hallucinate freely, and our memorized concepts start to merge into new forms, shapes and concepts. The concepts stored in our memory allow an almost infinite number of combinations. A white tiger included.
  • Perception of Non-existent objects
    So where do the images come from? Does this phenomena implies that human perceptions could occur without actual existence of objects? Do human perceive things all differently?
    Can humans perceive objects which don't exist?
    Corvus

    Human consciousness, according to the latest research, revolves around our brain being a prediction machine: "predictive coding theory".

    Our perception of reality is basically a controlled hallucination, with our sensory inputs grounding the hallucination so that we can navigate reality. Without that grounding, we hallucinate in the textbook sense of the word. Psychedelic drugs activate such unbound hallucinations by obscuring the flow of sensory information and increasing the brain's predictive activity, and in doing so dislocate us from reality.

    This also happens when we dream. The brain predicts without grounding, and because of this we essentially form a feedback loop in which we predict based on nothing but memory; that prediction is then fed back into itself as the grounding information, and because of this unbounded nature it "swells" into the abstract and surreal character of our experience.

    Like...

    So it seems that our minds are not completely shut off from the world and we interpret external stimuli as part of the dream.Harry Hindu

    Is supporting this theory. Real-world sensory information starts to ground the dream as we return to normal processing.

    We are, as Harry says, not dislocated from existence when we dream. We are connected through our memories as the source of our dreams, but unbound from reality in a loss of sensory grounding. Previous research theorized that dreams "manage our memories" and help us categorize and organize our functions, since if we deprive people of sleep, they become disoriented with reality. The recent research also points to our predictive ability becoming skewed and broken, since we hallucinate when deprived of sleep. Dreams may therefore be our way of "consolidating memory and categorization" while calibrating our predictive function and stream of memory information.

    In essence, while sleep resets and balances chemicals in our body, it further cuts off sensory grounding in order to calibrate this link and process. Storing sensory data as the raw data of short-term memory is a very energy-costly process that strains our brain, like a muscle. And just as we need to let our muscles heal when pushed to the limit, we need to let the brain organize our short-term memory into long-term experience: purging short-term memory so that the next day we can use the previous day's experience as coded data for our prediction function, store new short-term memory, and in turn further reshape the long-term coding.

    It further supports why children are better at learning and have changing sleep patterns while they grow up, and why the older we get, the more stable our navigation of reality becomes. Less erratic, and wiser. As long as learning and new experiences continue in our adult life.

    It also supports why continued use of our brain in old age helps keep dementia and declining cognitive function away: just as training our muscles in old age becomes harder, if we don't do it, we quickly deteriorate.

    And it supports research into learning, where tests clearly show that when we do something intensively before sleep, the next day we have become slightly better at it.

    And this is why sleep and dreaming are so important, especially if you are feeding a lot of new experiences and information to the brain. The more you learn, experience and do things differently during a day, the more sleep the brain needs to settle that information into predictive coding.

    We use this to automate our functions and behavior. The more we do something, the more we automate it as the prediction becomes better. The reason we don't think about how we ride bikes is because the predictions are automated, we don't need to.

    Getting better at something is therefore a process of automation. Which can also have the negative effect of automating bad information into the process.

    Which is a good explanation for our cognitive biases becoming more rigid the more we focus on just information that aligns with what we already know. And why broadening our knowledge is key to becoming truly wise.
  • Donald Trump (All General Trump Conversations Here)
    Make no mistake: Democracy is on the ballot. This is not Democratic Party hyperbole. If Trump were not to loose, the USA will be managed by rich oligarchs.Wayfarer

    US democracy has been eroding for a long time now. But the people don't really care until the sledgehammer of reality hits them in the face.

    Autocrats gonna autocrat, lobbyists will lobby, billionaires gonna billionaire.

    The problem always boils down to the people ignoring society eroding into a worse state. I'm sick and tired of constantly hearing about the "bad politicians", the corruption and the corporate affairs. Because the people are actually the ones to blame. Democracy requires the people to take care of it, to nurture it and heal it when it's down. A population that just ignores that, that doesn't care to educate itself, that lets itself be shuffled around like stupid sheep and does deeds ordered by whoever's in power, regardless of how despicable the act is... deserves the broken society that eventually ends up on its doorstep.

    If Democrats are so sure that Trump and the modern Republican Party have been infiltrated by fascists, that democracy is threatened, that the constitution is threatened, then what exactly are they doing about it?

    Democracy cannot win against brainwashing despots. People are less rational than they think and will be herded by those who know how to manipulate.

    Democracy cannot win against the manipulators, because it relies on honesty and truth, which is easy to get rid of with the right technology and rhetoric.

    I've said this before, the US does not have any protection for its democracy. It's been a patchwork of convoluted bureaucracy for so long that no one knows how to install protections from despots and corruption.

    Even when someone like Trump does things that in any previous political era would have led to near-total political and societal ostracism, it just makes him stronger. And the system ends up unable to get rid of him for the sake of keeping democracy healthy and away from demagogic actors.

    The US democracy needs to get away from this celebrity politics. It needs to focus on competence over personality. That the majority of the US population is so focused on the personality traits of a president should be all the warning sign that's needed to conclude that the US isn't a real democracy; it's a damn television show. It's reality TV, not politics.

    And people in the world actually die because of it. By the actions of presidents who are unfit. By actions that are obscured by this focus on personality over policy.

    I'm starting to believe that what the US needs, what the population really needs... is a sledgehammer to the face. A total collapse of the nation in which the true horrors of not caring for the health of democracy come into view. Where the people crash right into that reality.

    Because when a system is so broken that it cannot protect against potential despots, all it can do is collapse.

    It's like a broken building held up by pins, where people just move the pins around to wherever the house starts to lean. It doesn't matter how many new pins are introduced; it will eventually collapse if the structure isn't replaced and improved.
  • Where is AI heading?
    Just like with alchemy, people could forge metals well and make tools, weapons and armour, but we aren't reading those antique or medieval scriptures from alchemy to get any actually insights today. Yes, you can have the attitude of an engineer who is totally satisfied if the contraption made simply works. It works, so who cares how it works.ssu

    I'd rather look at it as science being in its infancy as a proper procedure. Back then, there were many more experiments that led to theories; now we put forth theories that we then put to the test.

    The foundation is the same: not everything goes from thought to manifested reality in experiment or engineering. Sometimes, many times, it goes from "huh, that's odd" to later answers as to why.

    Well, this is an site for philosophy, so people aren't satisfied if you just throw various things together and have no idea just why it works.ssu

    Even as philosophers we have to acknowledge when there's too little information and data to conclude anything with solid premises. We can speculate and we can point to what's partially experimentally validated and go from there. However, the problem I see is that most thinkers aren't really looking at the partial pieces and including them in their ideas.

    As far as I see it, emergence has enough support to be relevant and predictive coding enough validity that it has to be addressed for any philosophical concept about the mind.

    In essence, we have enough scientific foundation that can't be ignored when forming philosophical arguments about how our mind works.

    What other way could consciousness become to exist than from emergence? I think our logical system here is one problem as we start from a definition and duality of "being conscious" and "unconscious". There's no reasoning just why something as consciousness could or should be defined in a simple on/off way. Then also materialism still has a stranglehold in the way we think about existence, hence it's very difficult for us to model consciousness. If we just think of the World as particles in movement, not easy to go from that to a scientific theory and an accurate model of consciousness.ssu

    Emergence is fundamentally a materialistic concept, but it differs in that you cannot simply see the cogs working towards a direct deterministic result. Instead, emergence models consciousness on the same basis as everything else in nature and reality: an increase of chaos that at a certain point reaches a tipping point, and emergent complexity appears. All over nature, we see systems that can be categorized in defined ways up to a certain scale, where they become something else that expresses singular behaviors. Like, for instance, states of matter.

    The level of how advanced a complexity is depends on underlying levels. A state of matter is not as complex as an ecosystem of different states of matter forming a new emergent system, as an example.

    Fundamentally, I'm approaching this topic in the way of detaching any assumed ideas about consciousness that we've formed in culture and language, and instead look at consciousness as being part of the same natural system as everything else, without any bias towards ourselves and our human ego in experiencing it. We tend to attribute a mythical status to consciousness, tainting our ability to look at it as any other system in nature. But when we realize that consciousness is just part of the same physical reality as everything else, and really accept that "banality" of it, then it frees us up to think more clearly about what that entails.

    The questions then become: why does this system produce this effect? What is it that pushed our evolution towards this ability? What was the evolutionary purpose that formed it, and is the way we use consciousness part of that evolutionary drive or a byproduct of a more fundamental cognitive purpose?

    My idea is the latter. Evolution has gifted us a system that was supposed to only be a highly advanced predictive "algorithm" for the purpose of navigating nature in more adaptable ways than having to wait generations in order to reprogram instinctual reactions and behaviors.

    It may be that the reason mostly mammals have shown signs of higher cognitive abilities is that it was necessary to form evolutionary functions of adaptability after the asteroid killed the dinosaurs. For animals to survive, evolution leaned towards forming organisms that were able not just to adapt over generations, but to adapt to the day-to-day dangers of the post-asteroid environment. The evolutionary branches of these cognitive abilities continued: the more advanced these abilities to predict became, the better the species survived. Evolution formed a new bias that survivability gravitated towards.

    Eventually the predictive function became so advanced that it layered many predictions on top of each other, forming a foundation for advanced planning and advanced navigation for hunting, finding shelter, procreation and expanding tribes.

    This spread of different levels of cognitive behavior checks out when comparing our basic psychology with the rest of the animal kingdom. Even mushrooms show signs of rudimentary cognitive abilities, so there is enough evidence to say that advanced cognitive abilities are evolutionarily preferable to develop.

    But the way Homo sapiens has used consciousness is a byproduct of the basic functional reason we have consciousness. The level of complexity in prediction that it formed made us able to correlate different predictions and regulate emotion around them. And so we began to conceptualize highly advanced and expanded predictive models of our surroundings, for navigation and social structures, and it drove our need to understand how things function in order to predict advanced systems. Our drive to explain why something happens formed extremely complicated internalized scenarios in the form of religious beliefs, which then transformed into scientific thinking as we matured as an intelligent species.

    Our consciousness, and how we use it, is basically a fundamental system that produced highly complex consequences but is still fundamentally basic in its function. Like any other system in the universe that is fundamentally simple, but where the results are extremely varied and systematically complex in themselves.

    A form of rare emergent behavior of fundamental organic physical processes.

    Therefore it's understandable why it's hard to model consciousness: it's not one single thing, but rather a process over different levels of emergent complexity that in turn creates byproduct results which seemingly do not directly correlate with the basic function.

    So the fault might be that we view consciousness from the high-level complexity down, or try to materialistically view it from the bottom up towards complexity, when it may require a much more holistic view: many seemingly unrelated parts and systems forming the emergent behavior and, as a whole, producing this result.

    I think our (present) view of mathematics is the real problem: we focus on the computable. Yet not everything in mathematics is computable. This limited view is in my view best seen that we start as the basis for everything from the natural numbers, a number system. Thus immediately we have the problem with infinity (and the infinitely small). Hence we take infinity as an axiom and declare Cauchy sequences as the solution to our philosophical problems. Math is likely far more than this.ssu

    I don't really see the problem you describe. Mathematics functions to describe nature and our reality. Infinity is even included in mathematics and is a computable part of equations. We can also see how infinity usually turns up in physical reality where spacetime essentially breaks down. So in essence our math works within the reality in which it is calculated, and can in some cases even extend calculations to models that deal with reality beyond our own. More often than not, the problems in computation are not due to math being incomplete, but because we don't have enough means to compute. As our minds have reached their limit to compute, we use computers, but those are limited by their computing power. We are essentially limited by the speed of our systems, not by math itself.
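
    For what it's worth, here is the standard textbook way infinity is handled without ever being "reached" (nothing specific to this thread, just the usual definitions):

```latex
% A Cauchy sequence pins down a limit using only finite comparisons:
\forall \varepsilon > 0 \;\; \exists N \;\; \forall m, n > N : \; |a_m - a_n| < \varepsilon

% And an infinite sum can still have a perfectly computable finite value:
\sum_{n=0}^{\infty} \frac{1}{2^{n}} \;=\; \lim_{N \to \infty} \sum_{n=0}^{N} \frac{1}{2^{n}} \;=\; 2
```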

    But the machines we've built haven't emerged as living organisms have, even if they are made from materials from nature. A notable difference.ssu

    Our machines still operate on physical laws. We build them to operate on these laws. Living organisms, in comparison, formed and evolved to operate on the same physical laws. The only difference is that one grows into being while the other is calculated into existence. I see no notable difference, other than our machines being rudimentary in comparison to the organic, since we're trying to understand all the parts, while the organic forms from a system in which all parts develop in reaction to the previous ones.

    Actually, in engineering today it's common to use the same methods as evolution rather than trying to make things from scratch. In aerodynamics, iterative designs form out of the conditions rather than being designed by a human. That way they reach the optimal function within their dedicated space of operation. So we are already using emergent complexity in building machines.

    And with the AI models we have, we're doing it with software as well. The reason robotics has taken a giant leap today is evolutionary iteration of behaviors rather than trying to program movement.

    It becomes obvious that "growing" forth knowledge in evolutionary ways is much more powerful than trying to simply design something.
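
    A minimal sketch of the kind of evolutionary search meant here (a generic toy, not any particular engineering tool): random variation plus selection "grows" a solution that nobody designed directly. The fitness function and all the numbers are placeholders.

```python
import random

random.seed(0)

def fitness(x):
    # Stand-in for "how well the design performs" (drag, error, cost...).
    # For the toy, the optimum is simply x = 3.
    return -(x - 3.0) ** 2

# Start from a random population of candidate "designs".
population = [random.uniform(-10, 10) for _ in range(20)]

for generation in range(100):
    # Selection: keep the better half.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # Variation: each survivor produces a slightly mutated offspring.
    offspring = [x + random.gauss(0, 0.5) for x in survivors]
    population = survivors + offspring

best = max(population, key=fitness)
print(f"best design after 100 generations: {best:.3f}")  # close to 3
```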

    In doing so... aren't we transitioning into "emergent machines" as our need for complex operation increases? Much like all other systems in nature and the universe?

    A big if. That if can be still an "if" like for the alchemists with their attempts to make gold, which comes down basically to mimicking that supernova nucleosynthesis (that would be less costly than conventional mining or the mining bottom of the sea or asteroids etc).ssu

    It is not impossible. Our brain isn't detached from natural processes; everything we have in our skull is a composition of matter and biased functions that produce the result that is our consciousness. If we replicated all of that perfectly, maybe even requiring us to "grow" it into existence, or to simulate that growth process, we would eventually end up with a perfect replica.

    The philosophical question is not whether we can do it today... it's whether we can do it at all. And there's nothing that says we can't. We've already replicated much of what exists in the universe, even producing matter that might be impossible to form elsewhere, just because we know how neutrons, protons and the atom work.

    The only thing I see is that we attribute the mystical onto our consciousness again, assuming it is so complex that we will never be able to see its composition and function. But we've done things with physics and nature in our modern time that would be considered magic by previous generations in history.

    All I see is a defense mechanism. People don't want to know how we work, because when we do, we dispel the notion of a divine soul. Just as people have suffered existentially from the loss of religious belief in favor of scientific explanations, so will they, maybe even more, from the knowledge of how we function. So people defend against it and need the comfort of us never being able to explain our consciousness.

    It happens consciously or unconsciously, but it is a vast abyss for people, and staring into it makes some go mad, as it's a feedback loop of ideas: the mind being able to understand itself fully. That process can break it.

    Exactly. It cannot do anything outside the basics of operation, as you put it. That's the problem. An entity understanding and conscious of it's operating rules, can do something else. A Turing Machine following algorithms cannot do this.ssu

    I don't think you understood how I explained algorithms. The "algorithms" are no different, in what they essentially mean, from our own parts guiding our consciousness. As I described above about how consciousness probably formed, our consciousness is fundamentally basic, operating on basic algorithms of prediction models. Very similar to our current AI models, but much more advanced in how it changes during operation.

    My point is that you don't need a complex system at its foundation. You need a chaotic system guided by simple rules, out of which complex emergent behaviors can form. How we see our consciousness today is more likely only a byproduct of these basic functions and operations, and so if an AI model operates on similar basics it may form similar emergent byproduct operations.
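
    Conway's Game of Life is the stock illustration of this (a textbook example, not mine): a couple of counting rules per cell, yet gliders and other self-sustaining structures emerge that the rules never mention.

```python
from collections import Counter

def step(live):
    # Count, for every cell, how many live neighbours it has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours, survival on 2 or 3 -- that's the whole rule set.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": five live cells that, under these rules alone, walk across the grid.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(8):
    cells = step(cells)
print(sorted(cells))  # the same five-cell shape, shifted diagonally by two cells
```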

    We do have free will. Laplacian determinism is logically false. We are part of the universe the hence idea of Laplacian determinism is wrong even if the universe is deterministic and Einstein's model of a block universe is correct.ssu

    No, we do not have free will. The properties of our universe and the non-deterministic properties of quantum mechanics do not change the operation of our consciousness. Even random pulls of quantum randomness within our brains are not enough to affect our deterministic choices. Humans have a tendency to attribute to our ego more abilities than it has. We are still a rudimentary consciousness that operates on prediction, and thus we choose based on deterministic events in nature. It is a human arrogance, akin to religious belief, that drives us to attribute free will to ourselves in the sense it's usually used. The randomness we see in quantum mechanics does not counteract deterministic macro events. Everything gravitates towards deterministic outcomes, in which any deviant random event in quantum mechanics ends up too weak to affect the macro. Quantum mechanics is probabilistic, but it's false to think that this probability enables novel random events outside of the most probable outcomes; the scales at which such deviant random events happen are so small that even the slightest interaction erases them and forms a bias towards the most probable.

    Our consciousness isn't "hacking" our choices beyond this probabilistic behavior and even if it were to, it would not be enough to form large scale conscious decisions that exist unrelated to any events that affect our consciousness.

    Good description. Being a good prediction machine makes one fit, but being fit isn't necessarily critical to a successful AI, at least not in the short term. Should development of AI be guided by a principle of creating a better prediction machine?noAxioms

    I think the way to successful AI, or rather to an AI that is able to think for itself and experience self-reflection, requires it to "grow" into existence. We're tapping into this with our training operations, but we need more guidelines for it to follow in order to create the same feedback loop that our consciousness has to control our hallucination of experience. We essentially hallucinate reality, our senses verify it in turn, and this goes on in a constant loop that grounds us. We need to replicate that in a way that constantly updates the system.

    Other than that, we are seeing seeds of consciousness, as these models already operate on prediction. They try to predict information based on memory and training data, but they don't have any guiding principles for why they should predict something in a certain way. If we function on the idea that we need to predict a possible danger on the other side of a hill when out hunting, that guides us to predict possible dangers; we use our memory and stored information to predict the likelihood of there being danger beyond the hill. The guiding principles are forces like survival driving our emotional reaction to start predicting, and our experience driving the confidence in that prediction. The question is how we can give machines similar guiding principles to guide their predictions.

    Right now we are the ones guiding them with our prompts, and thus there's neither an internal input for that predictive reasoning nor an external consequence after it.

    I'd say that we already have the foundation of predictive thinking built into these models. For instance, the o1 model already shows significant reasoning ability compared to previous models, but that's only because of the guiding principles it's built around. It still uses the same basic predictive model as the 4o model.

    The two are not mutually exclusive. It can be both.noAxioms

    Yes. The only thing that truly separates the organic entity from the mechanical replica is how we as humans categorize. In the eyes of the universe, they're the same thing.
  • Can this headline be answered by the word "no"?
    Can this headline be answered by the word "no"?

    NYEOS

    or

    YNEOS

    Is Hinchliffe's Rule True?
  • Where is AI heading?
    Yet understanding why something works is crucial. And many times even our understanding can be false, something which modern science humbly and smartly accepts by only talking of scientific theories, not scientific laws. We being wrong about major underlying issues doesn't naturally prevent us innovative use of something.

    Just look how long people believed fire being one of the basic elements, not a chemical reaction, combustion. How long have we've been able to create fire before modern chemistry? A long time. In fact, our understanding has changed so much that we've even made the separation between our modern knowledge, chemistry, from the preceding endeavor, alchemy.

    Now when we have difficulties in explaining something, disagreements just what the crucial terms mean, we obviously have still more to understand that we know. When things like intelligence, consciousness or even learning are so difficult, it's obvious that there's a lot more to discover. Yet to tell just why a combustion engine works is easy and we'll not get entangled into philosophical debates. Not as easily, at least.
    ssu

    It's important, but not needed for creating a superintelligence. We might only need to put the initial state in place and run the operation, observing the superintelligence evolve through the system without us understanding exactly why it happens or how it happens.

    As per other arguments I've made in philosophies of consciousness, I lean most towards emergence theories: that advanced features and events are consequences of chaotic processes forming emergent complexities. Why they happen is not yet fully understood, but we see these behaviors everywhere in nature and physics.

    The question is whether emergent behaviors arise from purely chaotic systems or whether there are controllable conditions that can be adjusted to produce particular emergent behaviors. I lean towards the latter, since mathematical principles in physics, such as the cosmological constant and the golden ratio, seem to provide tipping points at which emergent behaviors occur.

    And if that's true in physics, I'd generally consider nature overall operating under similar basics, including the formation of consciousness.

    Replicating that in synthetic form means trial and error on the initial states in order to find the emergent behavior that ends up being the formation of a thinking mind.

    But it would not need us to fully understand why it happens.
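
    To make "controllable conditions tipping a system into different emergent behavior" a bit more concrete, here's a toy Python sketch using the logistic map. It's only an illustration of parameter-driven emergence, not a claim about consciousness, and the parameter values are the standard textbook ones.

    # Toy sketch: the logistic map x_{n+1} = r * x_n * (1 - x_n).
    # The same simple rule settles, oscillates or turns chaotic depending only
    # on the control parameter r - a change in emergent behavior driven by a
    # tunable condition rather than by any new mechanism.

    def logistic_tail(r: float, x0: float = 0.2, warmup: int = 1000, keep: int = 6):
        x = x0
        for _ in range(warmup):
            x = r * x * (1 - x)
        tail = []
        for _ in range(keep):
            x = r * x * (1 - x)
            tail.append(round(x, 4))
        return tail

    for r in (2.8, 3.2, 3.9):
        print(r, logistic_tail(r))
    # r = 2.8 -> settles on a single fixed point
    # r = 3.2 -> oscillates between two values
    # r = 3.9 -> no repeating pattern (chaotic regime)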

    In a similar way we could describe us human being mechanical machines as Anthropic mechanism defines us. That too works in many cases, actually. But we can see the obvious differences with us and mechanical machines. We even separate the digital machines that process data are different from mechanical machines. But it was all too natural in the 17th Century to use that insight of the present physics to describe things from the starting point of a clockwork universe.ssu

    Everything is nature. Everything operates under physical laws. What is a machine compared to an organic machine with the same function? A mechanically built simulation of an organic function that operates under the same conditions of physical forces.

    If we were able to mechanically replicate the exact operation of every physical part of our brain, mind and chemistry, did we create a machine or is it indistinguishable from the real organic thing?

    Where is the line drawn? It's easy to draw for now, but philosophically, where should it be drawn?

    The spiritually inclined arbitrarily object to the notion of us being the same as such a machine, but there's no rational line that can be drawn.

    Physical reality is shared between machines and organic beings, and the closer each gets to the other's operation and behavior, the less of a line can be drawn to distinguish between the two.

    Matter is matter.

    When you just follow algorithms, you cannot create something new which isn't linked to the algorithms that you follow. What is lacking is the innovative response: first to understand that here's my algorithms, they seem not to be working so well, so I'll try something new is in my view the problem. You cannot program a computer to "do something else", it has to have guidelines/an algorithm just how to act to when ordered to "do something else".ssu

    The algorithms need to form the basics of operation, not the direction of movement. Meaning, algorithms that inform "weights" to which a behavior gravitates.

    We are no different. Our genes and our chemical processes determine how we behave. A balanced person, in that physical regard, will operate within the boundaries of these "algorithms" of programming we all have. We try to fight against it, but mostly we're slaves to this programming whether we like it or believe it or not. Otherwise we would just be able to turn off our sexuality, our anxiety, our anger and sadness, but we can't. Trying to do so creates disturbed personalities, and if the chemical balance or the genes are damaged or faulty, we can get divergent minds or, in the worst cases, deeply disturbed minds and mental health issues that fundamentally block normal operation.

    We are still able to operate with an illusion of free will within these boundaries. The same goes for a synthetic intelligence: it needs an algorithm that guides behavior and operation, but that enables free operation within those boundaries.

    All physical processes, in physical reality, are only able to operate within the boundaries of something. If there were no boundaries, there would be nothing holding reality together in our dimensional soup of existence. Without boundaries, the matter of my body would just merge with the matter of everything else around me.

    Freedom is only possible within a boundary that defines where that freedom can exist, and in relation to what.
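
    A minimal way to picture "free operation within fixed boundaries" in code, with made-up action names and weights: the programming only fixes the distribution that behavior gravitates toward, while each individual choice stays open.

    import random

    # Hypothetical sketch: the "algorithm" fixes the weights behavior gravitates
    # toward; each individual choice is free, but only within that boundary.
    BEHAVIORAL_WEIGHTS = {"explore": 0.5, "rest": 0.3, "flee": 0.2}

    def act() -> str:
        actions = list(BEHAVIORAL_WEIGHTS)
        weights = list(BEHAVIORAL_WEIGHTS.values())
        return random.choices(actions, weights=weights, k=1)[0]

    print([act() for _ in range(10)])  # varied choices, yet always inside the boundary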
  • Where is AI heading?
    I think the major problem is that our understanding is limited to the machines that we can create and the logic that we use when creating things like neural networks etc. However we assume our computers/programs are learning and not acting anymore as "ordinary computers", in the end it's controlled by program/algorithm. Living organisms haven't evolved in the same way as our machines.ssu

    But we invent things all the time that utilize properties of physics we're not yet fully able to explain. Some of the properties and emergent effects of neural networks are unknown to us because we can't trace the causal chains that produce a certain effect; the complexity is mathematically astronomical.

    To say that we can only create something on par with the limits of our knowledge and thinking is not true, both because of these complexities and because we've accidentally invented things throughout history and formed new understandings through those inventions. It's not always a causal line from theory to practice; sometimes we invent something that in turn informs a theory.

    And in scientific work on understanding the mind, part of the research into neural networks has been feeding knowledge back into theories of the mind. I remember proposing something like this back when AI started to take off, and as it happens, research in this field has since formed similar theories about the mind. The most up-to-date of these is "predictive coding".

    The concept I had, and that has recently found support in science, is that our brains are mostly just prediction machines. There is a constantly running prediction that is, in real time, verified against our senses, which grounds it into a stable consistency and an ability to navigate nature. We essentially hallucinate all the time, but our senses ground that hallucination. When we take something like psychedelic drugs, that grounding function is severed, the verification slips away, and we hallucinate in the usual sense of the word. It's also why dreams occur as they do: the only thing verifying our hallucinations then is the memories we already have, creating a feedback loop that can't be grounded. But such dreams are necessary in order to fine-tune and calibrate our prediction and verification loop.

    So, in essence, it might be that we are not all that different from how these AI models operate. The missing aspect is the real-time nature of the verification and adaptation. What we've created with these AI models is basically a locked and frozen version of our mind, trained on material that forms a memory bank, but a memory bank that isn't updating, with no verification method keeping it grounded. The verification methods we do use come in the form of its processing algorithms; basically, the models are told how to ground their answers, which constantly risks hallucinations.

    The solution would be to solve the real-time issue. These models need a fluid training function, enabling them to keep training their own foundational data as a memory function in real time, while a sensory grounding function keeps their outputs logical and factual. With the amount of computing power needed to train models today, I'm not sure how this is supposed to be possible without a tenfold or greater improvement in computing power, but if we're to create a truly human-like intelligence, it would need to be able to change itself on the fly, move away from pre-established algorithmic boundaries and locked training data, and receive a stream of reality-verified sensory data to ground it.
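
    As a bare-bones sketch of the kind of loop I'm describing, here's a toy Python version where the "model" is just one running estimate; read_sensor() is a stand-in for whatever grounding signal a real system would have, and the learning rate is arbitrary.

    import random

    # Toy predict-verify-update loop: predict first, let the "senses" speak,
    # then use the prediction error to keep the running estimate grounded.

    def read_sensor() -> float:
        return 20.0 + random.gauss(0.0, 1.0)  # a noisy world hovering around 20

    def run(steps: int = 50, learning_rate: float = 0.1) -> float:
        estimate = 0.0                           # the system's current "hallucination"
        for _ in range(steps):
            prediction = estimate                # predict...
            observation = read_sensor()          # ...verify against the senses
            error = observation - prediction     # prediction error
            estimate += learning_rate * error    # ground the hallucination
        return estimate

    print(run())  # drifts toward ~20 as the loop keeps grounding itself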

    But if we want to experimentally verify how our own mind and brain works, it may very well be through these AI systems and how they operate. If the current predictive coding theory of the mind is correct, then it would be verified by simulating the exact nature of this process, which we pretty much have the blueprint for already.
  • Donald Trump (All General Trump Conversations Here)
    The fact he's not in jail yet is proof of the corruption inherent in the US system.Benkei

    Corruption can be fought by enough engagement and pushback by the people. But it seems the people don't care enough either about the corruption or Trump's actions.

    Where's the pushback? Where are the demands? The outrage? The damn revolution?

    Maybe the nation needs to update the constitution to incorporate all the improvements in democracy and government that have occurred over the last 240 years, in favor of protecting democracy and the competence of the government?

    There are ways to do this... but the people... don't... care.

    I will never criticize the people in power more than the people letting themselves be governed by such people. It's the people's responsibility to take care of a democracy for the people. Just relaxing and being lazy towards caring for it will roll out the carpet for bad actors to corrupt it over time.

    And there will always be bad actors, outliers that are absolutely power hungry and unfit to be in power. So it is the responsibility of the people to organize and make sure those outliers never seize power and to do it in time before it's too late, as well as install regulations and laws that prevent the abuse of power by a small entity.

    If there's any fundamental part of civilization that protects the people from abusers of power, it is the people themselves making sure that power is handled with care.

    If someone has the ability to seize such power as to absolutely put society under their boot, there had to be a long series of events that placed them in that situation. Events populated with unthinking, uninterested and complacent people rolling out the carpet for them until it's too late.

    There should be at least 100 million people marching against corruption and for a change to a better system. But there aren't, and there won't be until someone in power abuses that power too far.
  • Am I my body?
    I am not a soul, and I am not my brain. I am a whole, conscious, physical unit.Kurt Keefner

    With all scientific research at the moment behind it... yes.

    Like with the recently reconstructed fly brain in a computer that's been in the news: it seems the only way to "upload" this consciousness or "soul" is to copy all aspects and simulate the whole entity of the fly, in order for it to be a direct copy of its consciousness. In essence, the idea of separating out the conscious mind, the "soul", requires copying the "body" as well, so we can only copy our consciousness by copying the whole.

    Either mind is part of body, or body is part of mind. The point is the distinction needs to be made as to what the difference between these two are - IF it can be stated.Manuel

    I'd say that the problem essentially lies in how we linguistically differentiated between mind and body in a time of religion. The consequence of this echoes far into modern science, where we still treat the two as somewhat separate. But the arguments for separation come from the illusion of separation; the emotional sensation of our body being something other than our ethereal and abstract "inner self". And as we all know, our senses and our emotions are the worst foundation for rational reasoning.

    In medicine, we've seen this separation play out to the detriment of patients. On one hand, therapists and psychologists or doctors have given them prescription medicines which alter the chemicals in the brain, but that's not enough. On the other hand they've attempted to just treat the mind through rational reasoning and therapy, ignoring the effect that the chemical system has on the mind.

    Each side, ignoring the other, has failed to fully treat people with mental illness. It's only recently that we've seriously started handling both as a single treatment, treating body and mind as a whole entity.

    Because it's an ouroboros. The physical body influences the mind through its genetics, chemicals and substances; and the mind influences the body's functions. It's a closed loop that cannot be separated without severely altering the psychology of the being.

    Imagine someone with a healed serious injury to his body. It affects his personality, his opinions, his social sphere and his experience of himself. If you were to cut off his head and place it on another body (if that would work), the entirety of his consciousness would alter through this new body. All of a sudden there are functions that work that didn't before, a body shape that is different as an experience, chemicals that affect the mind based on the genetics of that body, and so on. It would alter that person's mind to the point that the original consciousness is no longer the same. We know this from what we know about how the physical body affects the mind.

    What then is the mind separated from the body, if that happens? Only an empty template; a prediction system that cannot value its predictions through chemical feedback.
  • Backroads of Science. Whadyaknow?
    And then there's this:unenlightened

    Unfortunately, that new finding has been blown out of proportion in science media. We already know that quantum mechanical processes occur in biological systems, and it's nice that there's some evidence for that happening in us, but people jumping to the conclusion that consciousness is definitely a quantum mechanical process miss that this is not proven yet. And seeing as neural networks simulate similar behaviors, it may just be that these quantum mechanical processes are necessary for neurons to function properly; a single neuron's function does not equal consciousness as far as we know today. We still seem to need the sum of all parts to produce consciousness.

    Reminds me of the recent news of a Princeton team copying a fruit fly brain, and how science influencers and media started talking about it as some cyberpunk uploading of our minds into a computer. But the problem is that just a copy isn't enough: we need to understand how the chemicals that flow through the brain affect it, and we don't yet know what the "cocktail effect" of many different chemical compounds does to our brain, so how do we simulate it well enough to effectively give the full experience of a fruit fly? And if microtubules are part of neuron function, and that quantum mechanical process isn't accounted for, the neurons might not interact with each other in the way needed for an accurate simulation.
  • Philosophy Proper
    Is that so? Got a manual?Wayfarer

    I'm referring to just the common psychology of how people reason. We don't start with factual cold logic, we tend to think creatively first and apply logic second. Kind of like letting go of all the animals and then building a fence around where these animals want to be, not where you want them to be. As a half-baked analogy.
  • Can we always trust logical reasoning?
    I've read several arguments here on the forum where people come to logical conclusion like: "Therefore, there must exist an entity, the so-called mind, that can freely decide." or "This proves that god exists" or ”Logical proof that the hard problem of consciousness is impossible to solve"

    Is it possible that with solid premises and correct logical steps, we cannot always accept the conclusion?
    Carlo Roosen

    Conclusions like "this proves that god exists" is not a conclusion out of a chain of logic, but a chain of logical fallacies.

    What about all these discussions in metaphysics & epistemology? Could it be that these topics cannot be addressed logically? Somebody must have said a few words about this already, I guess?Carlo Roosen

    The reason such discussions never end in an objective and final conclusion is that they either consist of paradoxical conclusions, or we lack enough understanding of physics or of how our mind works to conclude anything final. So it becomes more a discussion around the premises and which argument has the most valid ones, as those sometimes hint at the most probable conclusion.

    And to counter-question: is there anything better than logical reasoning for arriving at conclusions while avoiding biases and fallacies? If not that, then what could possibly get closer to anything objective, classified as truth, or most probable? I tend to see this type of questioning of logical reasoning, which leans on the fact that not everything can be summed up in a logical chain ending in a solid conclusion, as an attempt to treat that fact as evidence that logical reasoning can't be trusted.

    In the end, it mostly looks like an attempt by those who feel their opinions are trumped by logic to dispel logic as a tool of thought and reasoning, and thereby lend more validation to their illogical and arbitrary opinions by dragging the logical arguments down to some kind of pseudo-equality with the illogical ones.
  • Philosophy Proper
    o, would you consider the proper way of doing philosophy mostly conceived as with the analytic school, as philosophy proper or are we still struggling with how philosophy should be done?Shawn

    I follow the idea that philosophy is "soft science". It requires a starting point that is abstract, lacking rules and logic, creatively critical, like a stream of consciousness around a certain topic, bouncing back and forth between the specific and the holistic.

    But then it needs rigor and structure. If the ideas cannot flow down into a more concentrated logic and find a grounded state, then they have to be dismissed.

    A problem with the analytical school or similar methods can be that it demands so much initial logic that it limits how the brain finds new paths of ideas. It's one of the reasons behind Einstein's "thought labs". A place to play with ideas before solidifying them with proof, logic and math. Philosophers who get stuck in just the analytical rarely find new paths forward in their thinking.

    The problem with other methods is that they seem to harbor an inherent contempt for the analytical, so they abandon all logic and apply a kind of "anything goes", inviting all sorts of biases and fallacies.

    Most debates seem to just be about the methods rather than the subject being discussed. One interlocutor criticizing the other's way of conducting philosophy based on the above problems, and no common ground is found.

    I think the "method" needs to be formed around how our brains actually work. We do not come up with anything analytical from the get go. We form abstractions and wild, illogical concepts through creativity and only when we've reached a point of confusion do we apply rigorous analytical logic to test our ideas.

    It's only when we let go of our analytical side that we can think freely, but it's only when we apply our analytical side we can establish concepts as closer to truth.

    There are no "best schools" of thought. There's only one way our brain works and it's better to follow that and then apply the analytical tools that exist in order to present ideas to the world that has sound logic for all and not just yourself.
  • Site Rules Amendment Regarding ChatGPT and Sourcing


    Even if it's a quick side-track of the thread, let's, for the fun of it, check how well the current system handles it. Would have loved to test o1 since it's better at analysis, but it doesn't have image interpretation at the moment. (For some reason the quote code for this breaks when doing lists and such, so I don't know how to fix it.)

    check that the AI understands the way the input design functionswonderer1

    So I asked what the schematics mean:

    -------------------------

    GPT-4o:

    This schematic appears to represent an analog model of a biological synapse and neuron, typically used in neuromorphic circuits, where electronic components are used to mimic the behavior of biological neurons and synapses.

    Main Sections of the Schematic:

    1 Electronic Soma (Neuron's Body)

    • This section represents the soma of a neuron, the central part of the neuron that integrates input signals.
    • Components like capacitors (C_M), resistors (R_RU, R_B, etc.), and transistors (such as QM) likely work together to mimic the membrane potential of a biological neuron.
    • The diode-like components (like BAR43) might be used to model thresholding behavior or membrane potential clamping.
    • The V_ECH node likely represents a specific control or threshold voltage.

    2 Electronic Synapse

    • This section models the synaptic transmission, where a signal from the IN node is processed and transmitted to the neuron.
    • The synapse includes capacitors, transistors (BC848, BC857), and resistors which emulate the behavior of neurotransmitter release and postsynaptic response.
    • Long-term potentiation (LTP) and Long-term depression (LTD) mechanisms are often involved in synaptic learning and plasticity. Signals from LTP OUT and LTP IN appear to be involved in adjusting synaptic strength.
    • Diodes like BAS45 and resistors like R_LU and R_F might modulate these synaptic strength adjustments over time.

    3 Postsynaptic Neuron

    • This portion represents the receiving neuron that responds to the incoming signal from the synapse.
    • The capacitor C_M, transistor QM, and other passive components model the integration of the incoming signal to produce an output.

    4 LTP and LTD Mechanisms

    • The labeled nodes LTP IN and LTP OUT likely represent pathways for inducing Long-Term Potentiation (LTP) and Long-Term Depression (LTD), which are fundamental processes for learning and memory in biological neurons.
    • In this electronic model, these mechanisms adjust the strength of the synaptic connection, thereby altering the output in response to input patterns over time.

    Functionality in Brief:

    • Signal Input (IN): The signal enters the circuit through the electronic synapse, where the synaptic strength can be modified by LTP and LTD mechanisms.
    • Synaptic Processing: Transistors, resistors, and capacitors modulate the strength of the synapse based on the input and history of previous signals, mimicking biological synaptic plasticity.
    • Neuron Output (OUT): The processed signal is passed to the electronic neuron, which then generates an output signal based on the integrated input, similar to how a biological neuron would fire an action potential.


    Purpose:
    This circuit is an example of neuromorphic engineering, a field that seeks to emulate the information processing capabilities of the brain using analog electronics. It likely serves as part of a larger network of such circuits to model neural behavior, synaptic learning, or to implement artificial neural networks.

    --------------------

    Is it close to a good interpretation and analysis of the schematic's function?

    then ask the AI to suggest a variation which matches some criteria that the input design cannot achieve. E.g. higher accuracy, higher power, more compact. (With the specific components needed for the alternate design specified in detail.)wonderer1

    Tried to ask for higher accuracy.

    "I need you to list a variation of this that aims for higher accuracy, higher power and to be more compact. Also list the specific components needed for such alternate design, specified in detail."

    Its answer:


    ----------------------

    GPT-4o:

    To achieve a more accurate, higher power, and more compact variation of the neuromorphic circuit shown in your schematic, we would need to make several modifications. These could involve switching to more advanced and precise components, improving the layout for compactness, and using components that handle higher power. Here’s a suggested variation, along with a list of specific components for such a design:

    Design Considerations:

    1 . Accuracy Improvements:

    • Replace general-purpose transistors with low-noise, high-precision transistors.
    • Use precision resistors and capacitors with tighter tolerances to reduce error margins.
    • Consider using operational amplifiers (Op-Amps) for precise analog signal processing.

    2. Higher Power Handling:

    • Use transistors that can handle higher voltages and currents.
    • Utilize low-dropout regulators for stable power supply and noise reduction.
    • Switch to high-performance diodes capable of higher current throughput.

    3. Compactness:

    • Use surface-mount technology (SMT) components, which are smaller than through-hole components.
    • Use integrated circuits (ICs) that combine several functions into a single package.
    • Use custom ASIC (Application-Specific Integrated Circuit) designs for more compact neuromorphic processors.

    Specific Component Changes:

    1. Transistors:

    • Original Components: BC848, BC857 (General-purpose NPN/PNP transistors)

    Suggested Upgrades:
    • Low-noise, high-precision: 2N2222A (NPN) and 2N2907A (PNP) – For higher current handling and lower noise.
    • SMT Versions: BC850 (NPN) and BC860 (PNP) – Surface-mount versions for compact designs
    .

    2. Diodes:

    • Original Components: BAW56, BAS45, BAR43 (General-purpose diodes)

    Suggested Upgrades:
    • Schottky Diodes: BAT54S (dual Schottky in a small package for faster switching and lower forward voltage).
    • Power Diodes: 1N5819 – Schottky diode with higher current handling (1A) for power applications.

    3. Operational Amplifiers:

    New Addition for Accuracy:
    • OPA2140 (Precision Op-Amp): Ultra-low noise, low offset voltage, and high precision; great for synaptic weight processing and accurate integration of signals.
    • TLV9052 (Compact, Low-Power Op-Amp): For lower power applications and compact designs.

    4. Capacitors:

    • Original Components: 10nF, 47nF, 1µF, 2.2nF capacitors (General-purpose ceramics)
    Suggested Upgrades:

    • Tighter Tolerance Capacitors: Use C0G/NP0 ceramic capacitors with 1% tolerance for better precision.
    • Compact Capacitors: Replace electrolytic capacitors with MLCCs (Multi-Layer Ceramic Capacitors) like TDK or Murata’s series for compact designs.

    5. Resistors:

    • Original Components: Resistors of varying values (e.g., 20kΩ, 10kΩ, etc.)

    Suggested Upgrades:
    • Precision Resistors: Use metal film resistors with 0.1% tolerance for better accuracy, like Vishay Dale resistors.
    • SMT Resistors: For compact design, use 0603 or 0805 size surface-mount resistors.

    6. Power Regulation:

    • Original Components: General supply voltage lines labeled as VDD.

    Suggested Upgrades:
    • Low-Dropout Regulators (LDOs): For stable power supply, use LT3080 (high current, adjustable LDO) or TPS7A49 (high-precision, low-noise).
    • Switching Regulators: For more power efficiency, use LM2675 (step-down converter).

    7. Synaptic Learning and Memory (LTP/LTD Circuits):

    • Original Components: Diodes and transistors for LTP and LTD processing.

    Suggested Upgrades:
    • Flash or Ferroelectric Memory (FRAM): Use memory elements like FM24CL16B for synaptic weight storage and non-volatility, improving accuracy and long-term retention.
    • Memristors: For future technology designs, consider integrating memristors, such as HP’s memristor technology for synaptic learning and plasticity.

    8. Neuromorphic ICs:

    New Addition for Compactness:

    • Loihi 2 (Intel Neuromorphic Chip): A custom IC designed specifically for neuromorphic computing, providing higher accuracy, lower power consumption, and more compact designs compared to traditional analog circuits.
    • RASP 3.0 (Reconfigurable Analog Signal Processor): Analog processor specifically designed for compact neuromorphic systems.

    Alternate Design Schematic Outline:

    1 Input Stage:

    • Precision Op-Amps (like OPA2140) for synapse modeling and precise signal processing.
    • Schottky diodes (BAT54S) for fast switching in synaptic circuits.
    • Surface-mount transistors (BC850/BC860) for the compact layout of synaptic elements.
    2 Synaptic Plasticity:

    • Synaptic learning weights stored in non-volatile FRAM memory (e.g., FM24CL16B) for accurate long-term potentiation (LTP) and depression (LTD).
    • Memristors for compact and future-proof synaptic modeling (optional but futuristic).
    3 Neuron Soma:

    • Replace discrete components with a reconfigurable neuromorphic IC like Loihi 2 for better integration and compactness.
    4 Power Regulation:

    • Low-dropout regulators (LDOs) for noise-free, stable power, e.g., TPS7A49.
    • Use compact, efficient switching regulators like LM2675 for power management.



    Summary:
    This alternate design improves accuracy by using precision components (such as precision resistors and capacitors, low-noise transistors, and op-amps), increases power handling with better diodes and transistors, and enhances compactness using SMT components, integrated circuits, and potentially even custom neuromorphic chips. Incorporating advanced memory elements like FRAM or memristors also allows for non-volatile and adaptive learning features.

    These changes would result in a neuromorphic circuit that is more suitable for applications requiring high accuracy and compactness, such as in embedded systems, robotics, or advanced AI hardware.

    End

    -----------------------------------------------------------------------------

    As I don't know any of this it's up to you to interpret how well the 4o-model handled that schematic.

    If it's even close to doing the job, even if it's off on some things, consider what these models were incapable of just two years ago.

    With the improvement in analysis that the o1 model has shown, and if a year from now it improves by the same kind of step, it might very well be that you'll see such a tool before retirement (depending on when you retire, that is).

    As you point out, the problem is accuracy, reliability and consistency. It's that threshold that needs to be crossed before any of these can be used as a serious tool for work related tasks that require accuracy higher than what a human is consistently capable of and reliable enough to be trusted directly.

    As it is now, everything needs to be double checked. I think that any "industrial revolution" scale of societal shift will only happen once we cross that threshold. Before that we still need to have a human analyzing the result.

    Maybe the first step for using these models in a serious way is to hire a person to verify the outputs. On paper it sounds like more people need to be hired, but it also means faster processing of what was previously an entirely human task. I don't know how that would apply in your profession, but it's at least interesting to see in what ways these tools will integrate into professions.

    I don't think it's going to be the doomsday people speak of. Some jobs might disappear, like the marketing people (or rather copywriters) Baden spoke of, but in my experience copywriters are usually overpaid for what they actually do, and I see no point in not letting AI take over those tasks.

    But in general, I think that many who are scared they will get replaced will find themselves with tools that just make their working lives a bit less tedious. Less long hours with large amounts of information and data to go through and more focus on the better aspects of human input and engineering.
  • Site Rules Amendment Regarding ChatGPT and Sourcing
    Eh?

    Never mind.
    bongo fury

    Then care to explain what you really mean? Because the general official criticism of plagiarism against LLMs is as you describe: attributing plagiarism to generative AI without proving that the system's function can actually be defined by such an action, especially in the context of how humans themselves operate, and then applying it all as a criticism of the system itself rather than of the use of the system.
  • Site Rules Amendment Regarding ChatGPT and Sourcing
    Electronic schematics, so something like:wonderer1

    And what is it that you would like an AI to do with such schematics?
  • Site Rules Amendment Regarding ChatGPT and Sourcing
    Can you, or can't you?bongo fury

    Because it is a nonsense request that fails the burden of proof. You claim plagiarism, so you have to prove plagiarism beyond the doubts I raised. That's proper philosophical scrutiny.

    You're waffling. I'm talking about a common sense understanding of plagiarism as warned about in typical forum guidelines.bongo fury

    That's not the issue here. The issue is that you attribute all use of LLMs to plagiarism. Or what is your point?

    You don't say.bongo fury

    What's with the arrogant tone?

    Asking anything or anybody for advice on formulating and expressing ideas, or on refining and redrafting a text, is perfectly clearly plagiarism if unacknowledged.bongo fury

    So you mean that earlier grammar software that did not use AI is also plagiarism, since it could reshape text far beyond a spell check? Or that when an author collaborates with an editor who makes suggestions and edits, the author is plagiarizing as well? Or when an author talks about his ideas with friends, family or other people before writing?

    In that case, you either don't really know how most officially published human text is actually produced, or you would have to classify almost all released works and texts as plagiarism.

    I don't think your premise there is strong enough.

    Apparently my definition leaves you without a counter example, so no I don't.bongo fury

    What counter example? Can you be more vague? I'm still reading hoping to hear a strong premise in support of your conclusion.

    Not while the cases are clear.bongo fury

    What cases?

    That human is mortified, and hopes not to be judged deliberate or negligent in their error.bongo fury

    Is your rhetorical strategy to try and ridicule the question? I'm afraid that won't work very well.

    Not at all then.bongo fury

    And what does that entail regarding your definition?

    Only compounding the crime of failing to acknowledge their input.bongo fury

    What crime do you speak of? The same crime as an artist pulling from sources, cutting out inspirations and pinning them around their studio? Concept artists painting over photographs? Authors lifting entire events from other works and putting them into new contexts? Filmmakers copying compositions, camera moves and VFX work? Composers using whole segments that stay just outside the number of notes established by court cases?

    Yes, such crimes are all over human creativity, and no one cares until it's blatant or obvious to the common man or a judge. But in AI, the same process of remixing inputs occurs. It's like a person with photographic memory visiting a library or an art museum, able to retain, in fine detail, every word and stroke he's seen. Should we ban people with photographic memory?

    Can I photograph art and use it privately for what I create, as long as the creation isn't an obvious copy?

    As someone with a lot of insight into artistic areas of creativity who also understands the technology, I find the amount of misconception in the public debate around AI models remarkable. There's a lot of hyperbolic use of words related to crime and punishment going around, obscuring the nuances.

    Or even in their present condition of (rather drastic) fallibility, let them join in. But properly acknowledged, and properly scrutinized. Is my point.bongo fury

    Not everyone who helps an artist is acknowledged, even though all involved are humans.

    Could be either, of course.bongo fury

    Only accidental, a form that is constantly being chipped away at in these models. An AI cannot intentionally plagiarize without having agency of its own.

    So there are clear cases? Or not?bongo fury

    I asked a question. Would you accuse the computer of plagiarism because the human copy/pasted something?
  • Site Rules Amendment Regarding ChatGPT and Sourcing
    Any competent and reflective practitioner of English will define plagiarism as something like: deliberate or negligent misattribution of authorship.

    Authorship tends to have unclear cases as well as clear ones. So does deliberateness or negligence with respect to this criterion.

    But show me a case of unacknowledged chat-bot-assisted writing that isn't a perfectly clear case of plagiarism by this definition?
    bongo fury

    How does that lead to such a clear conclusion?

    You're talking more about the philosophy of authorship than specifically about plagiarism as a legal phenomenon. And it's in court that such definitions will find their final form.

    The range of how LLMs are used is broader than this. Someone using one to generate an entire text might not be the author, seeing as copyright law has concluded that generated images cannot have copyright attributed to the prompt engineer. But LLMs are a bit different, since there is no clear dividing line between prompt and output if the prompt engineer includes their own text as part of the output. Just asking the LLM to do all the work is a clear case, but that is not the best use of LLMs for text generation, and not really how they're used by those who actually treat them as a tool.

    You need to define which intended use case of an LLM you are attributing plagiarism to. You would also need a comparison with how humans process available information into their own text, and when a person steps over into plagiarism. What happens when a human accidentally reproduces exact sentences from memory, without even knowing they do so? How does that differ?

    Add to that the improvement of LLMs, and the future scenario in which LLMs have become better than humans at not copying training data directly and always provide citations when referencing direct information. The fact remains that legal battles over this will end up demanding a defined difference between how humans and LLMs process and produce text. In the end it may just be ruled that we shouldn't allow LLMs because... they're machines mimicking human processes. At which point we need to define where that line is drawn as well, ending up in luddite territory of anti-progress around a lot of different technologies, rather than setting up clear regulations that can function alongside the benefits of AI models. And if the systems start to operate better than humans at avoiding plagiarism, and using these models as assistive tools might even help avoid accidental plagiarism, what then?

    Because these models have much broader use cases than some loser wanting to pose as an educated person online or trick their teacher. If the models are banned over loosely defined parameters in law, that may also stifle use cases like medical research, in which the same systems are used for data analysis, speeding up that research so much that something that previously took years takes days.

    So the textbook definitions aren't as important as how well they hold up in court, and the critics of AI risk blowing their entire line of attack if they put too much faith in the definitions being "crystal clear". There are plenty of copyright and plagiarism cases throughout history where people declared their interpretation crystal clear, only for it to be turned on its head by the complexity of reality.

    So, instead, show me a clear case of plagiarism that can be representative of the entire LLM technology and all its use cases, and that remains a defined operation across any improvements going forward.

    As far as I can see, just as a hammer can be a tool to build with, it can also destroy, or even be a murder weapon. Attributing plagiarism to the LLMs themselves is a losing battle, especially as the models improve over time.

    In the end, the plagiarism will be attributed to the human, not the machine. Or should we accuse the computer of plagiarism for the use of CTRL+C, CTRL+V, and not the human inputting that intention?

    I'd want an AI that can take schematic diagrams as input, and produce schematics as output, before I could see an AI as highly useful for my work.wonderer1

    What types of schematic diagrams do you mean? And we're still early in development. Programming new software with specific uses of AI systems seems to require a deep understanding of the AI systems themselves. So far we've mostly seen this in medical research, since they were already working with algorithmic coding. But I'll bet we'll see tailored software for specialized tasks soon.

    Not surprising, as marketing-speak is probably the most annoying, uninspired, and aesthetically ugly verbal trash to be imposed on the human mind up until AI LLMs offering it some competition.Baden

    I think LLMs are already more capable of producing marketing copy in a language that doesn't come off as out of touch with reality or tone-deaf. Copywriters for big corporations trying to "talk" to their customer base usually sound like aliens trying to communicate with the human race. That LLMs are better at finding the right tone and language to sound closer to the customers seems rather ironic.
  • Site Rules Amendment Regarding ChatGPT and Sourcing
    you cannot trust the generic end user to use it responsibly. I wish this were not true, but it is.fdrake

    Then we return to the problem of how to detect the use. The more advanced these get, the less likely it is that we can spot their use in generating posts.

    This whole thing really becomes very philosophical in nature... fitting for this forum. It all becomes a P-zombie problem for written posts: if you cannot distinguish a member's own writing from that of someone who knows how to use LLMs to mimic a user's writing perfectly, how can the rules be enforced?

    It's similar to the problem of spotting generated images and deep fakes. Since the tech advances so fast, the solution ends up being another AI being used for the purpose of analyzing if an image is generated.

    At some point we might need proper and judicious use of AI to counter posts that can't be judged being generated or not. Either by analyzing the language used, or to use it for deconstructing the merits of the argument in order to find the sources or lack of sources.

    But then we end up in a situation in which the intention is to spot the misuse of LLMs, but the method is ending up being a proper philosophical debate, using LLMs pitted against each other.

    I don't know if it's possible or not, but one of the only concepts I can think of that would properly pinpoint the use of LLMs is a timer coded into the forum, tracking how long it took to write a post. I have no idea whether this is feasible, and the only flaw would be if someone writes posts outside of the forum and then pastes them here, but I do think most members write directly in the forum in order to properly use quote tools and such.

    Point being, if the post can track how long it took to write, and whether it was formed with keystrokes within the range of how people normally write, that would be some evidence that a post was actually written rather than generated and copy-pasted into the forum.
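
    To make the idea concrete, here's a rough server-side sketch of such a check in Python. The assumption that the post editor reports composing time and a keystroke count, the function name and the thresholds are all hypothetical, and it could obviously be fooled.

    # Hypothetical sketch of a "was this typed or pasted?" heuristic. Assumes the
    # editor reports elapsed composing time (seconds) and a keystroke count along
    # with the submitted text. The thresholds are guesses, not calibrated values.

    def looks_hand_typed(text: str, seconds_composing: float, keystrokes: int) -> bool:
        if seconds_composing <= 0:
            return False
        chars = len(text)
        chars_per_minute = chars / (seconds_composing / 60)
        if chars_per_minute > 600:       # far above ordinary typing speed
            return False
        if keystrokes < 0.5 * chars:     # most of the text never passed the keyboard
            return False
        return True

    print(looks_hand_typed("a" * 1200, seconds_composing=300, keystrokes=1300))  # True
    print(looks_hand_typed("a" * 1200, seconds_composing=20, keystrokes=30))     # False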

    At least food for thought on coding new features and functions in a post-LLM world of discussions online. :chin:

    Should we have some guidelines on acceptable use of plagiarism on the forum?bongo fury

    If you mean the concept of LLMs generating plagiarism: the more I've dug into the technology and compared it to how humans generate something, the less I think we can blame LLMs for plagiarism. Much of the evidence comes down to quirks of ongoing technological development, and the arguments keep sliding into cult-like behavior online by people trying to weaponize language in order to fight AI technology. Terms like plagiarism and theft are used so haphazardly that valid points of criticism risk being easily dismissed, due to an overreliance on the terms being factual descriptions of the technology when no such definitions have actually been established.

    The overlap between how the tech operates and how humans operate in creating something makes it hard to define the boundaries reliably and properly. Similar to how some concept artists accused the early models of plagiarism while they themselves traced photographs grabbed from a Google search, which technically is a much more direct use of someone else's work without credit.

    And for text generation, the problem with LLMs usually comes down to accidental rather than intentional plagiarism. Accidental plagiarism mostly occurs when sources aren't cited properly and the sourced text ends up as part of the author's text. This often happens in academic writing and is sometimes hard to spot. But new reinforcement learning for models like o1 seems to combat these accidents better (not perfectly), and in time they might handle this even better than the majority of human writers do.

    Point being that any text written as a new sentence cannot be considered plagiarism, even if the information and context it communicates come from memorized information. Human language would become a mess if we had to double-check everything we write like that. We assume that when we write something, the processes in our brain count as enough creativity for it not to be plagiarism. Yet we have the same ability to accidentally plagiarize, even when writing normally, and we aren't aware of it until someone points it out. How do I know that what I write here hasn't been written somewhere else; some line I've read in the past that I'm accidentally retyping because my memory formed it around the same contextual information I intend to communicate?

    We source others' information constantly. Anything we hear, see, read or even taste becomes part of a pool of data we use to create something new, a remix of it all. The problem with summarizing all AI models as plagiarism or theft is that the terms aren't used properly within the context of the criticism. It's the cart before the horse: people want to criticize and take down the AI models first, and try to attach a reason as an afterthought. For the terms to apply correctly, they need a contextual definition of how they actually apply to AI models, and there isn't one, since people only use them haphazardly. By the definitions we use for judging human output, the AI models would most likely be freed of plagiarism and theft accusations rather than found guilty, especially since we don't attribute a person's single act of plagiarism to all the text they have ever written and every new sentence they will ever write.

    It is semantics, but semantics are always important when defining law and morality in uncharted territory like this. What it boils down to is that until the criticism of AI models finds solid philosophical ground, maybe even as a newly defined concept of how humans and AI models will co-exist legally and morally going forward, all the AI criticism just ends up as "I don't like AI, so I want it banned". It's okay not to like it, and it's okay to fear the abuse it can be put to, but "I don't like it" has never been enough to properly ban a technology or to help structure a working foundation and boundary for it. It ends up as just another luddite argument to ban machines, rather than the necessary philosophical argument for how we can co-exist with this new technology.
  • Site Rules Amendment Regarding ChatGPT and Sourcing
    You need to check what it says in the source, too. Do NOT trust its word on anything.fdrake

    Isn't this true for any source? Isn't the correct way of using any source to double-check and verify rather than treat it outright as a source of facts? If we objectively compare ChatGPT with unverified human sources, or pseudo-media tailored to look like factual sources, I find ChatGPT safer than uncritically using the online flow of information, where people rely on unverified sources that are sometimes malicious and intent on manipulating online discourse to the point of reducing truth to obscurity.

    I'm of the opinion that all sources, regardless of origin, need to be double-checked and verified in discourse, but I've not seen this kind of doubling down on other uses of sources.

    Why do we value other unverified sources that may very well be constructed with malicious intent, or by people posing as factual? Just because they look and sound like news articles or papers? I've seen blog posts used as "factual sources" without any critical attempt to dissect them before making them pillars of a conversation.

    On top of that:

    The intent of the rule is to stop people from using it to spread misinformation and from generating reams of undigested content.fdrake

    This could be a worse problem with human-generated sources that are malicious or that dress belief up as factual representation. It's an ongoing problem that a lot of what is found online is itself generated by ChatGPT, so even sourcing "real" articles could be the same as, or worse than, using GPT directly, since we don't know what prompts and intentions lie behind those seemingly "human-written" articles.

    Don't just put what someone says on the forum into it and get it to write your response. That's the kind of stuff which will eventually make us have to enforce a no-tolerance policy on it.fdrake

    This is something I strongly agree with. The laziness of using it that way really only shows that the person is here to pretend to think about different topics and to chase the emotional dopamine of winning and appearing to debate at a higher level, rather than being curious about and honestly interested in the actual subject.

    The problem is still that it's impossible to know if someone does this going forward. The more advanced these systems become, the less obvious their responses will be, especially if the one using them is good at prompt engineering, since they could just engineer away the quirks and structures of language that give a specific model away.

    On the other end, going by how good the o1 model has shown itself to be at reasoning and analysis, I also think it's not good to over-correct in all of this. There might soon come a time when these models are much more reliable in their responses than anything found online in the traditional way, especially in the context of philosophy, science and literature that can't be found on traditional academic sites, or once the models themselves are able to sift through academic sources.

    None of this is an argument against the rule, only a conversation about it and what parameters it should possess.

    I see a lot of conversations online that draw hard lines between human-generated content and LLMs, without underscoring just how bad most human sources online really are. Statistically, very little of the information produced by people online is factual, but it's still used as grounds for arguments.

    Posts should not be written by an LLM, but using something like an o1 analysis and clearly marking it as such wouldn't be much more problematic than using unverified links to blogs or other texts online.

    So I think the right sentiment about how to use LLMs is to always mark output as LLM-generated, and that such analysis or text cannot be used as the main supporting pillar of an argument, but rather as a source of a different perspective, giving clues about which direction to look in for further answers and information.

    And referencing Pierre:

    They therefore have the latent ability to burst epistemic bubbles from the outside in, as it were. But this process must be initiated by a human user willing to burst their own epistemic bubbles with some assistance by the LLM.Pierre-Normand

    It may also be good to have instructions for those who want to use LLMs. What Pierre writes here is possibly why some have a good experience with LLMs while others just generate trash. Asking an LLM to analyze something critically, including opening yourself to criticism by asking for it, produces a much more balanced output that often prompts better self-reflection, since there's no human emotion behind the generated criticism. There's no one getting angry at you and criticizing you because you said something they don't like. When prompted to be critical of your own writing, the LLM often cites material that specifically underscores the weaknesses in your argument, and it forms quite a powerful kind of ego-death in reasoning, bringing you back down to a more grounded state from the high of your own writing.

    In my own research into LLMs and testing them out, they can act as great Socratic partners for trying out ideas. And when I get stuck on certain concepts, they often help break those concepts down and show the problems with my own biases and fallacies in reasoning.
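
    For anyone curious what such a prompt can look like in practice, here's a minimal sketch using the OpenAI Python client; it assumes an API key is configured in the environment, and the model name and the exact wording of the system prompt are just examples, not recommendations.

    # Minimal "critique my draft" sketch. Assumes the openai package is installed
    # and an API key is available in the environment.
    from openai import OpenAI

    client = OpenAI()
    draft = "My draft argument goes here..."

    response = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[
            {"role": "system", "content": (
                "Act as a critical but fair reviewer. Identify the weakest premises, "
                "possible biases and fallacies in the text, and point to the kind of "
                "sources that would challenge it. Do not flatter the author."
            )},
            {"role": "user", "content": draft},
        ],
    )
    print(response.choices[0].message.content)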

    So, while the rules point out how not to use LLMs, we can't ignore that LLMs will keep evolving and being used more and more, so tips on how to use them, and for what, could also benefit the forum: things like what type of questions to ask and how to ask them, and how to take what you've written and ask the LLM questions about your own arguments that improve your understanding of your own opinions and ideas before posting.

    There's so much polarization and such extremely binary thinking around AI today that I think the nuances get lost. It's either "ban it" or "let it loose", rather than banning certain uses and finding the uses that are beneficial.

    Since LLMs will only grow in use and popularity, it might even be important to have pinned information about "how to use LLMs", with clarifications of what not to use them for, as well as tips on which models are preferred and how to prompt them correctly to get balanced outputs that don't play into the user's own biases and bubbles.

    Telling people not to use a tool in a certain way is just as important as telling them how to use it correctly.
  • Site Rules Amendment Regarding ChatGPT and Sourcing
    You can still use and reference ChatGPT output, just don't use it as an authoritative source.fdrake

    Do I interpret it correctly that we can use ChatGPT in arguments as long as we mark it as a ChatGPT reference? Like, supporting reasoning, but not as a factual source?

    In essence, as a tool, not as a source of facts.
  • Ukraine Crisis
    You do understand that then denying atrocities that have happened is totally normal for this neutral platformssu

    How is that an answer to what I clarified?

    And is something wrong with that?ssu

    Why would it be? Why do you interpret it as wrong when I've held up this forum as a good example of neutral praxis that would conform to the same ideals a UN-based social media platform would?

    This is totally true. This is the weird and unfortunate reality of social media. At worst it might be that we start to change even our real world exchanges with other people into the kind that are so popular in the social media, because people don't care so much anymore even if they flame in their own name.ssu

    It wouldn't be if the algorithms didn't cater to conflict and negativity. Research has concluded that such behavior drives attention and interaction, which is key to ad revenue. Dropping the model where users are the product sold to the real customers (brands and marketing), and making the users the main and only focus of the platform, would drastically lower the level of toxicity online. This has been researched and reported on for the last ten years, so the conclusion about what action would fix most of the problem is obvious.

    And this forum is such an example. Without any need to push ads and engagement, it kind of keeps itself in check on the basic level of human decency, even if we're operating under pseudonyms and avatars. The forum moderators lift up good behavior and shut down the bad. Social media does the opposite as much as they can without getting public criticism.

    Who pays for it? The one that does, holds power over the media. That fact of reality you simply cannot disregard. UN? That member countries put their tax money to the media?ssu

    Yes, why not? Managing a social media platform without the purpose of pooling billions into profit means the tax cost for nations is minuscule compared to the good it would do for the world. Some nations will of course object and refuse, but there are enough nations that see the benefit to keep the platform afloat. And each nation might even want to help pay for it in order to be part of influencing the platform's management. The more nations involved, the less it is governed by anything other than consensus.

    First, the UN organization can itself be corrupt. If someone then wants to criticize the UN organization responsible of this free neutral social media, how if then the organization shuts down such hate speech.ssu

    You're implying a totalitarian takedown of free speech criticizing the platform, something there's no evidence would happen. Remember, the UN doesn't consist of just some top people; it consists of multiple organizations and overseers. The more parties involved, the more nations involved, the less it can be corrupted. Even if some are, that wouldn't mean the entirety is corrupt. Such ideas just sound like some kind of genetic or slippery-slope fallacy. The UN still operates far better than the uncertainty of having singular governments or entities in control, which is the state of things now, looking both at global commercial social media platforms and at state-owned ones like in China.

    Secondly, member countries will try to influence directly this "neutral" media. Many countries would just love to have the control just what is determined to "hate speech" and what is "supporting terrorism". Now it's defined usually from what country the media is from.ssu

    And the consensus by which the UN operates is what? So far, adhering to human rights has proven to land on the side of good morals more often than the specific agendas individual governments have had. They can try to influence it all they want, but the nice thing about the UN is still that it forces consensus, and there are usually more morally good people than bad. The worst things can get is "stupid", but "stupid" generally bounces back and self-corrects better than immorality.

    In Finland we had a government funded public service that had a monopoly for example of the radio waves until 1985. Then the first commercial radio started. Guess what: young people didn't listen to the radio prior to that while they now are and have been for a long time the largest group that listens to the radio. What was the reason? They was ONE radio program ONCE per week playing Pop & rock music prior to 1985. And I'm not making this up. Yet for the public broadcast corporation didn't understand why people didn't listen to radio anymore in the early 1980's.

    This is the actual reality of a government monopoly of a media. And don't think it will be different under the UN.

    And I think you should understand the real implications of your proposal: An UN mandated social media won't start to compete with the commercial medias... it would be changed by law with the commercial medias being disbanded by legal actions. Because it would be whimsical to think that some UN lead media would have the ability to compete with the other medias and somehow obtain now a monopoly situation just by free competition.
    ssu

    You're comparing the wrong things here. You speak of a single government, and you speak of mainstream media, which is a one-way relationship between sender and receiver. Social media is nothing of the sort.

    And yes, I think it should be illegal for commercially driven tech companies to operate social media. Social media has become infrastructure, and no public would want the operating methods of these companies applied to other forms of infrastructure or communication. We don't run telephones on the standards of engagement bait and ad revenue. We aren't pooled by force into public squares where people get into fights that grab our attention while officials walk around with physical ad signs. The absurdity of how social media operates, for something so integral to our modern world, is clear once you realize the problems of tech-owned social media.

    Hence basically your idea just comes down to squashing free speech and make it more bureaucratic.ssu

    This conclusion does not follow from the argument you made. Nothing in what I propose leads to squashing free speech; quite the opposite. Since there's no single entity in control of it, but a collective of the world using open-source standards and the guidelines of human rights, it takes free speech seriously, and not in the pseudo-way that bullshitters like Elon Musk or Zuckerberg do.

    "Free speech" is a concept that people have lost an understanding of. There's no such thing as free speech absolutism or anything like that. Free speech today has become an acronym for excuses made by those who just want to spew out their hate, not actually talk criticism. Actually, it's the promoters of free speech absolutism like Elon Musk who generally silence people who criticize them or something they like. So it's the people who scream about free speech the most who seem the most keen on suppressing it.

    The beauty of collaboration and consensus among the many is that these morons like Elon Musk become suppressed in their psychopathic oxymoronic attempts to abuse the term "free speech".

    What leads to free speech is keeping platform rules open source, always under scrutiny by the consensus of the world, under the banner of basic human rights. Free speech, ACTUAL free speech is part of those human rights. Tech companies do not operate under such ideals, they use the terms in marketing strategies for their own agendas.

    How on Earth you think that will happen with your proposal? Sovereign states do understand just how important and crucial public discourse is. Some give more leeway to this, some are totally paranoid about it. I really doubt that this would be a function that the UN as an organization could handle well.ssu

    By treating social media as a communication infrastructure of the world and not a business for companies to exploit.

    NO IT'S NOT!
    It's not a "losing battle". I would argue that it's the other way around: government's around the world now understand the new media quite well and can use it well to spread their own propaganda and disinformation. I do agree that earlier in the turn of the Milennium, many governments were still quite clueless about the new media, but that is history now.
    ssu

    Once again, I underscore that a global platform operates under the scrutiny of consensus and is open by design. That openness means any attempt to take control is impossible without it being seen by the public of the world.

    I do not produce arguments out of some conspiracy of some cabal operating in the UN. There's more proof of corruption for how things operate today through tech companies and individual states than any notion that a consensus and collaboration on a global scale with an open source structure would ever lead to such corruption. That's just conspiracy theories as the basis for an argument.

    It ought to be quite evident that people can tow the official line happily, especially if the subject is about national security, natural importance and so on. I find this is a battle that the naive IT geeks who thought that the World Wide Web would free people from the shackles of government control have already lost quite dramatically. It just took a couple of decades for the governments around the World to understand how to control the new media.ssu

    You're still speaking of individual governments, not of how a consensus would operate. The only reason the UN can't do much on the global scale is that it doesn't have such power. But if nations in the West start banning commercially driven social media with the intent of globally building such an infrastructure as a replacement, then it will be able to, as there won't be enough revenue for the tech companies to operate social media sites.

    A closed infrastructure, regardless of being controlled by an individual state or a tech company is still more in control of individual agendas than a global collaboration in an open source structure. That should be obvious.

    Besides, people will try to find ways to reach out beyond government control when government is totally obvious and basically ludicrousssu

    Yes, but what does that have to do with this? You're creating scenarios that don't fit how a globally consensus-governed social media platform would operate, but rather mix individual totalitarian states together with how the UN operates. The argument seems oblivious to the nuances here, a kind of binary perspective where a governing body is always corrupt and totalitarian, even when the structure prevents such corruption from manifesting, or at least prevents it far better than tech companies and individual governments do.

    Again something that people said sometime in the 1990's.ssu

    You disagree with the assessment that ridding social media of these algorithms and market-driven operations would make for a better public space online? It's not an ideology, it's the truth of how social media affects society today; it's research-backed knowledge, not some idealistic IT notion from the 90s.

    If there's a will, there's a way. And today governments understand how social media can be used to attack against them. And can use quite similar tactics themselves.ssu

    Yes, so remove individual state influence and tech companies' power over them. It's delusional to think that the current arrangement prevents such malicious control better than an open platform that's globally collaborated on and open to scrutiny from anyone.
  • ChatGPT 4 Answers Philosophical Questions


    I failed to spot that you were the one asking ChatGPT for the topics; I interpreted it as you asking the questions and it answering. So I think this turned into a misunderstanding.

    I agree with what you say.

    Have you tried the o1 model? It summarizes the train of thought it has while answering. So even though it's not a peek into its black box, it's a peek into the reinforcement learning they used for it.

    People don't like that it takes a long time to answer, but the thing for me is that even if it took 10 minutes to answer, as long as the answer is correct enough to be more reliable than a human, it reaches the level of AI that can be used professionally for certain tasks.

    My view is that AI advances will result in greater disparity between the higher and lower intellects. With chess, for example, the world didn't become filled with equally matched players, all who had the answers at the ends of their fingertips. The world became filled with higher chess performers who learned to utlize the programs to advance themselves. It's like if you and I worked with Einstein and you spent your days picking his brain and asking him every question you had about physics and I just avoided him because he was quirky and boring, you'd come out much smarter than me. Making information available helps only those with the curiosity to look at it.Hanover

    Agreed. I think we're in a time in which we're trying to figure out how to use these systems. I think the whole "AI will put artists out of a job" is ridiculous and a red herring. Real art is about intention and the subjective interpretation of life and reality. AI will only produce products in that regard, the soulless market usage of artists' talent. Yes, many artists make a living doing that in order to sharpen their craft, but I've also seen the opposite: artists giving up on their careers because they're burned out by the mindless, endless trash demanded by marketing and whatever corporate interests require art.

    No, I think it will be more like you describe here, AI will become personal and professional assistants, who help augment already talented and experienced people into being better than they would have been otherwise. In that regard I view this AI development as positive.

    As you say, it comes down to curiosity, and I think curiosity is actually the key component of mastering anything. If people lack curiosity, they will just stagnate at the point where they believe they have the necessary level of competence for a task. It's why there's an abundance of artists who never seem to get any better, just producing more and more lackluster variants of the same lackluster original they created when reaching their peak. Curiosity is what drives those who've already reached a master level of craft or knowledge; it's what drives them towards wisdom and its holistic understanding of the interconnections between specific knowledge nodes.

    For instance, I don't know any of the math in theoretical physics; I'm barely good at math in general. But I'm reasonably good at abstract thinking, and the more I learn about physics, the more curious I get about the abstract consequences of different concepts in physics. While I would never propose my ideas as anything other than philosophical, and nowhere near the scrutiny that theoretical physicists apply, I find it interesting to come up with abstract ideas for physics and test them using GPT. Especially the new o1 model is impressively good at analyzing the merits of an idea.

    And that is similar to the concept of "speaking to Einstein" in some ways. If I were instead studying physics or working as a physicist, then the better these systems get at high-level reasoning and working with math, the more they would speed up my work or studies.
  • ChatGPT 4 Answers Philosophical Questions


    The problem with these is that the LLM can probably find similar questions, and probable answers to them, in the literature it's trained on.

    The harder test for it would require more complex problem solving involving abstract deduction for more specific and complex scenarios. A longer question with more interconnected details and red herrings in logic.

    It's interesting when it becomes more of a challenge to invent a problem than for the AI to solve it.
  • ChatGPT 4 Answers Philosophical Questions
    I tested the o1 model's reasoning with some made-up deduction problems that are easy for humans, and it nails them all without a problem.

    I would need some new material to test. Something really logical but requiring abstract reasoning that would push its limits, as well as empathy questions that rely on interpreting feelings and intentions and deducing behavior from them.

    It can't be something that exists online, as the available answers may be part of the training data, so it needs to be a very hard problem that requires the kind of reasoning humans do by understanding causality and implied consequences, or chains of consequences. (A rough sketch of how I run this kind of test is below.)
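
    For what it's worth, here is a minimal sketch of how such a hand-made test could be run, again assuming the OpenAI Python client; the model name, the puzzle text and the `run_puzzle` helper are hypothetical placeholders, and checking for a keyword in the answer is only a crude stand-in for actually reading the model's reasoning.

    ```python
    # A minimal sketch: running a hand-written deduction puzzle past a model
    # and checking whether the expected conclusion shows up in the answer.
    # Assumes the OpenAI Python client; model name and puzzle are placeholders.
    from openai import OpenAI

    client = OpenAI()

    PUZZLE = (
        "Anna, Ben and Cleo share one umbrella. Only the person holding it stays dry. "
        "Anna arrived home dry, Ben arrived soaked, and Cleo left before the rain started. "
        "Who was holding the umbrella when it rained?"
    )
    EXPECTED = "anna"  # the conclusion a human would reach

    def run_puzzle(puzzle: str) -> str:
        """Ask the model to reason step by step and return its answer text."""
        response = client.chat.completions.create(
            model="o1-preview",  # placeholder model name
            messages=[{"role": "user", "content": puzzle + " Explain your reasoning."}],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        answer = run_puzzle(PUZZLE)
        print(answer)
        print("Expected conclusion present:", EXPECTED in answer.lower())
    ```

    The sketch only shows the workflow; the interesting part is still reading the reasoning itself, not the keyword check.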
  • Ukraine Crisis
    What do you think a neutral platform would be like?ssu

    replace them with globalized social media platforms that are decentralized, open source and handled by human rights overseers and directives.Christoffer

    That form of neutral. Adhering to the values that underpin the UN, to the actual understanding of how freedom of speech as a concept is protected, and not the skewed, corrupted use of the concept that most people rely on as an excuse for spreading hate and vile behavior.

    This site itself is an example of what we call social media. And it is in my view a fairly neutral platform. The bannings are quite reasonable.ssu

    And what rules and values does this site aspire to? This forum pretty much aspires to remove the hateful, the vile, and the propaganda spammers, all in the name of basic decency. It also has rules of engagement under which endless trash posting isn't allowed.

    Basically it acts like normal society with healthy freedom of speech. It's the disconnect between online and offline behavior that creates monsters out of people who are decent offline. Treating the online space as a real offline place, with the same rules and social cues we normally have, makes things much healthier.

    Medias just give people what they actually want, not the polished image of what people answer when asked "what ought to be the social media be like?".ssu

    Not when it's used as a business where the user is the product. Social media platforms that rely on ads and ad-driven algorithms aren't websites for the people using them; they exist for the companies that make up the cash flow of these tech companies. The business is tailoring attention towards those ads so that revenue goes up. So the algorithms and functions of social media today are among the most advanced in computer science, carefully fine-tuned over many years of use to increase addictive behavior in the user in order to keep their eyes on the ads and serve the paying customers' needs.

    That's why I'm proposing social media free of it. Anyone who thinks that a medium built around ad revenue is for the people doesn't understand how that business actually works.

    One option is government-funded public service systems and media. In nations with low corruption this can work and remain neutral, depending on how the laws and regulations separate the state from that media outlet. But in many cases and in many nations, public service media is used for propaganda purposes. So funding by a single government is not a good fit for social media.

    Market-funded social media is what we have now, and it's obvious how malicious and negative its effects are on people. Anyone holding another perspective has been living under a rock for the last decade.

    So the only way to have a social media space that is actually for the people would be to place it under the core principles of the people as a whole. The only form of global collaboration that isn't a form of business transaction is the UN, and the UN has neutral organizations whose purpose is to oversee the core values of human rights on a global scale. Establishing a cooperation between nations to fund and run a social media platform that is detached from business and from any single government's influence, on the basis of human rights values and a normal understanding of freedom of speech, would be the only way to handle such a system.

    The core here is to remove single government control of social media, and to remove market interest that make the users into products rather than the purpose of the site.

    And of course, some nations don't want this human rights-based social media, since it's a threat to their state control. But that battle is a losing one since people will always find ways of reaching out beyond government control. And without the focus on ads and products, the algorithms won't push endless trash and may very well push the right kind of grass root movements that help people organize against state violence in these nations.

    We've seen examples of how social media helped arm people with information and quickly organize against governments. Imagine how it would be on a social media platform that removes all the endless trash and ads obscuring the actual people and their movements.

    At least I'm arguing for a solution to the problems of current social media platforms.
  • Ukraine Crisis
    you seem to be arguing in favor of cracking down on social mediaTzeentch

    Am I? What I'm promoting is that we should crack down on the predatory algorithms these tech companies enforce on us through their social media platforms, not on the construct of social media itself; I'm further pointing out that we need social media platforms free from market-driven manipulation of our perception, intentional or unintentional.

    Modern governments unfortunately have become part of the problem, and therefore cannot be trusted to solve it.Tzeentch

    Which governments are you talking about? You're just lumping all governments into one big pile? Things are more nuanced than that.

    They could help solve this issue by creating platforms where constructive discussions can take place (as they have done in the past), but modern governments show no interest in doing so.Tzeentch

    Governments as singular entities cannot do this without the risk of corruption. It should be a global effort and collaboration since social media is a global function.

    Because modern governments have gone all-in on propaganda (now euphemistically called 'narrative'), and they don't want their propaganda to be questioned on authoritative platforms.Tzeentch

    You can't do propaganda like that if the system is a global collaboration, decentralized and open source by nature.

    In fact, governments don't want their propaganda questioned on any platform if they had their way, and that's of course exactly what they would strive for - an iron hold on public opinion à la China - a monopoly on "truth".Tzeentch

    Yes, that's why the concept is to exist under a UN movement rather than under any single government. You think the corporate interests of someone like Elon Musk and his conspiracy theory narratives are better than any government handling social media? Both are equally problematic.

    Which is why both the platforms of singular governments, like China's own, and the tech companies' platforms should be taken down and replaced with globalized social media platforms that are decentralized, open source, and overseen by human rights bodies and directives.

    Believing in social media as it is constructed now, buying into Musk's free speech absolutism and so on, is ignoring how current social media operates and how it skews people's world view to the point of societal collapse. We already have elections being manipulated through it, even without any governments owning its functions. I don't think you're looking at this with enough scrutiny.
  • Ukraine Crisis
    Nothing that happens on social media is what I would consider public debate, and certainly not healthy public debate.

    And that's my point: healthy public debate is lacking.
    Tzeentch

    I think you need to read what I wrote again. The public debates that should be the core of forming rational opinions in a democracy have taken on the behavior of social media, meaning the lack of healthy public debate is caused by how social media is operated by these large tech companies. It doesn't matter if the public debate happens in public, in traditional media, or anywhere else; the attitudes, herd-mentality biases and emotional fallacies have taken over as the primary mode of any debate.

    Without social media, in its current form, influencing and programming the public into a toxic debate behavior, you will have healthy public debates showing up. It's the rejection of these toxic market algorithm-driven social media platforms that will lead to better care for democracy.

    We already had a major problem with traditional media channels reporting biased news, but nowadays it's everyone's behavior, due to how these algorithms push people into the narratives that make the most money for the tech giants. They simply do not care about healthy democracy until war is on their doorstep, because these companies have zero insight into the broad consequences of their actions, as argued in documentaries like The Social Dilemma and by thinkers like Lanier.

    That's almost exactly how government agenda-setting functionsTzeentch

    What does what I said have to do with that? I'm talking about how algorithms on social media corrupt the perception of knowledge and change people's behavior into that of a corrupted mass herd that has lost the ability to form a healthy democratic movement.

    For that you would need impartial decisionmakers, which we have just established the government is not.Tzeentch

    It becomes impartial through its decentralized nature, open-source structure and global form. You can only be truly impartial by including as many different voices as possible so that a broad consensus forms; otherwise it will always lean towards someone specific. And if the funding is a neutral fund that cannot be infused by lobbyists and the like, but is rather a form of global tax for every nation in the UN, it will be both a very low cost for the world and nearly impossible for any government to influence to the point of corruption. Add to that further structures for democratically rotating the leaders of its operation and for impartial oversight reviewers, and it becomes even less prone to corruption.

    There are forms of governing that are less prone to corruption and that enforce more rational decision-making. Don't make the mistake of using examples from singular governments to dismiss ideas for functioning politics. There's no alternative to handling society other than some form of governing, so it's more about systemic changes that advocate for better-functioning democracies. And in the form I described, the global and neutral social media system is not governed by one government. I'm not sure why you interpreted it that way.

    Frankly, people flinging shit at each other on the market square doesn't concern me one bit.Tzeentch

    Well it should. You seem to think that the people and the government exist on two different planets. They're intertwined and push and pull on each other.
  • Ukraine Crisis
    People sharing bullshit wouldn't be much of an issue if democracies such as ours would foster healthy public debate. Governments are trying to crackdown on social media precisely because it disrupts the echo chambers they're so keen on maintaining.Tzeentch

    I don't think you have insight into how social media platforms function. They earn more money on engagement and ads, which means, going by psychology research, negative comments and posts generate more engagement while attention bait and rage bait generate more ad revenue.

    So you have things backwards: the current form of public debate is a result of catering to how people interact on a large scale today, i.e. how people act on social media. Changing public debates will not do a single thing to change how people interact. Only transforming social media from market-driven algorithms to neutral ones built for normal interaction, without ads or market-driven influencers making up the majority of views and interactions, and with a clear line drawn on behavior reflecting what a normal public space would allow, would generate a true social media for the people and not for corporations.

    Take this forum as an example. Imagine if there was an algorithm that pushed only the most conflict-ridden topics to the top, and raised only those who pay for algorithmic priority, flooding the entire front page with their topics, most of them rage bait designed to earn money by influencing people to buy a certain product. And imagine the mods ignoring most of the obvious racism, threats of violence and bad behavior that we see on other platforms.

    Be very thankful this forum is free of such bullshit and that the mods actually ban people for misbehavior. I find it practically impossible to be on social media platforms today, and so this forum has become a sort of oasis for online debate and discussion, as it's not driven by the bullshit that makes up most online communication.

    So NO facebook, Instagram, X?ssu

    I am of the opinion that there needs to be a neutral social media platform, funded by a UN type collaboration so that there's enough money to run the site, with no incentives to push market driven algorithms or influencer economies. A decentralized, but collaboratively driven global social media platform that features similar functionality as a combination of the major ones.

    There are a lot of people, like me, who were very present on social media in the past but who have seen its decline in quality with the rise of ads and bullshit, losing its fundamental core value of connecting actual people. For them, a neutral alternative, backed by an open-source, non-profit global collaboration and built to be a space for the people rather than for market forces, would be an obvious place to move to.

    If people choose to stay on the trash pile that is modern social media compared to that, then let them rot there in that brain rot until there's nothing left but a pile of meat with an inability to do anything outside consuming endless pages of AI produced engagement-trash.
  • Donald Trump (All General Trump Conversations Here)
    What does it mean that it now gets treated as a rhetorical flourish?Paine

    It either means that racism is getting normalized, or that people have forgotten when we battled enemies in deadly combat to free ourselves from such depraved world views. Either way leads to one of two futures: a place where racism is the norm, or a place in which respectful, good people don't dare to speak up against racist norms.

    To be harsh... maybe good-hearted people need to understand that setting limits and hard lines on what's acceptable in society also requires sufficient consequences for those who cross the line. Stop accepting all the racism. Fight back against it. Stop acting like it "will just go away on its own", because it won't. Racists can act more and more according to their ideology if there is less and less pushback from society. It's no wonder that someone like Trump gets into power with his obvious racism... no one cares. If people cared, there would be riots and demonstrations pushing the nation to the brink of collapse. But the US would end up in civil war long before society actually pushed back against these forces.

    Generally, people are sleepwalking into dangerous territory and if shit hits the fan they'll just cry out "how did this happen!?" :shade: