Comments

  • Site Rules Amendment Regarding ChatGPT and Sourcing


    Even if it's a quick side-track of the thread, let's, for the fun of it, check how well the current system handles it. I would have loved to test o1 since it's better at analysis, but it doesn't have image interpretation at the moment. (For some reason the quote code for this breaks when doing lists and such, so I don't know how to fix it.)

    check that the AI understands the way the input design functionswonderer1

    So I asked what the schematics mean:

    -------------------------

    GPT-4o:

    This schematic appears to represent an analog model of a biological synapse and neuron, typically used in neuromorphic circuits, where electronic components are used to mimic the behavior of biological neurons and synapses.

    Main Sections of the Schematic:

    1 Electronic Soma (Neuron's Body)

    • This section represents the soma of a neuron, the central part of the neuron that integrates input signals.
    • Components like capacitors (C_M), resistors (R_RU, R_B, etc.), and transistors (such as QM) likely work together to mimic the membrane potential of a biological neuron.
    • The diode-like components (like BAR43) might be used to model thresholding behavior or membrane potential clamping.
    • The V_ECH node likely represents a specific control or threshold voltage.

    2 Electronic Synapse

    • This section models the synaptic transmission, where a signal from the IN node is processed and transmitted to the neuron.
    • The synapse includes capacitors, transistors (BC848, BC857), and resistors which emulate the behavior of neurotransmitter release and postsynaptic response.
    • Long-term potentiation (LTP) and Long-term depression (LTD) mechanisms are often involved in synaptic learning and plasticity. Signals from LTP OUT and LTP IN appear to be involved in adjusting synaptic strength.
    • Diodes like BAS45 and resistors like R_LU and R_F might modulate these synaptic strength adjustments over time.

    3 Postsynaptic Neuron

    • This portion represents the receiving neuron that responds to the incoming signal from the synapse.
    • The capacitor C_M, transistor QM, and other passive components model the integration of the incoming signal to produce an output.

    4 LTP and LTD Mechanisms

    • The labeled nodes LTP IN and LTP OUT likely represent pathways for inducing Long-Term Potentiation (LTP) and Long-Term Depression (LTD), which are fundamental processes for learning and memory in biological neurons.
    • In this electronic model, these mechanisms adjust the strength of the synaptic connection, thereby altering the output in response to input patterns over time.

    Functionality in Brief:

    • Signal Input (IN): The signal enters the circuit through the electronic synapse, where the synaptic strength can be modified by LTP and LTD mechanisms.
    • Synaptic Processing: Transistors, resistors, and capacitors modulate the strength of the synapse based on the input and history of previous signals, mimicking biological synaptic plasticity.
    • Neuron Output (OUT): The processed signal is passed to the electronic neuron, which then generates an output signal based on the integrated input, similar to how a biological neuron would fire an action potential.


    Purpose:
    This circuit is an example of neuromorphic engineering, a field that seeks to emulate the information processing capabilities of the brain using analog electronics. It likely serves as part of a larger network of such circuits to model neural behavior, synaptic learning, or to implement artificial neural networks.

    --------------------

    Is it close to a good interpretation and analysis of the schematic's function?

    then ask the AI to suggest a variation which matches some criteria that the input design cannot achieve. E.g. higher accuracy, higher power, more compact. (With the specific components needed for the alternate design specified in detail.)wonderer1

    Tried to ask for higher accuracy.

    "I need you to list a variation of this that aims for higher accuracy, higher power and to be more compact. Also list the specific components needed for such alternate design, specified in detail."

    Its answer:


    ----------------------

    GPT-4o:

    To achieve a more accurate, higher power, and more compact variation of the neuromorphic circuit shown in your schematic, we would need to make several modifications. These could involve switching to more advanced and precise components, improving the layout for compactness, and using components that handle higher power. Here’s a suggested variation, along with a list of specific components for such a design:

    Design Considerations:

    1. Accuracy Improvements:

    • Replace general-purpose transistors with low-noise, high-precision transistors.
    • Use precision resistors and capacitors with tighter tolerances to reduce error margins.
    • Consider using operational amplifiers (Op-Amps) for precise analog signal processing.

    2. Higher Power Handling:

    • Use transistors that can handle higher voltages and currents.
    • Utilize low-dropout regulators for stable power supply and noise reduction.
    • Switch to high-performance diodes capable of higher current throughput.

    3. Compactness:

    • Use surface-mount technology (SMT) components, which are smaller than through-hole components.
    • Use integrated circuits (ICs) that combine several functions into a single package.
    • Use custom ASIC (Application-Specific Integrated Circuit) designs for more compact neuromorphic processors.

    Specific Component Changes:

    1. Transistors:

    • Original Components: BC848, BC857 (General-purpose NPN/PNP transistors)

    Suggested Upgrades:
    • Low-noise, high-precision: 2N2222A (NPN) and 2N2907A (PNP) – For higher current handling and lower noise.
    • SMT Versions: BC850 (NPN) and BC860 (PNP) – Surface-mount versions for compact designs.

    2. Diodes:

    • Original Components: BAW56, BAS45, BAR43 (General-purpose diodes)

    Suggested Upgrades:
    • Schottky Diodes: BAT54S (dual Schottky in a small package for faster switching and lower forward voltage).
    • Power Diodes: 1N5819 – Schottky diode with higher current handling (1A) for power applications.

    3. Operational Amplifiers:

    New Addition for Accuracy:
    • OPA2140 (Precision Op-Amp): Ultra-low noise, low offset voltage, and high precision; great for synaptic weight processing and accurate integration of signals.
    • TLV9052 (Compact, Low-Power Op-Amp): For lower power applications and compact designs.

    4. Capacitors:

    • Original Components: 10nF, 47nF, 1µF, 2.2nF capacitors (General-purpose ceramics)
    Suggested Upgrades:

    • Tighter Tolerance Capacitors: Use C0G/NP0 ceramic capacitors with 1% tolerance for better precision.
    • Compact Capacitors: Replace electrolytic capacitors with MLCCs (Multi-Layer Ceramic Capacitors) like TDK or Murata’s series for compact designs.

    5. Resistors:

    • Original Components: Resistors of varying values (e.g., 20kΩ, 10kΩ, etc.)

    Suggested Upgrades:
    • Precision Resistors: Use metal film resistors with 0.1% tolerance for better accuracy, like Vishay Dale resistors.
    • SMT Resistors: For compact design, use 0603 or 0805 size surface-mount resistors.

    6. Power Regulation:

    • Original Components: General supply voltage lines labeled as VDD.

    Suggested Upgrades:
    • Low-Dropout Regulators (LDOs): For stable power supply, use LT3080 (high current, adjustable LDO) or TPS7A49 (high-precision, low-noise).
    • Switching Regulators: For more power efficiency, use LM2675 (step-down converter).

    7. Synaptic Learning and Memory (LTP/LTD Circuits):

    • Original Components: Diodes and transistors for LTP and LTD processing.

    Suggested Upgrades:
    • Flash or Ferroelectric Memory (FRAM): Use memory elements like FM24CL16B for synaptic weight storage and non-volatility, improving accuracy and long-term retention.
    • Memristors: For future technology designs, consider integrating memristors, such as HP’s memristor technology for synaptic learning and plasticity.

    8. Neuromorphic ICs:

    New Addition for Compactness:

    • Loihi 2 (Intel Neuromorphic Chip): A custom IC designed specifically for neuromorphic computing, providing higher accuracy, lower power consumption, and more compact designs compared to traditional analog circuits.
    • RASP 3.0 (Reconfigurable Analog Signal Processor): Analog processor specifically designed for compact neuromorphic systems.

    Alternate Design Schematic Outline:

    1 Input Stage:

    • Precision Op-Amps (like OPA2140) for synapse modeling and precise signal processing.
    • Schottky diodes (BAT54S) for fast switching in synaptic circuits.
    • Surface-mount transistors (BC850/BC860) for the compact layout of synaptic elements.
    2 Synaptic Plasticity:

    • Synaptic learning weights stored in non-volatile FRAM memory (e.g., FM24CL16B) for accurate long-term potentiation (LTP) and depression (LTD).
    • Memristors for compact and future-proof synaptic modeling (optional but futuristic).
    3 Neuron Soma:

    • Replace discrete components with a reconfigurable neuromorphic IC like Loihi 2 for better integration and compactness.
    4 Power Regulation:

    • Low-dropout regulators (LDOs) for noise-free, stable power, e.g., TPS7A49.
    • Use compact, efficient switching regulators like LM2675 for power management.

    [Attached screenshot: Sk-rmbild-2024-10-03-231123.png]


    Summary:
    This alternate design improves accuracy by using precision components (such as precision resistors and capacitors, low-noise transistors, and op-amps), increases power handling with better diodes and transistors, and enhances compactness using SMT components, integrated circuits, and potentially even custom neuromorphic chips. Incorporating advanced memory elements like FRAM or memristors also allows for non-volatile and adaptive learning features.

    These changes would result in a neuromorphic circuit that is more suitable for applications requiring high accuracy and compactness, such as in embedded systems, robotics, or advanced AI hardware.

    End

    -----------------------------------------------------------------------------

    As I don't know any of this, it's up to you to interpret how well the 4o model handled that schematic.

    If it's even close to doing the job, even if it's off on some things, consider where these models were just two years ago, when they were incapable of this.

    With the improvement in analysis that the o1 model brings, and if a year from now it takes the same kind of step up again, you might very well see such a tool before retirement (depending on when you retire, that is).

    As you point out, the problem is accuracy, reliability and consistency. That's the threshold that needs to be crossed before any of these can be used as serious tools for work-related tasks that require accuracy higher than what a human is consistently capable of, and reliability high enough to be trusted directly.

    As it is now, everything needs to be double checked. I think that any "industrial revolution" scale of societal shift will only happen once we cross that threshold. Before that we still need to have a human analyzing the result.

    Maybe the first step for using these models in a serious way is to have a person hired to verify the outputs. On paper it sounds like more people need to be hired, but it also means faster processing of what was previously a purely human task. I don't know how that would apply in your profession, but it's at least interesting to think about the ways these tools will integrate into professions.

    I don't think it's going to be like the doomsday people speak of it. Some jobs might disappear, like Baden spoke of certain marketing people (or rather copywriters), but in my own experience, copywriters usually are overpaid for what they actually do and I see no point in not letting AI take over those tasks.

    But in general, I think that many who are scared they will get replaced will find themselves with tools that just make their working lives a bit less tedious. Fewer long hours with large amounts of information and data to go through, and more focus on the better aspects of human input and engineering.
  • Site Rules Amendment Regarding ChatGPT and Sourcing
    Eh?

    Never mind.
    bongo fury

    Then care to explain what you really mean? Because the generalized criticism of LLMs as plagiarism is usually as you describe: attributing plagiarism to generative AI without proving that the system's function can actually be defined by such an action, especially in the context of how humans themselves operate, and then applying it all as a general criticism of the system itself rather than of the use of the system.
  • Site Rules Amendment Regarding ChatGPT and Sourcing
    Electronic schematics, so something like:wonderer1

    And what is it that you would like an AI to do with such schematics?
  • Site Rules Amendment Regarding ChatGPT and Sourcing
    Can you, or can't you?bongo fury

    Because it is a nonsense request that fails the burden of proof. You claim plagiarism, so you have to prove plagiarism beyond the doubts that I raised. That's proper philosophical scrutiny.

    You're waffling. I'm talking about a common sense understanding of plagiarism as warned about in typical forum guidelines.bongo fury

    That's not the issue here. The issue is that you attribute all use of LLMs to plagiarism. Or what is your point?

    You don't say.bongo fury

    What's with the arrogant tone?

    Asking anything or anybody for advice on formulating and expressing ideas, or on refining and redrafting a text, is perfectly clearly plagiarism if unacknowledged.bongo fury

    So do you mean that earlier grammar software that did not use AI is also plagiarism, since it had the ability to reshape text far beyond a spell check? Or that when an author collaborates with an editor who makes suggestions and edits, the author is plagiarizing as well? Or when an author talks about his ideas with friends, family or other people before writing?

    In that case, either you don't really know how most officially published human text is actually produced, or you would have to classify almost all released works and texts as plagiarism.

    I don't think your premise there is strong enough.

    Apparently my definition leaves you without a counter example, so no I don't.bongo fury

    What counter example? Can you be more vague? I'm still reading, hoping to hear a strong premise in support of your conclusion.

    Not while the cases are clear.bongo fury

    What cases?

    That human is mortified, and hopes not to be judged deliberate or negligent in their error.bongo fury

    Is your rhetorical strategy to try and ridicule the question? I'm afraid that won't work very well.

    Not at all then.bongo fury

    And what does that entail regarding your definition?

    Only compounding the crime of failing to acknowledge their input.bongo fury

    What crime do you speak of? The same crime as an artist pulling from sources, cutting out inspirations and putting them around their studio? Concept artists painting over photographs? Authors taking entire events from other works and putting them into new contexts? Filmmakers copying compositions, camera moves and VFX work? Composers using whole segments, staying just outside the number of notes established by court cases?

    Yes, these crimes are all over human creativity and no one cares until it's blatant or obvious to the common man or a judge. But with AI, the same process of remixing inputs occurs. Like a person with photographic memory visiting a library or an art museum, able to recall, in fine detail, every word and stroke he's seen. Should we ban people with photographic memory?

    Can I photograph art and use it privately for what I create, as long as the creation isn't an obvious copy?

    As a person with a lot of insight into the artistic side of creativity who also understands the technology, I find the amount of misconception in the public debate around AI models remarkable. There's a lot of hyperbolic use of words related to crime and punishment going around, obscuring the nuances.

    Or even in their present condition of (rather drastic) fallibility, let them join in. But properly acknowledged, and properly scrutinized. Is my point.bongo fury

    Not all humans are acknowledged in helping an artist, even though all involved are humans.

    Could be either, of course.bongo fury

    Only accidental, a form that is constantly being chipped away from these models. An AI cannot intentionally plagiarize without having agency of its own.

    So there are clear cases? Or not?bongo fury

    I asked a question. Would you accuse the computer of plagiarism because the human copy/pasted something?
  • Site Rules Amendment Regarding ChatGPT and Sourcing
    Any competent and reflective practitioner of English will define plagiarism as something like: deliberate or negligent misattribution of authorship.

    Authorship tends to have unclear cases as well as clear ones. So does deliberateness or negligence with respect to this criterion.

    But show me a case of unacknowledged chat-bot-assisted writing that isn't a perfectly clear case of plagiarism by this definition?
    bongo fury

    How does that lead to such a clear conclusion?

    You're talking more about the philosophy of authorship than about plagiarism as a legal phenomenon. And it's in court where such definitions will find their final form.

    The range of how LLMs are used is broader than this. Someone using one to generate an entire text might not be the author, given that copyright rulings have concluded that generated images cannot have copyright attributed to the prompt engineer. But LLMs are a bit different, since there's no clear dividing line between prompt and output if the prompt engineer includes their own text as part of the output. Just asking the LLM to do all the work is a clear case, but that's not the best use of LLMs for text generation and not really how they're used by those actually using them as a tool.

    You need to define which intended use case of an LLM you are attributing plagiarism to. And also include a comparison with how humans process available information into their own text, and at what point a person crosses over into plagiarism. What happens when a human accidentally reproduces exact sentences from memory, without even knowing they do so? How does that differ?

    Add to that the improvement of LLMs and the future scenario in which LLMs have become better than humans at not copying training-data text directly and at always providing citations when referencing direct information. Because the fact remains that legal battles over this will end up demanding a defined difference between how humans and LLMs process and produce text. In the end it may just be ruled that we shouldn't allow LLMs because... they're machines mimicking human processes. At which point we need to define where that line is drawn as well, ending up in luddite territory of anti-progress around a lot of different technologies, rather than setting up clear regulations that can function together with the benefits of AI models. And if the systems start to operate better than humans at avoiding plagiarism, and using these models as assistive tools might even help avoid accidental plagiarism, what then?

    Because these models have much broader use cases than some loser wanting to pose as an educated person online or trick their teacher. If the models are banned over loosely defined parameters in law, that may also stifle use cases like medical research, in which the same systems are used for data analysis, speeding up research so much that it takes days to do something that previously took years.

    So the textbook definitions aren't as important as how well they work in court, and the critics of AI risk blowing their entire front line of attack if they put too much faith in the definitions being "crystal clear". There are plenty of copyright and plagiarism cases throughout history that people declared obvious and crystal clear, only for them to be turned on their heads by the complexity of reality.

    So, instead, show me a clear case of plagiarism that can be representative of the entire LLM AI-technology and all its use cases as well as be a defined operation over the course of any improvements going forward.

    As far as I can see, just as a hammer can be a tool to build with, it can also destroy, or even be a murder weapon. Attributing plagiarism to the LLMs specifically is a losing battle, especially as the models improve over time.

    In the end, the plagiarism will be attributed to the human, not the machine. Or should we blame the computer for plagiarism when CTRL+C, CTRL+V is used, rather than the human inputting that intention?

    I'd want an AI that can take schematic diagrams as input, and produce schematics as output, before I could see an AI as highly useful for my work.wonderer1

    What types of schematic diagrams do you mean? And we're still early in development. Programming new software with specific uses of AI systems seems to require a deep understanding of the AI systems themselves. So far we've mostly seen this in medical research, since they've already been working with algorithmic coding. But I'll bet we'll see tailored software for specialized tasks soon.

    Not surprising, as marketing-speak is probably the most annoying, uninspired, and aesthetically ugly verbal trash to be imposed on the human mind up until AI LLMs offering it some competition.Baden

    I think LLMs are already more capable of producing marketing copy in a language that doesn't come off as out of touch with reality or tone-deaf. Copywriters for big corporations trying to "talk" to their customer base usually sound like aliens trying to communicate with the human race. That LLMs are better at finding the right tone and language to sound closer to the customers seems rather ironic.
  • Site Rules Amendment Regarding ChatGPT and Sourcing
    you cannot trust the generic end user to use it responsibly. I wish this were not true, but it is.fdrake

    Then we return to the problem of how to distinguish the use. The more advanced these get, the less likely it is that their use for post generation can be spotted.

    This whole thing really becomes very philosophical in nature... fitting for this forum. It all becomes a P-zombie problem for written posts: if you cannot distinguish a member's own writing from that of someone who knows how to use LLMs to mimic that member's writing perfectly, how can the rules be enforced?

    It's similar to the problem of spotting generated images and deep fakes. Since the tech advances so fast, the solution ends up being another AI being used for the purpose of analyzing if an image is generated.

    At some point we might need proper and judicious use of AI to counter posts that can't be judged to be generated or not. Either by analyzing the language used, or by using it to deconstruct the merits of the argument in order to find the sources or lack of sources.

    But then we end up in a situation in which the intention is to spot the misuse of LLMs, but the method is ending up being a proper philosophical debate, using LLMs pitted against each other.

    I don't know if it's possible, but one of the only concepts I can think of that would properly pinpoint the use of LLMs is a timer coded into the forum, tracking how long it took to write a post. The only flaw would be if someone writes posts outside of the forum and then pastes them here, but I do think most members write directly in the forum in order to properly use the quote tools and such.

    Point being, if the forum can track how long a post took to write and whether it was formed with keystrokes within the range of how people normally type, then that would be some evidence that a post was actually written rather than generated and copy-pasted into the forum, as the sketch below illustrates.
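
    Purely as a sketch of that idea (not an existing forum feature), the client-side logic could look something like the TypeScript below. The element ID, the 200-characters-per-minute rate and the thresholds are all assumptions for illustration.

    ```typescript
    // Hypothetical "composition timer" sketch: record when typing starts, count
    // keystrokes and pasted characters, then judge whether the submitted text is
    // plausible for the observed typing effort. Not proof, only a weak signal.

    interface CompositionStats {
      startedAt: number | null; // timestamp (ms) of the first keystroke
      keystrokes: number;       // keydown events in the editor
      pastedChars: number;      // characters inserted via paste events
    }

    const stats: CompositionStats = { startedAt: null, keystrokes: 0, pastedChars: 0 };

    // Assumed editor element; a real forum would hook up its own post box here.
    const editor = document.getElementById("post-editor") as HTMLTextAreaElement;

    editor.addEventListener("keydown", () => {
      if (stats.startedAt === null) stats.startedAt = Date.now();
      stats.keystrokes += 1;
    });

    editor.addEventListener("paste", (event: ClipboardEvent) => {
      stats.pastedChars += event.clipboardData?.getData("text").length ?? 0;
    });

    // Called on submit: compares typed characters against a generous typing rate.
    function assessPost(text: string): "plausibly-typed" | "suspicious" {
      const minutes = stats.startedAt ? (Date.now() - stats.startedAt) / 60_000 : 0;
      const typedChars = Math.max(text.length - stats.pastedChars, 0);
      const plausibleChars = minutes * 200; // assumed upper bound for sustained typing
      const mostlyPasted = stats.pastedChars > text.length * 0.8;
      return typedChars <= plausibleChars && !mostlyPasted ? "plausibly-typed" : "suspicious";
    }
    ```

    Even then it would only be a hint for moderators to follow up on, since the flaw mentioned above (drafting elsewhere and pasting it in) would always register as "suspicious".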

    At least food for thought on coding new features and functions in a post-LLM world of discussions online. :chin:

    Should we have some guidelines on acceptable use of plagiarism on the forum?bongo fury

    If you mean the concept of LLMs generating plagiarism: the more I've dived into the technology and compared it to how humans generate something, the less I think we can blame LLMs for plagiarism. Much of the evidence has been attributable to quirks of ongoing technological development, and the arguments keep sliding into cult-like behavior online by people who try to weaponize language in order to fight AI technology. Terms like plagiarism and theft are used so haphazardly that valid points of criticism risk being easily dismissed, due to an overreliance on the terms being factual descriptions of the technology when in fact no such definitions have yet been established.

    The overlap between how the tech operates and how humans operate in creating something produces problems in reliably and properly defining the boundaries. Similar to how some concept artists blamed the initial models for plagiarism when they themselves traced photographs they grabbed from a Google search, which technically is a much more direct use of someone else's work without credit.

    And for text generation, the problem with LLMs usually comes down to accidental plagiarism rather than intentional. Accidental plagiarism mostly occurs when sources aren't cited properly and the sourced text ends up as part of the author's own text. This often happens in academic writing and is sometimes hard to spot. But the new reinforcement learning for models like o1 seems to combat these accidents better (not perfectly), and in time they might handle this even better than the majority of human writers do.

    Point being that any text written as a new sentence cannot be considered plagiarism, even if the communicated information and context of that sentence come from memorized information. Human language would become a mess if we had to double-check everything we write like that. We assume that when we write something, the processes in our brain count as enough creativity to not be plagiarism. Yet we have the same ability to accidentally plagiarize, even when writing normally, and we aren't aware of it until someone points it out. Like, how do I know that what I write here hasn't been written somewhere else; some lines I read some time in the past that I'm accidentally typing up again because my memory formed them around the same contextual information I'm intending to communicate?

    We source others' information constantly. Anything we hear, see, read or even taste becomes part of a pool of data we use to create something new, a remix of it all. The problem with summarizing all AI models as plagiarism or theft is that the terms aren't used properly within the context of the criticism. It's the cart before the horse; people want to criticize and take down the AI models first and try to apply a reason for it as an afterthought. But for the terms to apply correctly, they must have a contextual definition for how they actually apply to the AI models, and there aren't any, since people only use them haphazardly. By the definitions we use for judging human output, the AI models would most likely be freed of plagiarism and theft accusations rather than found guilty, especially since we don't attribute a person's single act of plagiarism to all text they have ever written and every new sentence they will ever write.

    It is semantics, but semantics are always important when defining law and morality in discussions about uncharted territories like this. What it boils down to is that until the criticism against AI models finds a solid philosophical ground, maybe even as a newly defined concept of how humans and AI models will co-exist legally and morally going forward, all the AI criticism just ends up being "I don't like AI so I want it banned". It's okay not to like it, it's okay to fear the abuse they can be used for, but never has "I don't like it" been enough to properly ban, or help structure a working foundation and boundary for, new technology. It ends up becoming just another luddite argument to ban machines rather than the necessary philosophical arguments for how we can co-exist with this new technology.
  • Site Rules Amendment Regarding ChatGPT and Sourcing
    You need to check what it says in the source, too. Do NOT trust its word on anything.fdrake

    Isn't this true for any source? Isn't the correct way of using any source to double-check and verify rather than treating it outright as a source of facts? If we objectively compare ChatGPT with unverified human sources or pseudo-media tailored to look like factual sources, I find ChatGPT safer than uncritically using the online flow of information. People use unverified sources that are sometimes malicious and intentional in their pursuit of manipulating online discourse, to the point of reducing truth into obscurity.

    I'm of the opinion that regardless of source, they all need to be double checked and verified in discourse, but I've not seen this type of doubling down on other uses of sources?

    Why do we value other unverified sources that may very well be constructed by malicious intents or by people who want to pose as being factual? Just because they look and sound like news articles or papers? I've seen blog posts being used as "factual sources" without a critical attempt to dissect those sources before using them as critical pillars of a conversation.

    On top of that:

    The intent of the rule is to stop people from using it to spread misinformation and from generating reams of undigested content.fdrake

    This could be a worse problem with human-generated sources that are malicious or that dress up their beliefs in the appearance of factual representation. It's an ongoing problem that a lot of what is found online is generated by ChatGPT. So even sourcing "real" articles could be the same or even worse than using GPT directly, since we don't know what the prompts behind those seemingly "human-written" articles were intended to do.

    Don't just put what someone says on the forum into it and get it to write your response. That's the kind of stuff which will eventually make us have to enforce a no-tolerance policy on it.fdrake

    This is something I strongly agree with. The laziness of using it in that way really only shows that the person doing it is here to pretend to think about different topics and to chase the emotional dopamine of winning and participating in debates at a higher level, rather than being interested in the actual subject on a curious and honest level.

    The problem is still that it's impossible to know if someone does this going forward. The more advanced these systems become, the less obvious their responses will be. Especially if the one using them is good at prompt engineering, since they could just engineer away the quirks and structures of language that are the signs of a specific model.

    On the other end, and going by how capable the o1 model has shown itself to be in reasoning and analysis, I also think it's not good to over-correct in all of this. There might soon come a time when these models are much more valid in their responses than anything found online in a traditional way; especially in the context of philosophy, science and literature, when something can't be found on traditional academic sites, or if the models themselves are able to sift through academic sources.

    None of this is an argument against the rule, only a conversation about it and what parameters it should possess.

    I see a lot of conversations online that draw hard lines between human-generated content and LLMs, without underscoring just how bad most human sources online really are. Statistically, very little information produced by people online is factual, but it's still used as grounds for arguments.

    Posts should not be written by an LLM, but using something like an o1 analysis and clearly marking it as such wouldn't be any more problematic than using unverified links to blogs or other texts online.

    So I think it's good that the sentiment about how to use LLMs should be to always mark output as LLM-generated, and that such analysis or texts cannot be used as the main supporting pillar of an argument, but rather as a source of a different perspective, giving clues about which direction to look in for further answers and information.

    And referencing Pierre:

    They therefore have the latent ability to burst epistemic bubbles from the outside in, as it were. But this process must be initiated by a human user willing to burst their own epistemic bubbles with some assistance by the LLM.Pierre-Normand

    It may also be good to have instructions for those who want to use LLMs. Because what Pierre writes here is possibly why some have a good experience with LLMs while others just generate trash. Asking the LLM to analyze something critically, including opening yourself to criticism by asking for it, produces a much more balanced output that often engages you in better self-reflection, since there's no human emotion behind that generated criticism. There's no one getting angry at you and criticizing you because you say something they don't like. The LLM, when prompted to be critical of your own writing, often cites sources that specifically underscore the weaknesses in your argument, and that forms quite a powerful kind of ego-death in reasoning, bringing you back to a more grounded state from the high of your own writing. A rough sketch of that kind of prompting follows below.
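
    As a minimal sketch of what that "ask it to criticize you" prompting can look like, using the OpenAI Node SDK (the model choice and the prompt wording are my own assumptions, not a recommended setup):

    ```typescript
    import OpenAI from "openai";

    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    // Ask the model to act as a critical Socratic partner rather than a cheerleader.
    async function critiqueDraft(draft: string): Promise<string> {
      const completion = await client.chat.completions.create({
        model: "gpt-4o", // assumed model; any capable chat model works
        messages: [
          {
            role: "system",
            content:
              "You are a critical Socratic partner. Do not flatter. " +
              "Identify weak premises, biases, fallacies and missing counterarguments " +
              "in the user's draft, and note what kind of sources would challenge it.",
          },
          { role: "user", content: draft },
        ],
      });
      return completion.choices[0].message.content ?? "";
    }

    // Example: critique a forum draft before posting it.
    critiqueDraft("My draft argument goes here...").then(console.log);
    ```

    The point of the system prompt is exactly the "opening yourself to criticism" part: instead of asking the model to agree or polish, you ask it to attack the draft's weakest points before anyone else does.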

    In my own research about LLMs and testing them out, they can act as great Socratic partners for testing ideas. And when I get stuck on certain concepts, they often help break them down to show the problems with my own biases and fallacies in reasoning.

    So, while pointing out in the rules how not to use LLMs, we can't ignore the fact that LLMs will keep evolving and being used more and more, so tips on how to use them and for what could also benefit the forum. Things like what types of questions to ask and how to ask them, and how to take what you write and ask the LLMs questions about your own arguments, improving your understanding of your own opinions and ideas before posting.

    There's so much polarization and extremely binary ideals around AI today that I think the nuances get lost. It's either "ban it" or "let it loose" rather than ban certain use and find the use that's beneficial.

    Since LLMs will only grow and become more popular in use, it might even be important to have pinned information about "how to use LLMs" that clarifies what not to use them for, as well as tips on which models are considered preferred and how to prompt them correctly in order to get balanced outputs that don't play into the user's own biases and bubbles.

    Telling people not to use a tool in a certain way is just as important as telling them how to use the tool in the correct way.
  • Site Rules Amendment Regarding ChatGPT and Sourcing
    You can still use and reference ChatGPT output, just don't use it as an authoritative source.fdrake

    Do I interpret it correctly that we can use ChatGPT in arguments as long as we mark it as a ChatGPT reference? Like, supporting reasoning, but not as a factual source?

    In essence, as a tool, not as a source of facts.
  • Ukraine Crisis
    You do understand that then denying atrocities that have happened is totally normal for this neutral platformssu

    How is that an answer to what I clarified?

    And is something wrong with that?ssu

    Why would it be? Why do you interpret it as wrong when I've held up this forum as a good example of neutral praxis that conforms to the same ideals a UN-based social media platform would?

    This is totally true. This is the weird and unfortunate reality of social media. At worst it might be that we start to change even our real world exchanges with other people into the kind that are so popular in the social media, because people don't care so much anymore even if they flame in their own name.ssu

    It wouldn't be if the algorithms didn't cater to conflict and negativity; the research concluded that such behaviors drive attention and interactions more, which is key to ad revenue. Removing the concept of having the users as the product for the real customers (brands and marketing), and making the users the main and only focus of the platform, would drastically lower the level of toxicity that occurs online. This has been researched and reported on for the last ten years, so the conclusion about what action would fix most of the problem is obvious.

    And this forum is such an example. Without any need to push ads and engagement, it kind of keeps itself in check at the basic level of human decency, even if we're operating under pseudonyms and avatars. The forum moderators lift up good behavior and shut down the bad. Social media does the opposite as much as it can without drawing public criticism.

    Who pays for it? The one that does, holds power over the media. That fact of reality you simply cannot disregard. UN? That member countries put their tax money to the media?ssu

    Yes, why not? Running a social media platform without the purpose of raking in billions in profit means the tax cost for nations is minuscule compared to the good it will do for the world. Some nations will of course object and refuse, but there are enough nations that see the benefit to keep the platform afloat. And each nation might even want to help pay for it in order to be part of influencing the platform's management. The more nations involved, the less it is governed by anything other than consensus.

    First, the UN organization can itself be corrupt. If someone then wants to criticize the UN organization responsible of this free neutral social media, how if then the organization shuts down such hate speech.ssu

    You're implying a totalitarian takedown of free speech that criticizes the platform, and there's no evidence that would happen. Remember, the UN doesn't consist of just a few people at the top; it consists of multiple organizations and overseers. The more parties involved, the more nations involved, the less it can be corrupted. Even if some are corrupt, that wouldn't make the entirety corrupt. Such ideas just sound like some kind of genetic or slippery-slope fallacy. The UN still operates far better than the uncertainty of having singular governments or entities in control, which is the state of things now, looking at both global commercial social media platforms and state-owned ones like in China.

    Secondly, member countries will try to influence directly this "neutral" media. Many countries would just love to have the control just what is determined to "hate speech" and what is "supporting terrorism". Now it's defined usually from what country the media is from.ssu

    And the consensus by which the UN operates is what? So far, adhering to human rights has proven to be on the side of good morals in more cases than the specific agendas of individual governments have. They can try to influence all they want, but the nice thing about the UN is still that it forces consensus, and there are usually more morally good people than bad. The worst things can get is "stupid", but "stupid" generally bounces back and self-corrects better than immorality.

    In Finland we had a government funded public service that had a monopoly for example of the radio waves until 1985. Then the first commercial radio started. Guess what: young people didn't listen to the radio prior to that while they now are and have been for a long time the largest group that listens to the radio. What was the reason? They was ONE radio program ONCE per week playing Pop & rock music prior to 1985. And I'm not making this up. Yet for the public broadcast corporation didn't understand why people didn't listen to radio anymore in the early 1980's.

    This is the actual reality of a government monopoly of a media. And don't think it will be different under the UN.

    And I think you should understand the real implications of your proposal: An UN mandated social media won't start to compete with the commercial medias... it would be changed by law with the commercial medias being disbanded by legal actions. Because it would be whimsical to think that some UN lead media would have the ability to compete with the other medias and somehow obtain now a monopoly situation just by free competition.
    ssu

    You're comparing the wrong things here. You speak of a single government; you speak of mainstream media, which is a one-way relation of broadcaster and receiver. Social media is nothing like this.

    And yes, I think it should be illegal for commercially driven tech companies to operate social media. Social media has become infrastructure, and no public would want the operating methods of these companies applied to other forms of infrastructure or communication. We don't operate telephones on the standards of engagement bait and ad revenue. We aren't herded by force into public squares where people get into fights that grab our attention while officials walk around with physical ad signs. The absurdity of how social media operates, for something so integral to our modern world, is clear once you realize the problems of tech-owned social media.

    Hence basically your idea just comes down to squashing free speech and make it more bureaucratic.ssu

    This conclusion does not follow from the argument you made. Nothing of what I propose leads to squashing free speech; quite the opposite. Since there's no single entity in control of it, but a collective of the world using open-source standards under the guidelines of human rights, it takes free speech seriously, and not in the pseudo-way that bullshitters like Elon Musk or Zuckerberg do.

    "Free speech" is a concept that people have lost an understanding of. There's no such thing as free speech absolutism or anything like that. Free speech today has become an acronym for excuses made by those who just want to spew out their hate, not actually talk criticism. Actually, it's the promoters of free speech absolutism like Elon Musk who generally silence people who criticize them or something they like. So it's the people who scream about free speech the most who seem the most keen on suppressing it.

    The beauty of collaboration and consensus among the many is that these morons like Elon Musk become suppressed in their psychopathic oxymoronic attempts to abuse the term "free speech".

    What leads to free speech is keeping platform rules open source, always under scrutiny by the consensus of the world, under the banner of basic human rights. Free speech, ACTUAL free speech is part of those human rights. Tech companies do not operate under such ideals, they use the terms in marketing strategies for their own agendas.

    How on Earth you think that will happen with your proposal? Sovereign states do understand just how important and crucial public discourse is. Some give more leeway to this, some are totally paranoid about it. I really doubt that this would be a function that the UN as an organization could handle well.ssu

    By treating social media as the communication infrastructure of the world and not as a business for companies to exploit.

    NO IT'S NOT!
    It's not a "losing battle". I would argue that it's the other way around: government's around the world now understand the new media quite well and can use it well to spread their own propaganda and disinformation. I do agree that earlier in the turn of the Milennium, many governments were still quite clueless about the new media, but that is history now.
    ssu

    Once again, I underscore that a global platform would be under the scrutiny of the consensus and would be an open platform. That openness means any attempt to take control is impossible without it being seen by the public of the world.

    I do not build arguments on some conspiracy about a cabal operating in the UN. There's more proof of corruption in how things operate today, through tech companies and individual states, than there is for any notion that consensus and collaboration on a global scale, with an open-source structure, would ever lead to such corruption. That's just conspiracy theory as the basis for an argument.

    It ought to be quite evident that people can tow the official line happily, especially if the subject is about national security, natural importance and so on. I find this is a battle that the naive IT geeks who thought that the World Wide Web would free people from the shackles of government control have already lost quite dramatically. It just took a couple of decades for the governments around the World to understand how to control the new media.ssu

    You're still speaking of individual governments, not how a consensus would operate. The only reason the UN can't do much on the global scale is that it doesn't have such power. But if nations in the West start by banning commercially driven social media, with the intent that we globally build such an infrastructure as a replacement, then they will be able to, as there won't be enough revenue for the tech companies to keep operating social media sites.

    A closed infrastructure, regardless of being controlled by an individual state or a tech company is still more in control of individual agendas than a global collaboration in an open source structure. That should be obvious.

    Besides, people will try to find ways to reach out beyond government control when government is totally obvious and basically ludicrous[/i]ssu

    Yes, but what does that have to do with this? You're creating scenarios that don't fit a globally consensus-governed social media platform, but rather mix individual totalitarian states together with how the UN operates. The argument seems oblivious to the nuances here; a binary perspective of a governing body always being corrupt and totalitarian, even when the structure prevents such corruption from manifesting, or at least prevents it far better than tech companies and individual governments do.

    Again something that people said sometime in the 1990's.ssu

    You disagree with the assessment that ridding social media of these algorithms and market-driven operations would make for a better public space online? It's not an ideology; it's the truth of how social media affects society today. It's research-backed knowledge, not some idealist IT ideas from the 90s.

    If there's a will, there's a way. And today governments understand how social media can be used to attack against them. And can use quite similar tactics themselves.ssu

    Yes, so remove individual state influence and tech companies' power over them. It's delusional to think that such an operation is better at preventing malicious control than an open platform that's globally collaborated on and open to scrutiny from anyone.
  • ChatGPT 4 Answers Philosophical Questions


    I failed to spot that you were the one asking ChatGPT for the topics; I interpreted it as you asking the questions and it answering. So I think this turned into a misunderstanding.

    I agree with what you say.

    Have you tried the o1 model? It summarizes the train of thought it has while answering. So even though it's not a peek into its black box, it's a peek into the reinforcement learning they used for it.

    People don't like that it takes a long time to answer, but the thing for me is that even if it took 10 minutes to answer, as long as the answer is correct enough to be more reliable than a human, it becomes the level of AI that can be used professionally for certain tasks.

    My view is that AI advances will result in greater disparity between the higher and lower intellects. With chess, for example, the world didn't become filled with equally matched players, all who had the answers at the ends of their fingertips. The world became filled with higher chess performers who learned to utlize the programs to advance themselves. It's like if you and I worked with Einstein and you spent your days picking his brain and asking him every question you had about physics and I just avoided him because he was quirky and boring, you'd come out much smarter than me. Making information available helps only those with the curiosity to look at it.Hanover

    Agreed. I think we're in a time in which we're trying to figure out how to use these systems. I think the whole "AI will put artists out of a job" idea is ridiculous and a red herring. Real art is about intention and the subjective interpretation of life and reality. AI will only produce products in that regard, the soulless market usage of artists' talent. Yes, many artists make a living doing that in order to sharpen their craft, but I've also seen the opposite: artists giving up on their careers because they're burned out by the mindless and endless trash required for marketing and whatever corporate interests need art.

    No, I think it will be more like you describe here, AI will become personal and professional assistants, who help augment already talented and experienced people into being better than they would have been otherwise. In that regard I view this AI development as positive.

    As you say, it comes down to curiosity, and I think curiosity is actually the key component of mastering anything. If people lack curiosity, they will just stagnate at the point at which they believe they have the necessary level of competence for a task. It's why there's an abundance of artists who never seem to get any better, just producing more and more lackluster variants of the same lackluster original they created when they reached their peak. Curiosity is what drives the ones who've already reached a master level of craft or knowledge; it's what drives them toward wisdom and its holistic understanding of the interconnections between specific knowledge nodes.

    For instance, I don't know any of the math in theoretical physics; I'm barely good at math in general. But I'm reasonably good at abstract thinking, and the more I learn about physics, the more curious I get about the abstract consequences of different concepts in physics. While I would never propose my ideas as anything other than philosophical, and nowhere near the scrutiny that theoretical physicists apply, I find it interesting to come up with abstract ideas for physics and test them using GPT. Especially the new o1 model is impressively good at analyzing the merits of an idea.

    And that is similar to the concept of "speaking to Einstein" in some ways. If I were instead studying physics or working as a physicist, then the better these systems get at high-level reasoning and working with math, the more they would speed up my work or studies.
  • ChatGPT 4 Answers Philosophical Questions


    The problem with these is that the LLM can probably find similar questions, as well as probable answers to them, in the literature it's trained on.

    The harder test for it would require more complex problem solving involving abstract deduction for more specific and complex scenarios. A longer question with more interconnected details and red herrings in logic.

    It's interesting when it becomes more of a challenge to invent a problem than for the AI to solve it.
  • ChatGPT 4 Answers Philosophical Questions
    I tested the o1 model's reasoning with some made-up deductions that are easy for humans, and it nails them all without a problem.

    I would need some new material to test. Something really logical but requiring abstract reasoning that would push its limits, as well as empathy questions that rely on interpreting feelings and intentions and deducing behavior from them.

    It can't be something that exists online, as the available answers may be part of the training data, so it needs to be a very hard problem that requires the kind of reasoning humans do by understanding causality and implied consequences, or chains of consequences.
  • Ukraine Crisis
    What do you think a neutral platform would be like?ssu

    replace them with globalized social media platforms that are decentralized, open source and handled by human rights overseers and directives.Christoffer

    That form of neutral. Adhering to the values that underpin the core of the UN, and to an actual understanding of how freedom of speech as a concept is protected, not the skewed, corrupted use of the concept that most people lean on as an excuse for spreading hate and vile behavior.

    This site itself is an example of what we call social media. And it is in my view a fairly neutral platform. The bannings are quite reasonable.ssu

    And what rules and values does this site aspire to? This forum pretty much aspires to remove the hateful, the vile and the propaganda spammers, all in the name of basic decency. It also has rules of engagement under which endless trash posting isn't allowed.

    Basically it acts like normal society with healthy freedom of speech. It's the disconnect between online and offline behavior that creates monsters out of people who are decent offline. Treating the online space as a real offline place, with the same rules and social cues we normally have, makes things much healthier.

    Medias just give people what they actually want, not the polished image of what people answer when asked "what ought to be the social media be like?".ssu

    Not when it's run as a business in which the user is the product. Social media that relies on ads and algorithms for ads isn't a website for the people and users; it's for the companies that make up the cash flow for these tech companies. The business is tailoring attention toward those ads so the revenue goes up. So the algorithms and functions of social media today are among the most advanced algorithms in computer science, carefully fine-tuned over many years of use for the purpose of increasing addictive behavior in the user, in order to keep their eyes on the ads and serve the paying customers' needs.

    That's why I'm proposing social media free of it. Anyone who thinks that media formed around ad revenue is for the people doesn't understand how that business actually works.

    One option is government-funded public service media. In nations with low corruption this can work and be neutral, depending on how the laws and regulations between the state and that media outlet are set up. But in many cases and in many nations, public service media is used for propaganda purposes. So it's not a good fit for social media to be funded by a single government.

    Market-funded social media is what we have now, and it's obvious how malicious and negative its functions are for the people. Anyone with a different perspective has been living under a rock for the last decade.

    So the only way to have a social media space that is actually for the people would be to have it under the core principles of the people as a whole. The only form of global collaboration that isn't a form of business transaction is the UN, and the UN has neutral organizations whose purpose is to oversee the core values of human rights on a global scale. Manifesting a cooperation between nations to fund and run a social media platform that is detached from business and from any single government's influence, on the basis of human rights values and a normal understanding of freedom of speech, would be the only way to handle such a system.

    The core here is to remove single government control of social media, and to remove market interest that make the users into products rather than the purpose of the site.

    And of course, some nations don't want this human rights-based social media, since it's a threat to their state control. But that battle is a losing one, since people will always find ways of reaching out beyond government control. And without the focus on ads and products, the algorithms won't push endless trash and may very well lift up the right kind of grassroots movements that help people organize against state violence in these nations.

    We've seen examples of how social media helped arm people with information and organize quickly against governments. Imagine how it would be on a social media platform that removes all the endless trash and ads obscuring the actual people and their movements.

    At least I'm arguing for a solution to the problems of current social media platforms.
  • Ukraine Crisis
    you seem to be arguing in favor of cracking down on social mediaTzeentch

    Am I? What I'm promoting is a crackdown on the predatory algorithms that these tech companies enforce on us through their platforms, not on the construct of social media itself; I'm further pointing out that we need social media platforms free from market-driven manipulation of our perception, intentional or unintentional.

    Modern governments unfortunately have become part of the problem, and therefore cannot be trusted to solve it.Tzeentch

    Which governments are you talking about? You're just lumping all governments into one big pile? Things are more nuanced than that.

    They could help solve this issue by creating platforms where constructive discussions can take place (as they have done in the past), but modern governments show no interest in doing so.Tzeentch

    Governments as singular entities cannot do this without the risk of corruption. It should be a global effort and collaboration since social media is a global function.

    Because modern governments have gone all-in on propaganda (now euphemistically called 'narrative'), and they don't want their propaganda to be questioned on authoritative platforms.Tzeentch

    You can't do propaganda like that if the system is a global collaboration and decentralized and open source by nature.

    In fact, governments don't want their propaganda questioned on any platform if they had their way, and that's of course exactly what they would strive for - an iron hold on public opinion à la China - a monopoly on "truth".Tzeentch

    Yes, that's why the concept is to exist under a UN movement rather than under any single government. You think the corporate interests of someone like Elon Musk, with his conspiracy theory narratives, are better than any government handling social media? Both are equally problematic.

    Which is why both single-government platforms, like China's own, and tech companies' platforms should be taken down and replaced with globalized social media platforms that are decentralized, open source and overseen by human rights bodies and directives.

    Believing in social media as it is constructed now, buying into Musk's free speech absolutism and so on, is ignoring how current social media operates and how it skews people's world view to the point of societal collapse. We already have elections being manipulated through it, even without any government owning its functions. I don't think you're looking at this with enough scrutiny.
  • Ukraine Crisis
    Nothing that happens on social media is what I would consider public debate, and certainly not healthy public debate.

    And that's my point: healthy public debate is lacking.
    Tzeentch

    I think you need to read what I wrote again. The public debates that should be the core of forming rational opinions in a democracy have taken on the behavior of social media, meaning the lack of healthy public debate is caused by how social media is operated by these large tech companies. It doesn't matter whether the public debate happens in public, in traditional media or anywhere else; the attitudes, herd-mentality biases and emotional fallacies have taken over as the primary mode of any debate.

    Without social media, in its current form, influencing and programming the public into toxic debate behavior, healthy public debates will show up. It's the rejection of these toxic, market-algorithm-driven social media platforms that will lead to better care for democracy.

    We already had a major problem with traditional media channels reporting biased news, but nowadays it's everyone's behavior, because these algorithms push people into the narratives that make the most money for the tech giants. They simply do not care about healthy democracy until war is on their doorstep, because these companies have zero intellectual insight into the broad consequences of their actions, as argued by documentaries like The Social Dilemma and thinkers like Lanier.

    That's almost exactly how government agenda-setting functionsTzeentch

    What does what I said have to do with that? I'm talking about how algorithms on social media corrupt the perception of knowledge and change people's behavior, turning them into a corrupted mass herd that has lost the ability to form a healthy democratic movement.

    For that you would need impartial decisionmakers, which we have just established the government is not.Tzeentch

    It becomes impartial through its decentralized nature, open source structure and global form. You can only be truly impartial by including as many different voices as possible so that a broad consensus forms; otherwise it will always lean toward someone specific. And if the funding is a neutral fund that cannot be infused by lobbyists and the like, but is instead a form of global tax that every nation in the UN contributes to, it will be a very low cost for the world and also impossible for any single government to influence to the point of corruption. Add further structures for democratically rotating the leaders of its operation and for impartial oversight reviewers, and it becomes even less prone to corruption.

    There are forms of governing that are less prone to corruption and that enforce more rational decision making. Don't make the mistake of using examples of single governments to dismiss ideas for functioning politics. There's no alternative for handling society other than some form of governing, so it's more about systemic changes that advocate for better functioning democracies. And the global, neutral social media system I described is not governed by one government; I'm not sure why you interpreted it that way.

    Frankly, people flinging shit at each other on the market square doesn't concern me one bit.Tzeentch

    Well it should. You seem to think that the people and the government exist on two different planets. They're intertwined and push and pull on each other.
  • Ukraine Crisis
    People sharing bullshit wouldn't be much of an issue if democracies such as ours would foster healthy public debate. Governments are trying to crackdown on social media precisely because it disrupts the echo chambers they're so keen on maintaining.Tzeentch

    I don't think you have insight into how social media platforms function. They earn more money on engagement and ads, which means, going by psychology research, negative comments and posts generate more engagement while attention bait and rage bait generate more ad revenue.

    So you have things backwards: the current form of public debate is a result of catering to how people interact at large scale today, i.e. how people act on social media. Changing public debates will not do a single thing to change how people interact. Only transforming social media, away from market-driven algorithms and toward an algorithm that is neutral and serves normal interaction, without ads or market-driven influencers making up the majority of views and interactions, and with a clear line drawn at the behaviors a normal public space would allow, would produce a social media that is truly for the people and not for corporations.

    Take this forum as an example. Imagine if there were an algorithm that pushed only the most conflict-ridden topics to the top, and raised only those who paid for algorithmic priority, flooding the entire front page with their topics, most of them rage bait designed to earn money by nudging people toward buying a certain product. And imagine the mods ignoring most of the obvious racism, threats of violence and bad behavior that we see on other platforms.
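    To make that hypothetical concrete, here's a toy sketch of the kind of ranking I'm imagining; every name, weight and number is invented purely for illustration and isn't any platform's real algorithm:

```python
# Toy sketch only: invented fields and weights, not any platform's actual ranking.
from dataclasses import dataclass

@dataclass
class Topic:
    title: str
    angry_reactions: int      # crude proxy for "conflict"
    replies_per_hour: float   # crude proxy for engagement
    paid_boost: float         # 0.0 if nobody paid for priority

def front_page(topics: list[Topic]) -> list[Topic]:
    """Rank topics so that rage-heavy, paid-for threads flood the top."""
    def score(t: Topic) -> float:
        engagement = 2.0 * t.angry_reactions + t.replies_per_hour
        return engagement * (1.0 + t.paid_boost)
    return sorted(topics, key=score, reverse=True)

# A calm, unpaid thread loses to a paid rage-bait thread:
calm = Topic("Reading group: Spinoza", angry_reactions=1, replies_per_hour=2.0, paid_boost=0.0)
rage = Topic("You won't BELIEVE what they did", angry_reactions=40, replies_per_hour=5.0, paid_boost=1.5)
print([t.title for t in front_page([calm, rage])])
```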

    Be very thankful this forum is free of such bullshit and that the mods actually ban people for misbehavior. It's become practically impossible for me to be on social media platforms today, and so this forum has become a sort of oasis for online debates and discussions, since it's not driven by the bullshit that makes up most online communication today.

    So NO facebook, Instagram, X?ssu

    I am of the opinion that there needs to be a neutral social media platform, funded by a UN-type collaboration so that there's enough money to run the site, with no incentive to push market-driven algorithms or influencer economies. A decentralized but collaboratively run global social media platform that offers functionality similar to a combination of the major ones.

    Since there are a lot of people, like me, who were very present on social media in the past but have watched its quality decline with the rise of ads and bullshit, losing its fundamental core value of connecting actual people, a neutral alternative, backed by an open-source, non-profit global collaboration and built to be a space for the people rather than for market forces, would be an obvious place to move to.

    If people still choose to stay on the trash pile that is modern social media, then let them rot there in that brain rot until there's nothing left but a pile of meat unable to do anything besides consuming endless pages of AI-produced engagement trash.
  • Donald Trump (All General Trump Conversations Here)
    What does it mean that it now gets treated as a rhetorical flourish?Paine

    It either means that racism is getting normalized, or that people have forgotten that we once battled enemies in deadly combat to free ourselves from such depraved world views. Either way leads to a grim future: a place where racism is the norm, or a place in which respectful and good people don't dare to speak up against racist norms.

    To be harsh... maybe good-hearted people need to understand that setting limits and hard lines on what's acceptable in society also requires sufficient consequences for those who cross the line. Stop accepting all the racism. Fight back against it. Stop acting like it "will just go away on its own", because it won't. Racists can act more and more according to their ideology if there is less and less pushback from society. It's no wonder that someone like Trump gets into power with his obvious racism... no one cares. If people cared, there would be riots and demonstrations pushing the nation to the brink of collapse. But the US would end up in civil war long before society actually pushed back against these forces.

    Generally, people are sleepwalking into dangerous territory and if shit hits the fan they'll just cry out "how did this happen!?" :shade:
  • Donald Trump (All General Trump Conversations Here)
    My purpose was to find out if Trump was really crazy as the media portray. I did not find him crazy.L'éléphant

    Claiming that immigrants eat house pets, as if it were some large-scale problem, isn't something you would say defines him as crazy? Or, in other terms, unfit for the presidency? Either he's a total nutcase, or he's a blatant and obvious racist, or both. Do you think either is a good foundation for a presidency?
  • Donald Trump (All General Trump Conversations Here)
    The whole saga has been so totally unlikely from the very beginningWayfarer

    And thus it shows that the problem isn't that morons can run for president, but that the guardrails of democracy are non-existent. The solution is not to fight the morons, but to evolve democracy and the ways governments work and function within democracies.

    We need to reach a place in which the system itself guards against bad actors, while promoting competence and problem-solving for the people, rather than playing into and promoting people's inability to understand what an actual solution is.

    This is a philosophical question I'm constantly thinking about: society today is too quick to treat democracy as the end point of how society should function.

    It only takes minimal insight into psychology, sociology and media theory to reach the conclusion that, while democracy is the best we have, it is extremely flawed as used in society today, and a perfect tool for malicious actors to rule as dictators under the guise of the people's freedom. Essentially it gives people a Baudrillardian illusion of reality while the ruler lives as a king, all while the people fight for things they have been fooled into believing are worth fighting for.
  • Donald Trump (All General Trump Conversations Here)
    ‘childless cat ladies’ and ‘pet-eating immigrants’. And that’s really the best they can do.Wayfarer

    With that combo, they really need to protect the childless cat ladies' cats from the immigrant pet eaters! :scream:
  • Donald Trump (All General Trump Conversations Here)
    Believing that it's hopeless plays into his hands.Wayfarer

    What I mean and criticize is how someone like Trump is even approved to run for the presidency. People in democracies play far too loose with democracy and are generally oblivious to how to protect it from collapsing.

    The US is not electing a representative of the people; it elects a king or queen, someone who can be totally incompetent and play into any demagogic scheme they want in order to gain power beyond the laws that any other citizen is obligated to follow.

    Someone like Trump should, in a healthy democracy, be blocked from running as a representative, because people like him are clearly incompetent for the job.

    In any other job that involves dangerous operations, you need a license or approval to perform them. But not US presidents; they can be any level of moron and idiot and use any method to mislead and fool the population into voting for them. And the people of the nation can only shrug and be forced to go along with it.
  • Donald Trump (All General Trump Conversations Here)
    He can spout lies, exaggerations and hyperbole for 90 minutes, and everyone can acknowledge that this is what he's done, but it may not matter.Wayfarer

    For a large portion of his voters, it doesn't seem to matter. It also doesn't seem to matter that his statements about immigrants are among the most blatant examples of his racism. Maybe because his core voters are blatant racists themselves? Who knows. I just know that in modern politics racism seems to be fine, and there's no apparent will or action to get the racists and racism out of the corridors of power.

    That in itself is a clear sign that the current structure and system of government is a failure in every respect other than as a playground for authoritarianism under a plutocracy.
  • Uploading images, documents, videos, etc.
    I pinned this for a bit. Unpin it when you like.fdrake

    Still good to keep pinned as a reference to a good upload site. Or it could go into the forum rules thread?
  • Donald Trump (All General Trump Conversations Here)
    You couldn’t dispute anything I wrote, resorting to ad hominem by your own admission, then lecture me about “epistemological responsibility”.NOS4A2

    You are making wild conjecture through right-wing bias, with no actual evidence of corruption against Trump. Since there is actual evidence presented for what Trump has done, which was the foundation for the court rulings, you have to provide an extraordinary body of evidence for your counterclaims to supersede it. But what you're doing instead is making wild conjecture through an emotional response to everything.

    This lack of epistemic responsibility is further shown by how you view the Supreme Court's dismissal of Trump's guilt in a positive light, an act that should equally be treated as a form of corruption, this time in Trump's favor. Your treatment of each event just underscores how anything favorable to Trump gets treated as a win in your posts, while anything even remotely in his disfavor gets a rant about how the Democrats are corrupt. Your right-wing bias shines brighter than a neutron star, and it absolutely makes you fail epistemic responsibility.

    This is not an ad hominem; it's an analysis of your ability to stay neutral in analyzing what's going on with Trump. The reason most of us are in a position against Trump is that it's been proven so many times over how unfit he is for the presidency, and it's this sum of all parts that produces a consensus on the matter. Anyone who wants to dispute that needs extraordinary evidence and rational reasoning to support such a defense. So far, you've contributed nothing of the sort.
  • Donald Trump (All General Trump Conversations Here)
    If you fear the facts and resort to imagination, and seek only comments that validate what amounts to propaganda-driven gossip, you’re not fit for discussing anything. None of that is going to stop me from pointing out your failures.NOS4A2

    You don't provide facts in support of your conclusions. You are doing the same thing as any other conspiracy theorist: connecting the dots you want to connect between whatever you deem supports those dots. It's impossible to deconstruct your arguments or have a proper discussion with you, as people here keep finding out, because you're basically using every fallacy and bias known to man to slither your way past what everyone says. I, and the others, have no problem discussing with people of opposite opinions, but that ability relies on both interlocutors being able to handle facts and arguments with epistemological responsibility. There's no point in discussing this with you because you don't care about such scrutiny.
  • Donald Trump (All General Trump Conversations Here)
    As a cult member myself, being ruled guilty by a corrupt, anti-Trump, Democrat Judge, whose daughter raked in millions from the Biden and Harris campaign, is a badge of honor. Non-cult members are seeing that too and they’re joining in droves.NOS4A2

    Trading in conspiracy theories invalidates the value of your input in this discussion. At the risk of an ad hominem: you're mostly a laughing stock here. I'm not interested in discussing Trump with you, so I'll just ignore any future post that isn't a proper argument with actual substance.
  • Donald Trump (All General Trump Conversations Here)
    It's the moderates and independents that he's losing.Michael

    But then the question is: if they didn't understand how he's unfit for the presidency before, would any of the current problems matter to them? Being found guilty at trial should have been enough to sway them away from voting him into office, so what are the parameters that actually matter to these people who aren't directly in the cult?
  • Donald Trump (All General Trump Conversations Here)
    Between this and his VP pick, I wonder if one or more of his advisors are intentionally trying to sabotage him.Michael

    Would any of these things even matter to the core followers of his cult? The only ones who might be swayed seem to be regular swing voters on the lower end of the education spectrum, the people who are actually affected by political propaganda and demagogic speeches. But since the Republicans found a cult that can feed them votes, they basically have a giant bag of votes that will never change, even if Trump, at a rally, pulled off his meat face and revealed himself to be a malicious alien.
  • Coronavirus
    No. The next one might be a variation of Ebola. We'll see.frank

    The "problem" with Covid-19 was that it wasn't deadly enough. It ended up in this middle ground in which people could just brush it off as "just another cold" while others got seriously ill and died or contracted long covid symptoms.

    And since the public operates within whatever bubble of convictions each individual lives in, the seriousness of the pandemic was never truly taken to heart.

    But a larger and more deadly pandemic would drive the point home and get people to demand that their governments take precautions so it never happens again. It's unfortunate that the only way society can truly change is if it gets hammered to within inches of death; otherwise it just creates a polarized white noise that erodes any intellectually sound precautionary practice for preventing a new outbreak.

    It's the same for nuclear annihilation. People think that governments will press a button and send everything they've got, but it's more likely that a tactical nuke is set off on the battlefield or over an already destroyed (but occupied) city, and the act would be so shocking that it would shake the world into reducing nukes.

    The problem is that everyone rationalizes based on hypotheticals filtered through fictional narratives all the time. The resulting hyperreality slows progress and erodes precautionary and constructive actions and plans over time.

    In Sweden during the 90s, the fall of the Wall made our governments think that we no longer had to be ready for war and that the cost of a large military defense was irrational, seeing as we could use those funds for necessary things in society instead. We didn't need NATO, we didn't need much of our own defense, and so on. And nothing happened when Russia first invaded and annexed part of Ukraine in 2014, because to the public it seemed like some minor shenanigans by that Putin clown.

    But with a pandemic and a full-blown invasion that shook the world, all of a sudden, seemingly out of the blue, the public supported a massive increase in rebuilding our defenses and joining NATO.

    What changed was the perception, the narrative, being hit by actual reality. It's only this sledgehammer that moves a society to take action, nothing else is as effective.

    So whenever something more serious than the recent pandemic happens, it will quickly reprogram the population into supporting actions to mitigate, fight back and create precautions against future risks. But Covid-19 wasn't enough, so we will probably be unprepared for something more deadly, and only after something like a billion deaths will people make demands that can move elections.

    Just think of all the elections going on this year. Has any politician, anywhere, campaigned on preventing new pandemics? No? Of course not. Covid-19 is treated as if it's over and "wasn't that bad". So there's no interest from the public; they just want to move on to other things. But if something really deadly started a new pandemic, it would be at the top of every election agenda and would lift politicians who speak of necessary changes to global society.
  • US Election 2024 (All general discussion)
    if Biden doesn't voluntarily step aside and endorse his replacement.180 Proof

    Yeah, him doing so would be the only possible positive outcome other than taking the chance on him winning, but that feels more far-fetched the closer we get to the election.
  • US Election 2024 (All general discussion)
    50 is old. Presidents should be 50 at the most. I'm 45 and I read a lot and make music so I can't say my brain isn't regularly used and I can tell my retention rate for information is a fraction of what it used to be. I can barely memorise new pieces and forget them in a few months after I do but will play your anything I learned when I was a teenager.Benkei

    50 isn't old. 50 is a good age at which the maturity of ideas settles in. And since 40-year-olds today act like they're still children, with immature handling of philosophical concepts and ideology, they need ten more years of maturing before they have the calm to act on their convictions and ideas.

    20-30 is the age in which people explore who they want to be as an adult. 30-40 they explore the validity of such aspirations. 40-50 they manifest the true aspirations, solidified as their true identity. 50+ is when a person has manifested who they really are, a stability matured by years of exploration in which they find wisdom rather than just knowledge.

    A presidency requires wisdom and it's something lacking these days.

    Of course, all this requires a sane psychology and proper introspection throughout life. Most people can live to their dying days without ever thinking an original thought or questioning themselves properly. But for a president, a leader, someone who's supposed to work for the people, it should be a requirement.

    I find democracies today to be pretty lackluster in their defense against those taking advantage of them. Just because democracy demands that the representative of the people be the person who wins an election shouldn't mean that any dipshit can run. I rather think that a representative in a democracy should face demands of competence, like any other job in society.

    A president does not stand above the body of representatives of the people; they should be in service of it. They should be the ones taking the raw emotions, wills and wants of the people and fine-tuning them into working policy, law and national practice. They should be in service of the people.

    That's not what's going on in the US. The US president is a pseudo-king. It's a plastic kingdom similar to Disneyland. The US does not have a good political structure; there's no actual parliament. Congress is just a big funding party for lobbyists rather than a place to evaluate strategies for the nation and for international politics.

    Democracy is still in need of philosophical progress; it needs further work. The term has become a plaque, an adjective used to describe a "good nation", but since none of the "good guys" in our world seem able to describe how democracy actually produces "good nations", we end up with sham democracies and representatives of the people who can con everyone into believing these representatives are kings and religiously elevated deities. Until people see them as they are and go for the next plastic king.

    Democracy today is in a hyperreal state of what democracy could be. And we lack the framework to produce actual leaders because we haven't thought through what is actually required to end up with stable, good leaders who are competent at their job.

    The entertainment industry that is democracy today must end and be replaced by democracy that has evolved past this shallow plastic shell of "a good nation".
  • US Election 2024 (All general discussion)
    It’s a lose-lose situation now for the democrats.Mikie

    It should be a walk in the park to stand up against the fascist MAGA movement, authoritarian Republicans and Trump, but it takes a special level of incompetence for the Democrats to fail at this with the mountain of shit that has piled up against Trump.

    Maybe now they will simply put an age limit on their future candidates? And maybe get better at preparing younger party members as future candidate material. Like, get them started in their 30s, really build up their reputation in their 40s and let them run for the presidency in their 50s. With enough work they would have 10-20 candidates to put forward and could nail down not only a candidate people like, but also a number of backups who are also liked. Against the Republicans, it would become easy.

    But I guess, since Gen Z doesn't seem to care about politics and just wants some magical solution to everything, there won't be any young people available to be prepared for a future presidency. We're not doomed because we have senile old people running the show, but because we have no young people actually caring about politics. Maybe when the fascist boots step on their throats they'll get the memo to actually do something for real, and not just continue their slacktivism thinking it accomplishes anything.
  • US Election 2024 (All general discussion)
    Is it so hard to find a younger and better suited candidate for the Democrats? Is there no one else with the ability to be a better president than a self-appointed dictator-cult-running clown or a demented relic who fled the nursing home?

    How is it possible that the Dems have no other candidate who can just swoop in and take the reins in a way that makes people actually hopeful about the future? Was there no backup plan among the Dems if Biden were to fail, go fully demented or die?

    Everything about US politics is just stupid. It's an entertainment machine. We have presidents depicted in movies and television who act better than anything that actually exists.

    The US is a joke trying to act like an adult in front of the world. Redo the entire political system, let intelligent philosophers and historians write a new constitution and rebuild a better nation. Right now it's just a patchwork of stupid interpretations of old politics, with a population that's suffering in education, health and financial stability while at the same time being so indoctrinated as to believe that the enlightened ghosts of the founding fathers inhabit the candidates running for president.

    :vomit: Fire everyone, ban lobbyists and manipulating narcissists, hire intellectuals who are humble educated experts without any interest in prestige or glory; rebuild everything.
  • Donald Trump (All General Trump Conversations Here)
    The Bangladeshi is paid too little for the pants he sows, his neighbour is affected by the toxic dyes that are unregulated there and you pay an exorbitant amount for the same pants considering the low quality (which fall apart after about a year), while being brainwashed to think the quality is acceptable and you need new pants next year (no wait, every other season) to stay fashionable. This conduct killed local tailors who couldn't compete fairly and in the end everybody is worse off. But hey, everybody "agreed" to the underlying transactions; so it's all fine and dandy and you can rest easy that as long as the market runs free, everybody gets what they deserve.Benkei

    I do, however, see a slight pushback from people in western societies. One would think it comes from moral grounds, but that rarely produces lasting sustainability. No, it's rather that people have started to get fed up with the ever-increasing trash quality of all products, so they seek something more expensive but lasting. If this trend keeps growing, we may very well see the rise of tailors again, not just for suits and fine clothing but for common casual clothes, based on the premise of actual quality and discounts for having damaged clothes repaired by the same tailor. If a tailor were to set up a store for high-quality T-shirts that actually last longer, I'd bet they would need to expand their business fast, as there would be far more customers than they had initially prepared for.

    I speak from personal experience here as well. For years I always looked for the most decently priced product and viewed the expensive ones as unnecessary luxury. But older products, bought at auction or just found in the attic, have lasted for decades and still outperform new ones. Older tools are much more robust and get the job done for longer. So I took a look at all those luxury-level versions of products that can be bought, only to realize that the best brands just do one thing well, and that's caring for the material and the work that went into making the product.

    Taking inflation into account, it's basically buying the old, sustainable products updated for today. So all these cheap, mass-produced products that won't last a year aren't cheap only because modern technology helps reduce prices, but because manufacturers have dodged normal rising inflation by cutting corners in production.

    I've been looking for a good office chair since my old one broke. There are a lot of options, but most of them are plastic and will fall apart after maybe 3-4 years. So I'll go to the prestige brands instead and find something made by actual hands that care for the material. It will cost me a small fortune just for a damn chair, but it may last for the majority of the lifetime I have left. If people were to divide the upfront cost of something by the estimated lifetime of the product, they wouldn't view these things as luxury brands anymore; they would look at them as a personal investment.

    People stare too much at the price tag and not at the value of something. But if that's changing, we may even end up with a more sustainable world in the end, without much morality or anti-capitalism driving it.

    I will support my local casual-clothes tailor if one ever shows up. I would never go anywhere else.
  • Is death bad for the person that dies?


    I work all day, and get half-drunk at night.
    Waking at four to soundless dark, I stare.
    In time the curtain-edges will grow light.
    Till then I see what's really always there:
    Unresting death, a whole day nearer now,
    Making all thought impossible but how
    And where and when I shall myself die.
    Arid interrogation: yet the dread
    Of dying, and being dead,
    Flashes afresh to hold and horrify.
    The mind blanks at the glare. Not in remorse
    - The good not done, the love not given, time
    Torn off unused - nor wretchedly because
    An only life can take so long to climb
    Clear of its wrong beginnings, and may never;
    But at the total emptiness for ever,
    The sure extinction that we travel to
    And shall be lost in always. Not to be here,
    Not to be anywhere,
    And soon; nothing more terrible, nothing more true.

    This is a special way of being afraid
    No trick dispels. Religion used to try,
    That vast, moth-eaten musical brocade
    Created to pretend we never die,
    And specious stuff that says No rational being
    Can fear a thing it will not feel, not seeing
    That this is what we fear - no sight, no sound,
    No touch or taste or smell, nothing to think with,
    Nothing to love or link with,
    The anaesthetic from which none come round.

    And so it stays just on the edge of vision,
    A small, unfocused blur, a standing chill
    That slows each impulse down to indecision.
    Most things may never happen: this one will,
    And realisation of it rages out
    In furnace-fear when we are caught without
    People or drink. Courage is no good:
    It means not scaring others. Being brave
    Lets no one off the grave.
    Death is no different whined at than withstood.

    Slowly light strengthens, and the room takes shape.
    It stands plain as a wardrobe, what we know,
    Have always known, know that we can't escape,
    Yet can't accept. One side will have to go.
    Meanwhile telephones crouch, getting ready to ring
    In locked-up offices, and all the uncaring
    Intricate rented world begins to rouse.
    The sky is white as clay, with no sun.
    Work has to be done.
    Postmen like doctors go from house to house.
    — Philip Larkin
  • Last Rites for a Dying Civilization
    There is the sticking-point. The galvanizing charismatic leader is missing.Vera Mont

    People driven by rational ideas and ideals, formed through consensus and critical thought, self-organizing as individual agents able to act on their own and amplify their neighbors along the same path... are infinitely better than some fist-pumping charismatic leader who is, more often than not, right on the edge of self-indulgent deification, subsequently pulling their strings of manipulation a bit too far and collapsing the good into a state of utter destruction.

    People in an unhinged individualistic society can still be individuals and act as individuals. They all just need a bit of ego death before anything can happen as a collective. People need to get better at understanding and sorting good ideas from bad ones, and better at sifting the knowledge that is actual, real and rational from the endless trash formed by the attention economy and its representatives and slaves. What's needed isn't a charismatic leader; it's the ability of regular people to form an epistemic responsibility toward knowledge in front of an endless sea of raw information.
  • Last Rites for a Dying Civilization
    Yes, changing the political landscape is hard! My vote means nothing. My little bit of feeble activism is ineffective. Can't introduce electoral reform, can't take financial interests out of the process, can't get media to focus on the relevant issues and give more than the most superficial cursory attention to climate science or climate policy. Whatever tiny headway we make, some other interest group overtakes and cancels it. Very discouraging.Vera Mont

    That's my critique... the modern ideal of individualism has ingrained itself so deeply into the self-image of even rational, thinking people today that everyone feels any kind of collective movement is a losing game, and thus surrenders all the seeds of power that could collectively move mountains if people ever organized for real.
  • Last Rites for a Dying Civilization
    What, like cutting down on their energy use, meat consumption or plastic packaging? Walk instead of drive? Refrain from throwing out last year's fashion? You must be kidding!Vera Mont

    I'm speaking of actual change to the core problems. Those actions, while good for the environment, have also become a sort of individualistic greenwashing, in which people act according to a fashion of behavior tied to the identity of being someone who cares for the environment, while still rolling out the carpet for people in power who, on a larger scale, dismiss or counteract necessary changes. Such identity traits can in some cases be so ingrained in the hyperreality of modern living that they inflate the idea of the self as more environmentally conscious than it actually is, effectively soothing climate anxiety with a comfort blanket rather than being part of actual change.

    It's the same as people who put money into charity while voting for politicians involved in keeping a certain conflict going, a conflict that in turn produces the very conditions the charity money then tries to mitigate. It's shallow, unengaged and centered on the ego of the person and their self-image as morally good within a certain societal context.

    I can't stress enough the number of people I meet who live up to the agreed upon moral standards of "how to behave for the good of the environment" in day-to-day living, but at the same time know close to nothing of climate science or what is actually going on within the politics of climate change.
  • Last Rites for a Dying Civilization
    Our government is supposed to follow the precautionary principle, which states that "where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation." But alas, our government is not guided but what is best for human health or the planet's health, but what is best for the profits of corporations, no matter the price to the environment or future generations.xraymike79

    No, modern politics in western democratic societies is governed by modern individualism. It doesn't matter which side you stand on politically; the hyperreality of individualism has corrupted society's ability to socially and collectively hold the epistemic responsibility necessary to drive humanity in a sober and rational direction.

    What makes this abundantly clear is the inability of anyone to analyze our culture without shifting the blame onto something else. While we can absolutely find perpetrators who are actively working against societal rationality, the truth is that everyone is guilty. Statistically there are far more people who actually care about the environment than there are voices against it, but they simply feed their ego and their shadow of morality by pointing fingers.

    If people actually cared beyond their individualistic driving force, we would see politicians get kicked out of office and replaced by those who would take action. We would see massive shifts within society all over the world. But we don't.

    So the guilty aren't just the obvious perpetrators; it's not only the corporations and corrupt politicians, it's also everyone else who paints a picture of themselves as caring and rational while doing jack shit to produce or actually support any form of necessary change.
  • Are posts on this forum, public information?
    Well, this is a publicly visible forum so nothing stops ChatGPT from visiting the website and copying what shows on the page.

    It's possible to stop this by creating a robots.txt file that tells ChatGPT that it's not allowed to visit, but PlushForums doesn't provide such a file.

    As for the raw database, the PlushForums FAQ says "we do not sell or share your data with any third parties."
    Michael

    It's a kind of backdoor into the collective body of work of philosophy; your ideas actually get swept up to be part of something larger. That's not all doom and gloom. I'd take pride in my ideas being worth spreading, regardless of whether I'm credited or not. It won't matter in a hundred years anyway; only the ideas will remain, and if mine contribute to that body of knowledge, then I won't just die off having been nothing more than a decades-long, resource-soaking, methane-leaking meatbag.

    But it would be nice if GPT got its quotation and citation abilities working. That's basically the only requirement for bypassing the problem of accidental plagiarism, and it would credit people if ideas from here spread into use within AI systems.
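    On the robots.txt point in the quote above: here's a minimal sketch of how that mechanism would work, assuming the crawler actually honors robots.txt (OpenAI documents its crawler's user agent as GPTBot). The rules and URL below are purely illustrative, since PlushForums doesn't currently serve such a file.

```python
# Minimal sketch: a hypothetical robots.txt rule set that disallows GPTBot,
# checked with Python's standard-library parser. The URL is illustrative only,
# not the forum's real address.
from urllib.robotparser import RobotFileParser

robots_lines = [
    "User-agent: GPTBot",   # OpenAI's documented crawler user agent
    "Disallow: /",          # disallow every path for that agent
]

parser = RobotFileParser()
parser.parse(robots_lines)

# A compliant crawler checks this before fetching a page:
print(parser.can_fetch("GPTBot", "https://example-forum.com/discussion/12345"))       # False
print(parser.can_fetch("SomeOtherBot", "https://example-forum.com/discussion/12345"))  # True
```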