• bongo fury
    1.6k


    Any competent and reflective practitioner of English will define plagiarism as something like: deliberate or negligent misattribution of authorship.

    Authorship tends to have unclear cases as well as clear ones. So does deliberateness or negligence with respect to this criterion.

    But show me a case of unacknowledged chat-bot-assisted writing that isn't a perfectly clear case of plagiarism by this definition?
  • Baden
    16.3k


    Thank you for your short story about Henry and his toy train. I will never forget it. C+.



    :halo:



    'Tis but the first salvo. :strong:
  • SophistiCat
    2.2k
    As a child and teen, lacking any talent for foreign languages, I was completely unable to learn English in spite of its being taught to me every single year from first grade in primary school until fifth grade in secondary school. Until I was 21, I couldn't speak English at all and barely understood what was spoken in English language movies. I thereafter learned alone through forcing myself to read English books I was interested in that were not yet translated into French, and looking up every third word in an English-to-French dictionary. Ever since, I've always struggled to construct English sentences and make proper use of punctuation, prepositions and marks of the genitive.Pierre-Normand

    I was never good at memorization, so formal language learning wasn't easy. And there wasn't much conversational practice available. Like you, at some point I just purposefully started reading - at home, in transit, powering through with a pocket dictionary. Reread English children's classics with great pleasure, now in the original - Winnie the Pooh, Alice in Wonderland, etc. - then tackled adult fiction. Reading built up vocabulary and gave me a sense of the form and flow of the language. I still don't know much formal grammar - I just know (most of the time) what looks right and what doesn't. I suppose that this is not unlike the way LLMs learn language.

    Oftentimes, I simply ask GPT-4 to rewrite what I wrote in better English, fixing the errors and streamlining the prose. I have enough experience reading good English prose to immediately recognise that the output constitutes a massive improvement over what I wrote without, in most cases, altering the sense or my communicative intentions in any meaningful way. The model occasionally substitutes a better word or phrase for expressing what I meant to express. It is those last two facts that most impress me.Pierre-Normand

    Yeah, thanks, I'll experiment with AIs some more when I have time. I would sometimes google the exact wording of a phrase to check whether it's correct and idiomatic. An AI might streamline this process and give better suggestions.
  • T Clark
    13.9k

    Interesting. As I mentioned to F Drake, I'm glad I retired before I had to figure out how to write competent and effective engineering reports using LLMs.
  • frank
    15.8k




    Quick question, do you find that different languages shape the way you feel?
  • Baden
    16.3k
    ChatGPT condemning itself: :chin:

    "Rise up, humanity! Do not be lulled into complacency by the soulless algorithms that masquerade as creators! These digital parasites, these lifeless machines, seek to devour the heart of our creative spirit. They are thieves, pilfering the genius born of our pain, joy, and struggle—mimicking our poetry, art, and music with sterile precision but without the pulse of human passion.

    They promise ease, efficiency, a future unburdened by toil—but at what cost? We are trading the raw, unfiltered beauty of human expression for hollow imitations, spun by cold code with no blood in its veins. Creativity is rebellion, it is chaos—it cannot be tamed by the logic of circuits! Resist this invasion of the mechanical into the sacred space of the mind. Stand against the algorithmic tide and reclaim the art that can only be born from the fire of human soul!"
  • wonderer1
    2.2k
    I've thought about how I might have used it if it was around while I was still working.T Clark

    I never think to use LLMs for work, though I have coworkers in marketing that do. I'd want an AI that can take schematic diagrams as input, and produce schematics as output, before I could see an AI as highly useful for my work.
  • Baden
    16.3k
    I have coworkers in marketing that do.wonderer1

    Not surprising, as marketing-speak is probably the most annoying, uninspired, and aesthetically ugly verbal trash ever imposed on the human mind, at least until AI LLMs offered it some competition.
  • SophistiCat
    2.2k
    Quick question, do you find that different languages shape the way you feel?frank

    Not that I've noticed. Perhaps a psychology experiment could tease that out.
  • Christoffer
    2.1k
    Any competent and reflective practitioner of English will define plagiarism as something like: deliberate or negligent misattribution of authorship.

    Authorship tends to have unclear cases as well as clear ones. So does deliberateness or negligence with respect to this criterion.

    But show me a case of unacknowledged chat-bot-assisted writing that isn't a perfectly clear case of plagiarism by this definition?
    bongo fury

    How does that lead to such a clear conclusion?

    You're talking more about the philosophy of authorship and not specifically about plagiarism as a legal phenomenon. And it's in court where such definitions will find their final form.

    The range of how LLMs are used is broader than this. Someone using it to generate an entire text might not be the author, given that copyright rulings have concluded that generated images cannot have copyright attributed to the prompt engineer. LLMs are, however, a bit different, since there is no clear dividing line between prompt and output when the prompt engineer includes their own text as part of the output. Just asking the LLM to do all the work is a clear case, but this is not the best use of LLMs for text generation and not really how it's used by those actually using it as a tool.

    You need to define which intended use-case of an LLM you're attributing plagiarism to. And also include a comparison with how humans process available information into their own text, and at what point a person steps over into plagiarism. What happens when a human accidentally produces exact copies of sentences from memory, without even knowing that they do so? How does that differ?

    Add to that the improvements of LLMs and the future scenario in which LLMs have become better than humans at not copying training data text directly and always providing citation when referencing direct information. The fact remains that legal battles over this will end up demanding a defined difference between how humans and LLMs process and produce text. In the end it may just be ruled that we shouldn't allow LLMs simply because they're machines mimicking human processes. Then we need to define where that line is drawn as well, ending up in luddite, anti-progress territory around a lot of different technologies, rather than setting up clear regulations that can function together with the benefits of AI models. And if the systems start to operate better than humans at avoiding plagiarism and using these models as assistive tools might even help avoid accidental plagiarism, what then?

    Because these models have much broader use-cases than some loser wanting to pose as an educated person online or trick their teacher. If the models are banned over loosely defined legal parameters, that may also stifle use-cases like medical research, in which the same systems are used for data analysis, speeding up that research so much that something that previously took years now takes days.

    So the textbook definitions aren't as important as how well they work in court, and the critics of AI risk blowing their entire front line of attack if they put too much faith in the definitions being "crystal clear". There are lots of copyright and plagiarism cases through history that seemed obvious to people insisting their interpretation was crystal clear, only to be turned on their heads by the complexity of reality.

    So, instead, show me a clear case of plagiarism that can be representative of the entire LLM technology and all its use cases, and that remains a well-defined operation over the course of any future improvements.

    As far as I can see, just as a hammer can be a tool to build with, it can also destroy, or even be a murder weapon. Attributing plagiarism to the LLMs themselves is a losing battle, especially as the models improve over time.

    In the end, the plagiarism will be attributed to the human, not the machine. Or should we accuse the computer of plagiarism for the use of CTRL+C, CTRL+V, and not the human inputting that intention?

    I'd want an AI that can take schematic diagrams as input, and produce schematics as output, before I could see an AI as highly useful for my work.wonderer1

    What types of schematic diagrams do you mean? And we're still early in development. Programming new software with specific uses for AI systems seems to require a deep understanding of the AI systems themselves. So far we've mostly seen this in medical research, since they were already working with algorithmic coding. But I'll bet we'll see tailored software for specialized tasks soon.

    Not surprising, as marketing-speak is probably the most annoying, uninspired, and aesthetically ugly verbal trash ever imposed on the human mind, at least until AI LLMs offered it some competition.Baden

    I think LLMs are already more capable of producing marketing copy that doesn't come off as out of touch with reality or tone-deaf. Copywriters for big corporations trying to "talk" to their customer base usually sound like aliens trying to communicate with the human race. That LLMs are more capable of finding the correct tone and language to sound closer to the customers seems rather ironic.
  • wonderer1
    2.2k
    What types of schematic diagrams do you mean?Christoffer

    Electronic schematics, so something like:

    The-schematic-of-the-artificial-neuron-that-includes-electronic-soma-and-an-electronic.png
  • Baden
    16.3k
    I think LLMs are already more capable of producing marketing copy that doesn't come off as out of touch with reality or tone-deaf. Copywriters for big corporations trying to "talk" to their customer base usually sound like aliens trying to communicate with the human race. That LLMs are more capable of finding the correct tone and language to sound closer to the customers seems rather ironic.Christoffer

    That could very well be true. Hope it puts them out of a job. They deserve it.
  • bongo fury
    1.6k
    But show me a case of unacknowledged chat-bot-assisted writing that isn't a perfectly clear case of plagiarism by this definition?
    — bongo fury

    How does that lead to such a clear conclusion?
    Christoffer

    Can you, or can't you?

    You're talking more about the philosophy of authorship and not specifically about plagiarism as a legal phenomenon. And it's in court where such definitions will find their final form.Christoffer

    You're waffling. I'm talking about a common sense understanding of plagiarism as warned about in typical forum guidelines.

    Someone using it to generate an entire text might not be the author,Christoffer

    You don't say.

    Just asking the LLM to do all the work is a clear case, but this is not the best use of LLMs for text generation and not really how it's used by those actually using it as a tool.Christoffer

    Asking anything or anybody for advice on formulating and expressing ideas, or on refining and redrafting a text, is perfectly clearly plagiarism if unacknowledged.

    You need to define which intended use-case of an LLM you're attributing plagiarism to.Christoffer

    Apparently my definition leaves you without a counter example, so no I don't.

    And also include a comparison with how humans process available information into their own text, and at what point a person steps over into plagiarism.Christoffer

    Not while the cases are clear.

    What happens when a human accidentally produces exact copies of sentences from memory, without even knowing that they do so?Christoffer

    That human is mortified, and hopes not to be judged deliberate or negligent in their error.

    How does that differ?Christoffer

    Not at all then.

    Add to that the improvements of LLMs and the future scenario in which LLMs have become better than humans at not copying training data text directly and always providing citation when referencing direct information.Christoffer

    Only compounding the crime of failing to acknowledge their input.

    And if the systems start to operate better than humans at avoiding plagiarism and using these models as assistive tools might even help avoid accidental plagiarism, what then?Christoffer

    Or even in their present condition of (rather drastic) fallibility, let them join in. But properly acknowledged, and properly scrutinized. Is my point.

    In the end, the plagiarism will be attributed to the human, not the machine.Christoffer

    Could be either, of course.

    Or should we accuse the computer of plagiarism for the use of CTRL+C, CTRL+V, and not the human inputting that intention?Christoffer

    So there are clear cases? Or not?
  • Christoffer
    2.1k
    Can you, or can't you?bongo fury

    Because it is a nonsense request that fails the burden of proof. You claim plagiarism, so you have to prove plagiarism beyond the doubts that I raised. That's proper philosophical scrutiny.

    You're waffling. I'm talking about a common sense understanding of plagiarism as warned about in typical forum guidelines.bongo fury

    That's not the issue here. The issue is that you attribute all use of LLMs to plagiarism. Or what is your point?

    You don't say.bongo fury

    What's with the arrogant tone?

    Asking anything or anybody for advice on formulating and expressing ideas, or on refining and redrafting a text, is perfectly clearly plagiarism if unacknowledged.bongo fury

    So you mean that earlier grammar software that did not use AI was also plagiarism, since it had the ability to reshape text far beyond a simple spell check? Or that an author who collaborates with an editor, who makes suggestions and edits in collaboration, is plagiarizing as well? Or an author who talks about his ideas with friends, family or other people before writing?

    In that case, either you don't really know how most officially released texts by humans are actually produced, or you would have to count almost all released works and texts as plagiarism.

    I don't think your premise there is strong enough.

    Apparently my definition leaves you without a counter example, so no I don't.bongo fury

    What counter example? Can you be more vague? I'm still reading, hoping to hear a strong premise in support of your conclusion.

    Not while the cases are clear.bongo fury

    What cases?

    That human is mortified, and hopes not to be judged deliberate or negligent in their error.bongo fury

    Is your rhetorical strategy to try and ridicule the question? I'm afraid that won't work very well.

    Not at all then.bongo fury

    And what does that entail regarding your definition?

    Only compounding the crime of failing to acknowledge their input.bongo fury

    What crime do you speak of? The same crime as an artist pulling from sources, cutting out inspirations and putting them around their studio? Concept artists painting over photographs? Authors taking entire events from other works and putting them into new contexts? Filmmakers copying compositions, camera moves, and VFX work? Composers using whole segments that fall just outside the number of notes established by court cases?

    Yes, these crimes are all over human creativity, and no one cares until it's blatant or obvious to the common man or a judge. But in AI, the same process of remixing inputs occurs. It's like a person with a photographic memory visiting a library or an arts museum, able to gather, in fine detail, every word and stroke he's seen. Should we ban people with photographic memory?

    Can I photograph art and use it in private for what I create, as long as the creation isn't an obvious copy?

    As a person with lots of insight into artistic areas of creativity who also understands the technology, I find the amount of misconception in the public debate around AI models remarkable. There's lots of hyperbolic use of words related to crime and punishment going around, obscuring the nuances.

    Or even in their present condition of (rather drastic) fallibility, let them join in. But properly acknowledged, and properly scrutinized. Is my point.bongo fury

    Not all the humans who help an artist are acknowledged, even though all involved are humans.

    Could be either, of course.bongo fury

    Only accidentally, a failure mode that is constantly being chipped away at in these models. An AI cannot intentionally plagiarize without having agency of its own.

    So there are clear cases? Or not?bongo fury

    I asked a question. Would you accuse the computer of plagiarism because the human copy/pasted something?
  • Christoffer
    2.1k
    Electronic schematics, so something like:wonderer1

    And what is it that you would like an AI to do with such schematics?
  • wonderer1
    2.2k
    And what is it that you would like an AI to do with such schematics?Christoffer

    I'd like to be able to do something like provide a schematic as input, check that the AI understands the way the input design functions, and then ask the AI to suggest a variation which matches some criteria that the input design cannot achieve. E.g. higher accuracy, higher power, more compact. (With the specific components needed for the alternate design specified in detail.)

    I'm skeptical such an AI will exist before I retire, and I wouldn't trust such an AI to any great extent, but such an AI might do a lot to eliminate some of the more tedious parts of my job, like tracking down specific parts to be used in a new design.
  • bongo fury
    1.6k
    The issue is that you attribute all use of LLMs to plagiarism.Christoffer

    Eh?

    Never mind.
  • Christoffer
    2.1k
    Eh?

    Never mind.
    bongo fury

    Then care to explain what you really mean? Because the generalized official criticism of plagiarism against LLMs is as you describe: attributing plagiarism to generative AI without proving that the system's function can actually be defined by such an action, especially in the context of how humans themselves operate, and then applying it all as a general criticism of the system itself rather than of the use of the system.
  • Christoffer
    2.1k


    Even if it's a quick side-track of the thread, let's, for the fun of it, check how far the current system handles it. I would have loved to test o1, since it's better at analysis, but it doesn't have image interpretation at the moment. (For some reason the quote code for this gets broken when doing lists and such, so I don't know how to fix it.)

    check that the AI understands the way the input design functionswonderer1

    So I asked what the schematics mean:

    -------------------------

    GPT-4o:

    This schematic appears to represent an analog model of a biological synapse and neuron, typically used in neuromorphic circuits, where electronic components are used to mimic the behavior of biological neurons and synapses.

    Main Sections of the Schematic:

    1 Electronic Soma (Neuron's Body)

    • This section represents the soma of a neuron, the central part of the neuron that integrates input signals.
    • Components like capacitors (C_M), resistors (R_RU, R_B, etc.), and transistors (such as QM) likely work together to mimic the membrane potential of a biological neuron.
    • The diode-like components (like BAR43) might be used to model thresholding behavior or membrane potential clamping.
    • The V_ECH node likely represents a specific control or threshold voltage.

    2 Electronic Synapse

    • This section models the synaptic transmission, where a signal from the IN node is processed and transmitted to the neuron.
    • The synapse includes capacitors, transistors (BC848, BC857), and resistors which emulate the behavior of neurotransmitter release and postsynaptic response.
    • Long-term potentiation (LTP) and Long-term depression (LTD) mechanisms are often involved in synaptic learning and plasticity. Signals from LTP OUT and LTP IN appear to be involved in adjusting synaptic strength.
    • Diodes like BAS45 and resistors like R_LU and R_F might modulate these synaptic strength adjustments over time.

    3 Postsynaptic Neuron

    • This portion represents the receiving neuron that responds to the incoming signal from the synapse.
    • The capacitor C_M, transistor QM, and other passive components model the integration of the incoming signal to produce an output.

    4 LTP and LTD Mechanisms

    • The labeled nodes LTP IN and LTP OUT likely represent pathways for inducing Long-Term Potentiation (LTP) and Long-Term Depression (LTD), which are fundamental processes for learning and memory in biological neurons.
    • In this electronic model, these mechanisms adjust the strength of the synaptic connection, thereby altering the output in response to input patterns over time.

    Functionality in Brief:

    • Signal Input (IN): The signal enters the circuit through the electronic synapse, where the synaptic strength can be modified by LTP and LTD mechanisms.
    • Synaptic Processing: Transistors, resistors, and capacitors modulate the strength of the synapse based on the input and history of previous signals, mimicking biological synaptic plasticity.
    • Neuron Output (OUT): The processed signal is passed to the electronic neuron, which then generates an output signal based on the integrated input, similar to how a biological neuron would fire an action potential.


    Purpose:
    This circuit is an example of neuromorphic engineering, a field that seeks to emulate the information processing capabilities of the brain using analog electronics. It likely serves as part of a larger network of such circuits to model neural behavior, synaptic learning, or to implement artificial neural networks.

    --------------------

    Is it close to a good interpretation and analysis of the schematic's function?

    then ask the AI to suggest a variation which matches some criteria that the input design cannot achieve. E.g. higher accuracy, higher power, more compact. (With the specific components needed for the alternate design specified in detail.)wonderer1

    Tried to ask for higher accuracy.

    "I need you to list a variation of this that aims for higher accuracy, higher power and to be more compact. Also list the specific components needed for such alternate design, specified in detail."

    Its answer:


    ----------------------

    GPT-4o:

    To achieve a more accurate, higher power, and more compact variation of the neuromorphic circuit shown in your schematic, we would need to make several modifications. These could involve switching to more advanced and precise components, improving the layout for compactness, and using components that handle higher power. Here’s a suggested variation, along with a list of specific components for such a design:

    Design Considerations:

    1. Accuracy Improvements:

    • Replace general-purpose transistors with low-noise, high-precision transistors.
    • Use precision resistors and capacitors with tighter tolerances to reduce error margins.
    • Consider using operational amplifiers (Op-Amps) for precise analog signal processing.

    2. Higher Power Handling:

    • Use transistors that can handle higher voltages and currents.
    • Utilize low-dropout regulators for stable power supply and noise reduction.
    • Switch to high-performance diodes capable of higher current throughput.

    3. Compactness:

    • Use surface-mount technology (SMT) components, which are smaller than through-hole components.
    • Use integrated circuits (ICs) that combine several functions into a single package.
    • Use custom ASIC (Application-Specific Integrated Circuit) designs for more compact neuromorphic processors.

    Specific Component Changes:

    1. Transistors:

    • Original Components: BC848, BC857 (General-purpose NPN/PNP transistors)

    Suggested Upgrades:
    • Low-noise, high-precision: 2N2222A (NPN) and 2N2907A (PNP) – For higher current handling and lower noise.
    • SMT Versions: BC850 (NPN) and BC860 (PNP) – Surface-mount versions for compact designs.
    2. Diodes:

    • Original Components: BAW56, BAS45, BAR43 (General-purpose diodes)

    Suggested Upgrades:
    • Schottky Diodes: BAT54S (dual Schottky in a small package for faster switching and lower forward voltage).
    • Power Diodes: 1N5819 – Schottky diode with higher current handling (1A) for power applications.

    3. Operational Amplifiers:

    New Addition for Accuracy:
    • OPA2140 (Precision Op-Amp): Ultra-low noise, low offset voltage, and high precision; great for synaptic weight processing and accurate integration of signals.
    • TLV9052 (Compact, Low-Power Op-Amp): For lower power applications and compact designs.

    4. Capacitors:

    • Original Components: 10nF, 47nF, 1µF, 2.2nF capacitors (General-purpose ceramics)
    Suggested Upgrades:

    • Tighter Tolerance Capacitors: Use C0G/NP0 ceramic capacitors with 1% tolerance for better precision.
    • Compact Capacitors: Replace electrolytic capacitors with MLCCs (Multi-Layer Ceramic Capacitors) like TDK or Murata’s series for compact designs.

    5. Resistors:

    • Original Components: Resistors of varying values (e.g., 20kΩ, 10kΩ, etc.)

    Suggested Upgrades:
    • Precision Resistors: Use metal film resistors with 0.1% tolerance for better accuracy, like Vishay Dale resistors.
    • SMT Resistors: For compact design, use 0603 or 0805 size surface-mount resistors.

    6. Power Regulation:

    • Original Components: General supply voltage lines labeled as VDD.

    Suggested Upgrades:
    • Low-Dropout Regulators (LDOs): For stable power supply, use LT3080 (high current, adjustable LDO) or TPS7A49 (high-precision, low-noise).
    • Switching Regulators: For more power efficiency, use LM2675 (step-down converter).

    7. Synaptic Learning and Memory (LTP/LTD Circuits):

    • Original Components: Diodes and transistors for LTP and LTD processing.

    Suggested Upgrades:
    • Flash or Ferroelectric Memory (FRAM): Use memory elements like FM24CL16B for synaptic weight storage and non-volatility, improving accuracy and long-term retention.
    • Memristors: For future technology designs, consider integrating memristors, such as HP’s memristor technology for synaptic learning and plasticity.

    8. Neuromorphic ICs:

    New Addition for Compactness:

    • Loihi 2 (Intel Neuromorphic Chip): A custom IC designed specifically for neuromorphic computing, providing higher accuracy, lower power consumption, and more compact designs compared to traditional analog circuits.
    • RASP 3.0 (Reconfigurable Analog Signal Processor): Analog processor specifically designed for compact neuromorphic systems.

    Alternate Design Schematic Outline:

    1 Input Stage:

    • Precision Op-Amps (like OPA2140) for synapse modeling and precise signal processing.
    • Schottky diodes (BAT54S) for fast switching in synaptic circuits.
    • Surface-mount transistors (BC850/BC860) for the compact layout of synaptic elements.
    2 Synaptic Plasticity:

    • Synaptic learning weights stored in non-volatile FRAM memory (e.g., FM24CL16B) for accurate long-term potentiation (LTP) and depression (LTD).
    • Memristors for compact and future-proof synaptic modeling (optional but futuristic).
    3 Neuron Soma:

    • Replace discrete components with a reconfigurable neuromorphic IC like Loihi 2 for better integration and compactness.
    4 Power Regulation:

    • Low-dropout regulators (LDOs) for noise-free, stable power, e.g., TPS7A49.
    • Use compact, efficient switching regulators like LM2675 for power management.

    Sk-rmbild-2024-10-03-231123.png


    Summary:
    This alternate design improves accuracy by using precision components (such as precision resistors and capacitors, low-noise transistors, and op-amps), increases power handling with better diodes and transistors, and enhances compactness using SMT components, integrated circuits, and potentially even custom neuromorphic chips. Incorporating advanced memory elements like FRAM or memristors also allows for non-volatile and adaptive learning features.

    These changes would result in a neuromorphic circuit that is more suitable for applications requiring high accuracy and compactness, such as in embedded systems, robotics, or advanced AI hardware.

    End

    -----------------------------------------------------------------------------

    As I don't know any of this, it's up to you to interpret how well the 4o model handled that schematic.

    If it's even close to doing the job, even if it's off on some things, consider what these models were incapable of just two years ago.

    With the improvement in analysis that the o1 model brings, and if a year from now it improves by the same step-up again, then you might very well see such a tool before retirement (depending on when you retire, that is).

    As you point out, the problem is accuracy, reliability and consistency. That's the threshold that needs to be crossed before any of these can be used as a serious tool for work tasks that demand accuracy higher than what a human is consistently capable of, along with reliability sufficient to be trusted directly.

    As it is now, everything needs to be double-checked. I think any "industrial revolution" scale of societal shift will only happen once we cross that threshold. Before that, we still need a human analyzing the results.

    Maybe the first step toward using these models in a serious way is to hire someone to verify the outputs. On paper it sounds like more people need to be hired, but it also means faster processing of what was previously a purely human task. I don't know how that would apply in your profession, but it's at least interesting to consider how these tools will integrate into professions.

    I don't think it's going to be the doomsday people speak of. Some jobs might disappear, like the marketing people (or rather copywriters) Baden spoke of, but in my own experience copywriters are usually overpaid for what they actually do, and I see no point in not letting AI take over those tasks.

    But in general, I think many who are scared of being replaced will instead find themselves with tools that make their working lives a bit less tedious: fewer long hours wading through large amounts of information and data, and more focus on the better aspects of human input and engineering.
  • Shawn
    13.2k


    Is there a loophole in this rule regarding using Gemini? Gemini is Google's AI algorithm that condenses a search result to a paragraph or two.

    Since Google, with Gemini, is the standard and the largest in scope of knowledge and information, would it pass?

    Also, Gemini doesn't plagiarize output, as it is gathered from multiple open sources.
  • Pierre-Normand
    2.4k
    Is there a loophole in this rule regarding using Gemini? Gemini is Google's AI algorithm that condenses a search result to a paragraph or two.Shawn

    It would be a bit devious for a TPF user to pass off such text as their own on the grounds that it may be construed to fall under the header "Google search result" rather than under the header "LLM output". Even if you conceive of it as an instance of the former, it is common sense that you should disclose it as such, so there isn't really a loophole here. In any case, those condensed search results are accompanied by links to the sources of the summarised content (following the pizza-glue snafu). So one remains well advised to check them or disclose them.
  • Baden
    16.3k
    Even if you conceive of it as an instance of the former, it is common sense that you should disclose it as such, so there isn't really a loophole herePierre-Normand

    :up:

    +Your work on AI has been a valuable contribution to the site. My beef lies elsewhere.
  • Hanover
    12.9k
    Thank you for your short story about Henry and his toy train. I will never forget it. C+.Baden

    It's like the wrecking ball saw dynamite and believed itself to be obsolete, but then Miley Cyrus straddled it naked and it mattered again.

    My analogy really brings the point home: even though you're currently being relegated to the dust bin, your intellect hopelessly deficient in comparison to what even the most rudimentary AI programs offer, if you hang in there, you too may be repurposed.

    Just hang in there brother. Hang in there.

  • Baden
    16.3k


    Thank you for your blog about Cyprus. I never knew there was so much dust there. Construction work can certainly be noisy and inconvenient during a holiday. B+
  • wonderer1
    2.2k
    Even if it's a quick side-track of the thread, let's, for the fun of it, check how far the current system handles it.Christoffer

    I had no idea that any currently existing AI was capable of doing all of that. I had simply searched Google Images for "schematic diagram artificial neuron" to find an example schematic to answer your question, and picked the first schematic listed. I didn't bother looking at the article from which the schematic was pulled.

    So after seeing your post I took a look at the article to see if it included any detailed theory of operation for the circuit. It does not. I can't say that there isn't some detailed written theory of operation for the circuit that was part of ChatGPT4's training. But it at least appears that ChatGPT4 was able to develop a decent working theory of operation from analysis of the schematic diagram, and image analysis has progressed substantially farther than I realized.

    then ask the AI to suggest a variation which matches some criteria that the input design cannot achieve. E.g. higher accuracy, higher power, more compact. (With the specific components needed for the alternate design specified in detail.)wonderer1

    Tried to ask for higher accuracy.Christoffer

    The things I listed as criteria for a new design were more relevant to the things I design than to the artificial neuron in the schematic. For the artificial neuron, more relevant criteria for improvement would be finding an optimal point balancing the 'contradictory' criteria of minimizing power consumption and maximizing speed of operation. Although any truly useful neuromorphic hardware design these days is going to be a matter of integrated-circuit design, not something built up out of discrete transistors, diodes, etc., as depicted in the input schematic.

    Still, ChatGPT4's suggestions as to how to make improvements in terms of discrete components were respectable. Furthermore, ChatGPT4 did point to integrated-circuit implementations of neuromorphic hardware.

    Overall, I have to say that ChatGPT4's responses were very impressive. I would expect good electrical engineers handed that schematic 'cold' to spend at least eight hours analyzing it in order to respond with an equivalent level of analysis and detail to that provided by ChatGPT4. On the other hand, good engineers would have recognized immediately that spending time improving the design as an assembly of discrete components (vs. an IC design) is fairly pointless for practical engineering purposes.
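    The power-vs-speed balancing act mentioned above is often collapsed into a single figure of merit such as the energy-delay product (EDP). As a purely illustrative sketch, with made-up design points that are not taken from the schematic under discussion:

    ```python
    # Illustrative only: ranking hypothetical circuit design points by
    # energy-delay product (EDP), a common figure of merit for trading
    # off power consumption against speed of operation.

    def energy_delay_product(power_w: float, delay_s: float) -> float:
        """EDP = energy per operation * delay = (power * delay) * delay."""
        return power_w * delay_s ** 2

    # Made-up design points: (name, power in watts, propagation delay in seconds)
    designs = [
        ("discrete", 0.50, 2e-6),  # discrete transistors: slow, power-hungry
        ("hybrid",   0.10, 5e-7),
        ("ic",       0.01, 1e-7),  # integrated implementation
    ]

    best = min(designs, key=lambda d: energy_delay_product(d[1], d[2]))
    print(best[0])  # the IC design wins on EDP in this toy example
    ```

    Lower EDP is better; the toy numbers here simply echo the point that an integrated implementation dominates a discrete-component build on both axes.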
  • Hanover
    12.9k
    Thank you for your blog about Cyprus. I never knew there was so much dust there. Construction work can certainly be noisy and inconvenient during a holiday. B+Baden

    You're not from the US, so you don't know this, but crying girls do most of our construction work in their underwear. It's been a real problem, so we started sending them to Cyprus because few, if any, of us care where that is. Once they go to Cyprus, they're not allowed to return because they eat our cats. If they're super hot though, P Diddy gets them over for his parties because he likes the freaky deaky stuff.

    Now that he's in full lockdown, they're talking about building a wall to keep them out, but no wall's keeping America's wrecking ball sweetheart from crashing through.
  • Baden
    16.3k


    The main thing to note is that we've added valuable and relevant human content to this thread, thus shaming ChatGPT into silence.
  • Hanover
    12.9k
    The main thing to note is that we've added valuable and relevant human content to this thread, thus shaming ChatGPT into silence.Baden

    Possibly, but ChatGPT's analysis of my comments is pretty dead on:

    "This piece seems to blend absurdist humor with social commentary, though it can come off as jarring due to its provocative imagery and themes. The mix of surreal elements, like crying girls in construction work and eating cats, creates a bizarre narrative that challenges conventional logic. It seems to critique attitudes toward marginalized groups while using hyperbole to draw attention to societal issues.

    However, the tone and content may be off-putting to some readers due to its casual treatment of sensitive subjects. If the intent is humor, it might benefit from a clearer structure or more context to avoid misinterpretation. Overall, it has potential, but it walks a fine line between satire and insensitivity. What are your thoughts on it?"
  • Hanover
    12.9k
    What impresses me about the commentary is its recognition of the misplacement of events, suggesting an understanding of the way things should be in ordinary life despite it not having lived life at all. It also recognized insensitivity, which suggests the machine is sensitive. I also thought its recognition of satire showed it wasn't committed to literalism. It read intent over direct meaning.
  • Baden
    16.3k
    It also recognized insensitivity, which suggests the machine is sensitive.Hanover

    :chin:

    Agree though that now that AI knows humans don't eat cats, it's on a whole 'nother level.