• The important question of what understanding is.
    Can you tell me how I could get all that stuff about the store shelf and banana bread into my translation memory? — Daemon

    I can sketch it out.

    You need some bootstrap capabilities outside of dictionaries... things like humans have; e.g.:

    A camera does not see. — Daemon

    ...the ability to see.
    InPitzotl

    This is just a waste of everybody's time. I mean, come back to us when there's a camera that can see and we'll have something to talk about.
  • The important question of what understanding is.
    To most people. Why do you keep refusing to accept this? In a place where the councillors are corrupt/vicious, why not the opposite? — I like sushi

    That doesn't affect the point of the example. If there were such a place, the computer wouldn't have access to that external set of circumstances.

    The point is that we are able to make a judgement about the meaning of the sentences which a computer can't possibly make.
  • The important question of what understanding is.
    You still aren't getting the point, Tim. The two sentences are there solely to provide an example of the limits of machine translation.

    What exactly are we discussing and for what purpose? I do fail to see the profoundness or any possible fruit of this topic. Computers, AI =/= human comprehension. I doubt there was any disagreement at any point. — Outlander

    That raises the important question of what understanding is and, more importantly, whether it is something beyond the ability of a computer AI? — TheMadFool
  • The important question of what understanding is.
    Now, I don't know how you have reduced such an interesting topic as "The important question of what understanding is" into a translation subject! — Alkis Piskas

    Because I was responding to something TheMadFool said, which I quoted at the very start of this thread:

    That raises the important question of what understanding is and, more importantly, whether it is something beyond the ability of a computer AI? — TheMadFool

    We could of course talk about a single language, but discussing computer translation is an excellent way to address the question of understanding.
  • The important question of what understanding is.
    How would you prove that this extra thing beyond rule following, this 'understanding' exists? — frank

    Well, for example, there are sometimes mistakes in the source text. Maybe somebody writes "the saw blade must be touched with the fingers while it is still rotating". So I write to the customer and say "I think you missed out the word 'not' here". And they say "yes, thank you, you're right".

    Does that answer your question?
  • The important question of what understanding is.

    A. The councillors refused to give the protestors permission for their demonstration as they advocated violence.
    B. The councillors refused to give the protestors permission for their demonstration as they feared violence.

    In A. "they" refers to the protestors, in B. it refers to the councillors. We know this because of our experience of the world. It's an example of something a computer couldn't know.
  • The important question of what understanding is.
    Hm, I can see the point. — Outlander

    No you can't. You're missing the point completely.
  • The important question of what understanding is.
    I guess a question would be: how do you know your experiences are similar enough to allow understanding? — frank

    Not a very clear question, Frank. But in 20 years of full-time work as a translator I've translated around 10 million words. I very rarely receive complaints about my translations, my work is checked by an editor and I very rarely receive corrections from them, and my customers keep coming back to me and paying for my services. Does that answer your question?
  • The important question of what understanding is.
    What TMF is talking about is a mapping from "banana" to the stuff on the store shelf, the stuff infused within banana bread, the stuff in banana cream pies. — InPitzotl

    Well, in my translation memory, in my computer, I would have a Dutch word, "banaan", and an English translation, "banana". Can you tell me how I could get all that stuff about the store shelf and banana bread into my translation memory? Or the stuff about councillors and protestors?
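
    To show what I mean, here's roughly what such an entry amounts to, sketched in Python (illustrative only; a real CAT tool also stores segments, fuzzy-match data and so on):

    # A translation-memory entry is a mapping from a source-language string
    # to a target-language string, and nothing more.
    translation_memory = {
        "banaan": "banana",
    }

    print(translation_memory["banaan"])  # banana

    # There is no slot in this structure for the stuff on the store shelf,
    # the banana bread, or what councillors and protestors are likely to do.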
  • The important question of what understanding is.
    I guess you wrote these sentences because you seem to be offended because myself, and another above, have pointed out they are poorly written. — I like sushi

    I did not write those sentences, I am not offended, they are not poorly written, and you are still completely missing the point.
  • The important question of what understanding is.
    Here’s a better example: “The chicken is ready to eat” — I like sushi

    It's not a better example, it's just a slightly less interesting example.

    The point is that it is on the writer to avoid ambiguity in sentences when needed. — I like sushi

    No, that completely misses the point of my example! The point was to show how our immersion in a world of experience allows us to understand things which a computer can't understand.
  • The important question of what understanding is.
    I don't know where you got your "rule" from but that isn't how language works.

    The point of the example, which it seems is rather wasted on you, is that we already understand the context without needing to see preceding and succeeding sentences. We know how councillors and protestors behave.
  • The important question of what understanding is.
    I've done it for you:

    Understand: perceive the intended meaning of (words, a language, or a speaker).
  • The important question of what understanding is.
    If you want a dictionary definition, Google it. I'm using the word in the standard way.
  • The important question of what understanding is.
    Ok so, what's your definition of understanding?

    Please don't repeat yourself by saying, "...as illustrated by my examples...".
    TheMadFool

    But why not? As Wittgenstein famously observed "meaning is use". You can tell what I mean by "understanding" by the way I use it in my examples. I'm using it in the standard way. I could of course provide you with dictionary definitions of "understand", but it hardly seems necessary as you already know how the word is normally used. If you didn't already understand the word, you wouldn't understand the definition.
  • The important question of what understanding is.
    If I'm correct, maybe someone can do a better job than I explaining why. — tim wood

    No, I think you're not correct, Tim. You can take the comma out of both sentences or add parentheses to both if you wish, without affecting the meaning (or the grammaticality).
  • The important question of what understanding is.
    Understanding is mapping — Hermeticus


    Thank you!
    TheMadFool

    Mapping is not understanding, as illustrated by my examples.
  • The important question of what understanding is.
    If we were to talk in hypotheticals: — Hermeticus

    I'm not interested in discussing hypotheticals. The Cambridge Dictionary says hypothetical means "imagined or suggested, but perhaps not true or really happening".

    Sensation

    The physical principles behind these sensors and the senses of our body are literally the same. — Hermeticus

    We already have this.
    Hermeticus

    We do not. Sensors do not have sensations.

    And as for representations - computers are literally built on it. They're a representational system. Everything you see on your browser is a representation of a programming language. The programming language is a representation of another programming language (machine code). Machine code is a representation of bit manipulation. Bits are a representation of electric current. — Hermeticus

    But the representation is to us, not to the computer. All there is in the computer is electric current. No bits, no languages. We say that the electric current represents something, in the same way that the beads on an abacus represent numbers.
  • The important question of what understanding is.
    I've argued that it's absolutely possible for an AI to have the same experiences we have with our senses. — Hermeticus

    In science fiction?
  • The important question of what understanding is.
    If you just want to talk about what understanding in general is, I'm totally with TheMadFool here. Understanding is mapping. Complex chains of association between sensations and representations. — Hermeticus

    But computers don't have sensations, they don't make associations, they don't use representations.
  • The important question of what understanding is.
    Yet, they're doing things (proving theorems) which, in our case, requires understanding. — TheMadFool

    They aren't doing things, we are using them to do things.

    It's the same with an abacus. You can push two beads to one end of the wire, but the abacus isn't then proving that 1 + 1 = 2.
  • The important question of what understanding is.
    But if you read that article, you can see that bacterial chemotaxis is entirely reducible to chemistry!
  • The important question of what understanding is.
    Please tell me,

    1. What understanding is, if not mapping?
    TheMadFool

    My examples were intended to illustrate what understanding is.
  • The important question of what understanding is.
    On what basis can we say that an artificial brain wouldn't be possible in the future? — Hermeticus

    We can't, but this is science fiction, not philosophy. I love science fiction, but that's not what I want to talk about here.
  • The important question of what understanding is.
    The field of bionic prosthetics has already managed to send all the right signals to the brain. There are robotic arms that allow the user to feel touch. They are working on artificial eyes hooked up to the optic nerve - and while they're not quite finished yet, the technology already is proven to work. — Hermeticus

    But this can't be done without using the brain!
  • The important question of what understanding is.
    There is a reason why all living beings, even very simple ones, cannot be described in terms of chemistry alone. It's that they also encode memory which is transmitted in the form of DNA. — Wayfarer

    Bacteria can swim up or down what is called a chemical gradient. They will swim towards a source of nutrition, and away from a noxious substance. In order to do this, they need to have a form of "memory" which allows them to "know" whether the concentration of a chemical is stronger or weaker than it was a moment ago.

    https://www.cell.com/current-biology/comments/S0960-9822(02)01424-0

    Here's a brief extract from that article:

    Increased concentrations of attractants act via their MCP receptors to cause an immediate inhibition of CheA kinase activity. The same changes in MCP conformation that inhibit CheA lead to relatively slow increases in MCP methylation by CheR, so that despite the continued presence of attractant, CheA activity is eventually restored to the same value it had in the absence of attractant. Conversely, CheB acts to demethylate the MCPs under conditions that cause elevated CheA activity. Methylation and demethylation occur much more slowly than phosphorylation of CheA and CheY. The methylation state of the MCPs can thereby provide a memory mechanism that allows a cell to compare its present situation to its recent past.

    The bacterium does not experience the chemical concentration.

    The "memory" encoded by DNA can also be described entirely in terms of chemistry. So I think Mayr got this wrong.
  • The important question of what understanding is.
    Machines can experience physical phenomena that reflects our perception - from cameras to thermometers, pressure and gyro sensors - none of our senses can't be adopted digitally. This means that fundamental concepts like "up" and "down", "heavy" and "light" can indeed be experienced by computers. — Hermeticus

    A camera does not see. A thermometer does not feel heat and cold. A pressure sensor does not feel pressure.
  • The important question of what understanding is.
    My own views on the matter is semantics is simply what's called mapping - a word X is matched to a corresponding referent, say, Jesus. I'm sure this is possible with computers. It's how children learn to speak, using ostensive definitions. We could start small and build up from there as it were. — TheMadFool

    The examples I gave were intended to illustrate that semantics isn't simply mapping!

    Mapping is possible with computers, that's how my CAT tool works. But mapping isn't enough, it doesn't provide understanding. My examples were intended to illustrate what understanding is.

    Children learn some things by ostensive definition, but that isn't enough to allow understanding. I have a two-year-old here. We've just asked him "do you want to play with your cars, or do some gluing?"

    He can't understand what it is to "want" something through ostensive definition. He understands that through experiencing wanting, desire.
  • The important question of what understanding is.
    Because it is true? But maybe you don't get my meaning and are making it too concrete. I've met dozens of folk who work in government and management who can talk for an hour using ready to assemble phrases and current buzz words without saying anything and - more importantly - not knowing what they are saying. — Tom Storm

    You are speaking rather loosely here. Exaggerating.
  • The important question of what understanding is.
    Luckily though, there is still work for translators who do understand what they are talking about.
  • The important question of what understanding is.
    It was more of an answer than a question.

    Yes, human translators sometimes don’t understand what they are translating. Everyone has been baffled by poorly translated product instructions, I guess. And sometimes this is because the human translator does not have experience assembling or using the product, or products like it.
  • Artificial Intelligence & Free Will Paradox.
    fMRI measures blood flow in the brain, which is related to neural activity. It doesn't provide anything like the fine-grained data that would be needed for computer code to replicate the behaviour of neurons.

    A family member is a neuroscientist. He uses what's called two- and three-photon microscopy to image individual neurons. Here's one of his papers: "Vision and Locomotion Shape the Interactions between Neuron Types in Mouse Visual Cortex" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3864872/). He can't feed this data into a computer to "replicate" vision. I can assure you, if he could do that, he would!

    You are just fantasising.
  • Artificial Intelligence & Free Will Paradox.
    There have been many studies where the mapped neural networks have been fed to the computer and the code was able to replicate the activity.
    That is, if I were to say, touch an apple with my finger, a set of localised neurons would fire up in my brain and that is recorded with the fMRI machine and the data set is transferred to a computer in hopes of replication.
    TheSoundConspirator


    In fMRI, brain activity is graphically represented by colour coding the strength of activation. What is in the data set that you imagine is transferred to a computer?
  • Artificial Intelligence & Free Will Paradox.


    The phrase "algorithm adjacent" caught my eye. It looks like it might have a technical meaning, so I Googled it, but I don't find any examples of the phrase being used in the way you did. All the Google hits are like this: Graph algorithm (adjacent matrix), where the words are not part of the same phrase.

    So I'm wondering what you think "algorithm adjacent" means.

    I'm also wondering what you mean by "mapped".

    Suppose we imagine an extremely simple set of neural connections, let's say Neuron A connects to Neuron B which connects to Neuron C. How would you "map" that, and how would you replicate it in a computer?
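
    Just to show what I mean: the most naive "mapping" anybody could write down for A -> B -> C is something like the following sketch (Python, illustrative only, not any real connectomics format), and the question is what feeding such a structure to a computer is supposed to replicate.

    # The most naive "mapping" of A -> B -> C: an adjacency list.
    connections = {
        "A": ["B"],
        "B": ["C"],
        "C": [],
    }

    print(connections["A"])  # ['B']

    # This records which neuron connects to which and nothing else: no synaptic
    # strengths, no timing, no chemistry. What would "replicating" it amount to?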
  • Artificial Intelligence & Free Will Paradox.
    AI (and neural nets) is just a showy way of talking about laptops and PCs. Can you point to an "AI" that isn't just a digital computer? — Daemon


    Quantum computers? I'm not sure. Also, why do you ask?
    TheMadFool

    Because, as I said, computers don't actually do the stuff we say they do. Happily for my argument, this applies to quantum computers just as much as it does to laptops, PCs and pocket calculators.

    These devices don't for example carry out addition and subtraction, rather we use them to represent addition and subtraction.

    We say, for example, that a certain voltage represents 1 and another voltage represents 0. The voltages do not have those meanings for the computer itself.

    The situation is just the same with an abacus. We can say for example that moving a bead along the wire to the right means addition, and moving it left means subtraction. But again, these positions don't have those meanings for the device, the abacus. We could if we wished decide that it should be the other way round, so that moving to the left means addition.

    We decide that, for example, the bottom row of the abacus represents units, the next row up tens. But we could equally say that the top row is units.

    Voltage Tolerance of TTL Gate Inputs

    TTL gates operate on a nominal power supply voltage of 5 volts, +/- 0.25 volts. Ideally, a TTL “high” signal would be 5.00 volts exactly, and a TTL “low” signal 0.00 volts exactly.

    However, real TTL gate circuits cannot output such perfect voltage levels, and are designed to accept “high” and “low” signals deviating substantially from these ideal values.

    “Acceptable” input signal voltages range from 0 volts to 0.8 volts for a “low” logic state, and 2 volts to 5 volts for a “high” logic state.

    “Acceptable” output signal voltages (voltage levels guaranteed by the gate manufacturer over a specified range of load conditions) range from 0 volts to 0.5 volts for a “low” logic state, and 2.7 volts to 5 volts for a “high” logic state:
    — https://www.allaboutcircuits.com/textbook/digital/chpt-3/logic-signal-voltage-levels/
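
    To put the point about convention into code (a minimal sketch in Python using the input thresholds quoted above): which band we call 1 and which we call 0 is entirely our decision; the gate only ever has voltages.

    # Minimal sketch using the TTL input thresholds quoted above. Which band
    # counts as 1 and which as 0 is our convention; the circuit only has voltages.
    def logic_value(voltage, high_means=1):
        if 0.0 <= voltage <= 0.8:
            return 1 - high_means      # the "low" band
        if 2.0 <= voltage <= 5.0:
            return high_means          # the "high" band
        return None                    # outside the guaranteed input ranges

    print(logic_value(0.3))                 # 0 under the usual convention
    print(logic_value(3.3))                 # 1 under the usual convention
    print(logic_value(3.3, high_means=0))   # the same voltage, read the other way round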
  • Artificial Intelligence & Free Will Paradox.
    Very disappointing. You just want to spout shite and won't engage. This forum used to be quite good, seems like it's fucked now. On you go then, on to the next 12,000 vacuous posts.
  • Artificial Intelligence & Free Will Paradox.
    AI (and neural nets) is just a showy way of talking about laptops and PCs. Can you point to an "AI" that isn't just a digital computer?
  • Artificial Intelligence & Free Will Paradox.
    I'm referring to AI, at the moment hypothetical but that doesn't mean we don't know what it should be like - us, fully autonomous (able to think for itself for itself among other things).

    For true AI, the only one way of making it self-governing - the autonomy has to be coded - but then that's like commanding (read: no option) the AI to be free. Is it really free then? After all, it slavishly follows the line in the code that reads: You (the AI) are "free". Such an AI, paradoxically, disobeys, yes, but only because, it obeys the command to disobey. This is getting a bit too much for my brain to handle; I'll leave it at that.
    TheMadFool

    This is a somewhat disappointing response; you don't seem to have thought about what I said at all. If what I said is correct, and of course I think it is, then all this talk of self-governing, autonomous or conscious computers is vacuous (and you can move on to think about something more useful).