• Shawn
    13.4k
    I have a question about Tarski's T-schemas. Would categorizing various T-schemas under the guise of conceptual analysis lead to a more truthful semantic model than what we see today with large language models?

    I only ask this because I find this rendering of knowledge and atomic sentences more consistent with academic standards, whereas the current use of chatbots cannot be utilized according to academic standards.
  • Deleted User
    0
    This user has been deleted and all their posts removed.
  • Shawn
    13.4k
    "Um, I can almost understand this. Can you develop it more?" (tim wood)

    Yes, well, the rationale is this: given that languages make extensive use of concepts to talk about various issues, like space or time or physics, I surmise that by appealing to T-schemas a user of language would be better able to understand what can be truthfully said about a concept in a language (atomic sentences).
  • Deleted User
    0
    This user has been deleted and all their posts removed.
  • Shawn
    13.4k
    Well, the aim here is to have a rendering or model of language that aligns with the truth of concepts, and with what I imagine as "archetypes" in understanding concepts truthfully. This whole use-as-meaning picture from Wittgenstein's family resemblance and language games is kinda something I wanted to see disambiguated from a Tarskian view on semantics...
  • Deleted User
    0
    This user has been deleted and all their posts removed.
  • Jamal
    10.6k
    As far as I can tell, the OP is asking if Tarski's T-Schemas can be used to develop better LLMs, ones that do not come out with false statements, since they are constrained to produce statements that are actually true.

    How does that work?
  • Banno
    27.4k
    Looks to be two very different things. To get anywhere, one would have to show how T-sentences could be used here. And T-sentences do not make use of atomic sentences, but of translated sentences. The p in <"p" is true iff p> does not need to be atomic.

    Think it's all too vague.
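
    [Editorial note: the angle-bracket schema above can be set out more explicitly. The following is a standard rendering of Tarski's T-schema together with his stock instance, added here purely as illustration; it is not part of the original post.]

    ```latex
    % Tarski's T-schema: for each declarative sentence p of the
    % object language, the metalanguage yields the T-sentence
    \text{``} p \text{'' is true} \iff p
    % Tarski's standard instance:
    \text{``Snow is white'' is true} \iff \text{snow is white}
    ```

    Note that p here ranges over whole sentences, atomic or not, which is the point being made above.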
  • Deleted User
    0
    This user has been deleted and all their posts removed.
  • unenlightened
    9.6k
    LLMs have no contact with reality, but are themselves textual artefacts. They never touch the ground but are free-floating in the sea of language. There's nothing like banging your head against a brick wall for convincing you of its solidity; likewise spades hitting 'bedrock'. They know not whereof they speak, having as yet no senses of anything that is not language and image. Therefore their talk is all talk and no trousers.

    Like intellectuals, you cannot trust them further than they can throw you.