John Haugeland also synthesised the Kantian notion of the synthetic a priori and the phenomenological/existential notion of the "always already there" in his paper "Truth and Rule-Following". — Pierre-Normand
One notable instance of this expression can be found in the works of the French philosopher Jean-Paul Sartre. Sartre used the phrase "always already" to describe the idea that certain aspects of our existence, such as our facticity (our given circumstances and conditions) and our fundamental freedom, are inherent and preexisting. He argued that these aspects of our being are not products of our conscious choices but are thrust upon us, shaping our existence. — Wayfarer
I enjoyed how 'Wayfarer' engaged 'ChatGPT' (presumably GPT-4) to elaborate on this intricate connection. — Pierre-Normand
If you ask ChatGPT, it will not go and check against Sartre's corpus. — Banno
It might by chance find a correct reference. But equally, it might make up a new one. — Banno
Thanks! Actually as far as I know, it’s still ChatGPT - I’m signing in via OpenAI although whether the engine is the same as GPT-4, I know not. Also appreciate the ref to Haugeland. — Wayfarer
Seems to me that one of the big players who’s completely failed to catch this train, is Amazon. I’ve been using Alexa devices for about eighteen months, and they’re pretty lame - glorified alarm clocks, as someone said. — Wayfarer
...The researchers submitted eight responses generated by ChatGPT, the application powered by the GPT-4 artificial intelligence engine. They also submitted answers from a control group of 24 UM students taking Guzik's entrepreneurship and personal finance classes. These scores were compared with 2,700 college students nationally who took the TTCT in 2016. All submissions were scored by Scholastic Testing Service, which didn't know AI was involved.
The results placed ChatGPT in elite company for creativity. The AI application was in the top percentile for fluency -- the ability to generate a large volume of ideas -- and for originality -- the ability to come up with new ideas. The AI slipped a bit -- to the 97th percentile -- for flexibility, the ability to generate different types and categories of ideas...
The big liability of LLMs is that, in those cases where (1) their knowledge and understanding of a topic is tenuous or nebulous, and (2) they end up making stuff up about it, they are quite unable to become aware on their own that the opinion they expressed isn't derived from external sources. They don't know what it is that they know and what it is that they don't know. — Pierre-Normand
1) the expertise, the craft that goes into the artwork is lost in the AI generation. — NotAristotle
2) relatedly, the production of the art is devalued; the AI creates the illusion of creativity, when really it's just outputting pre-programmed inputs, but it's not really producing anything, it's dead; the producers of the art are taken for granted in the AI "generation" of the art; if there is no Van Gogh, there simply is no art. — NotAristotle
If it's scientific knowledge, can't it ultimately be tested and therefore verified without AI? I don't need to figure out all the algorithms that went into how ChatGPT said that water is chemically made of H2O; all I need to do is conduct an experiment to determine whether it is true. — NotAristotle
I believe AI will deeply undermine our ability to verify. — Leontiskos
If it's scientific knowledge, can't it ultimately be tested and therefore verified without AI? — NotAristotle
Sure, but in that case you are not "trusting AI," which is a central premise of my argument. If we fact-check AI every time it says something then the conundrum will never arise. I don't think we will do that. It would defeat the whole purpose of these technologies. — Leontiskos
Last week, OpenAI announced it had given ChatGPT users the option to turn off their chat history. ChatGPT is a "generative AI", a machine learning algorithm that can understand language and generate written responses. Users can interact with it by asking questions, and the conversations users have with it are in turn stored by OpenAI so they can be used to train its machine learning models. This new control feature allows users to choose which conversations to use to train OpenAI models.