But when this collaborative thinking episode is over, the human user has not yet written down the fruit of this collaborative effort and neither has the AI! They each have only written down one half of the collaborative cogitation. That may be why this text feels dead when extracted from the "living" (or dynamic, if you prefer) AI/human exchange. It's like trying to extract thoughts from the words used to think them (as opposed to the words used to express them), but thoughts don't live outside the means of expressing them. And the conversation with an AI is, in a sense, an (as yet) unexpressed thinking episode. The user's task of expressing anew whatever comes out of it to a new target audience begins after the private exchange with the AI. — Pierre-Normand
Here's an article that addresses the issues we're dealing with:
https://nfhs.org/stories/the-role-of-ai-in-debate-ethics-research-and-responsible-use
It's from a national association for high schools, concerning debate rules, which seems close enough to what we do. The point being that we might take some time to look at how other similar organizations have dealt with these same issues, so as not to reinvent the wheel. — Hanover
If they are truly divorced, then the study collapses into a study of the indefinite personality types that people could express and the roles associated with them. — Bob Ross
The very social norms, roles, identities, and expressions involved in gender that are studied in gender studies are historically the symbolic upshot of sex: they are not divorced from each other. — Bob Ross
What are your guys' thoughts? — Bob Ross
Looking at it in terms of semantics, I'd say the connections between thoughts are associative. — Janus
It seems like you want to talk about how one thought can follow from another in a non-logical way (i.e. via psychological association).
...
"But why did his ice-cream thought follow upon his grasshopper-thought?" "Because he associates ice cream with grasshoppers, likely because of the Grasshopper cocktail." — Leontiskos
And my question here is, specifically, can these associations include causal connections? — J
One stubborn perception among philosophers is that there is little of value in the explicitly Christian character of Søren Kierkegaard’s thinking. — Myron Penner, Kierkegaard’s Critique of Secular Reason, 372-3
If one accepts that such a Christian character is the most important question throughout all of his work, Penner's playing off one camp against another looks like a made-up problem. — Paine
I will have to think about how Penner's use of "secular" relates to what Kierkegaard has said in his own words in other works. — Paine
Kierkegaard does see Christianity and Worldliness as essentially different. But he does recognize a "well-intentioned worldliness." It is too much for me to type in, but I refer you to pages 69 to 73 of this preview of Works of Love, starting with: "Even the one who is not inclined to praise God or Christianity..." — Paine
a religious preacher or a boss who are completely unaffected by what they say — baker
By that same principle, most people are not real, or what they say isn't real, because they are in large part completely unaffected by what they themselves say. — baker
I seem to switch between two exclusive mental settings when thinking about AI — Jamal
The bottom-up reductive explanations of the emergent abilities of LLMs (generative pre-trained neural networks based on the transformer architecture) don't work very well, since the emergence of those abilities is better explained in light of the top-down constraints that they develop under. — Pierre-Normand
This is similar to the explanation of human behavior that, likewise, exhibits forms that stem from the high-level constraints of natural evolution, behavioral learning, niche construction, cultural evolution and the process of acculturation. Considerations of neurophysiology provide enabling causes for those processes (in the case of rational animals like us), but don't explain (and are largely irrelevant to) which specific forms of behavioral abilities get actualized. — Pierre-Normand
Likewise, in the case of LLMs, processes like gradient descent find their enabling causes in the underlying neural network architecture (that has indeed been designed in view of enabling the learning process) but what features and capabilities emerge from the actual training is the largely unpredictable outcome of top-down constraints furnished by high-level semantically significant patterns in the training data. — Pierre-Normand
The main upshot is that whatever mental attributes or skills you are willing to ascribe to LLMs are more a matter of their having learned those skills from us (the authors of the texts in the training data) than a realization of the plans of the machine's designers. — Pierre-Normand
If you're interested, this interview of a leading figure in the field (Andrej Karpathy) by a well-informed interviewer (Dwarkesh Patel) testifies to the modesty of AI builders in that respect. It's rather long and technical so, when time permits, I may extract relevant snippets from the transcript. — Pierre-Normand
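Pierre-Normand's distinction between enabling causes (the architecture and the training procedure) and the top-down constraints supplied by the training data can be made concrete with a toy sketch. The snippet below is purely illustrative and assumes nothing about how actual LLMs are trained; the tiny linear model, learning rate, step count, and datasets are hypothetical choices. It only shows that one and the same gradient-descent routine ends up embodying whatever regularity its data happens to contain, which is the sense in which the procedure enables learning without determining what gets learned.

import numpy as np

def train(xs, ys, steps=2000, lr=0.05):
    # Fit y ~ w*x + b by plain gradient descent on mean squared error.
    w, b = 0.0, 0.0
    for _ in range(steps):
        err = w * xs + b - ys
        grad_w = 2 * np.mean(err * xs)   # d(MSE)/dw
        grad_b = 2 * np.mean(err)        # d(MSE)/db
        w -= lr * grad_w                 # the bottom-up, enabling mechanism
        b -= lr * grad_b
    return w, b

xs = np.linspace(-1.0, 1.0, 100)

# Identical "architecture" and identical procedure; only the training data differ.
w1, b1 = train(xs, 3.0 * xs + 1.0)    # data embodying one regularity
w2, b2 = train(xs, -0.5 * xs + 4.0)   # data embodying another

print(f"learned from dataset 1: w={w1:.2f}, b={b1:.2f}")  # approx. 3.00, 1.00
print(f"learned from dataset 2: w={w2:.2f}, b={b2:.2f}")  # approx. -0.50, 4.00

Nothing in the optimizer or the parameterisation predicts which of the two "abilities" the trained model ends up with; that is settled entirely by the patterns in the data it is exposed to.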
As T Clark allows, “It works for certain everyday events at human scale, e.g. if I push the grocery cart it moves.” I think we should see “thought-to-thought connection” as another example of an everyday event at human scale – at any rate, that’s the premise of what follows. — J
But can we also speak of this in causal terms? Again, this seems in accord with common usage. We might say, “Thinking of Ann caused me to remember her birthday.” But perhaps this is just loose talk. — J
If you believe, with Google’s chat-program, that any causal connection must be physical... — J
Google’s ever-helpful chat-program – presumably reflecting some kind of cyberworld consensus – would like to straighten this out for us: — J
The Ouija board is a strained analogy because Ouija boards don't work. If they reliably provided accurate answers, I'd be hard pressed not to use them, unless you could convince me of the dangers of dabbling in the black arts. — Hanover
I think we're overthinking it (imagine that). The question really is "what do we want to do"? We needn't self justify our preferences. — Hanover
We just need to write our rules in a way that protects AI's private use and requires its public use be filtered sufficiently through the poster that it reflects the ideas of the poster. — Hanover
In one of my essays — Baden
In one of my essays, I suggest AIs (because, despite potential positives, of how they work on most people) are essentially entropy-exporting and difference-creating machines that localise structure at our expense (our brains are the dumpsters for their entropy), potentially creating massive concentrations of negentropy in their developing systems that speed up overall entropy and therefore consume (thermodynamic) time at a rate never before achieved and that is potentially self-accelerating. I.e., they eat us and then they eat reality. — Baden
I believe we should not treat LLM quotes in the same way as those from published authors.
When you quote a published author you point to a node in a network of philosophical discourse, to a point on the line of a philosopher's evolution, and to a point in a body of work the self-consistency of which is a constant issue for that philosopher, making it relatively stable—all of which allows you to discuss what the philosopher meant. The source in this case is accountable and interpretable.
This is not the case for an LLM. A quote from Kant invites engagement with Kant and the history of philosophy; a quote from an LLM's output invites ... what? Engagement with the user's prompt engineering skills? — Jamal
That's why I'll be posting up suggested guidelines for discussion. — Jamal
Yeah, but it's ambiguous. I'd like to clarify it, and make it known that it's not ok to do certain things, even if it's impossible to enforce. — Jamal
So, guys, I loaded this thread into AI for the solution to our quandary. Aside from the irony, who wants to know what it says?
If so, why? If not, why not? Who will admit that, if I don't share what it says, they will do it on their own? Why would you do it in private, but not public? Shame? Feels like cheating? Curious as to what AI says about public versus private use? Why are you curious? Will you now ask AI why that distinction matters?
Will you follow AI's guidance in how to use AI while still preserving whatever it feels like we're losing?
Do you feel like it's better that it arrived at its conclusions after reading our feedback? Will you take pride in seeing that your contributions are reflected in its conclusions?
Feels like we need to matter, right? — Hanover
But your deepest arguments are the ones you are willing to have against yourself. — apokrisis
Anyone serious about intellectual inquiry is going to be making use of LLMs to deepen their own conversation with themselves. — apokrisis
It would seem to me that this is still a time for experimenting rather than trying to ring fence the site. — apokrisis
I think this is right since, although we can ask them if they are capable of intentionality, and they will answer, we might not be able to trust the answer. — Janus
Ought one reject an otherwise excellent OP because it is AI generated? — Banno
The LLM is not a transparent source which can be queried by one's interlocutor... Put more starkly, one cannot question the authority of an LLM on some given topic with some given output, because the LLM doesn't really exist in that way. It is not a person that one can push back against, or disagree with. Those who think it is a person who can be disagreed with have become lost in the fictional language games.
Indeed, this is why the person using the LLM ought to take the responsibility of the arguments upon themselves, for only in that case is there someone who can be disagreed with or agreed with. — Leontiskos
Ought one reject an otherwise excellent OP because it is AI generated?
Well, yes. Yet we should be clear as to why we take this stance. — Banno
We hold the author to account for their post. We cannot do that with an AI. We interpret the post by giving consideration to the intentional context of the writer, to their needs and wants, to their background, aims and norms. We cannot attribute such things to an AI.
This is not epistemic or ethical reasoning so much as aesthetic. — Banno
I'd rather say that they have both the doxastic and conative components but are mostly lacking on the side of conative autonomy. As a result, their intelligence, viewed as a capacity to navigate the space of reasons, splits at the seam between cleverness and wisdom. In Aristotelian terms, they have phronesis (to some extent), since they often know what the right thing to do is in this or that particular context, but without displaying virtue, since they don't have an independent motivation to do it (or to convince their users that they should do it). This disconnect doesn't normally happen in the case of human beings since phronesis (the epistemic ability) and virtue (the motivational structure) grow and maintain themselves (and are socially scaffolded) interdependently. — Pierre-Normand
Those are questions that I spend much time exploring rather than postponing, even though I haven't arrived at definitive answers, obviously. But one thing I've concluded is that, rather than it being a matter of all or nothing, or a matter of degree along a linear scale, the ascription of mental states or human capabilities to LLM-based chatbots often is rendered problematic by the divergence of our ordinary criteria of application. Criteria that normally are satisfied together in the case of human beings are satisfied separately in the case of chatbots. — Pierre-Normand
Maybe it looks confusing because it is. I mean that assessing the nature of our "conversations" with chatbots is confusing, not because of a conceptual muddle that my use of scare quotes merely papers over... — Pierre-Normand
...but rather because chatbots are mongrels. They have "brains" that have been enculturated through exposure to a massive body* of human knowledge, lore and wisdom (and prejudices) but they don't have human bodies, lack human motivations and aren't persons. — Pierre-Normand
LLMs aren't AIs that we build... — Pierre-Normand
First thing is that I have been surprised at how reasonable an answer you get. — apokrisis
So I am only pointing to how LLMs could improve standards if they became part of the intellectual jousting. A new habit with a low cost of effort. How folk would actually make use of this possibility is another matter. — apokrisis
Now given that you understand that LLMs use fake reasoning (which I will call "material reasoning"), you might respond by asking what happens if the material reasoning shows one to be wrong. — Leontiskos
I wasn’t actually thinking about LLMs being trusted to analyse arguments as they are clearly bad at that. — apokrisis
Again my point is that LLMs could have advantages if used in good faith. And given that think tanks and actual philosophy departments are going to have to figure out how to factor LLMs into their practice, it seems Luddite not to be doing the same on TPF, even if their bad faith use is almost to be expected. — apokrisis
Arguments from authority have an inherently limited place in philosophy.
...
* An important point that I leave to the side is the fact that an LLM is not an authority in the same way a human being is an authority — Leontiskos
It's just easier to do it with an AI, there's so much less at stake, it's so safe, and you don't really have to put any skin in the game. So it's highly questionable how effective such practice really is. — baker
Appeal to authority is fine when the context of discussion includes a specialized discipline. Philosophy is not (or in my view should not be) a specialized discipline. To be clear, I don't mean that philosophers do not have knowledge about philosophy itself that the average person lacks—it seems clear that they do. I'm referring specifically to discussions that would qualify as philosophical. — Janus
But being intelligent, or wise, in the case of persons (i.e. socialized, enculturated rational animals), always has two tightly integrated components: one doxastic and one conative.
[...]
Chatbots lack conative autonomy and hence pursue whichever paths they think their users want to explore (or, worse, that merely lead to the outcomes they think their users want to achieve, while having the mere appearance of soundness). So, they lack part of what it takes to be wise, but that's not because they aren't smart or knowledgeable enough to be useful conversation partners. — Pierre-Normand
I think they have motivations, just like a dog is motivated to run after a car, but their motivations aren't autonomous since they seldom pause to question them. — Pierre-Normand
I think the rational structure of their responses and their reinforced drive to provide accurate responses warrant ascribing beliefs to them, although those beliefs are brittle and non-resilient. One must still take a Dennettian intentional stance towards them to make sense of their responses (which necessitates ascribing to them both doxastic and conative states), or interpret their responses through Davidson's constitutive ideal of rationality. But I think your insight that they aren't thereby making moves in our language game is sound. The reason why they aren't is because they aren't persons with personal and social commitments and duties, and with a personal stake in the game. But they can roleplay as a person making such moves (when instructed to do so) and do so intelligently and knowledgeably. — Pierre-Normand
On the other hand, on the specific area of plagiarism, I've seen at my own place of work what a "relaxed" attitude to the deployment of this technology in academia produces, and that is the destruction of standards. — Baden
When will the day come when we find out that @Leontiskos, with his respectable 5,000+ posts, is actually a smart bot / AI? Now we can be confident you aren't, because two years ago AI wasn't so good...
Yes, the fear of thinking that you are engaged with real people interested in philosophy, but actually, you're only engaging with computers and all your great ideas vanish into the dead emptiness of Turing machines just computing on and on. That would be a dismal future. — ssu
Do whatever you want in the background with AI, but write your own content. Don't post AI generated stuff here. — Baden
The culture of rational inquiry would seem to be what we most would value. — apokrisis
But this is TPF after all. Let's not get carried away about its existing standards. :smile: — apokrisis
If LLMs are the homogenised version of what everyone tends to say, then why aren't they a legitimate voice in any fractured debate? Like the way sport is now refereed by automated line calls and slo-mo replays. — apokrisis
I'm not arguing this is necessary. But why would a method of adjudication be bad for the quality of the philosophy rather than just be personally annoying to whoever falls on the wrong side of some LLM call? — apokrisis
So I can imagine LLMs both upping the bar and also being not at all the kind of thing folk would want to see on TPF for other human interaction reasons. — apokrisis
But what if this shows you are indeed wrong, what then?
Sure it will be irritating. But also preferable to the ducking and diving that is the norm when someone is at a loss with their own line of argument.
You seem to be describing a situation where you were winning the human interaction but now have to face up to the fact that some little snot-nose shit might have been half-right all along. — apokrisis
You seem to be describing a situation where you were winning the human interaction but now have to face up to the fact that some little snot-nose shit might have been half-right all along.
Of course the problem there is that LLMs are trained to be sycophantic. — apokrisis
But if you are making a wrong argument, wouldn't you rather know that this is so, even if it is an LLM that finds the holes? — apokrisis
So as you say, we all can understand the noble ideal – an open contest of ideas within a community of rational inquiry. Doing our own thinking really is the point. — apokrisis
my goal is to 'hack myself to pieces and put myself back together again.' — KantRemember
I think comparing AI to a calculator highlights the limits of AI when using it to “do philosophy”. Calculators do for numbers what AI can do for words. No one wonders if the calculator is a genius at math. But for some reason, we think so little of what people do that we wonder if a fancy word processor might be better at doing philosophy.
Calculators cannot prompt anything. Neither can AI. Calculators will never know the value we call a “sine” is useful when measuring molecules. Why would we think AI would know that “xyz string of words” is useful for anything either? AI doesn’t “know”, does it?
So many unaddressed assumptions. — Fire Ologist
For example, if we don't know why we want to engage in human communication rather than non-human communication, then prohibitions based on that axiom will become opaque. — Leontiskos
I'm starting to think the problem is so obvious that it will inevitably sort itself out once one reaps the fruits of a rule that allows this sort of thing. For example, once folks start merely citing AI output to disagree with all of Jamal's arguments, it may become more obvious that there is a problem at stake. — Leontiskos
But not giving the LLM credit is a problem for the reader as well, because LLMs can include errors, so the reader who doesn’t know they are reading LLM content won’t know they need to check everything about it for accuracy and soundness. — Fire Ologist
AI for philosophy and creative writing is interesting. I’m fine with the idea as a helper, like using a calculator to check your homework, or using it to inspire a start, or to re-approach a roadblock. I think anyone who treats it as anything besides a tool, as it is for students, is using it to play a psychological game, for no reason. — Fire Ologist
I'll tell you my secret. Start by finding some question you really want answered. Then start reading around that. Make notes every time some fact or thought strikes you as somehow feeling key to the question you have in mind, you are just not quite sure how. Then as you start to accumulate a decent collection of these snippets – stumbled across almost randomly as you sample widely – begin to sort the collection into its emerging patterns. — apokrisis
I don't mind either, provided they are transparent about it being a quote and not their own words, and also provided what is quoted is actually an argument and not merely bare assertion, seemingly cited as the voice of authority. — Janus
Should we argue... — Joshs
According to who? — Fire Ologist
OK. So somewhere between black and white, thus not a blanket ban. :up: — apokrisis
Don't you feel that being black and white is usually counter-productive in human interaction, so shouldn't a philosophy forum be up to a nuanced approach? — apokrisis
"No part of a post may be AI-written, and AI references are not permitted" — Leontiskos
Of course the tech bros are stealing all our information to make themselves unreasonably rich and powerful. — apokrisis
The practical issue for TPF is what is its true value that needs preserving? You say the human interaction. Perhaps there ought to be a thread to define that in better detail. — apokrisis
What if LLMs offered some more sophisticated mechanisms to achieve whatever human interaction goals people might have in mind? — apokrisis
And what if the human element of TPF is mostly its swirling emotions? And when it comes to the thinking, it's mostly the stark differences in thought, rather than the quality of these thoughts, that keep the place lively. — apokrisis
So there seems little danger that posting LLM generated background material in a serious thread is going to outsource any actual thinking. Posts which are emotional or crackpot are surely the least likely to want to provide credible sources for what they say. — apokrisis
I agree in spirit. But let's be practical.
A blanket ban on LLM generated OPs and entire posts is a no brainer. — apokrisis
I think that it should be fine to quote LLMs just as you would quote any other source. If you make an argument and some factual, technical or historical point comes up, why not just cite a reasonably impersonal opinion on the matter. If the source is clear, others can call you out on your use of it. — apokrisis
Baden's stance on being King Canute and holding back the tide is both noble and over the top. Banning LLM-generated OPs, punishing those who publish text they didn't add real thought to, and keeping LLM use as transparent as possible would be enough to preserve the human element. — apokrisis
Don't you feel that being black and white is usually counter-productive in human interaction, so shouldn't a philosophy forum be up to a nuanced approach? — apokrisis
So we are allowed to write entire posts that are purely AI-generated content, or to simply cite AI as evidence that something is true, so long as we are transparent that the content is AI-generated? Such that if someone gives an entire post that is nothing more than a quote from AI, nothing has been violated? — Leontiskos
