Although I'm not actually that familiar with TikTok, there has been controversy over its AI gathering data from its users' phones to recommend videos and such. Do you have any familiarity with this controversy? — Judaka
Knowledge can be a means to power, but rarely does it amount to much, and I'm not too sure what the actual concern is. Could you give some context? Does TikTok, or gambling apps using AI, or stuff like that, represent your concern well, or is it something else? — Judaka
...The researchers submitted eight responses generated by ChatGPT, the application powered by the GPT-4 artificial intelligence engine. They also submitted answers from a control group of 24 UM students taking Guzik's entrepreneurship and personal finance classes. These scores were compared with 2,700 college students nationally who took the TTCT in 2016. All submissions were scored by Scholastic Testing Service, which didn't know AI was involved.
The results placed ChatGPT in elite company for creativity. The AI application was in the top percentile for fluency -- the ability to generate a large volume of ideas -- and for originality -- the ability to come up with new ideas. The AI slipped a bit -- to the 97th percentile -- for flexibility, the ability to generate different types and categories of ideas...
I've never heard a perspective like this. Can you give an example showing the cause for your concern? — Judaka
https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai/
In 2015, a research group at Mount Sinai Hospital in New York was inspired to apply deep learning to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting program, which the researchers named Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient had discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver. There are a lot of methods that are “pretty good” at predicting disease from a patient’s records, says Joel Dudley, who leads the Mount Sinai team. But, he adds, “this was just way better.”
At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. The new tool offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, say, a change in the drugs someone is being prescribed. “We can build these models,” Dudley says ruefully, “but we don’t know how they work.”
Currently there is no true AI, only simulated AI. However, even simulated AI can replace numerous workers in middle management and low-level creative fields. This can/will have a devastating impact on employment, and thus on the economy as well as social stability. — LuckyR
Blinking and breathing are not acts in the philosophical sense. — Leontiskos
Every intentional human act involves an intention, and the intention is the primary defining characteristic of an act.
Every properly human act involves an intention, and the intention is the primary defining characteristic of an act. — Leontiskos
To date, it is unclear that cellular automata, neural networks, or the like can do anything that Universal Turing Machines cannot. — Count Timothy von Icarus
The wisdom of the crowd is the collective opinion of a diverse independent group of individuals rather than that of a single expert. This process, while not new to the Information Age, has been pushed into the mainstream spotlight by social information sites such as Quora, Reddit, Stack Exchange, Wikipedia, Yahoo! Answers, and other web resources which rely on collective human knowledge.[1] An explanation for this phenomenon is that there is idiosyncratic noise associated with each individual judgment, and taking the average over a large number of responses will go some way toward canceling the effect of this noise.[2]
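The noise-cancellation explanation quoted above can be illustrated with a minimal simulation: treat each individual's judgment as the true value plus idiosyncratic noise, and watch the average error shrink as the crowd grows. The specific numbers here (true value, noise spread, crowd sizes) are illustrative assumptions, not taken from the quoted text.

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 100.0  # the quantity the crowd is estimating (assumed)
NOISE_SD = 30.0     # spread of each individual's idiosyncratic error (assumed)

def individual_judgment() -> float:
    """One person's estimate: the truth plus personal noise."""
    return TRUE_VALUE + random.gauss(0, NOISE_SD)

def crowd_estimate(n: int) -> float:
    """Average the independent judgments of n individuals."""
    return statistics.fmean(individual_judgment() for _ in range(n))

# Averaging over more independent judgments cancels more of the noise:
for n in (1, 10, 1000):
    estimate = crowd_estimate(n)
    print(f"n={n:5d}  estimate={estimate:7.2f}  error={abs(estimate - TRUE_VALUE):6.2f}")
```

With independent, unbiased noise the standard error of the average falls roughly as one over the square root of the crowd size, which is the statistical core of the Wikipedia passage's claim.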
Yes. And ever changing. If psychology is affected by culture (and I'm certain it is) then what was true yesterday in psychology might not be true today. We're playing catch up. — Isaac
Corrective rather than constructive, and the consistency being enforced is that of the narrative your current model is organized around, rather than "the way the world really is" or something. — Srap Tasmaner
Pattern recognition. That's a huge part of what the brain does, and it's so dedicated to finding patterns that it will even see patterns that aren't there: optical illusions, etc. — DingoJones
But beyond the general idea of it, it seems very speculative, and it seems inherently so - I don't see a path out of the speculation for most hypotheses in the evo-psych realm.
I think that pretty much sums up what I think of evo psych - the basic tenet of it is pretty much obviously true, but any specific hypothesis is probably untestable, unverifiable, unsatisfiable. — flannel jesus
I tend to frame the effect of reason in terms of effects on our priors, so reasoning is still post hoc, but has an effect. Basically, if the process of reasoning (which is effectively predictive modeling of our own thinking process) flags up a part of the process that doesn't fit the narrative, it'll send suppressive constraints down to that part to filter out the 'crazy' answers that don't fit. — Isaac
What I'm convinced doesn't happen (contrary to Kahneman, I think - long time since I've read him) is any cognitive hacking in real time. I can see how it might cash out like that on a human scale (one decision at a time), but at a deeper neurological scale, my commitments to an active inference model of cognition don't allow for such an intervention. We only get to improve for next time. — Isaac
It appears that the cellular responses (so-called learning) took five times longer to occur in living tissue than they took in prior studies of inanimate mass, "in vitro". That is very clear evidence that the relationship between stimulus and effect is not direct. The cause of this five-fold delay (clear evidence that there is not a direct cause/effect relation) is simply dismissed as "noise" in the living brain.
Furthermore, it is noted that the subjects upon which the manipulation is carried out are unconscious, and so it is implied that "attention" could add so much extra "noise" that the entire process modeled by the laboratory manipulation might be completely irrelevant to actual learning carried out by an attentive, conscious subject. Read the following:
"It is important to note that these findings were obtained in anaesthetized animals, and remain to be confirmed in the awake state. Indeed, factors such as attention are likely to influence cellular learning processes (Markram et al., 2012). — Metaphysician Undercover
Noam Chomsky argued:
"You find that people cooperate, you say, 'Yeah, that contributes to their genes' perpetuating.' You find that they fight, you say, ‘Sure, that's obvious, because it means that their genes perpetuate and not somebody else's. In fact, just about anything you find, you can make up some story for it."[43][44]
— Chomsky — schopenhauer1
Most of them were published subsequent to 2005, from what I can see. David Chalmers' article was published in 1996. I think much of the literature reflects that, as it was an influential article and put the idea on the agenda, so to speak. — Wayfarer
That's very deceptive use of equivocation. — Metaphysician Undercover
Did you read the article that this thread is about? Do you have any idea of what the issue being discussed is? — Wayfarer
I think you equivocate. Neural networks of AI are said to be "trained". But we weren't talking AI, we were talking about biological neurons, involved in a person reading. — Metaphysician Undercover
OK, I should have written 'excludes consideration of the first-person perspective....' — Wayfarer
Yes, similar to that, but not quite the same. An individual is trained, a person or some other being. We do not train a part of a person. I find that to be an absurd usage of the term to say that a person trains a part of one's body, like saying that a man trains his penis when to have an erection and when not to.
Anyway, it's off topic and I see that discussion with you on this subject would probably be pointless, as you seem to be indoctrinated. — Metaphysician Undercover
Understandable, I think understanding human motivation and the human condition is valid. I do it all the time. Evo-psych basis for things is harder to prove. — schopenhauer1
Eek, that doesn't seem like good science. — schopenhauer1
Absolutely. The thing about psychological theories is that everyone has them; you have to, otherwise your strategies when interacting with others are random. We don't just throw darts blindfolded when deciding how to respond; we have a theory about what our actions/speech is going to do, how it's going to work. That's a psychological theory. — Isaac
If psychology fails, it is its methodology that's at fault, not its objectives. — Isaac
Have a great trip! We'll be here when you get back. — Srap Tasmaner
thread where I'm pissing on the law of non-contradiction: — Srap Tasmaner
The LNC however does affirm that it is not possible to hold a belief that A with .90 probability while at the same time holding a belief that A with .10 probability. — javra
"Okay, everyone, you all need to move back now, that's it, move on back now, DON'T GO IN THE WOODS!" Just ever so slightly lost his cool as this grizzly ambled toward us, it was awesome. — Srap Tasmaner
