Did they? AI models typically train on thousands or millions of datapoints. Did the programmers categorise all of them? — Lionino
The only way a computer knows what is "happy" is if someone feeds in, say, one hundred pictures of "happy" faces and tags each one as "happy". Then you can feed in millions more pictures, if you like, and refine its capabilities. — Pantagruel
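For concreteness, here is a minimal sketch of the tag-then-predict loop Pantagruel describes, with synthetic vectors standing in for pictures. The `happy`/`other` arrays, the feature size, and the logistic model are illustrative assumptions, not anyone's actual pipeline:

```python
# Minimal sketch of supervised labelling: a human supplies the tags,
# the model only ever learns the association tags + features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 100 hand-tagged examples: label 1 = "happy", 0 = "not happy".
# The labels are the human contribution; the model never sees a face,
# only vectors paired with categories someone supplied.
happy = rng.normal(loc=1.0, size=(50, 16))
other = rng.normal(loc=-1.0, size=(50, 16))
X = np.vstack([happy, other])
y = np.array([1] * 50 + [0] * 50)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# "Feed it millions more pictures": unlabelled data then receives
# whatever category the model learned from the original hand tags.
new_faces = rng.normal(loc=1.0, size=(3, 16))
print(clf.predict(new_faces))  # e.g. [1 1 1], i.e. tagged "happy"
```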
I am also a techno-animist, so to me the technology goes beyond being only a tool. More like a gateway.
Another way of saying what this thread is all about is to state that (in my opinion, experience, and observation) atheist animism is the default worldview of humanity.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3281765#:~:text=First%2C%20AIs%20could%20display%20a,the%20doubt%E2%80%9D%20in%20uncertain%20situations.
[...]
https://theconversation.com/emotion-reading-tech-fails-the-racial-bias-test-108404
https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html — Pantagruel
Ask ChatGPT if its answers could be subject to unknown selection biases its developers may have passed on to it accidentally through data-categorization. — Pantagruel
Or would it be more appropriate to say that advancing technology is good in virtue of something else? It's obviously much more common to argue the latter here, and the most common argument is that "technological progress is good because it allows for greater productivity and higher levels of consumption." — Count Timothy von Icarus
In any case, I still have not seen any proof that programmers are categorising their own data by hand. — Lionino
We should never have invented agriculture. — bert1
an artificial neural network’s initial training involves being fed large amounts of data — Pantagruel
the AI system trained with such historical data will simply inherit this bias — Pantagruel
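To illustrate how that inheritance works mechanically, here is a hedged sketch: a model fitted to hypothetical, invented historical hiring decisions that held one group to a higher bar goes on to reproduce that rule for new candidates. The feature names, thresholds, and tree model are assumptions for illustration only, not taken from the linked studies:

```python
# Sketch of a model "inheriting" bias from historical data
# (hypothetical hiring example; all numbers are invented).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 1000

skill = rng.uniform(0, 1, n)    # genuinely job-relevant feature
group = rng.integers(0, 2, n)   # irrelevant demographic flag

# Historical decisions were biased: group 1 needed a higher bar.
hired = (skill > np.where(group == 1, 0.8, 0.5)).astype(int)

model = DecisionTreeClassifier(max_depth=3).fit(
    np.column_stack([skill, group]), hired
)

# Two equally skilled candidates from different groups:
print(model.predict([[0.7, 0], [0.7, 1]]))  # typically [1 0]
```

Nobody wrote "discriminate against group 1" anywhere; the rule is simply what the historical labels encode, which is the sense in which the bias is inherited.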
If anything, the researchers are simply pointing out that people believe in AIs more than they should.

In a series of three experiments, we empirically tested whether (a) people follow the biased recommendations offered by an AI system, even if this advice is noticeably erroneous (Experiment 1); (b) people who have performed a task assisted by the biased recommendations will reproduce the same type of errors as the system when they have to perform the same task without assistance, showing an inherited bias (Experiment 2); and (c) performing a task first without assistance will prevent people from following the biased recommendations of an AI and, thus, from committing the same errors, when they later perform the same task assisted by a biased AI system (Experiment 3).
I've been studying neural networks since the 1990s, long before they were popular, or even commonly known.
There is no "AI", my friend — Pantagruel
The article you linked sets out to show that humans may inherit the biased information given to them by an AI (duh), not that AI inherits human bias. :meh: — Lionino
Which bias originally derived from the biased input data, as described in the article. — Pantagruel
In the classification task, participants were instructed to observe a series of tissue samples, to decide, for each sample, whether it was affected or not by a fictitious disease called Lindsay Syndrome. Each tissue sample had cells of two colours, but one of them was presented in a greater proportion and volunteers were instructed to follow this criterion to identify the presence of the syndrome.
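The instructed criterion amounts to a simple proportion rule, roughly as in the sketch below (the 0.5 threshold and the cell counts are assumptions for illustration, not the paper's parameters):

```python
# Sketch of the instructed rule: a sample counts as "affected" when
# the majority colour exceeds a given proportion of cells.
def lindsay_positive(dark_cells: int, light_cells: int,
                     threshold: float = 0.5) -> bool:
    """Classify a tissue sample by its proportion of dark cells."""
    return dark_cells / (dark_cells + light_cells) > threshold

print(lindsay_positive(60, 40))  # True: dark cells predominate
print(lindsay_positive(30, 70))  # False
```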
Like I said, it's a fact. Do some reading. — Pantagruel
"Training up" a neural net. — Pantagruel
Categorization is supplied; it's not intrinsic to the nature of a picture, Copernicus. — Pantagruel
Yes, supplied by external sources, not by the researchers. There, the fourth time. — Lionino